On Fri, Jul 13, 2018 at 4:11 AM, Steven D'Aprano <firstname.lastname@example.org> wrote:
> There is no reason why primality testing can't be deterministic up to
> 2**64, and probabilistic with a ludicrously small chance of false
> positives beyond that. The implementation I use can be expected to fail
> on average once every 18 thousand years if you did nothing but test
> primes every millisecond of the day, 24 hours a day. That's good enough
> for most purposes :-)
> What about false negatives? Guaranteed none? The failure mode of the
> function should, IMO, be a defined and documented aspect of it.
Miller-Rabin or other pseudo-primality tests do not produce false negatives IIUC.
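Right: a Miller-Rabin witness can only err in one direction. If n is prime, every base passes the test, so a prime is never declared composite; the only possible error is a composite slipping through as "probably prime". Here's a minimal sketch -- not Steven's implementation, just the textbook algorithm; the fixed base list making it deterministic below 2**64 is a published result:

```python
def is_probable_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin. With these fixed bases the answer is deterministic
    for n < 2**64 (a known result, not a probabilistic claim)."""
    if n < 2:
        return False
    # Knock out small factors first (also handles the small primes themselves)
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # 'a' is a witness: n is certainly composite
    return True            # no witness found
```

Note the asymmetry: `False` is always certain (we exhibited a witness), while `True` is "probably prime" in general, and only certain below 2**64 because of the fixed base set.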
I'm more concerned about the space/time tradeoff in the primes() generator. I like this implementation for teaching purposes, but I'm well aware that its memory usage grows rather quickly (the one Tim Peters posted probably strikes a better balance; but it depends on how many of these generators you create and how long you run them).
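I don't have Tim's exact code in front of me, but the usual way to get a better space balance is the well-known "postponed sieve" pattern: a dict-based incremental sieve that only starts tracking a prime p once the candidates reach p*p, so the dict holds entries only for primes up to sqrt(candidate) rather than for every prime found. Something along these lines (a reconstruction of the pattern, not necessarily what Tim posted):

```python
from itertools import count

def postponed_sieve():
    "Incremental sieve; the dict only tracks primes up to sqrt(candidate)"
    yield 2; yield 3; yield 5; yield 7
    sieve = {}                # next composite -> its step (2 * base prime)
    ps = postponed_sieve()    # a second, much shorter stream of base primes
    next(ps)                  # discard 2; we only walk odd candidates
    p = next(ps)              # p = 3
    q = p * p                 # 9: first composite that needs p
    for c in count(9, 2):
        if c in sieve:
            step = sieve.pop(c)       # known composite: reschedule it
        elif c < q:
            yield c                   # below p*p and unmarked: prime
            continue
        else:
            step = 2 * p              # c == q: start sieving with p
            p = next(ps)
            q = p * p
        m = c + step
        while m in sieve:             # find a free slot for the next multiple
            m += step
        sieve[m] = step
```

The recursive `ps` stream looks alarming but only goes about log-log deep, and it is what keeps the dict at O(sqrt) size instead of one entry per prime yielded.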
from math import sqrt, ceil
from itertools import count, takewhile

def primes():
    "Pretty good Sieve; skip the even numbers, stop at sqrt(candidate)"
    found = [2]
    yield 2
    for candidate in count(3, 2):   # odd candidates only
        lim = int(ceil(sqrt(candidate)))
        # up_to(found, lim) in the original; takewhile does the same job
        if all(candidate % prime != 0
               for prime in takewhile(lambda p: p <= lim, found)):
            found.append(candidate)
            yield candidate
So then how do you implement isprime()? One naive way is to compare the candidate against elements of sieve_generator() until we reach a value equal to or larger than it. But that's not super fast. A pseudo-primality test is much faster (except for the first few hundred thousand primes, maybe).
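To make the naive approach concrete -- `sieve_generator` here is a stand-in name; any generator yielding primes in increasing order works:

```python
from itertools import count

def sieve_generator():
    "Trial-division prime generator (stand-in for any increasing prime stream)"
    found = []
    for candidate in count(2):
        if all(candidate % p for p in found if p * p <= candidate):
            found.append(candidate)
            yield candidate

def isprime(n):
    "Exact, but walks every prime below n -- hopeless for large n"
    for p in sieve_generator():
        if p >= n:
            return p == n
```

It's exact (no false positives or negatives), but each call is linear in the number of primes below n, which is why the pseudo-primality test wins almost immediately.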
Keeping medicines from the bloodstreams of the sick; food
from the bellies of the hungry; books from the hands of the
uneducated; technology from the underdeveloped; and putting
advocates of freedom in prisons. Intellectual property is
to the 21st century what the slave trade was to the 16th.