List Count
Steven D'Aprano
steve+comp.lang.python at pearwood.info
Mon Apr 22 19:28:17 EDT 2013
On Mon, 22 Apr 2013 22:25:50 +0100, Blind Anagram wrote:
> I have looked at solutions based on listing primes and here I have found
> that they are very much slower than my existing solution when the sieve
> is not large (which is the majority use case).
Yes. This is hardly surprising. Algorithms suitable for dealing with the
first million primes are not suitable for dealing with the first trillion
primes, and vice versa. We like to pretend that computer programming is
an abstraction, and for small enough data we can often get away with
that, but like all abstractions it eventually breaks, and the cost of
dealing with real hardware becomes significant.
But I must ask, given that the primes are so sparsely distributed, why
are you storing them in a list instead of a sparse array (i.e. a dict)?
There
are 50,847,534 primes less than or equal to 1,000,000,000, so you are
storing roughly 18 False values for every True value. That ratio will
only get bigger. With a billion entries, you are using 18 times more
memory than necessary.
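To make the contrast concrete, here is a minimal sketch of the two
representations being discussed: a dense sieve with one boolean per
integer, and a sparse collection holding only the primes themselves.
The function names (sieve_dense, primes_sparse) and the use of a set
rather than a dict are illustrative choices, not from the original
discussion:

```python
def sieve_dense(limit):
    """Classic Sieve of Eratosthenes: one bool per integer in [0, limit]."""
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if flags[n]:
            # Mark every multiple of n from n*n onward as composite.
            flags[n * n :: n] = [False] * len(range(n * n, limit + 1, n))
    return flags

def primes_sparse(limit):
    """Store only the primes themselves, discarding the False entries."""
    flags = sieve_dense(limit)
    return {n for n, is_prime in enumerate(flags) if is_prime}

flags = sieve_dense(1_000_000)
primes = primes_sparse(1_000_000)
print(sum(flags))    # number of primes <= 1,000,000
print(len(primes))   # same count, but no storage spent on composites
```

Note that the per-entry overhead differs between the two forms: a list
of booleans costs a fixed amount per integer in range, while a set or
dict costs more per stored element but stores only the primes, so which
wins in practice depends on the density ratio and the container used.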
--
Steven