How can I speed up a script that iterates over a large range (600 billion)?
Ian Kelly
ian.g.kelly at gmail.com
Tue Jun 21 16:02:21 EDT 2011
On Tue, Jun 21, 2011 at 1:48 PM, John Salerno <johnjsal at gmail.com> wrote:
> Here is what I have so far. Initially the get_factors function just
> iterated over the entire range(2, n + 1), but since a number can't
> have a factor greater than half of itself, I tried to shorten the
> range by doing range(2, n // 2), but that still leaves 300 billion
> numbers to go through.
Without giving you the answer, I will note that the range can be
further reduced by quite a lot (I'm talking orders of magnitude, not
just by half).
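To give a rough sense of scale without spelling the whole thing out: for a
number around 600 billion, the reduced range holds only on the order of
hundreds of thousands of candidates, not hundreds of billions. A minimal
sketch of that kind of bound (smallest_factor is my own illustrative name,
not something from John's code):

def smallest_factor(n):
    # Any composite n has a divisor no greater than sqrt(n), so it is
    # enough to test candidates x while x * x <= n.
    x = 2
    while x * x <= n:
        if n % x == 0:
            return x
        x += 1
    return n  # no divisor at or below sqrt(n), so n is prime

For n around 600 billion that loop tops out near 775,000 iterations.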
> def get_primes(number_list):
>     primes = number_list[:]
>
>     for n in number_list:
>         for x in range(2, n):
>             if n % x == 0:
>                 primes.remove(n)
>                 break
>
>     return primes
Also, primality testing and factorization are very similar problems,
and the same range optimization could be applied here as well.
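As a sketch of how that carries over (is_prime and the list-comprehension
version of get_primes are my own illustrative rewrites, not the only way
to do it):

def is_prime(n):
    # Trial division, but only testing divisors up to sqrt(n).
    if n < 2:
        return False
    x = 2
    while x * x <= n:
        if n % x == 0:
            return False
        x += 1
    return True

def get_primes(number_list):
    # Building a new list also avoids the O(len) cost of list.remove().
    return [n for n in number_list if is_prime(n)]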
Cheers,
Ian