# range() vs xrange() Python2|3 issues for performance

Peter Otten __peter__ at web.de
Tue Aug 2 11:05:43 CEST 2011

harrismh777 wrote:

> The following is intended as a helpful small extension to the xrange()
> range() discussion brought up this past weekend by Billy Mays...
>
> With Python2 you basically have two ways to get a range of numbers:
>     range(), which returns a list, and
>     xrange(), which returns a lazy iterable.
>
> With Python3 you must use range(), which produces a lazy iterable;
> xrange() does not exist at all (at least not in 3.2).
>
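A small aside: Python 3's range() is not strictly an iterator but a lazy
sequence object, which buys more than just memory savings. A minimal
illustration (Python 3 semantics):

```python
# In Python 3, range() returns a lazy sequence object, not a list and
# not a plain iterator: it supports len(), indexing, and O(1)
# membership tests without ever materializing its elements.
r = range(10**12)            # no 10**12-element list is allocated
assert len(r) == 10**12      # length is computed arithmetically
assert 10**12 - 1 in r       # membership is O(1) for ranges
assert r[-1] == 10**12 - 1   # indexing works without iteration
```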
>     I have been doing some research in number theory related to Mersenne
> Primes and perfect numbers (perfects: those integers equal to the sum of
> their proper divisors, i.e. all divisors excluding the number
> itself)... the first few of those being  6, 28, 496, 8128, 33550336, etc
>
>     Never mind, but you know... are there an infinite number of them?
> ... and of course, are there any "odd" perfect numbers... well not under
> 10^1500....   I digress, sorry ...
>
>     This brought up the whole range() xrange() thing for me again
> because Python in any case is just not fast enough (no brag, just fact).
> So my perfect number stuff is written in C, for the moment. But, what
> about the differences in performance (supposing we were to stay in
> Python for small numbers) between xrange() vs range() [on Python2]
> versus range() [on Python3]?   I have put my code snips below, with some
> explanation below that...  these will run on either Python2 or
> Python3... except that if you substitute xrange() for range() for
> Python2  they will throw an exception on Python3... doh.

```python
try:
    range = xrange   # Python 2: rebind range to the lazy xrange
except NameError:    # Python 3: xrange is gone, range is already lazy
    pass
```
>
> So, here is PyPerfectNumbers.py ----------------------------
>
> def PNums(q):
>      for i in range(2, q):
>          m = 1
>          s = 0
>          while m <= i/2:

Note that i/2 returns a float in Python 3 (while Python 2 truncates it to an int); use i//2 for consistent floor division on both versions.
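To make the division difference concrete, a minimal Python 3 illustration:

```python
# Python 3 division semantics (Python 2 "/" on ints floors instead):
assert 7 / 2 == 3.5             # "/" is true division, yields a float
assert 7 // 2 == 3              # "//" is floor division, stays an int
assert isinstance(7 // 2, int)  # the loop bound remains an integer
```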

>              if not i%m:
>                  s += m
>              m += 1
>          if i == s:
>               print(i)
>      return 0
>
> def perf(n):
>      sum = 0
>      for i in range(1, n):
>          if n % i == 0:
>              sum += i
>      return sum == n
>
> fperf = lambda n: n == sum(i for i in range(1, n) if n % i == 0)
>
> -----------------/end---------------------------------------
>
> PNums(8200) will crunch out the perfect numbers below 8200.
>
> perf(33550336) will test to see if 33550336 is a perfect number
>
> fperf(33550336) is the lambda equivalent of perf()
>
>
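All three helpers above walk every candidate divisor below n, so each call
is O(n). If staying in Python, the usual pairing trick (collect divisors
in pairs up to the square root) cuts this to O(sqrt(n)); the sketch below
is mine, not from the original post, and fperf_fast is a hypothetical name:

```python
def fperf_fast(n):
    """Perfect-number test via paired divisors; O(sqrt(n)) instead of O(n).

    Hypothetical variant, not from the original post; runs unchanged on
    Python 2 and Python 3 (integer-only arithmetic throughout).
    """
    if n < 2:
        return False
    total = 1                    # 1 is a proper divisor of every n >= 2
    i = 2
    while i * i <= n:
        if n % i == 0:
            total += i           # small divisor...
            if i != n // i:
                total += n // i  # ...and its cofactor, when distinct
        i += 1
    return total == n
```

On 33550336 this needs only about 5800 loop iterations instead of about
33.5 million, which dwarfs the range()/xrange() difference being measured.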
>     These are coded with range().  The interesting thing to note is that
> xrange() on Python2 runs "considerably" faster than the same code using
> range() on Python3. For large perfect numbers (above 8128) the
> performance difference for perf() is orders of magnitude.

Python 3's range() is indeed slower, but not orders of magnitude:

```
$ python3.2 -m timeit -s"r = range(33550336)" "for i in r: pass"
10 loops, best of 3: 1.88 sec per loop
$ python2.7 -m timeit -s"r = xrange(33550336)" "for i in r: pass"
10 loops, best of 3: 1.62 sec per loop
```

```
$ cat tmp.py
try:
    range = xrange
except NameError:
    pass

def fperf(n):
    return n == sum(i for i in range(1, n) if not n % i)

if __name__ == "__main__":
    print(fperf(33550336))
$ time python2.7 tmp.py
True

real    0m6.481s
user    0m6.100s
sys     0m0.000s
$ time python3.2 tmp.py
True

real    0m7.925s
user    0m7.520s
sys     0m0.040s
```

I don't know what's causing the slowdown; perhaps the int/long unification
is to blame.
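For context on that guess: Python 2 distinguished a machine-word int from
an arbitrary-precision long, while Python 3 folds both into a single
arbitrary-precision int, so even small loop counters go through the
unified type. A small Python 3 illustration:

```python
# Python 3 has exactly one integer type, regardless of magnitude.
assert type(1) is int
assert type(2 ** 100) is int      # no separate "long" type any more
assert isinstance(2 ** 100, int)  # arbitrary precision, never overflows
assert (2 ** 100) % 10 == 6       # big-int arithmetic just works
```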

> Actually,
> range() on Python2 runs somewhat slower than xrange() on Python2, but
> things are much worse on Python3.
>     This is something I never thought to test before Billy's question,
> because I had already decided to work in C for most of my integer
> stuff... like perfects. But now that it has sparked my interest, I'm
> wondering whether some future focus might be placed on range()
> performance in Python3, perhaps via a PEP?
