[Python-ideas] Expansion of the range of small integers

Terry Reedy tjreedy at udel.edu
Mon Sep 17 19:49:07 CEST 2012

On 9/17/2012 8:41 AM, Serhiy Storchaka wrote:
> In CPython, small integers from -5 through 256 inclusive are
> preallocated at interpreter startup. This reduces memory consumption
> and the cost of creating integers in that range; in particular, it
> affects the speed of short enumerations. Increasing the range to the
> maximum (-32767 through 32767 inclusive) could speed up longer
> enumerations.

In 2.x (before 3.0), the range was about -5 to 10 or so ;-).
It was expanded when bytes were added.
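The cached range can be observed from Python itself. A minimal sketch (the exact bounds are a CPython implementation detail, not a language guarantee, so other implementations may behave differently):

```python
def is_cached(n):
    # Parse the value from a string twice so the compiler cannot fold
    # both sides into a single constant. Cached small ints are
    # singletons, so the two results are the same object only when n
    # falls inside the preallocated range.
    return int(str(n)) is int(str(n))

print(is_cached(256))   # True: upper bound of the cache
print(is_cached(257))   # False: allocated fresh each time
print(is_cached(-5))    # True: lower bound of the cache
print(is_cached(-6))    # False
```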

It might be interesting to instrument the int allocator to count 
allocations of ints up to, say, 10000 in real apps.

> Microbenchmarks:
> ./python -m timeit  "for i in range(10000): pass"
> ./python -m timeit  -s "a=[0]*10000"  "for i, x in enumerate(a): pass"
> ./python -m timeit  -s "a=[0]*10000"  "i=0"  "for x in a: i+=1"
> ./python -m timeit  -s "a=[0]*10000"  "for i in range(len(a)): x=a[i]"
> Results:
>   non-patched     patched
>     530 usec      337 usec    57%
>    1.06 msec      811 usec    31%
>    1.34 msec     1.13 msec    19%
>    1.42 msec     1.22 msec    16%
> Shortcomings:
> 1) Memory consumption increases by a constant 1-1.5 MB, or half that
> if the range is expanded only in the positive direction. This is not
> a problem on most modern computers, but it would be better if the
> parameters NSMALLPOSINTS and NSMALLNEGINTS were configurable at
> build time.

They are -- by patching as you did ;-). The general philosophy seems to 
be to discourage user tuning by not making it too easy.

> 2) A slightly longer Python startup time. I was not able to measure
> the difference; it is too small.

What is hard to guess is the effect on cache hits and misses in real apps.
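The quoted timings can also be reproduced from a script rather than the command line; a sketch using the stdlib timeit module (absolute numbers will of course differ per machine and build):

```python
import timeit

# The same four statements as the quoted microbenchmarks, run
# in-process with their setup code.
benchmarks = [
    ("for i in range(10000): pass", "pass"),
    ("for i, x in enumerate(a): pass", "a=[0]*10000"),
    ("for x in a: i+=1", "a=[0]*10000; i=0"),
    ("for i in range(len(a)): x=a[i]", "a=[0]*10000"),
]
for stmt, setup in benchmarks:
    per_loop = timeit.timeit(stmt, setup=setup, number=200) / 200
    print(f"{per_loop * 1e6:8.1f} usec  {stmt}")
```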
Terry Jan Reedy
