I've done my own profiling with a different script and was able to verify Andrew's results. The pymalloc approach makes int creation slightly slower and has almost no effect on floats. Creating a list of 1000 ints (range(1000)) 1000 times is about 20 msec slower. It almost doubles the time for the trivial test "for i in range(1000): range(1000)", but that's an artificial test.

Current allocation scheme
-------------------------
for i in range(1000): [float(x) for x in range(1000)]
10 loops, best of 3: 760 msec per loop

for i in range(1000): range(1000)
10 loops, best of 3: 27 msec per loop

for i in range(1000): [x for x in range(1000)]
10 loops, best of 3: 218 msec per loop

Without a free list
-------------------
for i in range(1000): [float(x) for x in range(1000)]
10 loops, best of 3: 792 msec per loop

for i in range(1000): range(1000)
10 loops, best of 3: 51.5 msec per loop

for i in range(1000): [x for x in range(1000)]
10 loops, best of 3: 241 msec per loop

With a fixed free list of 2,500 objects each
--------------------------------------------
for i in range(1000): [float(x) for x in range(1000)]
10 loops, best of 3: 736 msec per loop

for i in range(1000): range(1000)
10 loops, best of 3: 25 msec per loop

for i in range(1000): [x for x in range(1000)]
10 loops, best of 3: 198 msec per loop

As you can clearly see, an approach with a small free list in a fixed-size array like

    static PyFloatObject *free_list[PyFloat_MAXFREELIST];

is even faster than block allocation. A small free list of 80 objects each would speed up the creation of floats and ints a bit *and* result in a quicker return of memory to the OS.

Christian