Combinatorics
Alex Martelli
aleax at aleax.it
Thu Oct 10 09:25:57 EDT 2002
Peter Dobcsanyi wrote:
...
> Alex Martelli <aleax at aleax.it> wrote:
>> I think a simpler optimization is to memoize fact (untested, I'm
...
> I don't know which one is simpler, but for "large" numbers the memoizing
> version is not a win in this case; it has a rather big memory footprint.
>
> For example, clocking the time of comb(32000, 31000) I got:
>
>                time       memory size
>
> combo          0.02 sec     2.5 Mbyte
> memoizing      0.42 sec     839 Mbyte !!!
Wow, I hadn't considered that -- it's a wonder everything isn't
thrashing to death with THAT much memory in use!
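For context, the memoizing version under discussion presumably looks
something like the sketch below -- a hypothetical reconstruction on my
part; only the names _facts and fact come from the thread.  The memory
figure is believable: fact(32000) alone has roughly 130,000 decimal
digits (tens of kilobytes as a long), and caching every factorial up to
32000 adds up to hundreds of megabytes.

```python
# Hypothetical reconstruction of the memoizing factorial; only the
# names _facts and fact come from the thread, the rest is my guess.
_facts = [1]  # _facts[i] holds i!

def fact(n):
    # Extend the cache up to n.  Every cached entry is a full
    # arbitrary-precision integer, so for n around 32000 the cache
    # holds tens of thousands of multi-kilobyte longs -- hence the
    # huge footprint reported above.
    while len(_facts) <= n:
        _facts.append(_facts[-1] * len(_facts))
    return _facts[n]

def comb(n, k):
    # n choose k via the cached factorials
    return fact(n) // (fact(k) * fact(n - k))
```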
> The time for the memoizing version was taken the second time around, so
> _facts was already populated. Continuing with the populated _facts:
>
> n k combo memoizing
>
> 32000 16000 1.51 1.99
> 15000 7500 0.35 0.42
> 15000 14000 0.02 0.16
> 10000 5000 0.17 0.19
> 10000 9000 0.02 0.09
> 5000 2500 0.04 0.04
> 5000 4500 0.01 0.02
> 1000 500 0.0 0.0
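Those numbers make sense if combo uses the usual running-product trick,
which needs only min(k, n-k) multiplications and no cache at all -- that
would be why comb(32000, 31000) takes just 0.02 seconds: the loop runs
a mere 1000 times.  A sketch of that approach (my guess, not Peter's
actual code):

```python
def combo(n, k):
    # n choose k by the running-product trick: only min(k, n - k)
    # multiplications, and no cache at all.
    k = min(k, n - k)
    result = 1
    for i in range(1, k + 1):
        # result equals C(n, i) after each step, so the integer
        # division is always exact
        result = result * (n - i + 1) // i
    return result
```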
Incidentally, if you do a lot of combinatorics and need reasonable
performance, you may want to look at gmpy.sf.net:
>>> import gmpy
>>> import time
>>> def timeco(n, k):
... reps = [None]*10
... start = time.clock()
... for x in reps: gmpy.comb(n, k)
... stend = time.clock()
... return (stend-start)/10.0
...
>>> print timeco(32000, 16000)
0.059
>>> print timeco(32000, 31000)
0.001
>>> print timeco(15000, 7500)
0.013
>>> print timeco(15000, 14000)
0.001
Of course my machine's likely different from yours, but it's
oldish & nothing special, just a 1.2 GHz Athlon, 256M DDRAM.
Alex
More information about the Python-list mailing list