[Python-ideas] Yet another sum function (fractions.sum)

Stefan Behnel stefan_ml at behnel.de
Mon Aug 19 11:35:34 CEST 2013


David Mertz, 16.08.2013 22:06:
> My intuition was that one might do better than mergesum() by binning
> numbers.  I.e. this implementation:
> 
>   def binsum(seq):
>       bins = dict()
>       for f in seq:
>           bins[f.denominator] = bins.get(f.denominator, 0) + f
>       return mergesum(list(bins.values()))

    def mergesum(seq):
        # pairwise summation: each pass adds neighbours, halving the list,
        # so the operands stay balanced in size
        while len(seq) > 1:
            new = [a + b for a, b in zip(seq[:-1:2], seq[1::2])]
            if len(seq) % 2:
                new.append(seq[-1])  # carry an odd trailing element over
            seq = new
        return seq[0]
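
(For reference: the `tmpsum` module used in the timings isn't shown in
this thread. A plausible stand-in for the test data, purely an assumption
on my side, would be something like:)

    import random
    from fractions import Fraction as F

    # hypothetical test data -- the real tmpsum.nums is not shown in the
    # thread; a limited denominator range keeps the distinct-denominator
    # set small, which is the premise of the binning approach
    random.seed(1)
    nums = [F(random.randint(1, 1000), random.randint(1, 60))
            for _ in range(100000)]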

> Indeed I am right, but the effect doesn't show up until one looks at a
> fairly large collection of fractions:
> 
>   538-code % python3 -m timeit -s 'from tmpsum import mergesum, binsum, nums; nums=nums[:50000]' 'mergesum(nums)'
>   10 loops, best of 3: 806 msec per loop
>   539-code % python3 -m timeit -s 'from tmpsum import mergesum, binsum, nums; nums=nums[:50000]' 'binsum(nums)'
>   10 loops, best of 3: 627 msec per loop

Simply sorting by denominator gives me a visible advantage over the above:

from fractions import Fraction as F

def sortsum(seq):
    def key(f):
        # plain ints (and other non-Fractions) sort as denominator 1
        return f.denominator if isinstance(f, F) else 1
    # group equal denominators next to each other before pairwise merging
    seq = sorted(seq, key=key)
    if len(seq) < 3:
        return sum(seq)
    return mergesum(seq)
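
The win presumably comes from mergesum then mostly adding fractions that
already share a denominator, so intermediate results keep small
denominators and the gcd work stays cheap. A tiny illustration:

from fractions import Fraction as F

# same denominator: numerators just add, the denominator does not grow
print(F(1, 12) + F(5, 12))   # -> 1/2
# different denominators: cross-multiply first, then reduce
print(F(1, 12) + F(5, 13))   # -> 73/156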

$ python3.4 -m timeit -s '...; c=nums[:10000]' 'sortsum(c)'
10 loops, best of 3: 76.9 msec per loop
$ python3.4 -m timeit -s '...; c=nums[:10000]' 'binsum(c)'
10 loops, best of 3: 83.2 msec per loop
$ python3.4 -m timeit -s '...; c=nums[:10000]' 'mergesum(c)'
10 loops, best of 3: 106 msec per loop

$ python3.4 -m timeit -s '...; c=nums[:1000]' 'sortsum(c)'
100 loops, best of 3: 9.49 msec per loop
$ python3.4 -m timeit -s '...; c=nums[:1000]' 'binsum(c)'
100 loops, best of 3: 12.9 msec per loop
$ python3.4 -m timeit -s '...; c=nums[:1000]' 'mergesum(c)'
100 loops, best of 3: 9.66 msec per loop

$ python3.4 -m timeit -s '...; c=nums[:100]' 'sortsum(c)'
1000 loops, best of 3: 951 usec per loop
$ python3.4 -m timeit -s '...; c=nums[:100]' 'mergesum(c)'
1000 loops, best of 3: 937 usec per loop

$ python3.4 -m timeit -s '...; c=nums[:10]' 'sortsum(c)'
10000 loops, best of 3: 88.8 usec per loop
$ python3.4 -m timeit -s '...; c=nums[:10]' 'mergesum(c)'
10000 loops, best of 3: 80.2 usec per loop


So it's a bit slower for small sequences (an extra ~15 microseconds for
fewer than 100 items sounds acceptable to me), but quite a bit faster for
long ones.

It seems to be slowing down a bit for really long sequences, though:

$ python3.4 -m timeit -s '...; c=nums[:100000]' 'mergesum(c)'
10 loops, best of 3: 1 sec per loop
$ python3.4 -m timeit -s '...; c=nums[:100000]' 'sortsum(c)'
10 loops, best of 3: 748 msec per loop
$ python3.4 -m timeit -s '...; c=nums[:100000]' 'binsum(c)'
10 loops, best of 3: 743 msec per loop


However, unpacking the fractions into plain integers for the bin summing
makes this way faster for larger sequences:

from fractions import Fraction as F

def binsum2(seq):
    bins = dict()
    get_bin = bins.get
    _isinstance = isinstance  # local alias avoids repeated global lookups
    for f in seq:
        # unpack Fractions into plain ints; treat anything else as integer
        d, n = (f.denominator, f.numerator) if _isinstance(f, F) else (1, f)
        bins[d] = get_bin(d, 0) + n
    # build one Fraction per distinct denominator, then merge pairwise
    return mergesum([F(n, d) for d, n in sorted(bins.items())])
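
Since all of these compute the exact rational sum, a quick sanity check
(here on the hypothetical `nums` from above) should show them agreeing:

sample = nums[:1000]
assert mergesum(sample) == sortsum(sample) == binsum2(sample) == sum(sample)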

$ python3.4 -m timeit -s '...; c=nums[:10000]' 'sortsum(c)'
10 loops, best of 3: 76.9 msec per loop
$ python3.4 -m timeit -s '...; c=nums[:10000]' 'binsum2(c)'
10 loops, best of 3: 21 msec per loop

$ python3.4 -m timeit -s '...; c=nums[:1000]' 'sortsum(c)'
100 loops, best of 3: 9.49 msec per loop
$ python3.4 -m timeit -s '...; c=nums[:1000]' 'binsum2(c)'
100 loops, best of 3: 8.7 msec per loop

But again, slower for short ones:

$ python3.4 -m timeit -s '...; c=nums[:100]' 'mergesum(c)'
1000 loops, best of 3: 937 usec per loop
$ python3.4 -m timeit -s '...; c=nums[:100]' 'binsum2(c)'
1000 loops, best of 3: 1.34 msec per loop

$ python3.4 -m timeit -s '...; c=nums[:10]' 'mergesum(c)'
10000 loops, best of 3: 80.2 usec per loop
$ python3.4 -m timeit -s '...; c=nums[:10]' 'binsum2(c)'
10000 loops, best of 3: 137 usec per loop


That is expected: with only a few values, common denominators are rare, so
the binning is pure overhead. Assuming that the set of distinct
denominators is usually small compared to the number of values, this is
the right tradeoff.

Maybe inlining the denominator normalisation instead of creating Fraction
instances at the end would give another boost here.
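
An untested sketch of that idea (the name binsum3 and the details are
mine, not an established API): keep plain integer tallies per denominator
and fold them together with explicit gcd arithmetic, building a single
Fraction only at the very end.

from math import gcd   # Python 3.5+; in the 3.4 era this was fractions.gcd
from fractions import Fraction as F

def binsum3(seq):
    bins = {}
    get_bin = bins.get
    for f in seq:
        d, n = (f.denominator, f.numerator) if isinstance(f, F) else (1, f)
        bins[d] = get_bin(d, 0) + n
    # fold the per-denominator tallies on plain ints: num/den += n/d
    num, den = 0, 1
    for d, n in sorted(bins.items()):
        g = gcd(den, d)
        num = num * (d // g) + n * (den // g)
        den = den * (d // g)
    return F(num, den)   # normalise once, at the end

Note that this folds the tallies linearly rather than pairwise; with many
distinct denominators, merging them mergesum-style on integer pairs might
still be the better structure.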

In any case, this huge difference in performance argues for providing some
kind of specialised sum() function in the fractions module.

Stefan



