tim.peters at gmail.com
Sun Mar 13 02:30:03 CET 2005
> The approach I'm using relies on being able to exactly multiply the 0 or
> 1 bit error term mantissas by an integer (a count of how many times the
> term occurs). With a Python dictionary keeping the counts, many steps
> are saved and the tool becomes much more memory friendly than with the
> previous queue approach.
Cool! That's a nice approach.
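If I'm reading the quoted description right, the counting idea can be sketched as below. The names are mine, not the recipe's, and the real recipe also uses frexp() to get at the mantissas (as noted further down); this toy version only illustrates why multiplying a 0-or-1-bit error term by an integer count is exact:

```python
# Hypothetical sketch of the dict-counting idea (my names, not the
# recipe's): tally identical error terms, then multiply each term by
# its count.  When a term's mantissa has at most 1 significant bit
# (a power of two), term * count is exact for any count < 2**53.
from collections import defaultdict

def tallied_sum(terms):
    counts = defaultdict(int)
    for t in terms:
        counts[t] += 1
    # t * c is an exact float product when t is a power of two and
    # the count c fits in 53 bits
    return sum(t * c for t, c in counts.items())

# e.g. a million copies of 2.0**-40 sum exactly:
# tallied_sum([2.0**-40] * 1000000) == 2.0**-40 * 1000000
```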
For contrast, here's one that doesn't use frexp(), and is probably
faster because of that; internally, len(sums) probably won't exceed 5
in real life (but could get as large as 2K for pathological inputs,
spanning fp's full dynamic range):
    sums = [0.0]
    for x in iterable:
        sums.append(0.0)
        for i in xrange(len(sums)-2, -1, -1):
            y = sums[i]
            if abs(x) < abs(y):
                x, y = y, x
            hi = x+y
            lo = y - (hi - x)
            if lo:
                sums[i+1] = lo
            else:
                del sums[i+1]   # drop exact zeros: keeps the list short
            x = hi
        sums[0] = x
    return sum(reversed(sums), 0.0)
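For anyone running this today, here's a self-contained rendering of that approach in modern Python (the function name is mine, and range replaces xrange), with an input whose catastrophic cancellation defeats plain sum():

```python
# Modern-Python rendering of the cascaded two-sum loop above
# (msum is my name for it, not the original's).
def msum(iterable):
    sums = [0.0]   # partial sums, roughly in decreasing magnitude
    for x in iterable:
        sums.append(0.0)                # placeholder for a new error term
        for i in range(len(sums) - 2, -1, -1):
            y = sums[i]
            if abs(x) < abs(y):
                x, y = y, x             # ensure |x| >= |y| so lo is exact
            hi = x + y
            lo = y - (hi - x)           # exact rounding error of x + y
            if lo:
                sums[i + 1] = lo
            else:
                del sums[i + 1]         # drop exact zeros: keeps the list short
            x = hi
        sums[0] = x
    return sum(reversed(sums), 0.0)

data = [1.0, 1e100, 1.0, -1e100]
# sum(data) == 0.0 (the 1.0 terms are absorbed), but msum(data) == 2.0
```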
In effect, that keeps an arbitrarily long list of "error terms" so
that no info is lost before the final sum(). I think it's surprising
at first how short that list usually is.
> Aside from being a nice recipe, this was a good exercise in seeing what
> pure Python can do with IEEE-754 guarantees.
Now you know how I feel every time someone proclaims that fp
arithmetic is inherently unreliable <0.3 wink>. Learning how to use
it correctly is indeed the blackest of obscure arts, though.
> The modest memory consumption and the typical O(n) runtime are a nice
> plus (no pun intended).
Yup, they're all eminently practical if you don't care too much about
speed. The one above is the fastest I tried, and it still takes some
40x longer than plain sum() over a million-element random.random() list.
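A rough harness for reproducing that kind of comparison today (timings are machine-dependent, so no particular ratio is implied; math.fsum is used as a modern stand-in for the snippet above, since it didn't exist in 2005):

```python
# Compare plain sum() against a full-precision summation on a
# million random floats.  Figures vary by machine; math.fsum is a
# modern stand-in for the hand-rolled loop in this thread.
import math
import random
import timeit

data = [random.random() for _ in range(10**6)]

t_plain = timeit.timeit(lambda: sum(data), number=10)
t_exact = timeit.timeit(lambda: math.fsum(data), number=10)
print('sum: %.3fs  fsum: %.3fs  ratio: %.1fx'
      % (t_plain, t_exact, t_exact / t_plain))
```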
More information about the Python-Dev mailing list