
Now that the check-in is done, I don't think it needs to be reverted. But, in general, we should probably abstain from making wholesale revisions that add zero value for the users.
The stylistic change from "ValueError, 'foo'" to "ValueError('foo')" is fine.
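That is, presumably, in raise statements:

    raise ValueError, 'foo'      # old two-argument Python 2 syntax
    raise ValueError('foo')      # equivalent call form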
Changing MockThreading to subclass from object, though, borders on being a semantic change and should be done with care. In this particular case, I see no harm in it, but then I haven't tested it on a Py2.3 build with threading disabled.
As promised in the decimal.py header, the spec updates should all be considered as bugs and backported at some point after they are fully tested and we're happy with them all around. Also, as promised, the module should continue to run on Py2.3.
For the most part, many of the new operations can be implemented in terms of the existing ops or in terms of the support functions that we already use internally. Ideally, you can refactor common code while leaving almost all of the existing algorithm implementation code untouched.
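For instance (a sketch only, untested, using the public as_tuple() just for illustration), one of the new trivial copy operations could ride entirely on the existing representation:

    from decimal import Decimal

    def copy_negate(d):
        # flip the sign bit of the (sign, digits, exponent) triple that
        # Decimal already exposes; no algorithm code needs to change
        sign, digits, exponent = d.as_tuple()
        return Decimal((1 - sign, digits, exponent))

    copy_negate(Decimal("1.5"))    # Decimal("-1.5")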
The spec's choice of new method names is unfortunate. You'll have to come up with something better than copy() and class().
FWIW, I think the new decimal development should probably be done in a branch off of the current head. That way, you can check-in at will and get feedback from everyone without risking the integrity of the head.
If you want to discuss anything during development, I'm usually available on AOL instant messaging with the screen name: raymondewt
Likewise, consider soliciting Tim's input on how to implement the ln() operation. That one will be tricky to get done efficiently and correctly.
Raymond

Raymond Hettinger wrote:
As promised in the decimal.py header, the spec updates should all be considered as bugs and backported at some point after they are fully tested and we're happy with them all around. Also, as promised, the module should continue to run on Py2.3.
OK. So far, I'm dealing just with this: decimal.py passes, for example, the old quantize.decTest, but not the new one.
My first step in this journey is to get the new test cases to pass.
For the most part, many of the new operations can be implemented in terms of the existing ops or in terms of the support functions that we already use internally. Ideally, you can refactor common code while leaving almost all of the existing algorithm implementation code untouched.
Yes. Some of the existing code will be touched, but mostly for bug fixing.
The spec's choice of new method names is unfortunate. You'll have to come up with something better than copy() and class().
The names, like the new functions themselves, will be discussed here in the second step. For example, I'm not absolutely sure that something like...
Decimal("1100").xor(Decimal("0110")
Decimal("1010")
...is actually needed.
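If it turns out to be needed, a digit-wise version is at least cheap to prototype (a rough sketch; it skips the spec's validation that both operands consist only of 0 and 1 digits):

    from decimal import Decimal

    def logical_xor(a, b):
        # xor corresponding digits of the two coefficients, padding the
        # shorter one with leading zeros first
        da, db = str(int(a)), str(int(b))
        width = max(len(da), len(db))
        da, db = da.zfill(width), db.zfill(width)
        bits = [str(int(x != y)) for x, y in zip(da, db)]
        return Decimal(''.join(bits))

    logical_xor(Decimal("1100"), Decimal("0110"))   # Decimal("1010")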
FWIW, I think the new decimal development should probably be done in a branch off of the current head. That way, you can check-in at will and get feedback from everyone without risking the integrity of the head.
This is a very good idea.
If you want to discuss anything during development, I'm usually available on AOL instant messaging with the screen name: raymondewt
Likewise, consider soliciting Tim's input on how to implement the ln() operation. That one will be tricky to get done efficiently and correctly.
Great, thank you!!

[Raymond Hettinger]
... Likewise, consider soliciting Tim's input on how to implement the ln() operation. That one will be tricky to get done efficiently and correctly.
One low-effort approach is to use a general root-finding algorithm and build ln(x) on top of exp() via (numerically) solving the equation exp(ln(x)) == x for ln(x). That appears to be what Don Peterson did in his implementation of transcendental functions for Decimal:
http://cheeseshop.python.org/pypi/decimalfuncs/1.4
In a bit of disguised form, that appears to be what Brian Beck and Christopher Hesse also did:
http://cheeseshop.python.org/pypi/dmath/0.9
The former is GPL-licensed and the latter MIT, so the latter would be easier to start from for core (re)distribution.
However, the IBM spec requires < 1 ULP worst-case error, and that may be unreasonably hard to meet with a root-finding approach. If this can wait a couple months, I'd be happy to own it. A possible saving grace for ln() is that while the mathematical function is one-to-one, in any fixed precision it's necessarily many-to-one (e.g., log10 of the representable numbers between 10 and 1e100 must be a representable number between 1 and 100, and there are a lot more of the former than of the latter -- many distinct representable numbers must map to the same representable log).
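For concreteness, the crude fixed-precision version of the root-finding idea looks something like this (a sketch only: Decimal.exp() stands in for whatever exponential the module grows, the float seed supplies the first ~16 digits, and no attempt is made at the < 1 ULP bound):

    import math
    from decimal import Decimal, localcontext

    def ln_rootfind(x):
        # x: a positive Decimal.  Solve exp(y) == x for y by Newton:
        #   y  <-  y - (exp(y) - x)/exp(y)  =  y + x/exp(y) - 1
        # each step roughly doubles the number of correct digits
        with localcontext() as ctx:
            ctx.prec += 10                            # guard digits
            y = Decimal(repr(math.log(float(x))))     # ~16-digit seed
            steps = max(1, int(math.ceil(math.log(ctx.prec / 16.0, 2))) + 1)
            for _ in range(steps):
                y = y + x / y.exp() - 1
        return +y                # rounds back to the caller's precision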

On 4/13/07, Tim Peters <tim.peters@gmail.com> wrote:
One low-effort approach is to use a general root-finding algorithm and build ln(x) on top of exp() via (numerically) solving the equation exp(ln(x)) == x for ln(x). That appears to be what Don Peterson did in his implementation of transcendental functions for Decimal:
http://cheeseshop.python.org/pypi/decimalfuncs/1.4
In a bit of disguised form, that appears to be what Brian Beck and Christopher Hesse also did:
http://cheeseshop.python.org/pypi/dmath/0.9
Whatever they do, they are very slow :-) I get the following timings with decimalfuncs (dmath is even slower):
exp(Decimal(1)): 0.02 seconds at 28 digits, 0.2 seconds at 100 digits
ln(Decimal(2)): 0.1 seconds at 28 digits, 0.6 seconds at 100 digits
A while back I implemented exp and ln using fixed-point integer arithmetic, which unsurprisingly turned out to be much faster than working with Decimals. If I convert a decimal input to fixed-point, calculate exp or log, and convert the output back to decimal, I get the following timings:
exp(1): 0.0008 seconds at 28 digits, 0.001 seconds at 100 digits
ln(2): 0.001 seconds at 28 digits, 0.007 seconds at 100 digits
[Note: I didn't use the exact numbers 1 and 2 for these timings; I added noise digits to make sure the decimal conversion code wasn't taking any shortcuts.]
I use the Taylor series for exp(x) but first divide x by 2^32 to improve the convergence rate and finally square the sum 32 times. (32 could be replaced by any number; 32 just happens to be fast for a large range of precisions on my system.) Looking more closely at the 28-digit exp calculation:
0.0005 s is spent converting the Decimal to fixed-point
0.0001 s is spent summing the Taylor series (integer arithmetic)
0.0002 s is spent converting the fixed-point number back to Decimal
For comparison, multiplying two 28-digit decimals takes 0.0003 seconds. So, computing exp this way is only about twice as expensive as multiplication at 28-digit precision; at 100 digit precision, I find that they are equally expensive.
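The core of that scheme, stripped of guard digits and tuning, is roughly (a simplified sketch for nonnegative inputs with magnitude near 1):

    def exp_fixed(x, prec):
        # x is a nonnegative fixed-point integer scaled by 10**prec;
        # a serious version carries ~10 extra guard digits, since the
        # 32 squarings below multiply the rounding error by about 2**32
        scale = 10 ** prec
        r = 32
        x //= 2 ** r                 # reduce: exp(x) = exp(x/2**32)**(2**32)
        s = scale                    # partial sum, starts at 1.0
        term = scale                 # current Taylor term x**k/k!, k = 0
        k = 1
        while term:
            term = term * x // (scale * k)   # term *= x/k, in fixed point
            s += term
            k += 1
        for _ in range(r):           # undo the reduction by repeated squaring
            s = s * s // scale
        return s

    prec = 28
    exp_fixed(10 ** prec, prec)      # ~ e * 10**28, last digits inexact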
This approach assumes that the input has magnitude close to 1. For very large or very small numbers, you have to use the functional properties of exp to get a number close to 1, then convert back, all probably best (at least most easily) done entirely using Decimal arithmetic.
For ln, I use Newton's method to invert exp, with an initial estimate from math.log, doubling the working precision at each iteration so that only one full-precision evaluation of exp is necessary. Using fixed-point internally during the Newton iteration, ln should be at most twice as slow as exp.
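In sketch form, with the precision ladder made explicit (again Decimal.exp() is only a stand-in, and unlike the crude fixed-precision loop sketched earlier, this raises the working precision step by step):

    import math
    from decimal import Decimal, localcontext

    def ln_doubling(x, prec):
        # build the ladder of working precisions, halving down until the
        # float seed's ~16 digits are enough to start Newton
        ladder = []
        p = prec + 10                    # guard digits at the top
        while p > 16:
            ladder.append(p)
            p = p // 2 + 1
        with localcontext() as ctx:
            y = Decimal(repr(math.log(float(x))))
            for p in reversed(ladder):   # climb: ~16 -> ... -> prec + 10
                ctx.prec = p
                y = y + x / y.exp() - 1  # Newton step; doubles correct digits
            ctx.prec = prec
            return +y    # only the final exp ran near full precision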
Of course, this could all be irrelevant if the decimal module ever gets replaced by a C implementation...
Fredrik

Tim Peters wrote:
If this can wait a couple months, I'd be happy to own it. A possible saving grace for ln() is that while the mathematical function is one-to-one, in any fixed precision it's necessarily many-to-one.
I'm working right now on making the old operations pass the new tests.
Soon I'll cut a branch to work publicly on this (good idea from Raymond), and I'll be pleased to get your help here.
Two months is OK: the "updated decimal" won't be finished before then.
Thank you!