Jewett, Jim J
jim.jewett at eds.com
Tue Mar 30 15:26:39 EST 2004
> In fact, you've just given a stronger reason for keeping "1.1".
> Currently, compiling a .py file containing "1.1" produces a .pyc file
> containing "1.1000000000000001". .pyc files are supposed to be
> platform-independent. If these files are then run on a platform with
> different floating-point precision, the .py and the .pyc will produce
> different results.
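[A minimal sketch of the behavior being quoted, runnable on current Python: the literal 1.1 has no exact binary-double representation. Asking for 17 significant digits exposes the stored value, which is the long string older Pythons wrote into the .pyc; feeding that string back in recovers the identical double.]

```python
# 1.1 cannot be represented exactly as a binary double; the nearest
# double is slightly above 1.1.  Seventeen significant digits are
# enough to identify any double uniquely, which is where the long
# form in the .pyc comes from.
stored = format(1.1, '.17g')
print(stored)                 # -> 1.1000000000000001

# Round-tripping the long form recovers the exact same double,
# so a second system reproduces the first system's value exactly.
print(float(stored) == 1.1)   # -> True
```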
In a previous job, every system upgrade meant a C compiler upgrade. We
would recompile everything and rerun a week of production data as a
regression test. We would get different results. Then I had to find
each difference to let the customer decide whether it was large enough
to really matter. (It never was.)
I would have been very grateful if I could have flipped a switch to say
"Do the math like the old version, even if it was buggy. Do it just this
once, so that I can show the customer that any changes are intentional!"
Running a .pyc created on system1 should produce the same results you
got on system1, even if system2 could do a better job. Printing a
dozen zeros followed by a 1 tells system2 just how precise the
calculations should be.
Yes, Decimal would be better for this, but historically it wasn't there.
For that matter, Decimal might be a better default representation for
1.1, if a language were starting fresh. It still wouldn't be perfect,
though. How many digits should 1.1/3 print?
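[The closing question can be made concrete with the decimal module, which landed later in Python 2.4; a minimal sketch. Decimal turns "how many digits?" into an explicit choice: the context precision decides how many significant digits a division keeps.]

```python
from decimal import Decimal, getcontext

# The context precision answers "how many digits should 1.1/3 print?"
getcontext().prec = 28             # the module's default precision
print(Decimal('1.1') / 3)          # 28 significant digits: 0.3666...6667

getcontext().prec = 6
print(Decimal('1.1') / 3)          # -> 0.366667
```

There is still no single "right" number of digits, but Decimal makes the choice visible and adjustable instead of baked into the float type.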