Python fails on math
steve+comp.lang.python at pearwood.info
Fri Feb 25 01:33:15 CET 2011
On Thu, 24 Feb 2011 10:40:45 -0600, Robert Kern wrote:
> On 2/24/11 5:55 AM, Steven D'Aprano wrote:
>> On Wed, 23 Feb 2011 13:26:05 -0800, John Nagle wrote:
>>> The IEEE 754 compliant FPU on most machines today, though, has an
>>> 80-bit internal representation. If you do a sequence of operations
>>> that retain all the intermediate results in the FPU registers, you get
>>> 16 more bits of precision than if you store after each operation.
>> That's a big if though. Which languages support such a thing? C doubles
>> are 64 bit, same as Python.
> C double *variables* are, but as John suggests, C compilers are allowed
> (to my knowledge) to keep intermediate results of an expression in the
> larger-precision FPU registers. The final result does get shoved back
> into a 64-bit double when it is at last assigned back to a variable or
> passed to a function that takes a double.
True, but note that:
(1) you can't rely on it, because it's only "allowed" and not mandatory;
(2) you may or may not have any control over whether or not it happens;
(3) it only works for calculations that are simple enough to fit in a
single expression; and
(4) we could say the same thing about Python -- there's no prohibition on
Python using extended precision when computing intermediate results, so
it too could be said to be "allowed".
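For what it's worth, a quick check with the standard library (a sketch; the exact numbers assume a platform where C doubles are IEEE 754 binary64, which is every mainstream platform today) confirms that Python floats are ordinary 64-bit doubles:

```python
import struct
import sys

# CPython floats wrap a C double: IEEE 754 binary64 on common platforms.
print(sys.float_info.mant_dig)  # 53 significand bits, not the 64 of x87 extended
print(struct.calcsize('d'))     # a C double packs into 8 bytes (64 bits)
```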
It seems rather unfair to me to single out Python as somehow lacking
(compared to which other languages?) and to gloss over the difficulty
hiding in "If you do a sequence of operations that retain all the
intermediate results..." Yes, *if* you do so, you get more precision,
but *how* do you do so? Such a thing is language- or even
implementation-dependent, and the implication that it just happens
automatically, without any effort, seems dubious to me.
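A small experiment makes the point (a sketch; the result assumes 64-bit IEEE 754 doubles with the default round-to-nearest mode, as on common platforms). Since each Python operation rounds its result back to 64 bits, a sum that an 80-bit register could have carried exactly is lost:

```python
# At 1e16 the spacing between adjacent doubles is 2.0, so the +1.0 is
# lost when the intermediate sum is rounded back to a 64-bit double.
# Keeping the sum in an 80-bit x87 register would have preserved it.
x = (1e16 + 1.0) - 1e16
print(x)  # 0.0 -- the 1.0 vanished in the 64-bit rounding
```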
But I could be wrong, of course. It may be that Python, alone of all
modern high-level languages, fails to take advantage of 80-bit registers
in FPUs *wink*