Python2 distutils question: how to best "autoconfig"...?

Alex Martelli aleaxit at yahoo.com
Mon Dec 11 11:46:52 EST 2000


An issue came up in the GMPY project that looks like it
would need to be solved through some kind of 'automatic
configuration' -- compile a small auxiliary C program
(with exactly the same flags/options/whatever that Python
itself was compiled with on this installation), run it,
check the results, and determine compilation flags
accordingly.

The specific issue is "how many bits of significance
does a Python float have".  (Or is there somewhere in
the Python headers a #define for this that I missed...?)
I first tried a rather naive algorithm (repeatedly halve
a number that starts at 1.0, adding it to 1.0 each time,
until the result equals 1.0), which, on Visual C++,
gives the expected result -- 53 bits of precision.
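
In Python itself the probe looks something like the
following -- a minimal sketch, not the exact code I used.
Since every intermediate result in Python is stored back
into a 64-bit double object, this version reports 53 on
any IEEE-754 machine:

    # count how many halvings of eps still change 1.0 + eps
    def naive_float_bits():
        one, eps, bits = 1.0, 1.0, 0
        while one + eps != one:
            eps = eps / 2.0
            bits = bits + 1
        return bits

    print naive_float_bits()    # 53 for IEEE-754 doubles

The same loop written in C is exactly where gcc's 80-bit
temporaries bite, as described next.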

Meanwhile, people using gcc (on Intel hardware) were
measuring '64 bits' of precision by this same algorithm --
bits which aren't really there for _most_ float
computations... apparently gcc sometimes keeps
intermediate results in the 80-bit "temporary" (extended
precision) format, which does have 64 bits of precision...
but as soon as a value is stored back to a 64-bit double
in memory, it's down to 53 bits, of course.

This matters (most particularly) when I'm trying to
get the 'heuristically best' rational number to
match a given floating-point number.  If I know the
real precision of the float is 53 bits or thereabouts,
I can get very nice results through a Stern-Brocot
tree (as suggested by Pearu Peterson): when I turn
float(i)/j into a rational (gmpy.mpq), the resulting
rational number equals the 'exact fraction' i/j, for
all 'small' numbers i and j.  But if I run under the
misleading impression that the float has 64 bits of
precision, I end up with a huge, 'unreadable' rational
number.
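
To make the dependence on the assumed precision concrete,
here's a rough Python sketch of the Stern-Brocot walk; the
real gmpy code is different, and this version makes no
attempt at efficiency for large x (it assumes a positive,
finite x and IEEE-754 doubles):

    import math

    def float_to_ratio(x, bits=53):
        # represent x *exactly* as num/den (the 53 here is
        # the IEEE double mantissa width, used only to pull
        # the mantissa out as an integer)
        m, e = math.frexp(x)            # x == m * 2**e
        num, den = long(m * 2L ** 53), 2L ** 53
        if e > 0:
            num = num << e
        else:
            den = den << -e
        ln, ld = 0L, 1L                 # left endpoint:  0/1
        rn, rd = 1L, 0L                 # right endpoint: "1/0"
        while 1:
            mn, md = ln + rn, ld + rd   # mediant of endpoints
            # accept the first mediant within x * 2**-bits
            # of x, tested in exact integer arithmetic
            if abs(mn * den - md * num) * 2L ** bits <= num * md:
                return mn, md
            if mn * den < md * num:     # mediant < x: go right
                ln, ld = mn, md
            else:                       # mediant > x: go left
                rn, rd = mn, md

With bits=53, float_to_ratio(float(3)/7) comes straight
back as 3/7; with bits=64 the acceptance test is stricter
than the float's actual accuracy, so the walk sails right
past 3/7 and returns one of those huge 'unreadable'
fractions.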

Keith Briggs has very kindly supplied a simple C
implementation of Linnainmaa's algorithm to determine
a float's precision -- it *does* seem to work, but
he suspects it might depend on gcc's optimization
settings and other flags, _unless_ one inserts
appropriate inline assembler code to defeat gcc's
"helpful" (!) attempts to use 80 bits when it can.

Some "automatic-configuration" idea would appear to
be the best way out of this mess -- but now, I'm not
sure of how (if at all) the distutils are meant to
allow this.  Any guidance or suggestions would be
appreciated... thanks!
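
For what it's worth, here is the kind of thing I have in
mind, in terms of the compiler machinery distutils already
exposes -- an untested sketch, assuming that new_compiler
plus customize_compiler really do pick up the same compiler
and flags Python itself was built with (the 'volatile' in
the probe is an attempt, not a guarantee, to defeat 80-bit
register temporaries):

    import os
    from distutils.ccompiler import new_compiler
    from distutils.sysconfig import customize_compiler

    # tiny C probe: 'volatile' forces stores to memory, which
    # *may* keep gcc from counting 80-bit register precision
    PROBE_SOURCE = r"""
    #include <stdio.h>
    int main(void)
    {
        volatile double one = 1.0, eps = 1.0;
        int bits = 0;
        while (one + eps != one) { eps /= 2.0; bits++; }
        printf("%d\n", bits);
        return 0;
    }
    """

    def probe_float_bits(build_dir='conftest'):
        # compile and run the probe with Python's own CC/CFLAGS
        if not os.path.isdir(build_dir):
            os.mkdir(build_dir)
        src = os.path.join(build_dir, 'probe.c')
        f = open(src, 'w')
        f.write(PROBE_SOURCE)
        f.close()
        cc = new_compiler()
        customize_compiler(cc)   # Python's compiler and flags
        objects = cc.compile([src])
        cc.link_executable(objects, 'probe', output_dir=build_dir)
        out = os.popen(os.path.join(build_dir, 'probe')).read()
        return int(out)

setup.py could then run something like this before building
and pass the answer to the extension as a define_macros
entry -- but whether such a step is supposed to live in a
custom build_ext subclass, or somewhere else entirely, is
exactly what I can't figure out.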


Alex





