PROPOSAL: exposure of values in limits.h and float.h

I apologize if I am hitting covered ground. What about a module (called limits or something like that) that would expose some appropriate #define's in limits.h and float.h. For example:

    limits.FLT_EPSILON could expose the C DBL_EPSILON
    limits.FLT_MAX could expose the C DBL_MAX
    limits.INT_MAX could expose the C LONG_MAX (although that particular name would cause confusion with the actual C INT_MAX)

- Does this kind of thing already exist somewhere? Maybe in NumPy.

- If we ever (perhaps in Py3K) turn the basic types into classes then these could turn into constant attributes of those classes, i.e.:

      f = 3.14159
      f.EPSILON = <as set by C's DBL_EPSILON>

- I thought of these values being useful when I thought of comparing two floats for equality. Doing a straight comparison of floats is dangerous/wrong, but is it not okay to consider two floats reasonably equal iff:

      -EPSILON < float2 - float1 < EPSILON

  Or maybe that should be two or three EPSILONs. It has been a while since I've done any numerical analysis stuff. I suppose the answer to my question is: "It depends on the situation." Could this algorithm for float comparison be a better default than the status quo? I know that Mark H. and others have suggested that Python should maybe not provide a float comparison operator at all to beginners.

Trent

--
Trent Mick
trentm@activestate.com
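A rough sketch of how such a module might look, for concreteness (hypothetical throughout: the module and attribute names are the proposed spellings, and the hard-coded literals assume IEEE-754 doubles):

    # limits.py -- hypothetical sketch of the proposed module, not an existing one
    import sys

    # In C these come from <float.h> and <limits.h>; the literals here are
    # the usual IEEE-754 double-precision values.
    FLT_EPSILON = 2.220446049250313e-16     # C DBL_EPSILON
    FLT_MAX = 1.7976931348623157e+308       # C DBL_MAX
    INT_MAX = sys.maxint                    # C LONG_MAX; sys.maxsize in later Pythons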

[Trent Mick]
I apologize if I am hitting covered ground. What about a module (called limits or something like that) that would expose some appropriate #define's in limits.h and float.h.
I personally have little use for these.
For example:
limits.FLT_EPSILON could expose the C DBL_EPSILON
limits.FLT_MAX could expose the C DBL_MAX
Hmm -- all evidence suggests that your "O" and "A" keys work fine, so where did the absurdly abbreviated FLT come from <wink>?
limits.INT_MAX could expose the C LONG_MAX (although that particular name would cause confusion with the actual C INT_MAX)
That one is available as sys.maxint.
- Does this kind of thing already exist somewhere? Maybe in NumPy.
Dunno. I compute the floating-point limits when needed with Python code, and observing what the hardware actually does is a heck of a lot more trustworthy than platform C header files (and especially when cross-compiling).
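A minimal sketch of that sort of probing, assuming binary floating point: halve a candidate epsilon until adding it to 1.0 no longer changes the sum.

    # Observe the machine epsilon directly rather than trusting <float.h>.
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps = eps / 2.0
    print(eps)   # 2.220446049250313e-16 for IEEE-754 doubles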
- If we ever (perhaps in Py3K) turn the basic types into classes then these could turn into constant attributes of those classes, i.e.:

      f = 3.14159
      f.EPSILON = <as set by C's DBL_EPSILON>
That sounds better.
- I thought of these values being useful when I thought of comparing two floats for equality. Doing a straight comparison of floats is dangerous/wrong
This is a myth whose only claim to veracity is the frequency and intensity with which it's mechanically repeated <0.6 wink>. It's no more dangerous than adding two floats: you're potentially screwed if you don't know what you're doing in either case, but you're in no trouble at all if you do.
but is it not okay to consider two floats reasonably equal iff: -EPSILON < float2 - float1 < EPSILON
Knuth (Vol 2) gives a reasonable defn of approximate float equality. Yours is measuring absolute error, which is almost never reasonable; relative error is the measure of interest, but then 0.0 is an especially irksome comparand.
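A sketch of a relative-error test along those lines; the tolerance values are arbitrary illustrations, and the abs_tol floor is one way to cope with comparands at or near 0.0:

    def reasonably_equal(x, y, rel_tol=1e-9, abs_tol=0.0):
        # Error measured relative to the larger magnitude, with an optional
        # absolute floor so comparisons against 0.0 can still succeed.
        return abs(x - y) <= max(rel_tol * max(abs(x), abs(y)), abs_tol)

    print(reasonably_equal(1.0, 1. / 49 * 49))   # True
    print(reasonably_equal(0.0, 1e-12))          # False unless abs_tol > 0

Python did eventually grow math.isclose() with essentially this shape.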
... I suppose the answer to my question is: "It depends on the situation."
Yes.
Could this algorithm for float comparison be a better default than the status quo?
No.
I know that Mark H. and others have suggested that Python should maybe not provide a float comparison operator at all to beginners.
There's a good case to be made for not exposing *anything* about fp to beginners, but comparisons aren't especially surprising. This usually gets suggested when a newbie is surprised that e.g. 1./49*49 != 1. Telling them they *are* equal is simply a lie, and they'll pay for that false comfort twice over a little bit later down the fp road. For example, int(1./49*49) is 0 on IEEE-754 platforms, which is awfully surprising for an expression that "equals" 1(!). The next suggestion is then to fudge int() too, and so on and so on. It's like the arcade Whack-A-Mole game: each mole you knock into its hole pops up two more where you weren't looking. Before you know it, not even a bona fide expert can guess what code will actually do anymore.

the-754-committee-probably-did-the-best-job-of-fixing-binary-fp-that-can-be-done-ly y'rs

- tim
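For concreteness, the surprise being described, on an IEEE-754 platform:

    x = 1. / 49 * 49
    print(x == 1.0)   # False: x comes out a hair under 1.0
    print(int(x))     # 0, since int() truncates toward zero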