Why isn't Python king of the hill?

E. Mark Ping emarkp at CSUA.Berkeley.EDU
Sat Jun 2 17:28:45 EDT 2001


In article <slrn9hi7mr.uv.grante at tuxtop.visi.com>,
Grant Edwards <grante at visi.com> wrote:
>On Sat, 2 Jun 2001 16:49:58 +0000 (UTC), E. Mark Ping
><emarkp at CSUA.Berkeley.EDU> wrote:

>>That's silly and just plain wrong.  You should depend on
>>understanding how FP works.  And in this case, 1.0 + 2.0 == 3.0
>>every time on every platform that I've run into--I don't know that
>>because I just happened to try it out, but rather I know how the FP
>>works on those platforms and know that it is correct.
>
>So do I.  But every time I've seen an application _depend_ on exact
>representations, it's caused problems.  Perhaps the FP code you've
>seen was better, but in my experience code that depends on exact
>representation has caused problems.

Ah, but your example was exact representation of *integers*, not just
any value.
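
To make the distinction concrete, here's a minimal Python sketch; the
printed results assume IEEE 754 doubles, which is what every platform
I mentioned uses:

  # Integers with magnitude up to 2**53 are exactly representable
  # as IEEE 754 doubles, so this comparison holds every time:
  print(1.0 + 2.0 == 3.0)    # True

  # 0.1 has no finite binary expansion, so rounding creeps in:
  print(0.1 + 0.2 == 0.3)    # False
  print(repr(0.1 + 0.2))     # '0.30000000000000004'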

>>You should definitely read Pete Becker's articles in the June, July
>>and October 2000 issues of The C/C++ Users Journal.  He explains why
>your statement is insufficient and incorrect.  For instance, two
>>very large numbers might really be the same, but be off by a bit.
>
>Huh?  They're the same, but they're not the same?

Sorry, I didn't state that very clearly.  I meant that two values might
have been arrived at by different computation paths and be different
only because of limitations of FP.  That is, had they not been limited
by finite precision, they would be the same value.
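
Here's a tiny Python illustration of what I mean (the exact values
assume IEEE 754 doubles): mathematically the two sums below are the
same number, but the intermediate rounding depends on the order of
association:

  a = (0.1 + 0.2) + 0.3    # rounds to 0.6000000000000001
  b = 0.1 + (0.2 + 0.3)    # rounds to 0.6
  print(a == b)            # False
  print(a - b)             # about 1.1e-16, one ULP at this magnitude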

>>Checking the difference of the values will yield another large
>>number, and a method like "IsCloseTo" will incorrectly fail.
>
>How can it fail?  Either A is within X% of B or it isn't.  What's the
>problem?

But your algorithm didn't specify a percentage; rather, you said:

  fabs((1.0 + 2.0) - 3.0) < [a small number] 

If it were a percentage, you'd be adding a division operation too.
That's getting to be a lot of overhead per comparison.
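
To make the trade-off concrete, here's a small Python sketch.  The
names is_close_abs and is_close_rel are ones I've made up for
illustration, and the sample values assume IEEE 754 doubles:

  def is_close_abs(a, b, eps=1e-9):
      # Absolute tolerance: the difference of two large, nearly
      # equal doubles is itself a large number, so this fails.
      return abs(a - b) < eps

  def is_close_rel(a, b, tol=1e-12):
      # Relative tolerance: the division can be replaced by a
      # multiply, as here.  Note that nothing nonzero counts as
      # "close to" 0.0 under a purely relative test.
      return abs(a - b) <= tol * max(abs(a), abs(b))

  x = 1.0e20
  y = x + 1.0e5                # a few ULPs away from x at this magnitude
  print(is_close_abs(x, y))    # False: |x - y| is about 1e5
  print(is_close_rel(x, y))    # True:  |x - y| / |x| is about 1e-15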

The point is to understand FP well enough to use it correctly in your
particular code.  Sometimes that means relying on exact representation,
sometimes not.

>>Really, floating point arithmetic has well-defined semantics;
>>guessing and treating them as if they have random components is the
>>lazy and error-prone way to use them.
>
>I'm not guessing.  I know how IEEE floating point works and have
>implemented various parts of it over the years.  The applications
>programmers I've dealt with would have been far, far better off if
>they never depended on exact representation.

The reason I said "guessing" is that you claimed:

>>>Of course it is not true.  But if you pretend it is, you've got a
>>>much better chance of producing working code.

Your code should never merely have a "chance" of working.  It should
be correct, and you should know why.  Again, I realize that just about
all of us have knowingly taken chances in the past, but that's not the
same as advocating it.

-- 
Mark Ping                     
emarkp at soda.CSUA.Berkeley.EDU 


