[Python-ideas] Floating point "closeness" Proposal Outline

Steven D'Aprano steve at pearwood.info
Tue Jan 20 11:40:12 CET 2015

On Mon, Jan 19, 2015 at 08:10:35PM -0800, Neil Girdhar wrote:

> If you decide to invent a relative error function, 

The error functions we have been talking about are hardly "invented". 
They're mathematically simple and obvious, and can be found in just 
about any undergraduate book on numerical computing:

- The absolute error between two quantities a and b is the 
  absolute difference between them, abs(a-b). 

- The relative error is the absolute difference scaled by some 
  denominator d, abs(a-b)/abs(d), typically with d=a or d=b. 

If you happen to know that b is the correct value, then it is common to 
choose b as the denominator. If you have no a priori reason to think 
either a or b is correct, or if you prefer a symmetrical function, a 
common choice is to use d = min(abs(a), abs(b)).
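A minimal sketch of these two definitions in Python (the function 
names are mine, for illustration only, not a proposed API):

```python
def absolute_error(a, b):
    """Absolute difference between two quantities."""
    return abs(a - b)

def relative_error(a, b):
    """Symmetric relative error: the absolute difference divided by
    the smaller magnitude (one common choice for the denominator d).
    Undefined when either argument is zero."""
    return abs(a - b) / min(abs(a), abs(b))

print(absolute_error(1.0, 1.1))   # 0.1 (approximately)
print(relative_error(100.0, 101.0))  # 0.01
```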

See, for example:


> my suggestion is: 
> (a-b)/b + log(b/a), which is nonnegative, zero only at equality, and 
> otherwise penalizes positive a for being different than some target 
> positive b.  To me, it seems like guessing b using 1.9b is better than 
> guessing it as 0.1b, and so on.  This corresponds to exponential KL 
> divergence, which has a clear statistical meaning, but only applies to 
> positive numbers.
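For what it's worth, the quoted function's claimed properties can be 
checked numerically. This sketch is mine, not code from the thread; 
writing x = a/b, the expression reduces to x - 1 - log(x), which is 
nonnegative for x > 0 and zero only at x = 1:

```python
import math

def suggested_divergence(a, b):
    """Neil's suggested penalty for a positive guess a against a
    positive target b: (a-b)/b + log(b/a)."""
    return (a - b) / b + math.log(b / a)

# Zero at equality:
print(suggested_divergence(1.0, 1.0))   # 0.0
# Overshooting by 90% is penalized less than undershooting by 90%:
print(suggested_divergence(1.9, 1.0))   # ~0.258
print(suggested_divergence(0.1, 1.0))   # ~1.403
```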

Do you have a reference or derivation for this? I'm happy to admit that 
I'm no Knuth or Kahan, but I've read a bit of numerical computing[1] and 
I've never seen anyone add a log term. I'm not even sure why you would 
do so.

[1] I know just enough to know how much I don't know.
