[Python-ideas] Way to check for floating point "closeness"?
ron3200 at gmail.com
Fri Jan 16 20:26:45 CET 2015
On 01/16/2015 09:13 AM, Neil Girdhar wrote:
> Actually, I was wrong about the exponential distribution's KL divergence.
> It's the relative error (b-a)/b plus another term: log(b/a) — so I guess I
> don't see what relative error means except as a heuristic.
> Anyway, even if your symmetric error makes sense to you, does anyone
> already use it? If it were up to me, relative error would be (b-a)/b +
> log(b/a), but since no one uses that, I think it's a bad idea.
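As a quick numeric comparison of the two measures being discussed, here is a small sketch (the function names are mine, and it assumes a, b > 0 so the log term is defined). Note that for a small discrepancy ε, the log term is itself approximately ε, so the quoted proposal is roughly twice the plain relative error:

```python
import math

def rel_err(a, b):
    # Plain relative error, measured against b (assumes b != 0).
    return (b - a) / b

def kl_style_err(a, b):
    # The quoted proposal: relative error plus a log(b/a) term
    # (assumes a > 0 and b > 0).
    return (b - a) / b + math.log(b / a)

# For b = 101, a = 100 the discrepancy is about 1%:
print(rel_err(100.0, 101.0))       # ~0.0099
print(kl_style_err(100.0, 101.0))  # ~0.0199, roughly double
```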
In this case, we are only confirming that a single value is within some
already-chosen tolerance of an expected value.
In most cases that is enough to confirm the coding of the algorithm is
correct, whereas the more stringent methods you are thinking of are used to
confirm that the behaviour of the algorithm is valid.
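A minimal sketch of the kind of simple tolerance check meant here, using a symmetric relative test (the gap is compared against the larger magnitude, so the result is the same regardless of argument order). The signature is illustrative; this matches the formula later adopted for math.isclose in PEP 485:

```python
def isclose(a, b, rel_tol=1e-9, abs_tol=0.0):
    # Symmetric closeness test: isclose(a, b) == isclose(b, a).
    # abs_tol handles comparisons near zero, where a relative
    # tolerance alone would almost never be satisfied.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

assert isclose(1.0, 1.0 + 1e-10)
assert not isclose(1.0, 1.1)
assert isclose(0.0, 1e-12, abs_tol=1e-9)
```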