sys.float_info.epsilon

Tim Rowe digitig at gmail.com
Wed Feb 4 16:44:49 EST 2009


2009/2/4 Mark Dickinson <dickinsm at gmail.com>:

> There are many positive floating-point values smaller than
> sys.float_info.epsilon.
>
> sys.float_info.epsilon is defined as the difference between 1.0 and
> the next largest representable floating-point number.  On your system,
> the next largest float is almost certainly 1 + 2**-52, so
> sys.float_info.epsilon will be exactly 2**-52, which is around
> 2.2e-16.  This number is a good guide to the relative error
> from rounding that you can expect from a basic floating-point
> operation.
>
> The smallest positive floating-point number is *much* smaller:
> again, unless you have a very unusual platform, it's going to
> be 2**-1074, or around 4.9e-324.  In between 2**-1074 and
> 2**-52 there are approximately 4.4 million million million
> different floating-point numbers.  Take your pick!

Ok, that makes a lot of sense, thanks. I was thinking in terms of
Ada's floating point delta (rather than epsilon), which, as I remember
it, applies uniformly across the whole floating-point range if
specified, not just at a particular point on the scale.
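
Writing it out at the prompt helped it sink in for me. Assuming the
usual IEEE 754 doubles (so the exact figures below may be printed
differently, or differ altogether, on an unusual platform):

>>> import sys
>>> sys.float_info.epsilon          # gap between 1.0 and the next float up
2.220446049250313e-16
>>> sys.float_info.min              # smallest *normal* positive float
2.2250738585072014e-308
>>> 2**-1074                        # smallest positive float of all (subnormal)
5e-324
>>> 2**-1074 > 0.0
True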

That just leaves me puzzled as to why Mark Summerfield used it instead
of a check against zero on user input. There's a later division by
2*x, so small values of x matter. There's no protection against
overflow: the numerator could perfectly well be somewhere up near
sys.float_info.max, and a quick check in the shell tells me that
Python 3 will happily evaluate sys.float_info.max / (2 *
sys.float_info.epsilon) and give me the answer "inf". So presumably
he's trying to protect against division by zero. My next question,
then, is whether there is any x that can be returned by float() such
that x != 0 but some_number / (2 * x) raises a ZeroDivisionError.
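
For the record, here is the shell check I mean, plus a quick probe at
the smallest positive float (a single data point, not an answer, and
again assuming IEEE 754 doubles on CPython):

>>> import sys
>>> sys.float_info.max / (2 * sys.float_info.epsilon)   # no exception, just inf
inf
>>> x = float("5e-324")     # smallest positive value float() can hand back
>>> x != 0
True
>>> 2 * x                   # doubling doesn't take it back to zero
1e-323
>>> 1.0 / (2 * x)           # divisor still nonzero; the quotient just overflows to inf
inf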


-- 
Tim Rowe


