# bug in modulus?

Tim Peters tim.peters at gmail.com
Tue May 2 19:30:14 CEST 2006

```
[Andrew Koenig, on the counterintuitive -1e-050 % 2.0 == 2.0 example]
>> I disagree.  For any two floating-point numbers a and b, with b != 0, it
>> is always possible to represent the exact value of a mod b as a
>> floating-point number--at least on every floating-point system I have ever
>> encountered. The implementation is not even that difficult.

[also Andrew]
> Oops... This statement is true for the Fortran definition of modulus (result
> has the sign of the dividend) but not the Python definition (result has the
> sign of the divisor).  In the Python world, it's true only when the dividend
> and divisor have the same sign.

Note that you can get the sign-of-the-dividend behavior in Python too,
by using math.fmod(a, b).

IMO, it was a mistake (and partly my fault, because I didn't whine
early) for Python to try to define % the same way for ints and floats.
The hardware realities are too different, and so are the pragmatics.
For floats, it's actually most useful most often to have both that
a % b is exact and that 0.0 <= abs(a % b) <= abs(b)/2.  Then the sign
of a % b bears no relationship to the signs of a and b, but for
purposes of modular reduction it yields the result with the smallest
possible absolute value.  That's often valuable for floats (e.g.,
think of argument reduction feeding into a series expansion, where
time to convergence typically depends on the magnitude of the input
and couldn't care less about the input's sign), but rarely useful for
ints.

I'd like to see this change in Python 3000.  Note that IBM's proposed
standard for decimal arithmetic (which Python's "decimal" module
implements) requires two operations here: one that works like
math.fmod(a, b) (exact, result with the sign of a), and the other as
described above (exact, with |a % b| <= |b|/2).  Those are really the
only sane definitions for a floating-point modulo/remainder.

```
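The counterintuitive example under discussion is easy to reproduce: Python's `%` gives the result the sign of the divisor, while `math.fmod` returns the exact remainder with the sign of the dividend.

```python
import math

# Python's % makes the result take the sign of the divisor.  The exact
# remainder here would be -1e-50 + 2.0, which rounds to exactly 2.0 --
# so a % b compares equal to the divisor itself.
print((-1e-50) % 2.0)          # 2.0

# math.fmod is exact and follows the sign of the dividend (the Fortran
# / C convention), so the tiny negative value survives unchanged.
print(math.fmod(-1e-50, 2.0))  # -1e-50
```

This is why `a % b` for floats can surprise: the adjustment that forces the divisor's sign destroys exactness near zero.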
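The second operation Tim describes (exact, with the smallest possible absolute value, sign unrelated to the operands) later did land in the standard library: `math.remainder`, added in Python 3.7, implements the IEEE 754 remainder.

```python
import math

# IEEE 754 remainder: r = a - n*b where n is a/b rounded to the nearest
# integer (ties to even).  The result is exact and |r| <= |b|/2.
print(math.remainder(5.0, 2.0))    # 1.0   (5 = 2*2 + 1)
print(math.remainder(7.0, 2.0))    # -1.0  (7 = 4*2 - 1; -1 is nearer zero)
print(math.remainder(-1e-50, 2.0)) # -1e-50, exact
```

Note that the result can be negative even when both arguments are positive — exactly the "sign bears no relationship to the operands" property the post argues is what argument reduction wants.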
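Python's `decimal` module, following the IBM proposal mentioned above, already exposes both operations: `%` on `Decimal` values follows the sign of the dividend (like `math.fmod`), and `Decimal.remainder_near` returns the remainder with the smallest absolute value.

```python
from decimal import Decimal

a, b = Decimal(7), Decimal(2)

# Decimal's % follows the spec's "remainder": sign of the dividend.
print(a % b)                 # Decimal('1')
print((-a) % b)              # Decimal('-1')

# remainder-near: smallest absolute value; 7 = 4*2 - 1, so the result
# is -1 rather than +1.
print(a.remainder_near(b))   # Decimal('-1')
```

So for decimal floats, Python already has the "two sane definitions"; the post is arguing that binary floats deserve the same pair.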