Turn off ZeroDivisionError?

Mark Dickinson dickinsm at gmail.com
Mon Feb 11 02:12:56 CET 2008

On Feb 10, 5:50 pm, Ben Finney <bignose+hates-s... at benfinney.id.au> wrote:
> Most people would not want this behaviour either::
>     >>> 0.1
>     0.10000000000000001

Sure.  And if it weren't for backwards-compatibility and speed issues,
one could reasonably propose making Decimal the default floating-point
type in Python (whilst still giving access to the hardware binary
floating-point type).  I dare say that the backwards-compatibility
isn't really a problem:  I can imagine a migration strategy resulting
in Decimal default floats in Python 4.0  ;-).  But there are
orders-of-magnitude differences in speed that aren't going to be
solved by merely rewriting decimal.py in C.
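To make the speed point concrete, here is a hypothetical micro-benchmark (names and numbers are illustrative, not from the post); exact figures vary by machine, and on modern CPython the C `_decimal` module narrows the gap considerably compared with the pure-Python decimal.py being discussed here:

```python
import timeit

# Time 100,000 multiplications with hardware binary floats...
f = timeit.timeit("x * y", setup="x, y = 0.1, 0.3", number=100_000)

# ...and the same work with decimal.Decimal operands.
d = timeit.timeit(
    "x * y",
    setup="from decimal import Decimal; x, y = Decimal('0.1'), Decimal('0.3')",
    number=100_000,
)

print(f"float:   {f:.4f}s")
print(f"Decimal: {d:.4f}s")
```

On most machines the Decimal timing comes out noticeably larger, though how much larger depends on the interpreter and whether the C implementation of decimal is available.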

I guess it's all about tradeoffs.
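For reference, the behaviour the thread's subject line asks about can be sketched like this: CPython raises ZeroDivisionError for float division by zero instead of returning the IEEE 754 result (infinity or NaN) that the underlying hardware would produce:

```python
# Division of a nonzero float by 0.0 raises rather than returning inf.
try:
    result = 1.0 / 0.0
except ZeroDivisionError as exc:
    print("raised:", exc)

# Infinities and NaNs are still perfectly valid float values; they
# just cannot be produced by the / operator applied to zero.
inf = float("inf")
print(inf, -inf, inf * 0.0)   # inf -inf nan
```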

> But the justification for this violation of surprise is "Python just
> does whatever the underlying hardware does with floating-point
> numbers". If that's the rule, it shouldn't be broken in the special
> case of division by zero.

I'm not convinced that this is really the justification, but I'm not
quite sure what we're talking about here.  The justification for
*printing* 0.1000...1 instead of 0.1 has to do with not hiding binary
floating-point strangeness from users, since they're eventually going
to have to deal with it anyway, and hiding it causes worse
difficulties in understanding.  The justification for having the
literal 0.1 not *be* exactly the number 0.1:  well, what are the
alternatives?  Decimal and Rational are very slow in comparison with
float, and Decimal wasn't even available until recently.
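A short sketch of the point about the literal 0.1: converting the float to Decimal exposes the exact binary64 value actually stored, while `Decimal('0.1')` and `Fraction(1, 10)` represent one tenth exactly:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(float) shows the exact value the hardware stores for 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# These two really are one tenth, at the cost of slower arithmetic:
print(Decimal("0.1") == Fraction(1, 10))   # True

# The float literal is not equal to the exact rational 1/10:
print(0.1 == Fraction(1, 10))              # False
```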

