Python -- floating point arithmetic

Zooko O'Whielacronx zooko at zooko.com
Wed Jul 7 23:41:29 EDT 2010


I'm starting to think that one should use Decimals by default and
reserve floats for special cases.

This is somewhat analogous to the way that Python provides
arbitrarily-big integers by default and Python programmers only use
old-fashioned fixed-size integers for special cases, such as
interoperation with external systems or highly optimized pieces (in
numpy or in native extension modules, for example).
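A small sketch of that analogy (the `struct` packing scenario is my illustration, not from the post): Python ints grow past machine word size transparently, and a fixed size only matters at the boundary with an external system.

```python
import struct

# Ordinary Python ints are arbitrarily big -- no overflow, no special type.
big = 2 ** 100
print(big.bit_length())  # 101 bits, far beyond a 64-bit word

# Interop is where fixed sizes reappear: struct can only pack
# word-sized integers, e.g. for a binary wire format.
small = 42
packed = struct.pack("<q", small)          # 64-bit little-endian
assert struct.unpack("<q", packed)[0] == small
```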

Floats are confusing. I've studied them more than once over the years,
but I still can't confidently predict what floats will do in various
situations.
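A classic example of that unpredictability: decimal literals are stored as binary fractions, so the arithmetic doesn't match pencil-and-paper decimal.

```python
# 0.1 has no exact binary representation, so rounding error
# creeps in before any "real" computation happens.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Summing 0.1 ten times doesn't give exactly 1.0 either.
print(sum([0.1] * 10) == 1.0)  # False
```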

And most of the time (in my experience) the inputs and outputs to your
system and the literals in your code are actually decimal, so
converting them to float immediately introduces a lossy data
conversion before you've even done any computation. Decimal doesn't
have that problem.
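To make the contrast concrete: constructing a Decimal from the string "1.1" keeps the decimal value exactly, whereas the float literal 1.1 has already been rounded to the nearest binary fraction before any computation.

```python
from decimal import Decimal

d = Decimal("1.1")   # exact decimal value
f = 1.1              # nearest binary fraction to 1.1

print(d + d + d)     # Decimal('3.3') -- exact
print(f + f + f)     # 3.3000000000000003

# Converting the float to Decimal reveals the value it actually stores:
print(Decimal(f))    # 1.100000000000000088817841970012523233890533447265625
```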

From now on I'll probably try to use Decimals exclusively in all my
new Python code and switch to floats only if I need to interoperate
with an external system that requires floats or I have some tight
inner loop that needs to be highly optimized.

Regards,

Zooko
