[Python-ideas] Python Numbers as Human Concept Decimal System

Andrew Barnert abarnert at yahoo.com
Wed Mar 5 09:10:18 CET 2014


My previous reply got google-grouped, so let me try again.


From: Mark H. Harris <harrismh777 at gmail.com>
Sent: Tuesday, March 4, 2014 7:42 PM


>   The Python language might be changed to adopt the python number
>concept for all math processing, unless explicitly modified. This goes
>somewhat beyond using decimal floating point as a default numerical
>type.  It means using human numeric expressions that meet human
>expectation for numeric processing by default.


Different humans have different expectations in different situations. Trying to pick a single type that can handle all such expectations is an impossible dream. 

>   Under the covers (whatever we mean by that) processing is handled
>by decimal.Decimal, unless explicitly modified. What does this mean for
>python users in general?  Well, no more worrying about types at all... no ints,
>no floats, no rationals, no irrationals, no fractions, and certainly no binaries.
>In short, if it's a human number, it's a python number.


This is not really a meaningful concept. The types you dislike are not programming concepts that get in the way of human mathematics, they are human mathematical concepts. The integers, the rationals, and the reals all behave differently. And they're not the only types of numbers—Python handles complex numbers natively, and it's very easy to extend it with, say, quaternions or even arbitrary matrices that act like numbers.
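For instance, complex numbers already work out of the box, no import required (a quick Python 3 interpreter session):

>>> (1 + 2j) * (1 - 2j)    # a complex number times its conjugate
(5+0j)
>>> abs(3 + 4j)            # magnitude
5.0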

The types that _are_ programming concepts—decimal and binary floats—are necessary, because you simply can't store real numbers in finite storage. And the very fact that they are inexact approximations means you can't ignore the types. For some uses, IEEE binary floats are best; for others, decimal floats are best; for others, fractions are best; for others, you even want to handle symbolic numbers like pi/2 exactly.
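A minimal illustration of why no one type covers all of these (Python 3; the reprs shown are standard CPython output):

>>> from decimal import Decimal
>>> from fractions import Fraction
>>> 0.1 + 0.2                          # binary float: 0.1 is not exactly representable
0.30000000000000004
>>> Decimal('0.1') + Decimal('0.2')    # decimal float: exact for decimal literals
Decimal('0.3')
>>> Fraction(1, 10) + Fraction(2, 10)  # rational: exact for any ratio of integers
Fraction(3, 10)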

>   I am expecting that (just as in Rexx numbers) defining very clearly what
>is a python number will be key for wide adoption of the concept. But there
>should be no surprises for users, particularly average users. Anyone with
>a middle school expectation of a numeric format should be able to use
>python numbers without surprises.

Anyone with a middle school expectation will expect 1/3 to be a fraction—or, at least, something they can multiply by 3 to get exactly 1. Using an inexact decimal float instead of an inexact binary float is no improvement at all. Sure, it's an improvement in some _other_ cases, like 0.123, but if you want to deal with 1/3, today you can do so explicitly by using, say, Fraction(1, 3), while in your world it will no longer be possible.
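Concretely, with Decimal's default 28-digit context:

>>> from decimal import Decimal
>>> Decimal(1) / Decimal(3) * 3        # decimal float: still inexact
Decimal('0.9999999999999999999999999999')
>>> from fractions import Fraction
>>> Fraction(1, 3) * 3                 # fraction: exactly 1
Fraction(1, 1)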

And if you take the obvious way around that, you run into the same problem with 2 ** 0.5. Normal humans who write that would expect to be able to square it and get back 2, not an approximation to 2.
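And the same goes for a decimal float, again with the default 28-digit context:

>>> (2 ** 0.5) ** 2                    # binary float
2.0000000000000004
>>> from decimal import Decimal
>>> Decimal(2).sqrt() ** 2             # decimal float: closer, but still not 2
Decimal('1.999999999999999999999999999')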

>However, for advanced users the full
>interface should be available (as is the context for Decimal) through coding
>based on knowledge and experience, yet the default context for Decimal 
>should be based on average users in most environments and use cases.


Is there a problem with the current default context?
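As far as I can tell, it's already both sensible and adjustable. For example:

>>> import decimal
>>> decimal.getcontext().prec          # default precision, in significant digits
28
>>> decimal.getcontext().prec = 50     # advanced users can change it per thread
>>> decimal.Decimal(1) / decimal.Decimal(3)
Decimal('0.33333333333333333333333333333333333333333333333333')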

I think just using Decimal as it is in Python today gets you everything you're asking for. Sure, you might want a nicer way to type them and repr them, which goes back to the previous thread, but why do you need to get rid of all of the other types as well?
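In other words, today's spelling is clunky but the semantics are already there (a short alias is an easy, if partial, workaround):

>>> from decimal import Decimal as D   # alias to ease the typing problem
>>> D('0.1') + D('0.2')
Decimal('0.3')
>>> D('1.30') + D('1.20')              # trailing zeros are preserved, per the spec
Decimal('2.50')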


