On Tue, Mar 04, 2014 at 07:42:28PM -0800, Mark H. Harris wrote:
The idea of *python number* means that there are no types, no limits, no constraints, and that all *python numbers* are dynamic. The upper level syntax is also very simple; all *python numbers* are simply human.
What makes this a "python number"? In what way are they "dynamic"?
My influence for this preference is rooted in the Rexx programming language, created by Mike Cowlishaw, a former IBM Fellow. The Rexx programming language is dynamic and has no types. To put it more succinctly for those of you who have not used Rexx: the only data type is a string of characters (that's it). *Rexx numbers* are simply those strings of characters that may be interpreted as a valid *Rexx number*.
I haven't used Rexx, but I have used Hypertalk, which worked the same way. If you don't care about performance, it can work quite well.
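For anyone who hasn't seen that model, here is a toy sketch of Rexx/Hypertalk-style "everything is a string" arithmetic. This is my own illustration in Python, not actual Rexx semantics; the helper name `rexx_add` is made up for the example:

```python
# Toy illustration of the Rexx/Hypertalk model: every value is a
# character string, and a string counts as "a number" only if it
# parses as one at the moment arithmetic is attempted.
from decimal import Decimal, InvalidOperation

def rexx_add(a: str, b: str) -> str:
    try:
        result = Decimal(a.strip()) + Decimal(b.strip())
    except InvalidOperation:
        raise ValueError(f"not numeric: {a!r}, {b!r}")
    # The result is immediately a string again.
    return str(result)

print(rexx_add(" 1.5 ", "2.25"))   # -> 3.75

try:
    rexx_add("abc", "1")
except ValueError as err:
    print(err)                     # -> not numeric: 'abc', '1'
```

Every operation pays the cost of parsing and re-serializing, which is part of why this model is pleasant but slow.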
The Python language might be changed to adopt the *python number* concept for *all math processing*, unless explicitly modified.
Well, there's a bit of a problem here. Numbers in Python are not just used for maths processing. They're also used for indexing into lists, as keys in dicts, for bitwise operations, for compatibility with external libraries that have to interface with other languages, as flags, etc. For some of these purposes, we *really do* want to distinguish between ints and floats that happen to have the same value:

    mylist[3]    # allowed
    mylist[3.0]  # not allowed

Now, you might argue that this distinction is unnecessary, but it runs quite deep in Python. You'd need to change that philosophy for this idea to work.
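That distinction is easy to check directly; a minimal sketch using nothing but built-ins:

```python
mylist = ["a", "b", "c", "d", "e"]

# An int index works:
print(mylist[3])          # -> d

# A float with the same value is rejected at the type level,
# even though 3 == 3.0 is True:
try:
    mylist[3.0]
except TypeError as err:
    print("TypeError:", err)
```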
This goes somewhat beyond using decimal floating point as a default numerical type. It means using human numeric expressions that meet human expectation for numeric processing by default.
I don't understand what that means, unless it means that you want Python to somehow, magically, make all the unintuitive issues with floating point disappear. Good luck with that one. If you want that, Decimal is not the answer. It would have to be a Rational type, like Fraction, although even that doesn't support surds. Fractions have their own problems too. Compare the status quo:

    py> 3**0.5
    1.7320508075688772

with a hypothetical version that treats all numbers as exact fractions:

    py> 3**0.5
    Fraction(3900231685776981, 2251799813685248)

Which do you think the average person using Python as a calculator would prefer to see?

And another issue: before he invented Python, Guido spent a lot of time working with ABC, which used Fractions as the native number type. The experience soured him on the idea for nearly two decades. Although Guido has softened his stance enough to allow the fractions module into the standard library, I doubt he would allow Fractions to become the underlying implementation of numbers in Python. The problem is that fractions can be unbounded in memory, and some simple operations become extremely inefficient. For example, without converting to float, which is bigger?

    Fraction(296, 701)
    Fraction(355, 594)

For many purposes, the fact that floats (whether binary or decimal) have finite precision and hence introduce rounding error is actually a good thing.
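For what it's worth, that comparison can be answered without floats by cross-multiplying, which is effectively what Fraction's own ordering does; a quick sketch:

```python
from fractions import Fraction

a = Fraction(296, 701)
b = Fraction(355, 594)

# With positive denominators, a < b iff 296*594 < 355*701.
print(296 * 594)   # -> 175824
print(355 * 701)   # -> 248855
print(a < b)       # -> True
```

Cheap here, but those cross-products grow with the size of the numerators and denominators, which is exactly the efficiency problem described above.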
Compare:

    py> 1e-300 + 1e300
    1e+300

versus fractions:

    py> from fractions import Fraction as F
    py> F(10)**-300 + F(10)**300
    Fraction(100000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    0000000000000000000000000000000001, 100000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000000000
    0000000000000000000000000000000000000000000000000)

There are *very few* applications where such precision is needed or wanted. The performance hit in calculating such excessively precise numbers when the user doesn't need it will be painful.

The same applies to Decimal, although to a lesser extent, since Decimals do have a finite precision. Unlike fractions, they cannot grow without limit:

    py> Decimal("1e-300") + Decimal("1e300")
    Decimal('1.000000000000000000000000000E+300')

-- 
Steven
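P.S. The difference in growth is easy to verify interactively (standard library only; this assumes the default Decimal context of 28 significant digits):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal: the result's size is capped by the context precision.
d = Decimal("1e-300") + Decimal("1e300")
print(d)   # -> 1.000000000000000000000000000E+300

# Fraction: the exact sum is (10**600 + 1) / 10**300, and every
# digit is kept -- a 601-digit numerator over a 301-digit denominator.
f = Fraction(10)**-300 + Fraction(10)**300
print(len(str(f.numerator)), len(str(f.denominator)))   # -> 601 301
```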