On Mar 6, 2014, at 19:53, "Mark H. Harris" <harrismh777@gmail.com> wrote:

On Thursday, March 6, 2014 8:53:35 PM UTC-6, Andrew Barnert wrote:

The only solution without changing Python is to train end-users to write something correct, like Decimal('.1').

       hi Andrew, yes, that is the problem. And, to be fair, that example is not really the worst,
because it might be expected that the user should know how to construct Decimals, and
could be educated to construct them properly --> Decimal('0.1'). I concur.
       It is worse in these two scenarios (and others too):

No, that's not the same problem but worse, it's a completely different problem.

In the first case, the user is trying to specify 0.1 as a decimal number, which is exactly representable, just not the way he's entered it.

In these cases, the numbers are irrational, and therefore inherently impossible to represent exactly. It doesn't matter whether you use decimal floats or binary floats.

       No one would expect that the user should know to quote the number---when is that ever done?

What does quoting have to do with anything? Do you not understand why Decimal('.1') works? It's not because Python is a weakly-typed language that allows you to use strings as numbers, but because Decimal has a constructor that takes strings, for a specific and well-documented reason. Quoting here would just give you a TypeError. And nothing in your proposal(s) would change that. 
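To make the distinction concrete, here is a short sketch (current CPython behavior) of why the string form works and why quoting is not a general mechanism:

```python
from decimal import Decimal
import math

# Decimal('.1') works because Decimal's constructor accepts strings
# and parses them as exact decimal values:
print(Decimal('.1'))   # 0.1

# Passing the float 0.1 instead hands Decimal the float's exact
# binary value, which is not 1/10:
print(Decimal(.1))     # 0.1000000000000000055511151231257827021181583404541015625

# Quoting is not magic: functions that expect real numbers reject
# strings outright.
try:
    math.sqrt('.1')
except TypeError as e:
    print('TypeError:', e)
```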

       QED, this is broken.  Again, we know perfectly well why it's happening (I am not ignorant), but it's not right.

The only way to fix this second problem is to use a symbolic representation instead of a numeric one.

The good news is that Python's design makes that pretty easy to add on--as SymPy demonstrates. Your proposal would actually make this kind of add-on much harder.

The obvious solution for changing Python is to make it easier to create Decimal numbers correctly and/or harder to create them incorrectly. 
       Yes, and at a bare minimum, that is the immediate change I have been asking for, for now; nothing more.

And people have agreed with that, and proposed feasible extensions to it. Presenting it as the first step toward some radical and ill-formed transformation of the whole language weakens the case for this suggestion. Implying that it would solve problems (like handling irrational numbers) that it obviously can't also weakens the case.

       I answered Guido's questions in hopes that he might be willing to dialogue --- not dismiss.

He was willing to dialogue, as evidenced by his initial replies. It was only after you demonstrated your ignorance of the fundamentals of Python (as a user, not even about its implementation) and math/numerics, and implied that you had no interest in correcting that ignorance, that he dismissed you. And he has every right to do so. He's the one donating his free time to make a great language for you (and many others) to use, not the other way around.

You proposed that Python should handle numbers in an OO way, with numbers being real objects, instances of classes, with a hierarchy including abstract base classes; all of this is already there in Python.

       Yes, it is... but it's not designed to use decimal floating point by default... in order to do that, the entire OO
setup will have to be changed (what Guido called a sweeping reform).

Nonsense. Python already has classes for both binary and decimal floats. They both fit into the hierarchy properly. They both interact with other types the way they should. Changing which one you get from the literal "0.1" would be a simple change to the parser, and have no effect whatsoever to the OO setup.
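A quick check (against current CPython) shows both types already sitting in the numeric hierarchy and interoperating with int:

```python
from decimal import Decimal
import numbers

# Both float and Decimal already live in the numbers hierarchy:
print(isinstance(0.1, numbers.Number))             # True
print(isinstance(Decimal('0.1'), numbers.Number))  # True

# And both interact with int the way numbers should:
print(0.1 + 1)                    # 1.1
print(repr(Decimal('0.1') + 1))   # Decimal('1.1')
```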

You went off on a long digression about how you could implement this using the details of C++-style inheritance, when Python has a completely different (and more powerful) solution to inheritance that has already been used to solve this problem.
        No, I did not.  I answered Guido's questions regarding context as clearly as I could. If Python has a more powerful
way to handle this situation, I will gladly use it!  I will be happy as a clam to beta test or help with the coding.

It's already been done, years ago, so nobody has to do it. There's already an OO system that works, with abstract and concrete classes, and with a rich operator overloading system. And it's already been used to give you all of the abstract and concrete numeric types you want, and they do the things you asked for in this section, all without having to do any jumps through virtual pointer tables.

You proposed some complicated AI-based solution to solve the problem of using separate number classes in a single expression, even though Python (almost exactly like C++, in this case) has already solved that problem with operator overloading.

        No, I did not.  I suggested that unifying numbers in an (AI) way could solve this problem (conceptually) by 
regarding all numbers as PythonNumbers.

So you didn't suggest a complicated AI-based solution, you suggested a complicated AI-based solution?

Decimals should not only be default, they should be integrated, not
tacked on with duct tape.

How does that have anything to do with the first half of this paragraph?

And in what way are Decimals "tacked on with duct tape"? They're registered instances of Number (deliberately not of Real, so they don't silently mix with binary floats). They act like numbers of other types, including interacting properly with other types like int. What is missing from the Decimal type and the ABCs in the numbers module that makes you think we need a radical change?

If all you're suggesting is moving Decimal from the decimal module to builtins and/or adding parser support for decimal literals, those are not sweeping changes, they're both very simple changes. (That doesn't necessarily mean they're _desirable_ changes, but that's another argument--an argument nobody can actually begin until you make it clear whether or not that's what you're suggesting, which can't happen until you learn enough about using Python to know what you're suggesting.)

(And note that Python is flexible enough that third-party libraries can easily insert new types like quaternions, matrices, symbolic expressions, etc. into the hierarchy in a way that's transparent to end users. I can multiply a NumPy matrix of float64 values by the builtin int 2 just by writing "m * 2", and it works exactly the way you'd want it to. It's hard to imagine that would be even feasible with an AI-based solution, but with the current design, that's the easiest part of NumPy.)
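The mechanism behind that is ordinary operator overloading. Here is a toy sketch of how any third-party type hooks into it (TinyMatrix is illustrative, not NumPy's API):

```python
# Toy sketch: a container type that interoperates with builtin ints
# purely through the standard operator hooks.
class TinyMatrix:
    def __init__(self, rows):
        self.rows = rows

    def __mul__(self, scalar):        # handles m * 2
        return TinyMatrix([[x * scalar for x in row] for row in self.rows])

    __rmul__ = __mul__                # handles 2 * m as well

m = TinyMatrix([[1.0, 2.0], [3.0, 4.0]])
print((m * 2).rows)   # [[2.0, 4.0], [6.0, 8.0]]
print((2 * m).rows)   # [[2.0, 4.0], [6.0, 8.0]]
```

When int's own __mul__ sees an unknown type it returns NotImplemented, and Python falls back to the other operand's __rmul__; no AI, and no change to the language, is needed.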

       That's nice for you.  Because sqrt(.23709) does not behave as I expect, sadly, I have to train my users to enter sqrt('0.23709').

How do you expect it to behave? Again, using decimal floats here would make no difference. Instead of needing an infinite number of binary digits, you need an infinite number of decimal digits. Dividing infinity by log2(10) still leaves infinity. If you're trying to fix that, you're not trying to fix Python, you're trying to fix irrational numbers. This list cannot help you with that. Prayer is the only option, but medieval mathematicians tried that and didn't get very far, and eventually we had to accept that there are numbers that cannot be represented finitely.
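You can see this directly with the decimal module: raising the context precision just buys more digits of an approximation, never the exact root.

```python
from decimal import Decimal, getcontext

# sqrt(0.23709) is irrational, so no finite precision -- binary or
# decimal -- can represent it exactly.
getcontext().prec = 28
r28 = Decimal('0.23709').sqrt()

getcontext().prec = 50
r50 = Decimal('0.23709').sqrt()

print(r28)   # 0.4869... (28 significant digits)
print(r50)   # 0.4869... (50 significant digits; the expansion never ends)
```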

There are some ideas in your posts that are worth responding to,

        Thank you.  If a user takes the time and trouble to present an idea clearly, I would expect the responders
to respect the effort and respond to the points that make sense.

      Andrew, I respect you for taking the time to dialogue, I appreciate it.  Thanks.