On Thursday, March 6, 2014 8:53:35 PM UTC-6, Andrew Barnert wrote:

The only solution without changing Python is to train end-users to write something correct, like Decimal('.1').

       hi Andrew,  yes, that is the problem.  And, to be fair, that example is not really the worst case,
because the user might be expected to know how to construct Decimals, and could be
educated to construct them properly: Decimal('0.1').  I concur.
       It is worse in these two scenarios (and others too):
            sqrt(.1)
            sin(.1)
       No one would expect the user to know to quote the number---when is that ever done?
       QED, this is broken.  Again, we know perfectly well why it's happening (I am not ignorant), but it's not right.
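To make the complaint concrete, a minimal sketch of what actually happens: the literal .1 is converted to a binary float at parse time, before Decimal or math.sqrt ever sees it, so the rounding error is already baked in.

```python
from decimal import Decimal
import math

# Constructed from the float literal .1, Decimal faithfully records the
# binary approximation that was parsed -- not the decimal the user typed:
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal('0.1'))  # 0.1 -- the string bypasses binary parsing

# sqrt likewise receives the binary approximation of .1:
print(math.sqrt(0.1))
```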

The obvious solution for changing Python is to make it easier to create Decimal numbers correctly and/or harder to create them incorrectly. For example, a decimal suffix, as already proposed before this thread, would completely solve the problem:

    >>> a = 1d
    >>> b = .1d
    >>> a+b
    1.1d
  
       Yes, and at a bare minimum, that is the immediate change I have been asking for, for now; nothing more.
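Pending such a literal, the closest spelling available today is a short helper; the name D below is purely illustrative, not an existing Python API.

```python
from decimal import Decimal

# Hypothetical stand-in for the proposed 'd' suffix: a one-letter
# constructor, so users write D('0.1') instead of quoting bare Decimals.
def D(s):
    return Decimal(s)

a = D('1')
b = D('0.1')
print(a + b)  # 1.1
```

This still forces the string quoting the thread objects to; only a real literal (or a decimal-by-default mode) removes it.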

       I answered Guido's questions in hopes that he might be willing to dialogue --- not dismiss.
 
You proposed that Python should handle numbers in an OO way, with numbers being real objects, instances of classes, with a hierarchy including abstract base classes; all of this is already there in Python.

       Yes, it is... but it's not designed to use decimal floating point by default... in order to do that, the entire OO
setup would have to be changed (what Guido called a sweeping reform).  It's as if, to use decimal floating
point in the 21st century using Python, we have to duct-tape modules on and then educate users in the
correct input of numbers.  Seems convoluted to me.
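For reference, the hierarchy Andrew mentions is the abstract numeric tower in the numbers module, and Decimal is already registered in it, though deliberately off to the side:

```python
import numbers
from decimal import Decimal

# Decimal is registered as a numbers.Number, but not as numbers.Real,
# because it does not mix freely with binary floats in arithmetic:
print(isinstance(Decimal('0.1'), numbers.Number))  # True
print(isinstance(Decimal('0.1'), numbers.Real))    # False
```

That "beside the tower, not in it" placement is one reason Decimal feels tacked on rather than integrated.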

You went off on a long digression about how you could implement this using the details of C++-style inheritance, when Python has a completely different (and more powerful) solution to inheritance that has already been used to solve this problem.
  
        No, I did not.  I answered Guido's questions regarding context as clearly as I could.  If Python has a more powerful
way to handle this situation, gladly do it!  I will be happy as a clam to beta-test or help with the coding.
 

You proposed some complicated AI-based solution to solve the problem of using separate number classes in a single expression, even though Python (almost exactly like C++, in this case) has already solved that problem with operator overloading.

        No, I did not.  I suggested that unifying numbers in an (AI) way could solve this problem (conceptually) by
regarding all numbers as PythonNumbers.  Decimals should not only be the default; they should be integrated, not
tacked on with duct tape.
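The operator-overloading machinery Andrew refers to can be seen directly with the real Decimal type: Python first tries the left operand's __add__, and when that returns NotImplemented, falls back to the right operand's __radd__.

```python
from decimal import Decimal

# int.__add__ does not know about Decimal, so Python dispatches to
# Decimal.__radd__, and the mixed expression resolves to a Decimal:
result = 2 + Decimal('0.1')
print(result)        # 2.1
print(type(result))  # <class 'decimal.Decimal'>
```

This reflected-method protocol is also how a third-party numeric type can slot itself into arithmetic with the builtins, with no central coordination.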

(And note that Python is flexible enough that third-party libraries can easily insert new types like quaternions, matrices, symbolic expressions, etc. into the hierarchy in a way that's transparent to end users. I can multiply a NumPy matrix of float64 values by the builtin int 2 just by writing "m * 2", and it works exactly the way you'd want it to. It's hard to imagine that would be even feasible with an AI-based solution, but with the current design, that's the easiest part of NumPy.)

       That's nice for you.  Because  sqrt(.23709)  does not behave as I expect, sadly, I have to train my users to enter  sqrt('0.23709'). 
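For completeness: Decimal does ship its own square root, but the training burden remains, because the argument must still be built from a string or the binary artifact rides along into the result.

```python
from decimal import Decimal

# sqrt() on Decimal works at the current context precision, but only a
# string-constructed input carries the exact decimal the user typed:
good = Decimal('0.23709').sqrt()
bad = Decimal(0.23709).sqrt()
print(good)
print(good == bad)  # False -- the inputs already differed
```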

There are some ideas in your posts that are worth responding to,

        Thank you.  If a user takes the time and trouble to present an idea clearly, I would expect the responders
to respect the effort and respond to the points that make sense.


      Andrew, I respect you for taking the time to dialogue, I appreciate it.  Thanks.

marcus