On Wednesday, March 5, 2014 11:56:02 PM UTC-6, Guido van Rossum wrote:
Do you actually have a degree in math, or do you just remember your high school algebra?

  hi Guido,  ouch.  It's a story; glad you asked.  My first trip to college, 1974-1977
(UMKC), was to study EE, with a minor in mathematics.  I completed my math course work
(Calc 1-5, Diff Eq, Linear Algebra, Theory of Stats and Prob, &c) ... all (A).  Not
that it matters, because in 1977 IBM made me an offer in their CE department at
Kansas City... so I joined IBM for the next 25 years and did not complete my EE
degree... but I did complete my mathematics training.  Of course, prior to UMKC
I attended a high school that offered calculus, math analysis, trig... and intro to
linear algebra. So by the time I joined IBM I had gotten the math twice.  So, yeah, I know
what I'm doing. Today I am an amateur mathematician, computer scientist, and
computer hobbyist. While at IBM I was a staff software engineer in Tampa, Chicago,
Atlanta, and at the lab in Rochester, MN (where I left IBM in 2002, and where
I still dwell). Education is a lifelong commitment, and I continue to study math, comp sci, music,
and philosophy. I just completed my course work for the MDiv degree at Bethel Seminary.

Back in the day, I added the scientific and transcendental math functions to the Rexx
library for the internal VM370 systems, because Rexx (like Python) also had no decimal
floating point math package. So, yeah, it's one of the things I know how to do, and it's one
of those things that most people never think about; but I've noticed that they appreciate
having it once the work is done.  My pdeclib package on PyPI is in its infancy, probably
has bugs (though nobody has complained yet), and pdeclib will have to mature there for some time.
But that really has nothing to do with whether we continue to use IEEE 754-1985 floats and doubles,
whether we discuss default decimal floating point arithmetic, or whether we discuss a unified
Python number system (sometime in the distant future) that would allow common, average,
ordinary people to leverage mathematics in computer science without having to understand the
underlying mechanics of the implementation, including but not limited to types.
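
For instance, here is a rough sketch of the kind of transcendental work I mean, using only the
standard-library decimal module (not pdeclib itself); the precision comes from the decimal
context rather than from a hardware double:

    from decimal import Decimal, getcontext

    getcontext().prec = 50              # work with 50 significant digits

    x = Decimal(2)
    print(x.sqrt())                     # square root of 2, to 50 digits
    print(x.ln())                       # natural log of 2
    print(Decimal(1).exp())             # e, to 50 digits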

We might agree to stick with the discussion, if you're willing, and stay away from ad hominem
attacks and credential bashing. A person either knows what they are doing, or they don't. And if
they are willing to contribute, why knock it, or them?
 
The numbers in math usually are quite strictly typed: whole theories only apply to integers or even positive integers, other things to all reals but not to complex numbers.

Everyone keeps making the points noted above!  Yes, I know... I concur.
    Guido, data types in most high-level computer languages
are also very strictly statically bound. So what?  You know this better
than anyone, because you have taken so much abuse from trolls about it over the
years. We all know that the best way to handle name binding is dynamically. And that
was a paradigm shift, was it not?  Go take a look at Rexx. It has NO types. None.
Not at the surface, anyway. Everything to the user of the language is a string. This is
even true for most of Object Rexx. Numbers are just strings of characters that
are parsed and interpreted by Rexx as valid Rexx numbers. It works. There is no
real point in arguing that it's a bad idea, because it served the Rexx community for
many years... even now, Rexx is not dead. Yes, under the covers (not magic) a complex
number is going to be handled differently than a real number.  Yeah, what is your point?
They are handled differently on my TI-89 too... so what?  I change the context on the 89
and now I'm doing complex number math (not often, I might add). If I set the context for
reals, then I'm doing real math... it really is not that difficult.  Think about this for just
a minute. I have used complex math exactly three times in my life.  I used it in high school
to study the concept in math analysis. I used it in my EE classes... electrical engineers
get a lot out of complex numbers.  And I used it when I was interested in the Mandelbrot
set many years ago, when it was cool to plot the famous fractal on early CGA screens.
When was the last time you used complex numbers?  When was the last time a bank, or
a hospital, or Montgomery Ward used complex numbers?  If someone needs complex
numbers, Python could change the context "dynamically" and move through the problem set.
Why should the user need to "manually" change the context if (AI) could change it for them
on the fly?  Just try to get a feel for the question, and stop trying to beat me up over
my credentials.
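
Python's decimal module already shows what I mean by changing the context dynamically. This is
just a small sketch with the standard library, nothing pdeclib-specific: the precision and
rounding are switched for one block of work and then fall back to the defaults.

    from decimal import Decimal, localcontext, ROUND_HALF_EVEN

    with localcontext() as ctx:
        ctx.prec = 80                        # temporary 80-digit context
        ctx.rounding = ROUND_HALF_EVEN
        print(Decimal(355) / Decimal(113))   # 80 digits of the old pi approximation

    # outside the with-block the default 28-digit context is back in force
    print(Decimal(355) / Decimal(113))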
 
(And what about quaternions? Or various infinities? :-)

What about quaternions?  If you extend the complex number system, you change the context. That's not hard...
You would not expect the context for "reals" processing to be the same as the context for
processing three-dimensional space with pairs of complex numbers, would you?  Python
does complex number processing now. But that is a very specialized category of use
case that requires a special context and (under the covers) typing relevant to complex number
pairs. The context can change "dynamically" to suit the problem set; that's all I'm saying.
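
And to be clear about what Python already gives us today, here is the built-in complex support
plus cmath; nothing beyond the standard library is assumed:

    import cmath

    z = 3 + 4j
    print(abs(z))                     # modulus: 5.0
    print(z.conjugate())              # (3-4j)
    print(cmath.exp(1j * cmath.pi))   # Euler's identity, approximately -1+0j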

I am not saying in all of my dreaming here that typing is not important (down under). Python is
called an object-based language, yes?  Not object-oriented, right? But we all know that under
the covers (not magic) Python uses classes and instances of classes--objects. We
don't pretend that what makes object-based languages work is magic, do we?  Unifying the
number system on a computer does not have to be grounded in paradigms that serviced the
industry (and the academy) for the past 40 years, mostly because memory was expensive and
processing was slow.  I suggest that if memory had been cheap, back in the day, and processing
had been very fast (as it is today), IEEE 754-1985 floats and doubles would never have been
used. We would have used decimal floating point right from the get-go. That really is
all I'm asking for at the outset: let's move to default decimal floating point arithmetic for real
number processing.
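
The familiar example of why the default matters, sketched with nothing but the standard library
so anyone can try it:

    from decimal import Decimal

    print(0.1 + 0.2)                          # 0.30000000000000004  (IEEE 754 binary double)
    print(Decimal('0.1') + Decimal('0.2'))    # 0.3  (decimal floating point)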

In the future I am thinking about systems of languages that interact with human speech, understanding
spoken numbers just as they will understand symbolic numbers. There is really no reason (other
than paradigm) that this would not be possible either.

Otherwise, Guido, we might as well all just continue to use C and code up our statically bound types by
hand using nothing but int, long, float, and double.

It's time to innovate.

Kind regards, BDFL,

marcus