[Numpy-discussion] Re: numpy, overflow, inf, ieee, and rich comparison

Donald O'Donnell donod at home.com
Wed Oct 25 00:41:27 EDT 2000


"Steven D. Majewski" wrote:
> 
> On Tue, 24 Oct 2000, Fredrik Lundh wrote:
> 
> > Alice uses python to control 3D objects, and they
> > found that users had real problems accepting that
> >
> >     myBunny.moveForward(2/3)
> >
> > left the bunny standing there, instead of moving it
> > 2/3 units forward.
> >
> > but for programmers, the real problem is that "/" in an
> > expression like "a/2" is sometimes an integer division,
> > and sometimes a floating point division, depending on
> > what data you happen to pass your function...

Correct, that's called operator overloading, a form of polymorphism.
I've seen postings in the past where someone has complained about
the fact that 5/2 results in 2.  I can't, for the life of me,
understand why anyone would want it to be otherwise.  Every
mainstream language I've ever used (Fortran, COBOL, Basic,
C, C++, Java, ...) has truncated the result of integer
division to an integer, and that can be very useful.  Try writing
a binary search (where you are constantly dividing your
range by 2) -- not much fun if the interpreter/compiler keeps
secretly changing your integer indexes to floats every time you
have an odd-sized range.  If you really want a floating-point
result in the above example, all you need to do is use a/2.0
-- see, you are in control this way, not the compiler.
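To make the binary-search point concrete, here is a minimal sketch
(written with "//", the explicit floor-division operator of later
Pythons, which behaves on ints exactly as "/" did at the time of
this thread):

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        # Integer division keeps mid a valid index; if the language
        # silently produced a float here, items[mid] would blow up.
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

With an odd-sized range like (lo + hi) = 7, integer division gives
mid = 3 rather than 3.5, which is exactly the behavior being defended.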

In my experience the two domains (int and float) rarely
cross over.  If I'm dealing with indexes or counting things,
I use integers.  If I'm dealing with measuring continuous 
things like length or weight, I use floats.  When dealing with
money, I use long integers and count the pennies, thus 
eliminating the floating point curse of 2.0 == 1.99999999...
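A quick sketch of the penny-counting approach (the variable names
are mine, purely for illustration):

```python
# Counting pennies as integers keeps money arithmetic exact;
# summing the same amounts as binary floats can accumulate
# representation error.
prices_cents = [199, 305, 1050]   # $1.99, $3.05, $10.50
total_cents = sum(prices_cents)
dollars, cents = divmod(total_cents, 100)
formatted = "$%d.%02d" % (dollars, cents)   # "$15.54"
```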

I submit that if you ever find it desirable to have your
compiler automatically convert your ints to floats at some
time during a calculation, then you should have been dealing
with floats exclusively from the start.  Or, if you must use
integers for measuring continuous data for some reason 
(such as performance or eliminating roundoff error in 
financial data), then use a unit of measure that is fine 
grained enough that you can stick with integers and still 
maintain the required precision, i.e., use cents/pence 
rather than dollars/pounds, millimeters instead of meters. 
And in the case of moving the bunny, rather than
moving it 2/3 of a unit, move it 667 milli-units.
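The milli-units idea can be sketched in a few lines; the helper
name is hypothetical, not part of any real Alice API:

```python
MILLI_PER_UNIT = 1000

def to_milli_units(numerator, denominator):
    """Convert a fraction of a unit to whole milli-units.

    Rounds to the nearest milli-unit using only integer
    arithmetic (adding half the denominator before dividing).
    """
    return (numerator * MILLI_PER_UNIT + denominator // 2) // denominator

# 2/3 of a unit becomes 667 milli-units -- precise enough,
# and every intermediate value stays an integer.
step = to_milli_units(2, 3)
```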

Maybe I'm overlooking some obscure field, like quantum
physics for example, where discrete and continuous phenomena
somehow blend together.  Seriously, if anyone can show me
a case where mixed-mode arithmetic is useful, I'd love to 
see it.

> > I've been programming Python full time for over five
> > years, and this is still causing me headaches from time
> > to time (I never had this problem when I was using C
> > and C++.  go figure ;-)
> 
> I figure it's cause C and C++ have explicitly declared static
> typing and even if you don't look back at the declarations,
> you're aware of them when programming.
> 
> For a dynamically typed language like Python, it's a bit odd
> to insist on closure: (int) OP (int) => (int).

Do you mean you would prefer: 
  (int) OP (int) => (int) sometimes and other times (float)
Or do you mean:
  (int) OP (int) => (float) always?
I think the first case would be confusing and the second
limiting.
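As a historical footnote, Python itself eventually chose the second
option: in Python 3, "/" on two ints always yields a float, and a
separate "//" operator keeps the truncating behavior:

```python
# Python 3 semantics: (int) OP (int) => (float) for "/", always.
q_true = 5 / 2     # 2.5
q_even = 4 / 2     # 2.0 -- a float even when the division is exact
q_floor = 5 // 2   # 2  -- explicit integer (floor) division
```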

> If rational were a builtin, I'd argue for rational.
> Otherwise, it should be real.

Yes, a rational builtin type would be nice.  But automatic
conversion of int to rat?  No way -- I want to be in control.
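Python had no rational type at the time of this thread, but the
standard-library fractions module added later shows what explicit,
opt-in rationals look like -- you construct them deliberately,
and ints are never silently converted for you:

```python
from fractions import Fraction

# Explicit construction: the programmer opts in to rationals.
two_thirds = Fraction(2, 3)

# Exact arithmetic, with no float round-off.
whole = two_thirds * 3          # Fraction(2, 1), i.e. exactly 2
third = Fraction(1, 3)
one = two_thirds + third        # exactly 1
```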

Don O'Donnell


