integer division rounding

Ralf Muschall ralf.muschall at alphasat.de
Wed Jul 18 10:46:49 EDT 2001


Marcin 'Qrczak' Kowalczyk <qrczak at knm.org.pl> writes:

> Tue, 17 Jul 2001 17:47:38 +0930, Michael Lunnay <mlunnay at ihug.com.au> pisze:

> Donald E. Knuth said (in "Concrete Mathematics") about div/mod with
> negative arguments: beware of programming languages which use a

He probably simply meant "do the right thing"; the inventors of
Lisp and Scheme went that way (it *is* mainly their philosophy),
and Guido followed them :-)

Finding now that Python does the right thing is then just a tautology,
and saving a few cycles by doing the bad thing makes no sense in an
interpreted language.  For number theory, it would even be harmful
(and there seem to be people doing number theory in Python, otherwise
we would have seen a proposal to remove longints by now).

Besides the mathematical discussion about which version is holier,
there is a practical aspect: it is trivial to implement the bad thing
in terms of the right thing (see below), but the inverse direction
needs more work (nested ifs etc.).
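The trivial implementation alluded to might look like this (my sketch, not the original poster's code), building C-style truncating division and its matching remainder on top of Python's floor division:

```python
def c_div(a, b):
    """Truncating ('bad') division built from Python's floor division."""
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q          # quotient is negative when the signs differ
    return q

def c_mod(a, b):
    """Remainder matching c_div: a == c_div(a, b) * b + c_mod(a, b)."""
    return a - c_div(a, b) * b

# C would compute -7 / 3 == -2 and -7 % 3 == -1:
assert c_div(-7, 3) == -2
assert c_mod(-7, 3) == -1
```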

> specified to work "incorrectly". Intel processors natively implement
> the "incorrect" variant.

What does this mean?  I always understood C's way of dividing negative
ints as "do the simplest thing for the machine" (which most
often was something like

    a/b := sign(a)*sign(b)*abs(a)/abs(b)

).  This is justified by the purpose of C (being a portable
assembler), and most C users probably divide only positive numbers
(memory sizes or widget pixels), where spending cycles on the right
thing would be a waste.
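The sign formula above can be checked directly against rounding toward zero, which is what C99 and the x86 IDIV instruction do (a sketch of mine, not part of the original post):

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

def trunc_div(a, b):
    # The "simplest thing for the machine": divide magnitudes,
    # then reattach the sign of the quotient.
    return sign(a) * sign(b) * (abs(a) // abs(b))

# Agrees with truncation toward zero for every sign combination:
for a in (-7, 7):
    for b in (-3, 3):
        assert trunc_div(a, b) == math.trunc(a / b)
```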

Is it now unsafe to use it at all, or does it just return the ugly
result I mentioned?

Ralf
