
On Thu, May 6, 2010 at 5:43 PM, Mathias Panzenböck <grosser.meister.morti@gmx.net> wrote:
> Shouldn't, by mathematical definition, -x // y be the same as -(x // y)?
Not necessarily; that's only one possible definition. It happens to be the one used in number theory, but it's not the *only* possible one. Most programming languages don't choose that definition; instead they use one of two other definitions, both of which determine the signs of a % b and a // b from the signs of the operands (the difference between the two being *which* operand's sign is used). See http://en.wikipedia.org/wiki/Modulo_operation for a thorough explanation.
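
For concreteness, here's a quick sketch contrasting Python's floored division with C-style truncated division (trunc_divmod is just an illustrative helper, not anything from the stdlib):

# Compare Python's floored division/modulo with C-style truncated
# division on the same operands.
def trunc_divmod(a, b):
    """C-style division: the quotient rounds toward zero, so the
    remainder takes the sign of the dividend a."""
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):   # operands have opposite signs
        q = -q
    return q, a - q * b

a, b = -7, 2
print(divmod(a, b))         # (-4, 1): Python floors the quotient,
                            # so a % b takes the sign of the divisor
print(trunc_divmod(a, b))   # (-3, -1): truncating toward zero makes
                            # the remainder take the sign of the dividend
# Both definitions satisfy the invariant q * b + r == a.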
> I think this is rather odd. Is there any deeper reason for this behaviour? I guess changing it would break a lot of code, but why does it behave like this?
I would suppose it's what programmers have found more useful/intuitive. Most programmers aren't number theorists.
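
For instance, the floored definition keeps x % n in [0, n) whenever n is positive, which is handy for wrap-around arithmetic. A toy example with a clock:

# With floored modulo, x % n lands in [0, n) for positive n, so
# wrap-around arithmetic needs no special case for negatives.
hour = 1
print((hour - 3) % 12)   # 10: going back 3 hours from 1 o'clock wraps to 10
# Under C-style truncation, (1 - 3) % 12 == -2, which would need a
# manual "+ 12 if negative" fixup.

Cheers,
Chris
--
http://blog.rebertia.com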