Re: [Python-Dev] gcc 4.2 exposes signed integer overflows
"Tim Peters" <tim.peters@gmail.com> wrote:
> This is a wrong time in the release process to take a chance on
> discovering a flaky LONG_MIN on some box, so I want to keep the code
> as much as possible like what's already there (which worked fine for
> > 10 years on all known boxes) for now.
No, it didn't.  I reported a bug a couple of years back.

A blanket rule not to use symbols is clearly wrong, but there are good
reasons not to want to rely on LONG_MIN (or INT_MIN, for that matter).
Because of some incredibly complex issues (of which I know only some),
it has not consistently been -(1+LONG_MAX) on two's complement
machines.  There are good reasons for making it -LONG_MAX, but they
are not the ones that actually cause it to be so.

There are, however, very good reasons for using BOTH tests.  I.e. if I
have a C system which defines LONG_MIN to be -LONG_MAX because it uses
-(1+LONG_MAX) as an integer NaN indicator in some contexts, you really
DON'T want to create such a value.  I don't know whether any such C
systems exist, but other languages have done this.

I hope that Guido wasn't saying that Python should EVER rely on signed
integer overflow wrapping in two's complement.  Despite the common
usage, Java and all that, it is perhaps the worst systematic
architectural change to have happened in 30 years, and accounts for a
good 30% of all hard bugs in many classes of program.  Simple buffer
overflow is fairly easy to avoid by good programming style; integer
overflow trashing unpredictable data isn't.

Any decent programming language (like Python!) regards integer
overflow as an error, and the moves to make C copy Java semantics are
yet another step away from software engineering in the direction of
who-gives-a-damn hacking.

Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nmm1@cam.ac.uk
Tel.: +44 1223 334761    Fax: +44 1223 334679
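[To make the "use BOTH tests" point above concrete, here is a minimal
sketch in C; the function name safe_negate is illustrative, not from
Python's source.  It guards a negation without assuming whether
LONG_MIN is -LONG_MAX or -(1+LONG_MAX): the comparison x < -LONG_MAX
is well defined either way, and is true only for the one extra
negative value, if the platform has one.]

    #include <limits.h>

    /* Reject the single value (if any) whose negation would overflow.
     * On two's complement, x < -LONG_MAX is true only for -(1+LONG_MAX);
     * on a platform where LONG_MIN == -LONG_MAX, it is never true and
     * every long can be negated safely. */
    int
    safe_negate(long x, long *result)
    {
        if (x < -LONG_MAX)
            return -1;      /* -x would overflow; signal an error */
        *result = -x;
        return 0;
    }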
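[On not relying on wrapping: the breakage named in the subject line
typically comes from overflow tests of roughly the following shape.
A minimal sketch with illustrative function names: the first version
invokes undefined behaviour that gcc 4.2's optimizer is entitled to
exploit, the second stays within defined arithmetic by checking before
the operation.  Note that the second uses LONG_MIN only as whatever
the platform declares its lower bound to be, so it is correct
whichever value LONG_MIN actually has.]

    #include <limits.h>

    /* Broken: x == -x is meant to catch the most negative long by
     * wrapping, but signed overflow is undefined in C, so a compiler
     * such as gcc 4.2 may assume the test is always false and delete
     * it entirely. */
    int
    looks_like_overflow_ub(long x)
    {
        return x < 0 && x == -x;
    }

    /* Well defined: decide whether a + b would overflow *before*
     * performing the addition, using only in-range comparisons. */
    int
    add_would_overflow(long a, long b)
    {
        return (b > 0 && a > LONG_MAX - b)
            || (b < 0 && a < LONG_MIN - b);
    }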