Huh? C unsigned ints don't flag overflow either -- they perform perfect arithmetic mod 2**32.
I was talking about signed ints. Sorry about the confusion. Other scripting languages (e.g. Perl) do not error on overflow.
C signed ints also don't flag overflow, nor do they -- as you point out -- in various other languages.
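For reference, the mod-2**32 behaviour of C unsigned arithmetic is trivial to emulate in Python with a mask (a throwaway sketch; `u32` is not a real built-in):

```python
MASK32 = 0xFFFFFFFF  # 2**32 - 1

def u32(x):
    """Reduce an arbitrary Python int to unsigned 32-bit, i.e. mod 2**32."""
    return x & MASK32

print(u32(0xFFFFFFFF + 1))  # 0 -- wraps silently, just like C unsigned int
print(u32(-1))              # 4294967295
```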
(c) The right place to do the overflow checks is in the API wrappers, not in the integer types.
That would be the "traditional" method.
I was trying to keep it an object-oriented API. What should "know" the overflow condition is the type object itself: it raises OverflowError whenever overflow occurs, for any operation, implicitly. I prefer to catch errors earlier rather than later.
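The idea being described could be sketched roughly as follows; `UInt32` and its behaviour are hypothetical, not an existing Python type:

```python
class UInt32:
    """Hypothetical unsigned-32-bit type: every operation range-checks
    its result and raises OverflowError implicitly (a sketch only)."""
    MAX = 2**32 - 1

    def __init__(self, value):
        value = int(value)
        if not 0 <= value <= self.MAX:
            raise OverflowError("result does not fit in 32 unsigned bits")
        self.value = value

    def __add__(self, other):
        return UInt32(self.value + int(other))

    def __lshift__(self, n):
        return UInt32(self.value << n)

    def __int__(self):
        return self.value

x = UInt32(0xFFFFFFFE) + 1   # fine: 0xFFFFFFFF
# UInt32(0xFFFFFFFF) + 1     # would raise OverflowError, implicitly
```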
Sure, catch errors earlier. But *are* the things you'd catch earlier by having an unsigned-32-bit-integer type actually errors? Is it, e.g., an "error" to move the low 16 bits into the high part by writing x = (y << 16) & 0xFFFF0000 instead of x = (y & 0xFFFF) << 16, or to add 1 mod 2**32 by writing x = (y + 1) & 0xFFFFFFFF instead of "if y == 0xFFFFFFFF: x = 0 else: x = y + 1"? Because it sure doesn't seem that way to me. Why is it better, or more "object-oriented", to have the checking done by a fixed-size integer type?
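For what it's worth, with plain unified ints the two spellings of each computation agree on every 32-bit input, which is easy to check directly (a throwaway sketch):

```python
def high_halves_agree(y):
    # mask after the shift vs. mask before the shift: same 32-bit result
    return (y << 16) & 0xFFFF0000 == (y & 0xFFFF) << 16

def increments_agree(y):
    # mask vs. explicit wrap test for adding 1 mod 2**32
    masked = (y + 1) & 0xFFFFFFFF
    tested = 0 if y == 0xFFFFFFFF else y + 1
    return masked == tested

assert all(high_halves_agree(y) for y in (0, 1, 0xFFFF, 0x12345678, 0xFFFFFFFF))
assert all(increments_agree(y) for y in (0, 42, 0xFFFFFFFE, 0xFFFFFFFF))
```

A checking fixed-width type would reject the first spelling of each pair even though both compute the intended value.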
(b) I don't know what you call a "normal" integer any more; to me, unified long/int is as normal as they come. Trust me, that's the case for most users. Worrying about 32 bits becomes less and less normal.
By "normal" integer I mean the mathematical definition.
Then you aren't (to me) making sense. You were distinguishing this from a unified int/long. So far as I can see, a unified int/long type *does* implement (modulo implementation limits and bugs) the "mathematical definition". What am I missing?
Most Python users don't have to worry about 32 bits now; that is a good thing when you are dealing only with Python. However, if one has to interface to other systems that have definite types with limits, then one must "hack around" this feature.
Why is checking the range of a parameter with a restricted range a "hack"?
Suppose some "other system" has a function in its interface that expects a non-zero integer argument, or one with its low bit set. Do we need a non-zero-integer type and an odd-integer type?
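The API-wrapper approach from point (c) might look like this; the function name and the external library are hypothetical:

```python
def frobnicate(n):
    """Hypothetical wrapper for an external function that requires a
    non-zero unsigned 32-bit argument: the check lives in the wrapper,
    not in a special integer type."""
    if not 1 <= n <= 0xFFFFFFFF:
        raise ValueError("n must be a non-zero unsigned 32-bit integer")
    # _lib.frobnicate(n)  # the real foreign call would go here
    return n
```

One ordinary `if` covers "non-zero", "odd", or any other restriction, without a new type per restriction.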
I was just thinking how nice it would be if Python had, in addition to unified ("real", "normal") integers, built-in types that could be more easily mapped to external types (the typical set of signed, unsigned, short, long, etc.). Yes, you can check it at conversion time, but that would mean extra Python bytecode. It seems you think this is a special case, but I think Python may be used as a "glue language" fairly often, and some of us would benefit from having those extra types as built-ins.
Well, which extra types? One for each of 8, 16, 32, 64 bit and for each of signed, unsigned? Maybe also "non-negative signed" of each size? That's 12 new builtin types, so perhaps you'd be proposing a subset; what subset?
And how are they going to be used?
- If the conversion to one of these new limited types occurs immediately before calling whatever function it is that uses it, then what you're really doing is a single range-check. Why disguise it as a conversion?
- If the conversion occurs earlier, then you're restricting the ways in which you can calculate the parameter values in question. What's the extra value in that?
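To make the two cases concrete (a sketch only; `as_u32` and the call sites are hypothetical names):

```python
def as_u32(x):
    """A 'conversion' to unsigned 32-bit that, applied to a plain
    Python int, is really nothing more than a range check."""
    if not 0 <= x <= 0xFFFFFFFF:
        raise OverflowError("not representable in 32 unsigned bits")
    return x

# Case 1: convert immediately before the call -- a disguised range check.
#     some_external_call(as_u32(n))

# Case 2: convert early and keep computing with the restricted value --
# now an innocent intermediate like (n - 1) for n == 0 already fails,
# even if the final result would have landed back in range.
```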
I expect I'm missing something important. Could you provide some actual examples of how code using this new feature would look?