Keith Dart wrote:
On Mon, 20 Jun 2005, Keith Dart wrote:
But then I wouldn't know if it overflowed 32 bits. In my usage, the integer will be translated to an unsigned (32 bit) integer in another system (SNMP). I want to know if it will fit, and I want to know early if there will be a problem, rather than later (at conversion time).
class unsigned(long):
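A minimal sketch of such a bounds-checked type, rendered in modern Python (where int is unbounded and the separate long type is gone; the class name follows the fragment above). The idea is to fail at construction time rather than later, at SNMP conversion time:

```python
class unsigned(int):
    """An int restricted to the unsigned 32-bit range; overflows fail early."""

    def __new__(cls, value=0):
        value = int(value)
        if not 0 <= value <= 0xFFFFFFFF:
            raise OverflowError(
                "%r does not fit in an unsigned 32-bit integer" % value)
        return super().__new__(cls, value)


# Accepted: the largest value that fits in 32 bits.
biggest = unsigned(0xFFFFFFFF)

# Rejected immediately, long before any wire-format conversion:
# unsigned(0x100000000)  -> OverflowError
```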
I guess I should clarify this a bit more. My "unsigned" type really is an object that represents a type of number from the external system. Previously, there was a nice, clean mapping between external types and Python types. Now the mapping is not so clean. Not that this makes it a problem with Python itself.
However, since it is sometimes necessary to interface Python to other systems, I see no reason why Python should not have a full set of built-in numeric types corresponding to the machine types and, in turn, to other systems' types. Then it would be easier (and probably a bit faster) to interface to them. Perhaps Python could have an "integer" type for the unified long/int type, and a plain "int" type for "normal" machine integers?
It seems to me that maybe a single "byte_buffer" type, which could be defined to the exact byte length needed and given other characteristics to aid in interfacing to other languages or devices, would be a better choice. Then Python's ints, floats, etc. could use whatever internal length is most efficient for the system they are compiled on, and the final result could be stored in the byte_buffer for interfacing purposes. It would also be a good choice for bit manipulation when someone needs that, instead of trying to do it in an integer.

Would something like that fulfill your need?

Regards,
Ron
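Something close to this byte_buffer idea can already be approximated with the standard-library struct module: pack a value into an exact machine-width representation, and let the pack step raise an error if the value does not fit. The function name below is hypothetical, purely to illustrate the point:

```python
import struct


def to_uint32_bytes(value):
    """Pack value into exactly 4 big-endian bytes (unsigned 32-bit).

    struct raises struct.error if value is outside the 0..2**32-1 range,
    so the fit check happens at packing time, whatever internal
    representation Python uses for its ints.
    """
    return struct.pack(">I", value)


buf = to_uint32_bytes(3000000000)
# buf is always 4 bytes long, regardless of the platform's native int size.
```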