Is PEP 237 final -- Unifying Long Integers and Integers
IIRC, there was a decision to not implement phase C and to keep the trailing L in representations of long integers. If so, I believe the PEP can be marked as final. We've done all we're going to do. Raymond
On 6/17/05, Raymond Hettinger <raymond.hettinger@verizon.net> wrote:
IIRC, there was a decision to not implement phase C and to keep the trailing L in representations of long integers.
Actually, the PEP says phase C will be implemented in Python 3.0 and that's still my plan.
If so, I believe the PEP can be marked as final. We've done all we're going to do.
For 2.x, yes. I'm fine with marking it as Final and adding this to PEP 3000 instead. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Guido van Rossum wrote:
On 6/17/05, Raymond Hettinger <raymond.hettinger@verizon.net> wrote:
IIRC, there was a decision to not implement phase C and to keep the trailing L in representations of long integers. For 2.x, yes. I'm fine with marking it as Final and adding this to PEP 3000 instead.
Since PEP 313 has been rejected, the trailing L no longer introduces ambiguity in the representation of roman(40) vs. roman(10L). --Scott David Daniels Scott.Daniels@Acm.Org
Guido van Rossum wrote:
On 6/17/05, Raymond Hettinger <raymond.hettinger@verizon.net> wrote:
IIRC, there was a decision to not implement phase C and to keep the trailing L in representations of long integers.
Actually, the PEP says phase C will be implemented in Python 3.0 and that's still my plan.
If so, I believe the PEP can be marked as final. We've done all we're going to do.
For 2.x, yes. I'm fine with marking it as Final and adding this to PEP 3000 instead.
I am very concerned about something. The following code breaks with 2.4.1:

    fcntl.ioctl(self.rtc_fd, RTC_RD_TIME, ...)

where RTC_RD_TIME = 2149871625L. In Python 2.3 it is -2145095671. Actually, this is supposed to be an unsigned int, and it was constructed with hex values and shifts. Now, with the integer unification, how is ioctl() supposed to work? I cannot figure out how to make it work in this case. I suppose the best thing is to introduce an "unsignedint" type for this purpose. As it is right now, I cannot use 2.4 at all.

--
Keith Dart <kdart@kdart.com> public key: ID: F3D288E4
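[Editorial note: a workaround sketch, not from the thread -- the unsigned 32-bit constant can be reinterpreted as the signed C int that the 2.4 ioctl wrapper still expects, using the struct module:]

    import struct

    RTC_RD_TIME = 2149871625L  # unsigned 32-bit request code
    # '=I' packs a standard 4-byte unsigned int; '=i' unpacks it as signed.
    signed_req = struct.unpack('=i', struct.pack('=I', RTC_RD_TIME))[0]
    # signed_req == -2145095671, a plain int that fcntl.ioctl() accepts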
Keith Dart <kdart@kdart.com> writes:
I am very concerned about something. The following code breaks with 2.4.1:
fcntl.ioctl(self.rtc_fd, RTC_RD_TIME, ...)
Where RTC_RD_TIME = 2149871625L
In Python 2.3 it is -2145095671.
Well, you could always use "-2145095671"...
Actually, this is supposed to be an unsigned int, and it was constructed with hex values and shifts.
But well, quite.
Now, with the integer unification, how is ioctl() supposed to work? I cannot figure out how to make it work in this case.
The shortest way I know of going from 2149871625L to -2145095671 is the still-fairly-gross:
>>> v = 2149871625L
>>> ~int(~v&0xFFFFFFFF)
-2145095671
I suppose the best thing is to introduce an "unsignedint" type for this purpose.
Or some kind of bitfield type, maybe. C uses integers both as bitfields and to count things, and at least in my opinion the default assumption in Python should be that this is what an integer is being used for, but when you need a bitfield it can all get a bit horrible. That said, I think in this case we can just make fcntl_ioctl use the (new-ish) 'I' format argument to PyArg_ParseTuple and then you'll just be able to use 2149871625L and be happy (I think, haven't tried this).
As it is right now, I cannot use 2.4 at all.
/Slightly/ odd place to make this report! Hope this mail helped.

Cheers,
mwh

--
I'm okay with intelligent buildings, I'm okay with non-sentient buildings. I have serious reservations about stupid buildings.
   -- Dan Sheppard, ucam.chat (from Owen Dunn's summary of the year)
On Sat, 18 Jun 2005, Michael Hudson wrote:
The shortest way I know of going from 2149871625L to -2145095671 is the still-fairly-gross:
>>> v = 2149871625L
>>> ~int(~v&0xFFFFFFFF)
-2145095671
I suppose the best thing is to introduce an "unsignedint" type for this purpose.
Or some kind of bitfield type, maybe.
C uses integers both as bitfields and to count things, and at least in my opinion the default assumption in Python should be that this is what an integer is being used for, but when you need a bitfield it can all get a bit horrible.
That said, I think in this case we can just make fcntl_ioctl use the (new-ish) 'I' format argument to PyArg_ParseTuple and then you'll just be able to use 2149871625L and be happy (I think, haven't tried this).
Thanks for the reply. I think I will go ahead and add some extension types to Python. Thankfully, Python is extensible with new objects.

It is also useful (to me, anyway) to be able to map, one to one, external primitives from other systems to Python primitives. For example, CORBA and SNMP have a set of types (signed ints, unsigned ints, etc.) defined that I would like to interface to Python (actually I have already done this to some degree). But Python makes it a bit more difficult without that one-to-one mapping of basic types. Having an unsigned int type, for example, would make it easier to interface Python to SNMP or even some C libraries.

In other words, since the "Real World" has these types that I must sometimes interface to, it is useful to have these same (predictable) types in Python. So it is worth extending the basic set of data types, and I will add it to my existing collection of Python extensions.

Therefore, I would like to ask here if anyone has already started something like this? If not, I will go ahead and do it (if I have time).

--
Keith Dart <kdart@kdart.com> public key: ID: F3D288E4
Keith Dart <kdart@kdart.com> wrote:
Therefore, I would like to ask here if anyone has already started something like this? If not, I will go ahead and do it (if I have time).
If all you need to do is read or write C-like types to or from memory, you should spend some time looking through the 'struct' module if you haven't already. - Josiah
On Sun, 19 Jun 2005, Josiah Carlson wrote:
Keith Dart <kdart@kdart.com> wrote:
Therefore, I would like to ask here if anyone has already started something like this? If not, I will go ahead and do it (if I have time).
If all you need to do is read or write C-like types to or from memory, you should spend some time looking through the 'struct' module if you haven't already.
I know about 'struct'. However, it will just convert to Python "native" types. A C unsigned becomes a Python long:

>>> u = struct.pack("I", 0xfffffffe)
>>> struct.unpack("I", u)
(4294967294L,)

In SNMP, for example, a Counter32 is basically an unsigned int, defined as "IMPLICIT INTEGER (0..4294967295)". One cannot efficiently translate and use that type in native Python. Currently, I have defined an "unsigned" type as a subclass of long, but I don't think that would be speed or storage efficient. On the other hand, adding my own type won't help with the ioctl() problem, since it won't know about it.

--
Keith Dart <kdart@kdart.com> public key: ID: F3D288E4
[Keith Dart]
In SNMP, for example, a Counter32 is basically an unsigned int, defined as "IMPLICIT INTEGER (0..4294967295)". One cannot efficiently translate and use that type in native Python. Currently, I have defined an "unsigned" type as a subclass of long, but I don't think that would be speed or storage efficient.
In my experience you can just use Python longs whenever a C API needs an "unsigned" long. There's no need to subtype, and your assumption that it would not be efficient enough is mistaken (unless you are manipulating arrays with millions of them, in which case you should be using Numeric, which has its own types for this purpose). (Want to argue about the efficiency? Write a typical use case and time it.)

By far the easiest way to do arithmetic mod 2**32 is to just add "& 0xFFFFFFFF" to the end of your expression. For example, simulating the effect of multiplying an unsigned long by 3 would be

    x = (x * 3) & 0xFFFFFFFF

If there is a problem with ioctl() not taking long ints, that would be a bug in ioctl, not a lacking data type or a problem with long ints.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
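[Editorial sketch of the masking idiom Guido describes; "& 0xFFFFFFFF" after each operation reproduces C's unsigned 32-bit wraparound:]

>>> x = 0xFFFFFFFEL          # 4294967294, near the top of the 32-bit range
>>> x = (x * 3) & 0xFFFFFFFF
>>> x
4294967290L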
On Mon, 20 Jun 2005, Guido van Rossum wrote:
[Keith Dart]
In SNMP, for example, a Counter32 is basically an unsigned int, defined as "IMPLICIT INTEGER (0..4294967295)". One cannot efficiently translate and use that type in native Python. Currently, I have defined an "unsigned" type as a subclass of long, but I don't think that would be speed or storage efficient.
In my experience you can just use Python longs whenever a C API needs an "unsigned" long. There's no need to subtype, and your assumption that it would not be efficient enough is mistaken (unless you are manipulating arrays with millions of them, in which case you should be using Numeric, which has its own types for this purpose). (Want to argue about the efficiency? Write a typical use case and time it.)
Ok, I'll take your word for it. I don't have any performance problems now, in my usage, but I wanted to make sure that Python "shows well" in certain "bake offs" ;-)
By far the easiest way to do arithmetic mod 2**32 is to just add "& 0xFFFFFFFF" to the end of your expression. For example, simulating the effect of multiplying an unsigned long by 3 would be x = (x * 3) & 0xFFFFFFFF.
But then I wouldn't know if it overflowed 32 bits. In my usage, the integer will be translated to an unsigned (32 bit) integer in another system (SNMP). I want to know if it will fit, and I want to know early if there will be a problem, rather than later (at conversion time).

One of the "selling points" of Python in previous versions was that you would get an OverflowError on overflow, where other languages did not (they overflowed silently). So I subclassed long in 2.3, to get the same overflow exception:

    class unsigned(long):
        floor = 0L
        ceiling = 4294967295L
        bits = 32
        _mask = 0xFFFFFFFFL

        def __new__(cls, val):
            return long.__new__(cls, val)

        def __init__(self, val):
            if val < self.floor or val > self.ceiling:
                raise OverflowError, "value %s out of range for type %s" % (
                    val, self.__class__.__name__)

        def __repr__(self):
            return "%s(%sL)" % (self.__class__.__name__, self)

        def __add__(self, other):
            return self.__class__(long.__add__(self, other))
        ...

Again, because I want to catch the error early, before conversion to the external type. BTW, the conversion is done in pure Python (to a BER representation), so using a C type (via ctypes, or pyrex, or whatever) is not possible.
If there is a problem with ioctl() not taking long ints, that would be a bug in ioctl, not a lacking data type or a problem with long ints.
That must be it, then. Shall I file a bug somewhere?

--
Keith Dart <kdart@kdart.com> public key: ID: F3D288E4
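[Editorial sketch, not from the thread: how the "unsigned" subclass quoted above behaves in practice -- the OverflowError fires as soon as any operation leaves the 32-bit range:]

>>> unsigned(4294967295L)
unsigned(4294967295L)
>>> unsigned(4294967295L) + 1
Traceback (most recent call last):
  ...
OverflowError: value 4294967296 out of range for type unsigned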
On Mon, 20 Jun 2005, Keith Dart wrote:
But then I wouldn't know if it overflowed 32 bits. In my usage, the integer will be translated to an unsigned (32 bit) integer in another system (SNMP). I want to know if it will fit, and I want to know early if there will be a problem, rather than later (at conversion time).
class unsigned(long):
I guess I should clarify this more. My "unsigned" type really is an object that represents a type of number from the external system. Previously, there was a nice, clean mapping between external types and Python types. Now there is not so clean a mapping. Not that that makes it a problem with Python itself.

However, since it is sometimes necessary to interface to other systems with Python, I see no reason why Python should not have a full set of built-in numeric types corresponding to the machine types and, in turn, other system types. Then it would be easier (and probably a bit faster) to interface to them. Perhaps Python could have an "integer" type for unified long/int values, and an "int" type for "normal" integers?

--
Keith Dart <kdart@kdart.com> public key: ID: F3D288E4
Keith Dart wrote:
I guess I should clarify this more. My "unsigned" type really is an object that represents a type of number from the external system. Previously, there was a nice, clean mapping between external types and Python types. Now there is not so clean a mapping. Not that that makes it a problem with Python itself.
However, since it is sometimes necessary to interface to other systems with Python, I see no reason why Python should not have a full set of built-in numeric types corresponding to the machine types and, in turn, other system types. Then it would be easier (and probably a bit faster) to interface to them. Perhaps Python could have an "integer" type for unified long/int values, and an "int" type for "normal" integers?
For your purposes, would it work to use the struct module to detect overflows early?
>>> import struct
>>> struct.pack('i', 2 ** 33)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: long int too large to convert to int
Another possibility would be to add to the "struct" module a full set of integer types with a fixed width: int8, uint8, int16, uint16, int32, uint32, int64, and uint64. Code that's focused on integration with other languages might benefit.

Shane
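[Editorial sketch of how such a family of fixed-width types could be prototyped in pure Python; the _bounded_int factory and the uint32/int32 names are hypothetical, not part of Shane's proposal:]

    def _bounded_int(name, lo, hi):
        # Build a long subclass that range-checks on construction.
        # (Arithmetic results fall back to plain long; the check
        # happens whenever a new instance is constructed.)
        class bounded(long):
            def __new__(cls, val):
                if not lo <= val <= hi:
                    raise OverflowError("%s out of range for %s" % (val, name))
                return long.__new__(cls, val)
        bounded.__name__ = name
        return bounded

    uint32 = _bounded_int('uint32', 0, 2**32 - 1)
    int32 = _bounded_int('int32', -2**31, 2**31 - 1)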
On 6/20/05, Keith Dart <kdart@kdart.com> wrote:
However, since it is sometimes necessary to interface to other systems with Python, I see no reason why Python should not have a full set of built-in numeric types corresponding to the machine types and, in turn, other system types. Then it would be easier (and probably a bit faster) to interface to them. Perhaps Python could have an "integer" type for unified long/int values, and an "int" type for "normal" integers?
Strongly disagree.

(a) Stop worrying about speed. The overhead of a single Python bytecode execution is probably more than the cost of an integer operation in most cases.

(b) I don't know what you call a "normal" integer any more; to me, unified long/int is as normal as they come. Trust me, that's the case for most users. Worrying about 32 bits becomes less and less normal.

(c) The right place to do the overflow checks is in the API wrappers, not in the integer types.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
Keith Dart wrote:
On Mon, 20 Jun 2005, Keith Dart wrote:
But then I wouldn't know if it overflowed 32 bits. In my usage, the integer will be translated to an unsigned (32 bit) integer in another system (SNMP). I want to know if it will fit, and I want to know early if there will be a problem, rather than later (at conversion time).
class unsigned(long):
I guess I should clarify this more. My "unsigned" type really is an object that represents a type of number from the external system. Previously, there was a nice, clean mapping between external types and Python types. Now there is not so clean a mapping. Not that that makes it a problem with Python itself.
However, since it is sometimes necessary to interface to other systems with Python, I see no reason why Python should not have a full set of built-in numeric types corresponding to the machine types and, in turn, other system types. Then it would be easier (and probably a bit faster) to interface to them. Perhaps Python could have an "integer" type for unified long/int values, and an "int" type for "normal" integers?
It seems to me that maybe a single "byte_buffer" type, which can be defined to the exact needed byte length and given other characteristics to aid in interfacing to other languages or devices, would be a better choice.

Then Python's ints, floats, etc. can use whatever internal length is most efficient for the system they're compiled on, and the final result can be stored in the 'byte_buffer' for interfacing purposes.

It would also be a good choice for bit manipulation when someone needs that, instead of trying to do it in an integer.

Would something like that fulfill your need?

Regards, Ron
On Tue, 21 Jun 2005, Ron Adam wrote:
It seems to me that maybe a single "byte_buffer" type, which can be defined to the exact needed byte length and given other characteristics to aid in interfacing to other languages or devices, would be a better choice.
Then Python's ints, floats, etc. can use whatever internal length is most efficient for the system they're compiled on, and the final result can be stored in the 'byte_buffer' for interfacing purposes.
It would also be a good choice for bit manipulation when someone needs that, instead of trying to do it in an integer.
Would something like that fulfill your need?
Sounds interesting. Not exactly straightforward. What I have now is functional, but if speed becomes a problem then this might be useful.

--
Keith Dart <kdart@kdart.com> public key: ID: F3D288E4
On 6/20/05, Keith Dart <kdart@kdart.com> wrote:
On Mon, 20 Jun 2005, Guido van Rossum wrote: [...]
By far the easiest way to do arithmetic mod 2**32 is to just add "& 0xFFFFFFFF" to the end of your expression. For example, simulating the effect of multiplying an unsigned long by 3 would be x = (x * 3) & 0xFFFFFFFF.
But then I wouldn't know if it overflowed 32 bits.
Huh? C unsigned ints don't flag overflow either -- they perform perfect arithmetic mod 2**32.
In my usage, the integer will be translated to an unsigned (32 bit) integer in another system (SNMP). I want to know if it will fit, and I want to know early if there will be a problem, rather than later (at conversion time).
So check if it is >= 2**32 (or < 0, of course).
One of the "selling points" of Python in previous versions was that you would get an OverFlowError on overflow, where other languages did not (they overflowed silently). So I subclassed long in 2.3, to get the same overflow exception: ... Again, because I want to catch the error early, before conversion to the external type.
This is a very specialized application. Your best approach is to check for overflow before passing into the external API -- ideally the wrappers for that API should do so.
If there is a problem with ioctl() not taking long ints, that would be a bug in ioctl, not a lacking data type or a problem with long ints.
That must be it, then. Shall I file a bug somewhere?
SourceForge. (python.org/dev for more info) -- --Guido van Rossum (home page: http://www.python.org/~guido/)
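[Editorial sketch of the wrapper-level check Guido recommends; set_counter32 and _send_counter32 are hypothetical stand-ins for a real API wrapper and its low-level call:]

    def _send_counter32(oid, value):
        # Hypothetical stand-in for the low-level SNMP encode/send step.
        pass

    def set_counter32(oid, value):
        # Range-check at the API boundary instead of in the integer type.
        if not 0 <= value < 2**32:
            raise OverflowError("Counter32 value out of range: %r" % value)
        _send_counter32(oid, value)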
On Tue, 21 Jun 2005, Guido van Rossum wrote: [two messages mixed]
Huh? C unsigned ints don't flag overflow either -- they perform perfect arithmetic mod 2**32.
I was talking about signed ints. Sorry about the confusion. Other scripting languages (e.g. Perl) do not error on overflow.
In my usage, the integer will be translated to an unsigned (32 bit) integer in another system (SNMP). I want to know if it will fit, and I want to know early if there will be a problem, rather than later (at conversion time).
So check if it is >= 2**32 (or < 0, of course).
That's exactly what I do. ;-) The "unsigned" subclass of long is part of the API, and checks the range when it is created (and they get created implicitly when operated on).
(a) Stop worrying about speed. The overhead of a single Python bytecode execution is probably more than the cost of an integer operation in most cases.
I am not thinking of the integer operation, but the extra Python bytecode necessary to implement the extra checks for overflow.
Again, because I want to catch the error early, before conversion to the external type.
This is a very specialized application. Your best approach is to check for overflow before passing into the external API -- ideally the wrappers for that API should do so.
(c) The right place to do the overflow checks is in the API wrappers, not in the integer types.
That would be the "traditional" method. I was trying to keep it an object-oriented API. What should "know" the overflow condition is the type object itself. It raises OverflowError any time this occurs, for any operation, implicitly. I prefer to catch errors earlier, rather than later.
(b) I don't know what you call a "normal" integer any more; to me, unified long/int is as normal as they come. Trust me, that's the case for most users. Worrying about 32 bits becomes less and less normal.
By "normal" integer I mean the mathematical definition. Most Python users don't have to worry about 32 bits now, that is a good thing when you are dealing only with Python. However, if one has to interface to other systems that have definite types with limits, then one must "hack around" this feature. I was just thinking how nice it would be if Python had, in addition to unified ("real", "normal") integers it also had built-in types that could be more easily mapped to external types (the typical set of signed, unsigned, short, long, etc.).Yes, you can check it at conversion time, but that would mean extra Python bytecode. It seems you think this is a special case, but I think Python may be used as a "glue language" fairly often, and some of us would benefit from having those extra types as built-ins. -- -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Keith Dart <kdart@kdart.com> public key: ID: F3D288E4 =====================================================================
[GvR:]
Huh? C unsigned ints don't flag overflow either -- they perform perfect arithmetic mod 2**32.
[Keith Dart:]
I was talking about signed ints. Sorry about the confusion. Other scripting languages (e.g. perl) do not error on overflow.
C signed ints also don't flag overflow, nor do they -- as you point out -- in various other languages.
(c) The right place to do the overflow checks is in the API wrappers, not in the integer types.
That would be the "traditional" method.
I was trying to keep it an object-oriented API. What should "know" the overflow condition is the type object itself. It raises OverflowError any time this occurs, for any operation, implicitly. I prefer to catch errors earlier, rather than later.
Why "should"? Sure, catch errors earlier. But *are* the things you'd catch earlier by having an unsigned-32-bit-integer type actually errors? Is it, e.g., an "error" to move the low 16 bits into the high part by writing x = (y<<16) & 0xFFFF0000 instead of x = (y&0xFFFF) << 16 or to add 1 mod 2^32 by writing x = (y+1) & 0xFFFFFFFF instead of if y == 0xFFFFFFFF: x = 0 else: x = y+1 ? Because it sure doesn't seem that way to me. Why is it better, or more "object-oriented", to have the checking done by a fixed-size integer type?
(b) I don't know what you call a "normal" integer any more; to me, unified long/int is as normal as they come. Trust me, that's the case for most users. Worrying about 32 bits becomes less and less normal.
By "normal" integer I mean the mathematical definition.
Then you aren't (to me) making sense. You were distinguishing this from a unified int/long. So far as I can see, a unified int/long type *does* implement (modulo implementation limits and bugs) the "mathematical definition". What am I missing?
Most Python users don't have to worry about 32 bits now, that is a good thing when you are dealing only with Python. However, if one has to interface to other systems that have definite types with limits, then one must "hack around" this feature.
Why is checking the range of a parameter with a restricted range a "hack"? Suppose some "other system" has a function in its interface that expects a non-zero integer argument, or one with its low bit set. Do we need a non-zero-integer type and an odd-integer type?
I was just thinking how nice it would be if Python had, in addition to unified ("real", "normal") integers it also had built-in types that could be more easily mapped to external types (the typical set of signed, unsigned, short, long, etc.). Yes, you can check it at conversion time, but that would mean extra Python bytecode. It seems you think this is a special case, but I think Python may be used as a "glue language" fairly often, and some of us would benefit from having those extra types as built-ins.
Well, which extra types? One for each of 8, 16, 32, 64 bit and for each of signed, unsigned? Maybe also "non-negative signed" of each size? That's 12 new builtin types, so perhaps you'd be proposing a subset; what subset? And how are they going to be used?

- If the conversion to one of these new limited types occurs immediately before calling whatever function it is that uses it, then what you're really doing is a single range-check. Why disguise it as a conversion?

- If the conversion occurs earlier, then you're restricting the ways in which you can calculate the parameter values in question. What's the extra value in that?

I expect I'm missing something important. Could you provide some actual examples of how code using this new feature would look?

-- g
Gareth McCaughan wrote:
[Keith Dart:]
By "normal" integer I mean the mathematical definition.
Then you aren't (to me) making sense. You were distinguishing this from a unified int/long. So far as I can see, a unified int/long type *does* implement (modulo implementation limits and bugs) the "mathematical definition". What am I missing?
Hmm, a 'mod_int' type might be an interesting concept (i.e. a type that performs integer arithmetic, only each operation is carried out modulo some integer). Then particular bit sizes would be simple ints, modulo the appropriate power of two.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
http://boredomandlaziness.blogspot.com
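[Editorial sketch of the 'mod_int' idea, assuming wraparound on construction and showing only two representative operators:]

    class mod_int(long):
        modulus = 2**32  # arithmetic modulo 2**32, i.e. a 32-bit type

        def __new__(cls, val):
            return long.__new__(cls, val % cls.modulus)

        def __add__(self, other):
            return self.__class__(long.__add__(self, other))

        def __mul__(self, other):
            return self.__class__(long.__mul__(self, other))

so that, e.g., mod_int(2**32 - 1) + 1 == 0, silently, just as in C.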
On Wednesday 2005-06-22 13:32, Nick Coghlan wrote:
Gareth McCaughan wrote:
[Keith Dart:]
By "normal" integer I mean the mathematical definition.
Then you aren't (to me) making sense. You were distinguishing this from a unified int/long. So far as I can see, a unified int/long type *does* implement (modulo implementation limits and bugs) the "mathematical definition". What am I missing?
Hmm, a 'mod_int' type might be an interesting concept (i.e. a type that performs integer arithmetic, only each operation is carried out modulo some integer).
Then particular bit sizes would be simple ints, modulo the appropriate power of two.
It might indeed, but it would be entirely the opposite of what (I think) Keith wants, namely something that raises an exception any time a value goes out of range :-). -- g
Nick Coghlan <ncoghlan@gmail.com> writes:
Gareth McCaughan wrote:
[Keith Dart:]
By "normal" integer I mean the mathematical definition.
Then you aren't (to me) making sense. You were distinguishing this from a unified int/long. So far as I can see, a unified int/long type *does* implement (modulo implementation limits and bugs) the "mathematical definition". What am I missing?
Hmm, a 'mod_int' type might be an interesting concept (i.e. a type that performs integer arithmetic, only each operation is carried out modulo some integer).
Then particular bit sizes would be simple ints, modulo the appropriate power of two.
ctypes provides mutable platform integers and floats of various sizes (and much more). Currently they don't have any methods, only the value attribute. Maybe it would be useful to implement the standard numeric methods on them.

Thomas
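[For example -- an editorial sketch; ctypes was a third-party package at the time, and only the documented .value attribute is assumed:]

    from ctypes import c_uint

    x = c_uint(2149871625L)        # holds a C unsigned int, no sign flip
    assert x.value == 2149871625L
    x.value = 0                    # mutable in place, unlike Python ints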
[me]
(c) The right place to do the overflow checks is in the API wrappers, not in the integer types.
[Keith Dart]
That would be the "traditional" method.
I was trying to keep it an object-oriented API. What should "know" the overflow condition is the type object itself. It raises OverFlowError any time this occurs, for any operation, implicitly. I prefer to catch errors earlier, rather than later.
Isn't clear to me at all. I might compute a value using some formula that exceeds 2**32 in some intermediate result but produces an in-range value in the end. That should be acceptable as an argument. I also don't see why one approach is more OO than another; sounds like you have a case of buzzworditis. :-) -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Keith Dart wrote:
On Sat, 18 Jun 2005, Michael Hudson wrote:
The shortest way I know of going from 2149871625L to -2145095671 is the still-fairly-gross:
>>> v = 2149871625L
>>> ~int(~v&0xFFFFFFFF)
-2145095671
I suppose the best thing is to introduce an "unsignedint" type for this purpose.
Or some kind of bitfield type, maybe.
C uses integers both as bitfields and to count things, and at least in my opinion the default assumption in Python should be that this is what an integer is being used for, but when you need a bitfield it can all get a bit horrible.
That said, I think in this case we can just make fcntl_ioctl use the (new-ish) 'I' format argument to PyArg_ParseTuple and then you'll just be able to use 2149871625L and be happy (I think, haven't tried this).
Thanks for the reply. I think I will go ahead and add some extension types to Python. Thankfully, Python is extensible with new objects.
It is also useful (to me, anyway) to be able to map, one to one, external primitives from other systems to Python primitives. For example, CORBA and SNMP have a set of types (signed ints, unsigned ints, etc.) defined that I would like to interface to Python (actually I have already done this to some degree). But Python makes it a bit more difficult without that one-to-one mapping of basic types. Having an unsigned int type, for example, would make it easier to interface Python to SNMP or even some C libraries.
In other words, since the "Real World" has these types that I must sometimes interface to, it is useful to have these same (predictable) types in Python.
So, it is worth extending the basic set of data types, and I will add it to my existing collection of Python extensions.
Therefore, I would like to ask here if anyone has already started something like this? If not, I will go ahead and do it (if I have time).
I should make you aware that the new Numeric (Numeric3, now called scipy.base) has a collection of C types that represent each C datatype. They are (arguably) useful in the context of eliminating a few problems in data-type coercion in scientific computing. These types are created in C and use multiple inheritance in C. This is very similar to what you are proposing, so I thought I might make you aware of it. Right now, the math operations for each of these types come mostly from Numeric, but this could be modified as desired. The code is available in the Numeric3 CVS tree at the numeric python sourceforge site.

-Travis Oliphant
participants (12)
- Gareth McCaughan
- Guido van Rossum
- Josiah Carlson
- Keith Dart
- Michael Hudson
- Nick Coghlan
- Raymond Hettinger
- Ron Adam
- Scott David Daniels
- Shane Hathaway
- Thomas Heller
- Travis Oliphant