(I'm shedding load; cleaning up my inbox in preparation for moving on to Py3K. I'll try to respond to some old mail in the process.)
On 2/6/06, Alex Martelli firstname.lastname@example.org wrote:
> Essentially, you need to decide: does type(x) mostly refer to the protocol that x respects ("interface" plus semantics and pragmatics), or to the underlying implementation? If the latter, as your observation about "the philosophy" suggests, then it would NOT be nice if int was an exception wrt other types.
> If int is to be a concrete type, then I'd MUCH rather it didn't get subclassed, for all sorts of both practical and principled reasons. So, to me, the best solution would be the abstract base class with concrete implementation subclasses. Besides being usable for isinstance checks, like basestring, it should also work as a factory when called, returning an instance of the appropriate concrete subclass.
I like this approach, and I'd like to make it happen. (Not tomorrow. :-)
> AND it would let me have (part of) what I was pining for a while ago -- an abstract base class that type gmpy.mpz can subclass to assert "I _am_ an integer type!", so lists will accept mpz instances as indices, etc etc.
I'm still dead set against this. Using type checks instead of interface checks is too big a deviation from the language's philosophy. It would be the end of duck typing as we know it! Using __index__ makes much more sense to me.
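The __index__ protocol Guido refers to here is PEP 357 (new in Python 2.5). A minimal sketch, using a hypothetical wrapper class standing in for something like gmpy.mpz, shows how it lets a non-int type serve as a sequence index without any subclassing:

```python
class MyInt:
    """Toy integer-like type; not a subclass of int."""

    def __init__(self, value):
        self._value = value

    def __index__(self):
        # Called wherever Python needs "the integer meaning" of an
        # object: list indexing, slicing, hex(), range(), etc.
        return self._value


letters = ['a', 'b', 'c', 'd']
print(letters[MyInt(2)])   # the list accepts MyInt as an index: 'c'
print(hex(MyInt(255)))     # hex() also goes through __index__: '0xff'
```

This is exactly the duck-typing alternative to an isinstance check: the list doesn't care what type the index is, only that it can produce an integer on demand.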
> Now consider how nice it would be, on occasion, to be able to operate on an integer that's guaranteed to be 8, 16, 32, or 64 bits, to ensure the desired shifting/masking behavior for certain kinds of low-level programming; and also on one that's unsigned, in each of these sizes. Python could have a module offering signed8, unsigned16, and so forth (all combinations of size and signedness supported by the underlying C compiler), all subclassing the abstract int, and guarantee much happiness to people who are, for example, writing a Python prototype of code that's going to become C or assembly...
Why should these have to subclass int? They behave quite differently! I still don't see the incredible value of such types compared to simply doing standard arithmetic and adding "& 0xFF" or "& 0xFFFF" at the end, etc. (Slightly more complicated for signed arithmetic, but who really wants signed clipped arithmetic except if you're simulating a microprocessor?)
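The masking idiom Guido describes is short enough to show inline; a sketch, with helper names invented for this illustration (the signed case takes the extra step he alludes to):

```python
def clip_unsigned8(n):
    # Unsigned 8-bit wraparound is just a mask.
    return n & 0xFF

def to_signed8(n):
    # Signed is slightly more work: mask into 8 bits, then
    # reinterpret the high bit as a sign bit.
    n &= 0xFF
    return n - 0x100 if n >= 0x80 else n


print(clip_unsigned8(250 + 10))   # 260 wraps to 4
print(to_signed8(127 + 1))        # overflow wraps to -128
```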
You can write these things in Python 2.5, and as long as they implement __index__ and do their own mixed-mode arithmetic when combined with regular int or long, all should be well. (BTW a difficult design choice may be: if an int8 and an int meet, should the result be an int8 or an int?)
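A sketch of the int8 type described above, as a hypothetical class (not any real library's API): it implements __index__ and handles mixed-mode arithmetic itself. Guido's open design question is resolved here one way (int8 + int yields int8, via __radd__); the opposite choice would be just as defensible.

```python
import operator


class Int8:
    """Hypothetical 8-bit signed integer that wraps on overflow."""

    def __init__(self, value):
        v = value & 0xFF
        self._value = v - 0x100 if v >= 0x80 else v

    def __index__(self):
        return self._value

    def __add__(self, other):
        # Mixed-mode arithmetic: accept any integer-like operand.
        return Int8(self._value + operator.index(other))

    # int + Int8 falls through to here, keeping the Int8 type.
    __radd__ = __add__

    def __repr__(self):
        return 'Int8(%d)' % self._value


print(Int8(100) + 30)   # wraps on overflow: Int8(-126)
print(1 + Int8(100))    # __radd__ preserves the narrower type: Int8(101)
```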
> Similarly, it would help a slightly different kind of prototyping a lot if another Python module could offer 32-bit, 64-bit, 80-bit and 128-bit floating point types (if supported by the underlying C compiler) -- all subclassing an ABSTRACT 'float'; the concrete implementation that one gets by calling float or using a float literal would also subclass it... and so would the decimal type (why not? it's floating point -- 'float' doesn't mean 'BINARY fp';-). And I'd be happy, because gmpy.mpf could also subclass the abstract float!
I'd like concrete indications that the implementation of such a module runs into serious obstacles with the current approach. I'm not aware of any, apart from the occasional isinstance(x, float) check in the standard library. If that's all you're fighting, perhaps those occurrences should be fixed? They violate duck typing.
> And then finally we could have an abstract superclass 'number', whose subclasses are the abstract int and the abstract float (dunno 'bout complex, I'd be happy either way), and Python's typesystem would finally start being nice and cleanly organized instead of grand-prairie-level flat ...!-)
I think you can have families of numbers separate from subclassing relationships. I'm not at all sure that subclassing doesn't create more problems than it solves here.
--
--Guido van Rossum (home page: http://www.python.org/~guido/)