Re: [Numpy-discussion] [Python-3000] PEP 31XX: A Type Hierarchy for Numbers (and other algebraic entities)
On 4/28/07, Baptiste Carvello <baptiste13@altern.org> wrote:
2) In the PEP, the concepts are used *inconsistently*. Complex derives from Ring because the set of complex numbers *is a* ring. Int derives from Complex because integers are complex numbers (or, alternatively, the set of integers *is included in* the set of complex numbers). The consistent way could be to make the Complex class an instance of Ring, not a subclass.
Good point. In this structure, isinstance(3, Ring) really means that 3 is a member of some (unspecified) ring, not that 3 isa Ring, but the ambiguity there is probably the root cause of the problem with mixed-mode operations.

We should also say that isinstance(3, Complex) means that 3 is a member of some subring of the complex numbers, which preserves the claim that Complex is a subtype of Ring. Up to here, things make sense, but because of how ABCs work, we need issubclass(rational, Complex). I suppose that's true too, since isinstance(3.4, rational) means "3.4 is a member of the rational subring of the complex numbers", which implies that "3.4 is a member of some subring of the complex numbers."

There may be better names for these concepts. Perhaps suffixing every numeric ABC with "Element"? Do you have suggestions?

Jason Orendorff points out that Haskell typeclasses capture the fact that complex is an instance of Ring. I think imitating them as much as possible would indeed imply making the numeric ABCs into metaclasses (in Haskell terminology, "kinds"). To tell if the arguments to a function were in the same total order, you'd check if they had any common superclasses that were themselves instances of TotallyOrdered. I don't know enough about how metaclasses are typically used to know how that would conflict.

--
Namasté,
Jeffrey Yasskin
On 4/29/07, Jeffrey Yasskin <jyasskin@gmail.com> wrote:
On 4/28/07, Baptiste Carvello <baptiste13@altern.org> wrote:
2) In the PEP, the concepts are used *inconsistently*. Complex derives from Ring because the set of complex numbers *is a* ring. Int derives from Complex because integers are complex numbers (or, alternatively, the set of integers *is included in* the set of complex numbers). The consistent way could be to make the Complex class an instance of Ring, not a subclass.
Good point. In this structure, isinstance(3, Ring) really means that 3 is a member of some (unspecified) ring, not that 3 isa Ring,
To ask whether x is a Ring, you'd use issubclass(x, Ring). Now, in a different context (still in Python) you might define a class Ring whose *instances* are rings; but in the current draft of PEP 3141, Ring is a class whose subclasses are rings. [BTW "isa" is not an English word, nor used anywhere in Python. If "isa" means "is a" you might as well write proper English; if it has additional connotations, they're likely lost to this crowd so you can't count on your readers knowing the distinction.]
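For concreteness, a minimal sketch of the two questions being distinguished here; Ring and Integer are hypothetical stand-ins, not classes from the PEP draft:

    from abc import ABCMeta

    class Ring(metaclass=ABCMeta):
        """Subclasses are rings; instances are elements of some ring."""

    class Integer(Ring):
        """The integers, viewed as a ring."""
        def __init__(self, value):
            self.value = value

    issubclass(Integer, Ring)     # True: "Integer *is a* ring"
    isinstance(Integer(3), Ring)  # True: "3 is a member of some ring"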
but the ambiguity there is probably the root cause of the problem with mixed-mode operations.
(Where by mixed-mode operations you mean e.g. trying to add or multiply members of two *different* rings, right?)
We should also say that isinstance(3, Complex) means that 3 is a member of some subring of the complex numbers, which preserves the claim that Complex is a subtype of Ring.
Hm... it's beginning to look like binary operations in the mathematical sense just don't have an exact equivalent in OO type theory. (Or vice versa.) There really isn't a way to define an operation binop(a: T, b: T) -> T in such a way that it is clear what should happen if a and b are members of two different subtypes of T, named T1 and T2. Classic OO seems to indicate that this must be defined, and there are plenty of examples that seem to agree, e.g. when T1 and T2 are trivial subtypes of T (maybe each adding a different inessential method). OTOH we have the counter-example where T==Ring and T1 and T2 are two different, unrelated Rings, and the result may not be defined.

Hmm... Maybe the conclusion to draw from this is that we shouldn't make Ring a class? Maybe it ought to be a metaclass, so we could ask isinstance(Complex, Ring)? Perhaps a similar line of reasoning might apply to PartiallyOrdered and TotallyOrdered.
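A small illustration of that counter-example, with two hypothetical unrelated rings (Mod5 and Mod7 are made up for the sketch; neither appears in the PEP):

    class Ring:
        pass

    class Mod5(Ring):                  # integers modulo 5
        def __init__(self, n):
            self.n = n % 5
        def __add__(self, other):
            if isinstance(other, Mod5):
                return Mod5(self.n + other.n)
            return NotImplemented      # no sensible sum with another ring

    class Mod7(Ring):                  # integers modulo 7
        def __init__(self, n):
            self.n = n % 7
        def __add__(self, other):
            if isinstance(other, Mod7):
                return Mod7(self.n + other.n)
            return NotImplemented

    Mod5(3) + Mod5(4)    # fine: both operands live in the same ring
    # Mod5(3) + Mod7(4)  # TypeError: both sides return NotImplemented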
Up to here, things make sense, but because of how ABCs work, we need issubclass(rational, Complex). I suppose that's true too, since isinstance(3.4, rational) means "3.4 is a member of the rational subring of the complex numbers", which implies that "3.4 is a member of some subring of the complex numbers."
There may be better names for these concepts. Perhaps suffixing every numeric ABC with "Element"? Do you have suggestions?
Maybe we should stop trying to capture radically different mathematical number systems using classes or types, and limit ourselves to capturing the systems one learns in high school: C, R, Q, Z, and (perhaps) N (really N0). The concrete types would be complex <: C, float<:R, Decimal<:R, int<:Z. NumPy would have many more. One could argue that float and Decimal are <:Q, but I'm not sure if that makes things better pragmatically; I guess I'm coming from the old Algol school where float was actually called real (and in retrospect I wish I'd called it that in Python). I'd rather reserve membership of Q for an infinite precision rational type (which many people have independently implemented).
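A hedged sketch of what that reduced tower could look like as ABCs, assuming virtual registration so the builtins need not inherit from anything; the names simply mirror C, R, Q, Z:

    from abc import ABCMeta
    from decimal import Decimal

    class Complex(metaclass=ABCMeta): pass     # C
    class Real(Complex): pass                  # R
    class Rational(Real): pass                 # Q
    class Integral(Rational): pass             # Z

    Complex.register(complex)
    Real.register(float)
    Real.register(Decimal)                     # deliberately not under Rational
    Integral.register(int)

    isinstance(3, Real)         # True: int <: Z <: Q <: R
    issubclass(float, Complex)  # True: float <: R <: C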
Jason Orendorff points out that Haskell typeclasses capture the fact that complex is an instance of Ring. I think imitating them as much as possible would indeed imply making the numeric ABCs into metaclasses (in Haskell terminology, "kinds"). To tell if the arguments to a function were in the same total order, you'd check if they had any common superclasses that were themselves instances of TotallyOrdered. I don't know enough about how metaclasses are typically used to know how that would conflict.
The more I think about it, it sounds like the right thing to do. To take PartiallyOrdered (let's say PO for brevity) as an example, the Set class should specify PO as a metaclass. The PO metaclass could require that the class implement __lt__ and __le__. If it found a class that didn't implement them, it could make the class abstract by adding the missing methods to its __abstractmethods__ attribute. Or, if it found that the class implemented one but not the other, it could inject a default implementation of the other in terms of the one and __eq__.

This leaves us with the question of how to check whether an object is partially orderable. Though that may really be the wrong question -- perhaps you should ask whether two objects are partially orderable relative to each other. For that, you would first have to find the most derived common base class (if that is even always a defined operation(*)), and then check whether that class is an instance of PO. It seems easier to just try the comparison -- duck typing isn't dead yet! I don't think this is worth introducing a new inspection primitive ('ismetainstance(x, PO)').

The PO class may still be useful for introspection: at the meta-level, it may be useful occasionally to insist that or inquire whether a given *class* is PO. (Or TO, or a Ring, etc.)

Now, you could argue that Complex should also be a metaclass. While that may be mathematically meaningful (for all I know there are people doing complex number theory using Complex[Z/n]), for Python's numeric classes I think it's better to make Complex a regular class representing all the usual complex numbers (i.e. a pair of Real numbers). I expect that the complex subclasses used in practice are all happy under mixed arithmetic using the usual definition of mixed arithmetic: convert both arguments to a common base class and compute the operation in that domain.

(*) consider classes AB derived from (B, A) and BA derived from (A, B). Would A or B be the most derived base class? Or would we have to skip both and continue the search with A's and B's base classes?

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
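A minimal sketch of that PO idea; all of the machinery below is invented for illustration (the PEP specifies none of it). The metaclass injects the missing comparison when it can, and marks the class abstract when it cannot:

    class PO(type):
        """Partially ordered: classes must end up with __lt__ and __le__."""
        def __init__(cls, name, bases, ns):
            super().__init__(name, bases, ns)
            has_lt = cls.__lt__ is not object.__lt__
            has_le = cls.__le__ is not object.__le__
            if has_lt and not has_le:
                # Inject __le__ in terms of __lt__ and __eq__.
                cls.__le__ = lambda self, other: self < other or self == other
            elif has_le and not has_lt:
                cls.__lt__ = lambda self, other: self <= other and not self == other
            elif not (has_lt and has_le):
                # Neither was supplied: mark the class abstract so it cannot be instantiated.
                cls.__abstractmethods__ = frozenset({"__lt__", "__le__"})

    class Fraction(metaclass=PO):      # hypothetical element type
        def __init__(self, num, den):
            self.num, self.den = num, den
        def __eq__(self, other):
            return self.num * other.den == other.num * self.den
        def __lt__(self, other):
            return self.num * other.den < other.num * self.den

    Fraction(1, 2) <= Fraction(2, 3)   # True, via the injected __le__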
Guido van Rossum wrote:
Maybe we should stop trying to capture radically different mathematical number systems using classes or types, and limit ourselves to capturing the systems one learns in high school: C, R, Q, Z, and (perhaps) N (really N0). The concrete types would be complex <: C, float<:R, Decimal<:R, int<:Z. NumPy would have many more. One could argue that float and Decimal are <:Q, but I'm not sure if that makes things better pragmatically; I guess I'm coming from the old Algol school where float was actually called real (and in retrospect I wish I'd called it that in Python). I'd rather reserve membership of Q for an infinite precision rational type (which many people have independently implemented).
I haven't really been following this discussion, given my lack of understanding of the issues involved, but I want to make one observation about the discussion.

Normally, when someone suggests an idea for a major addition to Python of this scope, the standard response is that they should go develop it as a 3rd-party package and see if it becomes popular, and if it does it can be considered for inclusion in the standard library.

Unfortunately, we are somewhat constrained in this case, because we're talking about altering the behavior of some fairly fundamental built-in types - which means that it's going to be hard to implement it as an add-on library.

And yet, it seems to me that this particular set of features really does need a long gestation period compared to some others that we've discussed. Most of the 3000-series PEPs are really about a fairly small set of base decisions. Even the long PEPs are more about descriptions of the logical consequences of those decisions than about the decisions themselves. The ABC PEPs are different, in that they are standardizing a whole slew of things all at once. Moreover, I think there is a real danger here of a kind of ivory-tower decision making, isolated from day-to-day writing of applications to keep them grounded.

The question of whether to limit to the "high school" number classes, or to go with the more mathematically abstract set of things is a decision which, it seems to me, ought to be made in the context of actual use cases, rather than abstract theorizing about use cases. (I'm also generally supportive about copying what other languages have done, on the theory that they too have been tested by real-world use.)

If it were technically possible, I would recommend that this PEP have to run the same gauntlet that any other large library addition would, which is to go through a long period of community feedback and criticism, during which a large number of people actually attempt to use the feature for real work. I also think, in this case, that the special power of "core python developer fiat" should not be invoked unless it has to be, because I don't think that there is a firm basis for making such a judgment yet.

I would also suggest that some thought be given to ways to allow for experimentation with different variations of this feature. If it is not possible to make these numeric classes definable in Python at runtime, then perhaps it is possible instead to allow for custom builds of the Python 3000 executable with different arrangements and configurations of these built-in classes.

--
Talin
On 4/29/07, Talin <talin@acm.org> wrote:
If it were technically possible, I would recommend that this PEP have to run the same gauntlet that any other large library addition would, which is to go through a long period of community feedback and criticism, during which a large number of people actually attempt to use the feature for real work.
This sounds like a pretty good reason to add __isinstance__() and __issubclass__(). Then the various ABCs can be distributed as third-party modules but can still make sure that things like isinstance(42, Real) and issubclass(int, Complex) are True (or whatever other assertions people want to make).

STeVe

--
I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. --- Bucky Katt, Get Fuzzy
On 4/29/07, Steven Bethard <steven.bethard@gmail.com> wrote:
On 4/29/07, Talin <talin@acm.org> wrote:
If it were technically possible, I would recommend that this PEP have to run the same gauntlet that any other large library addition would, which is to go through a long period of community feedback and criticism, during which a large number of people actually attempt to use the feature for real work.
This sounds like a pretty good reason to add __isinstance__() and __issubclass__(). Then the various ABCs can be distributed as third-party modules but can still make sure that things like isinstance(42, Real) and issubclass(int, Complex) are True (or whatever other assertions people want to make).
Or isexample, so that we aren't locked into implementing ABCs as base classes.

    def isexample(val, ABC):
        return ABC.__example__(val)

    class ABC(object):
        @classmethod
        def __example__(cls, val):
            if val in cls.__instance:
                return True
            if val in cls.__non_instance:
                return False
            for instclass in type(val).__mro__:
                if instclass in cls.__class:
                    return True
                if instclass in cls.__non_class:
                    return False
            return False
        # ... methods to register classes, unregister subclasses, etc.

-jJ
On 4/29/07, Jim Jewett <jimjjewett@gmail.com> wrote:
Or isexample, so that we aren't locked into implementing ABCs as base classes.
You don't have to use the feature even if it exists. :-) I think there are good reasons to support overriding isinstance/issubclass beyond ABCs.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
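For concreteness, a sketch of what such an overload can look like; the metaclass hook names __instancecheck__ and __subclasscheck__ are the spelling PEP 3119 settled on, and ABCRegistry/Real are purely illustrative names:

    class ABCRegistry(type):
        """Illustrative metaclass: overloads isinstance()/issubclass() via a registry."""
        def __init__(cls, name, bases, ns):
            super().__init__(name, bases, ns)
            cls._registry = set()

        def register(cls, other):
            cls._registry.add(other)

        def __instancecheck__(cls, instance):
            return cls.__subclasscheck__(type(instance))

        def __subclasscheck__(cls, subclass):
            return any(issubclass(subclass, reg) for reg in cls._registry)

    class Real(metaclass=ABCRegistry):    # a third-party ABC, not a builtin change
        pass

    Real.register(int)
    Real.register(float)

    isinstance(42, Real)    # True, without touching int
    issubclass(int, Real)   # True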
On 4/29/07, Talin <talin@acm.org> wrote:
Guido van Rossum wrote:
Maybe we should stop trying to capture radically different mathematical number systems using classes or types, and limit ourselves to capturing the systems one learns in high school: C, R, Q, Z, and (perhaps) N (really N0). The concrete types would be complex <: C, float<:R, Decimal<:R, int<:Z. NumPy would have many more. One could argue that float and Decimal are <:Q, but I'm not sure if that makes things better pragmatically; I guess I'm coming from the old Algol school where float was actually called real (and in retrospect I wish I'd called it that in Python). I'd rather reserve membership of Q for an infinite precision rational type (which many people have independently implemented).
I haven't really been following this discussion, given my lack of understanding of the issues involved, but I want to make one observation about the discussion.
Normally, when someone suggests an idea for a major addition to Python of this scope, the standard response is that they should go develop it as a 3rd-party package and see if becomes popular, and if it does it can be considered for inclusion in the standard library.
Unfortunately, we are somewhat constrained in this case, because we're talking about altering the behavior of some fairly fundamental built-in types - which means that it's going to be hard to implement it as an add-on library.
Not entirely true in the latest incarnation -- the proposed overloading of isinstance() and issubclass() will make it possible to add Ring-ness to float and int as an afterthought from a user-defined class.
And yet, it seems to me that this particular set of features really does need a long gestation period compared to some others that we've discussed. Most of the 3000-series PEPs are really about a fairly small set of base decisions. Even the long PEPs are more about descriptions of the logical consequences of those decisions than about the decisions themselves. The ABC PEPs are different, in that they are standardizing a whole slew of things all at once. Moreover, I think there is a real danger here of a kind of ivory-tower decision making, isolated from day-to-day writing of applications to keep them grounded.
The question of whether to limit to the "high school" number classes, or to go with the more mathematically abstract set of things is a decision which, it seems to me, ought to be made in the context of actual use cases, rather than abstract theorizing about use cases. (I'm also generally supportive about copying what other languages have done, on the theory that they too have been tested by real-world use.)
If that's the criterion (and I agree) it should be fairly obvious by now that Ring and MonoidUnderPlus and everything in between should be cast out, and we should stick with high school math. (Just note that Travis Oliphant, Mr. NumPy Himself, pretty much confessed being confused by the algebra stuff).
If it were technically possible, I would recommend that this PEP have to run the same gauntlet that any other large library addition would, which is to go through a long period of community feedback and criticism, during which a large number of people actually attempt to use the feature for real work. I also think, in this case, that the special power of "core python developer fiat" should not be invoked unless it has to be, because I don't think that there is a firm basis for making such a judgment yet.
I would also suggest that some thought be given to ways to allow for experimentation with different variations of this feature. If it is not possible to make these numeric classes definable in Python at runtime, then perhaps it is possible instead to allow for custom builds of the Python 3000 executable with different arrangements and configurations of these built-in classes.
So how about we reduce the scope of our (!) PEP (or perhaps of a new one) to two items: (a) add @abstractmethod, and (b) overload isinstance() and issubclass()? Library authors can do everything they want with those, and we can always add a specific set of ABCs for containers and/or numbers later in the 3.0 development cycle. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
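A brief sketch of what (a) plus (b) let a library author do; the spellings below (abc.ABCMeta, abstractmethod, register) are the ones this reduced proposal eventually took, and Ring is again only an example:

    from abc import ABCMeta, abstractmethod

    class Ring(metaclass=ABCMeta):        # lives in a third-party library
        @abstractmethod
        def __add__(self, other): ...
        @abstractmethod
        def __mul__(self, other): ...

    Ring.register(int)                    # (b): no change to the builtin needed
    Ring.register(float)

    issubclass(int, Ring)    # True
    isinstance(3.0, Ring)    # True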
GvR wrote:
So how about we reduce the scope of our (!) PEP (or perhaps of a new one) to two items: (a) add @abstractmethod, and (b) overload isinstance() and issubclass()? Library authors can do everything they want with those, and we can always add a specific set of ABCs for containers and/or numbers later in the 3.0 development cycle.
-1. Adding mechanism without content seems less than ideal, despite Talin's misgivings. I'd recommend adding the base classes in, and see how they work in the earliest 3.0 releases, then modify them as necessary in subsequent releases. Bill
On 4/29/07, Guido van Rossum <guido@python.org> wrote:
Hmm... Maybe the conclusion to draw from this is that we shouldn't make Ring a class? Maybe it ought to be a metaclass, so we could ask isinstance(Complex, Ring)?
Yes; all the ABCs are assertions about the class. (Zope interfaces do support instance-specific interfaces, which has been brought up as a relative weakness of ABCs.)

The only thing two subclasses of an *Abstract* class need to have in common is that they both (independently) meet the requirements of the ABC. If not for complexity of implementation, that would be better described as a common metaclass.

Using a metaclass would also solve the "when to gripe" issue; the metaclass would gripe if it couldn't make every method concrete. If this just used the standard metaclass machinery, then it would mean a much deeper metaclass hierarchy than we're used to; MutableSet would have a highly derived metaclass.
The more I think about it, it sounds like the right thing to do. To take PartiallyOrdered (let's say PO for brevity) as an example, the Set class should specify PO as a metaclass. The PO metaclass could require that the class implement __lt__ and __le__. If it found a class that didn't implement them, it could make the class abstract by adding the missing methods to its __abstractmethods__ attribute.
Or by making it a sub(meta)class, instead of a (regular instance) class.
if it found that the class implemented one but not the other, it could inject a default implementation of the other in terms of the one and __eq__.
This also allows greater freedom in specifying which subsets of methods must be defined.
Now, you could argue that Complex should also be a metaclass. While that may be mathematically meaningful (for all I know there are people doing complex number theory using Complex[Z/n]), for Python's numeric classes I think it's better to make Complex a regular class representing all the usual complex numbers (i.e. a pair of Real numbers).
complex already meets that need. Complex would be the metaclass representing the restrictions on the class, so that independent implementations wouldn't have to fake-inherit from complex.
I expect that the complex subclasses used in practice are all happy under mixed arithmetic using the usual definition of mixed arithmetic: convert both arguments to a common base class and compute the operation in that domain.
It is reasonable to insist that all Complex classes have a way to transform their instances into (builtin) complex instances, if only as a final fallback. There is no need for complex to be a base class.

-jJ
On 4/29/07, Jim Jewett <jimjjewett@gmail.com> wrote:
On 4/29/07, Guido van Rossum <guido@python.org> wrote:
Hmm... Maybe the conclusion to draw from this is that we shouldn't make Ring a class? Maybe it ought to be a metaclass, so we could ask isinstance(Complex, Ring)?
Yes; all the ABCs are assertions about the class.
I don't think so. Many are quite useful for introspection of instances as well, e.g. Hashable/Iterable (the whole "One Trick Ponies" section) as well as the distinction between Sequence and Mapping. It's the binary operations where the class comes into play.
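A couple of the instance-level checks in question, using collections.abc, the module where these One Trick Ponies later landed:

    from collections.abc import Hashable, Iterable

    isinstance(3, Hashable)        # True
    isinstance([1, 2], Hashable)   # False: list sets __hash__ to None
    isinstance([1, 2], Iterable)   # True
    isinstance(3, Iterable)        # False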
(Zope interfaces do support instance-specific interfaces, which has been brought up as a relative weakness of ABCs.)
The only thing two subclasses of an *Abstract* class need to have in common is that they both (independently) meet the requirements of the ABC. If not for complexity of implementation, that would be better described as a common metaclass.
Again, not so fast; it depends. The way the Set section of the PEP is currently written, all sets are comparable (in the subset/superset sense) to all other sets, and for ComposableSet instances the union, intersection and both types of differences are also computable across class boundaries.
Using a metaclass would also solve the "when to gripe" issue; the metaclass would gripe if it couldn't make every method concrete. If this just used the standard metaclass machinery, then it would mean a much deeper metaclass hierarchy than we're used to; MutableSet would have a highly derived metaclass.
I think you're going way too fast here.
The more I think about it, it sounds like the right thing to do. To take PartiallyOrdered (let's say PO for brevity) as an example, the Set class should specify PO as a metaclass. The PO metaclass could require that the class implement __lt__ and __le__. If it found a class that didn't implement them, it could make the class abstract by adding the missing methods to its __abstractmethods__ attribute.
Or by making it a sub(meta)class, instead of a (regular instance) class.
That makes no sense. Deciding on the fly whether something should be a class or a metaclass sounds like a fine recipe for end-user confusion.
if it found that the class implemented one but not the other, it could inject a default implementation of the other in terms of the one and __eq__.
This also allows greater freedom in specifying which subsets of methods must be defined.
Now, you could argue that Complex should also be a metaclass. While that may be mathematically meaningful (for all I know there are people doing complex number theory using Complex[Z/n]), for Python's numeric classes I think it's better to make Complex a regular class representing all the usual complex numbers (i.e. a pair of Real numbers).
complex already meets that need. Complex would be the metaclass representing the restrictions on the class, so that independent implementations wouldn't have to fake-inherit from complex.
I was thinking of other representations of complex numbers as found e.g. in numpy. These vary mostly by using fewer (or more?) bits for the real and imag parts. They can't realistically subclass complex, as their implementation is independent; they should subclass Complex, to indicate that they implement the Complex API. I really think you're going too far with the metaclass idea.

Now, if we had parameterizable types (for which I've proposed a notation, e.g. list[int] would be a list of integers, and Mapping[String, Real] would be a mapping from strings to real numbers; but I don't expect this to be in py3k, as it needs more experimentation), Complex might be a parameterizable type, and e.g. the current concrete complex type could be equated to Complex[float]; but without that, I think it's fine to see Complex as the Abstract Base Class and complex as one concrete representation.
I expect that the complex subclasses used in practice are all happy under mixed arithmetic using the usual definition of mixed arithmetic: convert both arguments to a common base class and compute the operation in that domain.
It is reasonable to insist that all Complex classes have a way to transform their instances into (builtin) complex instances, if only as a final fallback. There is no need for complex to be a base class.
I agree complex shouldn't be a base class (apologies if I implied that by using lowercase) but I still think Complex should be a base class.

To be honest, I'm not sure what should happen with mixed operations between classes that only have an abstract common base class. The normal approach for binary operations is that each side gets a try. For pairs like int+float this is easy; int.__add__ returns NotImplemented in this case, and then float.__radd__ is called which converts the first argument to float and returns a float. For pairs like numpy's complex 32-bit float and numpy's complex 64-bit float it should also be easy (numpy is aware of both types and hence always gets to choose); and for numpy's complex combined with the built-in complex it's easy enough too (again, numpy always gets to choose, this time because the built-in complex doesn't know about numpy).

But what if I wrote my own complex type based on decimal.Decimal, and I encountered a numpy complex? numpy doesn't know about my type and hence passes the ball to me; but perhaps I don't know about numpy either, and then a TypeError will be raised. So, to be a good citizen, I could check if the other arg was a Complex of unknown provenance, and then I could convert to the built-in complex and pass the ball to that type. But if all good citizens lived by that rule (and it *appears* to be a reasonable rule), then numpy, being the ultimate good citizen, would also convert *my* type to the built-in complex. But that would mean that if my type *did* know about numpy (but numpy didn't know about mine), I wouldn't get to choose what to do in half the cases (if the numpy instance was on the left, it would get and take the first opportunity).

Perhaps we need to extend the built-in operation processing so that if both sides return NotImplemented, before raising TypeError, we look for some common base type implementing the same operation. The abstract Complex type could provide abstract implementations of the binary operators that would convert their arguments to the concrete complex type and compute the result that way. (Hah! Another use case for abstract methods with a useful implementation! :-)

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
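A hedged sketch of that last idea follows. It does not model the proposed extension to the built-in operation processing; instead the fallback simply lives on an abstract Complex base class and kicks in for subclasses that don't override __add__ themselves. All names below are illustrative rather than taken from the PEP.

    from abc import ABCMeta, abstractmethod

    class Complex(metaclass=ABCMeta):
        @abstractmethod
        def __complex__(self):
            """Convert to the builtin complex -- the common fallback domain."""

        def __add__(self, other):
            if isinstance(other, Complex):
                # Fallback: compute in the builtin complex domain.
                return complex(self) + complex(other)
            return NotImplemented

        __radd__ = __add__        # addition is commutative here

    class MyComplex(Complex):     # hypothetical user type, e.g. Decimal-based
        def __init__(self, real, imag):
            self.real, self.imag = real, imag
        def __complex__(self):
            return complex(self.real, self.imag)

    MyComplex(1, 2) + MyComplex(3, 4)   # (4+6j), via the inherited fallback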
On 4/29/07, Guido van Rossum <guido@python.org> wrote:
On 4/29/07, Jim Jewett <jimjjewett@gmail.com> wrote:
On 4/29/07, Guido van Rossum <guido@python.org> wrote:
Hmm... Maybe the conclusion to draw from this is that we shouldn't make Ring a class? Maybe it ought to be a metaclass, so we could ask isinstance(Complex, Ring)?
Yes; all the ABCs are assertions about the class.
I don't think so. Many are quite useful for introspection of instances as well, e.g. Hashable/Iterable (the whole "One Trick Ponies" section) as well as the distinction between Sequence and Mapping. It's the binary operations where the class comes into play.
I think those are among the reasons it seemed like we should use inheritance. Treating them as assertions about each instance makes sense -- but it makes just as much sense to treat them as assertions about the class. (Is it meaningful to ask about the size of this? Well, for objects of this class, it is.)
The only thing two subclasses of an *Abstract* class need to have in common is that they both (independently) meet the requirements of the ABC. If not for complexity of implementation, that would be better described as a common metaclass.
Again, not so fast; it depends. The way the Set section of the PEP is currently written, all sets are comparable (in the subset/superset sense) to all other sets, and for ComposableSet instances the union, intersection and both types of differences are also computable across class boundaries.
That is an additional constraint which the Set metaclass imposes -- and it does this by effectively coercing instances of both Set classes to (builtin) frozenset instances to create the return value. (It doesn't actually create the intermediate frozenset instance, but MySet() | MySet() will return a frozenset rather than a MySet.)
Using a metaclass would also solve the "when to gripe" issue; the metaclass would gripe if it couldn't make every method concrete. If this just used the standard metaclass machinery, then it would mean a much deeper metaclass hierarchy than we're used to; MutableSet would have a highly derived metaclass.
I think you're going way too fast here.
To be more precise, classes implementing MutableSet would have MutableSet as a metaclass, which would mean (through inheritance) that they would also have ComposableSet, Set, Sized, Iterable, and PartiallyOrdered as metaclasses. This is legal today, but *I* haven't seen code in the wild with a metaclass hierarchy that deep.
The more I think about it, it sounds like the right thing to do. To take PartiallyOrdered (let's say PO for brevity) as an example, the Set class should specify PO as a metaclass. The PO metaclass could require that the class implement __lt__ and __le__. If it found a class that didn't implement them, it could make the class abstract by adding the missing methods to its __abstractmethods__ attribute.
Or by making it a sub(meta)class, instead of a (regular instance) class.
That makes no sense. Deciding on the fly whether something should be a class or a metaclass sounds like a fine recipe for end-user confusion.
Perhaps. But how is that different from deciding on the fly whether the class will be instantiable? Either way, the user does need to keep track of whether it is abstract; the difference is that when inheriting, they would have to say

    (metaclass=ABC)  # I know it is an abstract class

instead of

    (ABC)  # Might be a regular class, but I can never instantiate it directly
I was thinking of other representations of complex numbers as found e.g. in numpy. These vary mostly by using fewer (or more?) bits for the real and imag parts. They can't realistically subclass complex, as their implementation is independent; they should subclass Complex, to indicate that they implement the Complex API. I really think you're going too far with the metaclass idea.
Or they could have Complex as a metaclass, which would serve the same introspective needs.
I expect that the complex subclasses used in practice are all happy under mixed arithmetic using the usual definition of mixed arithmetic: convert both arguments to a common base class and compute the operation in that domain.
It is reasonable to insist that all Complex classes have a way to transform their instances into (builtin) complex instances, if only as a final fallback. There is no need for complex to be a base class.
I agree complex shouldn't be a base class (apologies if I implied that by using lowercase) but I still think Complex should be a base class.
Why? What do you get by inheriting from it, that you couldn't get by letting it inject any missing methods?
take the first opportunity). Perhaps we need to extend the built-in operation processing so that if both sides return NotImplemented, before raising TypeError, we look for some common base type implementing the same operation. The abstract Complex type could provide abstract implementations of the binary operators that would convert their arguments to the concrete complex type and compute the result that way. (Hah! Another use case for abstract methods with a useful implementation! :-)
That seems sensible -- but you could also just do it in the __r*__ method, since the regular method already returned NotImplemented. So Complex could inject a __radd__ method that tried

    self.__add__(complex(other))  # since addition is commutative for Complex

If the method weren't in either class, I would expect it to be a more general fallback, like "if we don't have __lt__, try __cmp__".

-jJ