Automatic total ordering
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__, it seems it's a good time to do something with the missing rich comparators gotcha, e.g. given a class that defines __eq__ and __lt__, automatically provide the rest of the missing comparisons. Yes, it can be done with a custom metaclass or (in 2.6+) with a class decorator [1] but (a) 99% of the time that's what one expects so "explicit is better than implicit" doesn't count and (b) a builtin implementation might well be more efficient. There might be arguments why this would be a bad idea but I really can't think of any.

George

[1] http://www.voidspace.org.uk/python/weblog/arch_d7_2008_10_04.shtml
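A sketch of what such a decorator might look like, in the spirit of [1]. This is illustrative only, not an existing builtin: the name `total_ordering`, the class `Point`, and the helper layout are all hypothetical, and it assumes the decorated class defines `__eq__` and `__lt__` with total-ordering semantics.

```python
def total_ordering(cls):
    """Fill in the missing rich comparisons from __eq__ and __lt__.

    Illustrative sketch only: assumes __eq__ and __lt__ define a
    total ordering on instances of cls.
    """
    def __le__(self, other): return self < other or self == other
    def __gt__(self, other): return not (self < other or self == other)
    def __ge__(self, other): return not self < other
    def __ne__(self, other): return not self == other
    for name, func in (('__le__', __le__), ('__gt__', __gt__),
                       ('__ge__', __ge__), ('__ne__', __ne__)):
        if name not in cls.__dict__:  # don't clobber explicit definitions
            setattr(cls, name, func)
    return cls

@total_ordering
class Point:
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return self.x == other.x
    def __lt__(self, other):
        return self.x < other.x
```

With only `__eq__` and `__lt__` written by hand, all six operators then work: `Point(1) <= Point(2)`, `Point(2) > Point(1)`, and so on.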
On Wed, Oct 15, 2008 at 11:57 AM, George Sakkis <george.sakkis@gmail.com> wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__, it seems it's a good time to do something with the missing rich comparators gotcha, e.g. given a class that defines __eq__ and __lt__, automatically provide the rest of the missing comparisons. Yes, it can be done with a custom metaclass or (in 2.6+) with a class decorator [1] but (a) 99% of the time that's what one expects so "explicit is better than implicit" doesn't count and (b) a builtin implementation might well be more efficient. There might be arguments why this would be a bad idea but I really can't think of any.
+1 André
George
[1] http://www.voidspace.org.uk/python/weblog/arch_d7_2008_10_04.shtml
_______________________________________________ Python-ideas mailing list Python-ideas@python.org http://mail.python.org/mailman/listinfo/python-ideas
Sure, but let's aim for 3.1. The goal for 3.0 is stability and getting it released!

On Wed, Oct 15, 2008 at 7:57 AM, George Sakkis <george.sakkis@gmail.com> wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__, it seems it's a good time to do something with the missing rich comparators gotcha, e.g. given a class that defines __eq__ and __lt__, automatically provide the rest of the missing comparisons. Yes, it can be done with a custom metaclass or (in 2.6+) with a class decorator [1] but (a) 99% of the time that's what one expects so "explicit is better than implicit" doesn't count and (b) a builtin implementation might well be more efficient. There might be arguments why this would be a bad idea but I really can't think of any.
George
[1] http://www.voidspace.org.uk/python/weblog/arch_d7_2008_10_04.shtml
-- --Guido van Rossum (home page: http://www.python.org/~guido/)
On Wed, Oct 15, 2008 at 1:55 PM, Guido van Rossum <guido@python.org> wrote:

Sure, but let's aim for 3.1.
The goal for 3.0 is stability and getting it released!
Definitely, that's why it's posted on python-ideas and not on the python-3k list.
On Wed, Oct 15, 2008 at 7:57 AM, George Sakkis <george.sakkis@gmail.com> wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__, it seems it's a good time to do something with the missing rich comparators gotcha, e.g. given a class that defines __eq__ and __lt__, automatically provide the rest of the missing comparisons. Yes, it can be done with a custom metaclass or (in 2.6+) with a class decorator [1] but (a) 99% of the time that's what one expects so "explicit is better than implicit" doesn't count and (b) a builtin implementation might well be more efficient. There might be arguments why this would be a bad idea but I really can't think of any.
George
[1] http://www.voidspace.org.uk/python/weblog/arch_d7_2008_10_04.shtml
-- --Guido van Rossum (home page: http://www.python.org/~guido/)
George Sakkis wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__,
An entry for __cmp__ was in the 3.0c1 doc, which confused me. It is now gone in http://docs.python.org/dev/3.0/reference/datamodel.html#special-method-names Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
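Those inherited defaults are easy to see directly. A quick illustration (this reflects the 3.x behavior Terry describes: identity-based equality, no default ordering):

```python
# The comparison defaults every class inherits from object:
# identity-based (in)equality, and no ordering at all.
a, b = object(), object()

print(a == b)   # False: distinct objects are unequal by default
print(a != b)   # True: the complement of __eq__, for free

try:
    a < b       # ordering operators are not defined by default
except TypeError:
    print("unorderable")
```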
On Wed, Oct 15, 2008 at 11:03 AM, Terry Reedy <tjreedy@udel.edu> wrote:
George Sakkis wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__,
An entry for __cmp__ was in the 3.0c1 doc, which confused me. It is now gone in http://docs.python.org/dev/3.0/reference/datamodel.html#special-method-names
Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
I should also note that part of George's proposal has already been implemented: if you define __eq__, you get a complementary __ne__ for free. However it doesn't work the other way around (defining __ne__ doesn't give you __eq__ for free), and there is no similar relationship for the ordering operators. The reason for the freebie is that it's *extremely* unlikely to want to define != as something other than the complement of == (the only use case is IEEE compliant NaNs); however it's pretty common to define non-total orderings (e.g. set inclusion). -- --Guido van Rossum (home page: http://www.python.org/~guido/)
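The freebie Guido describes can be demonstrated with a class that defines nothing but `__eq__` (the class and names below are purely for illustration):

```python
class Color:
    """Defines only __eq__; __ne__ is derived as its complement,
    but no ordering operators are."""
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return self.name == other.name

red, blue = Color("red"), Color("blue")
print(red == Color("red"))  # True
print(red != blue)          # True: the free __ne__
try:
    red < blue              # no ordering is derived from __eq__
except TypeError:
    print("no ordering")
```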
On Wed, Oct 15, 2008 at 2:11 PM, Guido van Rossum <guido@python.org> wrote:

On Wed, Oct 15, 2008 at 11:03 AM, Terry Reedy <tjreedy@udel.edu> wrote:
George Sakkis wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__,
An entry for __cmp__ was in the 3.0c1 doc, which confused me. It is now gone in
http://docs.python.org/dev/3.0/reference/datamodel.html#special-method-names
Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
I should also note that part of George's proposal has already been implemented: if you define __eq__, you get a complementary __ne__ for free. However it doesn't work the other way around (defining __ne__ doesn't give you __eq__ for free), and there is no similar relationship for the ordering operators. The reason for the freebie is that it's *extremely* unlikely to want to define != as something other than the complement of == (the only use case is IEEE compliant NaNs); however it's pretty common to define non-total orderings (e.g. set inclusion).
Partial orderings are certainly used, but I believe they are far less common than total ones. Regardless, a partially ordered class has to explicitly define the supported methods with the desired semantics anyway; the proposed change wouldn't make this any harder. George
On 15 Oct 2008, at 19:35, George Sakkis wrote:
On Wed, Oct 15, 2008 at 2:11 PM, Guido van Rossum <guido@python.org> wrote:
On Wed, Oct 15, 2008 at 11:03 AM, Terry Reedy <tjreedy@udel.edu> wrote:
George Sakkis wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__,
An entry for __cmp__ was in the 3.0c1 doc, which confused me. It is now gone in http://docs.python.org/dev/3.0/reference/datamodel.html#special-method-names
Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
I should also note that part of George's proposal has already been implemented: if you define __eq__, you get a complementary __ne__ for free. However it doesn't work the other way around (defining __ne__ doesn't give you __eq__ for free), and there is no similar relationship for the ordering operators. The reason for the freebie is that it's *extremely* unlikely to want to define != as something other than the complement of == (the only use case is IEEE compliant NaNs); however it's pretty common to define non-total orderings (e.g. set inclusion).
Partial orderings are certainly used, but I believe they are far less common than total ones. Regardless, a partially ordered class has to explicitly define the supported methods with the desired semantics anyway; the proposed change wouldn't make this any harder.
I don't understand. In a mathematical ordering,

* x > y means the same as y < x
* x <= y means the same as x < y or x = y
* x >= y means the same as x > y or x = y

and this is irrespective of whether the ordering is partial or total. So, given __eq__ and __lt__, all the rest follows. E.g.

```
def __gt__(self, other):
    return other.__lt__(self)
```

etc... Where am I going wrong?

-- Arnaud
2008/10/15 Arnaud Delobelle <arnodel@googlemail.com>:
Partial orderings are certainly used, but I believe they are far less common than total ones. Regardless, a partially ordered class has to explicitly define the supported methods with the desired semantics anyway; the proposed change wouldn't make this any harder.
I don't understand. In a mathematical ordering,
* x > y means the same as y < x
* x <= y means the same as x < y or x = y
* x >= y means the same as x > y or x = y
and this is irrespective of whether the ordering is partial or total.
But for a total ordering 'not y < x' is a more efficient definition of 'x <= y' than 'x < y or x == y'. -- Marcin Kowalczyk qrczak@knm.org.pl http://qrnik.knm.org.pl/~qrczak/
On Wed, Oct 15, 2008 at 4:50 PM, Arnaud Delobelle <arnodel@googlemail.com> wrote:
On 15 Oct 2008, at 19:35, George Sakkis wrote:
On Wed, Oct 15, 2008 at 2:11 PM, Guido van Rossum <guido@python.org> wrote:
On Wed, Oct 15, 2008 at 11:03 AM, Terry Reedy <tjreedy@udel.edu> wrote:
George Sakkis wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__,
An entry for __cmp__ was in the 3.0c1 doc, which confused me. It is now gone in
http://docs.python.org/dev/3.0/reference/datamodel.html#special-method-names
Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
I should also note that part of George's proposal has already been implemented: if you define __eq__, you get a complementary __ne__ for free. However it doesn't work the other way around (defining __ne__ doesn't give you __eq__ for free), and there is no similar relationship for the ordering operators. The reason for the freebie is that it's *extremely* unlikely to want to define != as something other than the complement of == (the only use case is IEEE compliant NaNs); however it's pretty common to define non-total orderings (e.g. set inclusion).
Partial orderings are certainly used, but I believe they are far less common than total ones. Regardless, a partially ordered class has to explicitly define the supported methods with the desired semantics anyway; the proposed change wouldn't make this any harder.
I don't understand. In a mathematical ordering,
* x > y means the same as y < x
* x <= y means the same as x < y or x = y
* x >= y means the same as x > y or x = y
and this is irrespective of whether the ordering is partial or total.
So, given __eq__ and __lt__, all the rest follows. E.g.
```
def __gt__(self, other):
    return other.__lt__(self)
```

etc...
Where am I going wrong?
For total orderings, one can define all methods in terms of self.__eq__ and self.__lt__, i.e. avoid making any assumption about `other`, e.g.:

```
def __gt__(self, other):
    return not (self == other or self < other)
```

Your __gt__ definition (which is also what Michael Foord does in his class decorator) may break mysteriously:

```
class Thing(object):
    def __init__(self, val):
        self.val = val

@total_ordering
class Thing_lt(Thing):
    def __lt__(self, other):
        return self.val < other.val
```

```
>>> t1 = Thing_lt(1)
>>> t2 = Thing(2)
>>> t1 < t2
True
>>> t1 > t2
...
RuntimeError: maximum recursion depth exceeded
```

George
It seems like a lot of people are missing the idea that you *only* need < (or >, <=, >=, with slightly different variations) for total ordering on two objects of the same class, so here it is:

    x < y
    x > y   -->  y < x
    x <= y  -->  not y < x
    x >= y  -->  not x < y
    x == y  -->  not x < y and not y < x
    x != y  -->  x < y or y < x

At least, that's what I got from the original post,

Brandon
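These identities can be packaged as a mix-in that derives everything from `__lt__` alone. This is a sketch under the stated assumptions — a total ordering, and both operands of the same class, since each derived method applies `<` to both operands (the class names are hypothetical):

```python
class TotalOrderFromLT:
    """Derive all six comparisons from a subclass's __lt__.

    Sketch only: correct when __lt__ is a total ordering and both
    operands support it, per the identities above.
    """
    def __gt__(self, other): return other < self
    def __le__(self, other): return not (other < self)
    def __ge__(self, other): return not (self < other)
    def __eq__(self, other): return not (self < other) and not (other < self)
    def __ne__(self, other): return (self < other) or (other < self)

class Version(TotalOrderFromLT):
    def __init__(self, n):
        self.n = n
    def __lt__(self, other):
        return self.n < other.n
```

Note that equality defined this way costs two `<` tests per comparison, which is part of why a single three-way comparison hook remains attractive.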
Guido van Rossum wrote:
On Wed, Oct 15, 2008 at 11:03 AM, Terry Reedy <tjreedy@udel.edu> wrote:
George Sakkis wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__,

An entry for __cmp__ was in the 3.0c1 doc, which confused me. It is now gone in http://docs.python.org/dev/3.0/reference/datamodel.html#special-method-names
Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
I should also note that part of George's proposal has already been implemented: if you define __eq__, you get a complementary __ne__ for free. However it doesn't work the other way around (defining __ne__ doesn't give you __eq__ for free), and there is no similar relationship for the ordering operators. The reason for the freebie is that it's *extremely* unlikely to want to define != as something other than the complement of == (the only use case is IEEE compliant NaNs); however it's pretty common to define non-total orderings (e.g. set inclusion).
You previously said "Sure, but let's aim for 3.1." However, I could interpret the above as saying that we have already done as much as is sensible (except for changing the docs). Or are you merely saying that any other freebies must make sure to respect the possibility of non-total orderings and not accidentally convert such into a (non-consistent) 'total' ordering?

There are several possible basis pairs of defined operations. A specification must list which one(s) would work.

Terry
On Wed, Oct 15, 2008 at 12:12 PM, Terry Reedy <tjreedy@udel.edu> wrote:
Guido van Rossum wrote:
On Wed, Oct 15, 2008 at 11:03 AM, Terry Reedy <tjreedy@udel.edu> wrote:
George Sakkis wrote:
Now that 3.x fixes the arbitrary object comparison wart and drops (?) __cmp__,
An entry for __cmp__ was in the 3.0c1 doc, which confused me. It is now gone in
http://docs.python.org/dev/3.0/reference/datamodel.html#special-method-names
Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
I should also note that part of George's proposal has already been implemented: if you define __eq__, you get a complementary __ne__ for free. However it doesn't work the other way around (defining __ne__ doesn't give you __eq__ for free), and there is no similar relationship for the ordering operators. The reason for the freebie is that it's *extremely* unlikely to want to define != as something other than the complement of == (the only use case is IEEE compliant NaNs); however it's pretty common to define non-total orderings (e.g. set inclusion).
You previously said "Sure, but let's aim for 3.1." However, I could interpret the above as saying that we have already done as much as is sensible (except for changing the docs).
I think we've done as much as I am comfortable with doing *by default* (i.e. when inheriting from object). The rest should be provided via mix-ins. But even those mix-ins should wait until 3.1.
Or are you merely saying that any other freebies must make sure to respect the possibility of non-total orderings and not accidentally convert such into a (non-consistent) 'total' ordering?
There are several possible basis pairs of defined operations. A specification must list which one(s) would work.
There could be several different mix-ins that implement a total ordering based on different basis pairs. (Or even a single basis operation -- *in principle* all you need is a '<' operation.) -- --Guido van Rossum (home page: http://www.python.org/~guido/)
From: "Guido van Rossum" <guido@python.org>
I think we've done as much as I am comfortable with doing *by default* (i.e. when inheriting from object). The rest should be provided via mix-ins. But even those mix-ins should wait until 3.1.
Rather than a mix-in, my preference is for a class decorator that is smart enough to propagate whatever underlying comparison is provided. That way, you can choose to define any of __lt__, __le__, __gt__, or __ge__ to get all the rest. We did something like this as a class exercise at PyUK. Raymond
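Raymond's adaptive decorator might be sketched roughly like this — a hypothetical, simplified version that derives the three missing ordering methods from whichever one of `__lt__`, `__le__`, `__gt__`, `__ge__` the class defines, using the total-ordering identities (all names and details below are illustrative assumptions, not an existing API):

```python
def total_ordering(cls):
    """Class decorator: fill in missing ordering operations.

    Simplified sketch; assumes the defined operation is a total
    ordering and that the class also defines __eq__.
    """
    convert = {
        '__lt__': [('__gt__', lambda self, other: other < self),
                   ('__le__', lambda self, other: not (other < self)),
                   ('__ge__', lambda self, other: not (self < other))],
        '__le__': [('__ge__', lambda self, other: other <= self),
                   ('__lt__', lambda self, other: not (other <= self)),
                   ('__gt__', lambda self, other: not (self <= other))],
        '__gt__': [('__lt__', lambda self, other: other > self),
                   ('__ge__', lambda self, other: not (other > self)),
                   ('__le__', lambda self, other: not (self > other))],
        '__ge__': [('__le__', lambda self, other: other >= self),
                   ('__gt__', lambda self, other: not (other >= self)),
                   ('__lt__', lambda self, other: not (self >= other))],
    }
    defined = [op for op in convert if op in cls.__dict__]
    if not defined:
        raise ValueError('must define one of <, <=, >, >=')
    for opname, opfunc in convert[defined[0]]:
        if opname not in cls.__dict__:
            setattr(cls, opname, opfunc)
    return cls

@total_ordering
class Grade:
    def __init__(self, score):
        self.score = score
    def __eq__(self, other):
        return self.score == other.score
    def __gt__(self, other):     # the only ordering defined by hand
        return self.score > other.score
```

Here the class chose to define `__gt__`, and the decorator propagates it to `<`, `<=`, and `>=`.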
Guido van Rossum wrote:
On Wed, Oct 15, 2008 at 12:12 PM, Terry Reedy <tjreedy@udel.edu> wrote:
interpret the above as saying that we have already done as much as is sensible (except for changing the docs).
I think we've done as much as I am comfortable with doing *by default* (i.e. when inheriting from object). The rest should be provided via mix-ins. But even those mix-ins should wait until 3.1.
After posting and then thinking some more, I came to the same conclusion, that anything more should be by explicit request. Document the imported tool well and the results are the programmer's responsibility.
There are several possible basis pairs of defined operations. A specification must list which one(s) would work.
There could be several different mix-ins that implement a total ordering based on different basis pairs. (Or even a single basis operation -- *in principle* all you need is a '<' operation.)
Raymond's idea of an intelligent (adaptive) decorator (or two?) sounds pretty good too. Terry
On Thu, Oct 16, 2008 at 12:07 AM, Terry Reedy <tjreedy@udel.edu> wrote:
Guido van Rossum wrote:
On Wed, Oct 15, 2008 at 12:12 PM, Terry Reedy <tjreedy@udel.edu> wrote:
interpret the above as saying that we have already done as much as is sensible (except for changing the docs).
I think we've done as much as I am comfortable with doing *by default* (i.e. when inheriting from object). The rest should be provided via mix-ins. But even those mix-ins should wait until 3.1.
After posting and then thinking some more, I came to the same conclusion, that anything more should be by explicit request.
Can you expand on how you reached this conclusion? For one thing, total orderings are far more common, so that alone is a strong reason to have them by default, even if it made things harder for partial orderings. Even for those though, the current behavior is not necessarily better:

```
class SetlikeThingie(object):
    def __init__(self, *items):
        self.items = set(items)
    def __eq__(self, other):
        return self.items == other.items
    def __ne__(self, other):
        return self.items != other.items
    def __lt__(self, other):
        return self != other and self.items.issubset(other.items)
```

```
>>> a = SetlikeThingie(1, 2, 3)
>>> b = SetlikeThingie(2, 3, 4)
>>> assert (a < b or a == b) == (a <= b)
AssertionError
```
George
It still bothers me that there is no longer a way to provide a single method that performs a three-way comparison. Not only because total ordering is the most common case, but because it makes comparing sequences for ordering very inefficient -- you end up comparing everything twice, once for < and once for =. So I'd like to see this addressed somehow in any scheme to revamp the comparison system.

One way would be to add a slot such as __compare__ that works like the old __cmp__ except that it can return four possible results -- less, equal, greater or not-equal. Comparison operations would first look for the corresponding individual method, and then fall back on calling __compare__. There would be a function compare(x, y) that first looks for a __compare__ method, then falls back on trying to find individual methods with which to work out the result.

If we were to adopt something like this, it would obviate any need for direct translations between the comparison operations, and in fact any such direct translations might get in the way.

So the bottom line is that I think such features should be kept in mixins or decorators for the time being, until this can all be thought through properly.

-- Greg
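Nothing like this exists in Python; as a thought experiment, the proposed `__compare__`/`compare()` protocol could be mocked up in pure Python along these lines (every name here is hypothetical, and string results stand in for whatever enumeration a real design would use):

```python
def compare(x, y):
    """Hypothetical four-way comparison: 'lt', 'eq', 'gt', or 'ne'.

    Prefers a __compare__ method; falls back on the individual
    rich comparison operations.
    """
    meth = getattr(type(x), '__compare__', None)
    if meth is not None:
        return meth(x, y)
    if x == y:
        return 'eq'
    if x < y:
        return 'lt'
    if x > y:
        return 'gt'
    return 'ne'   # unordered, i.e. incomparable elements

class Fraction:
    """One cross-multiplication answers all six operators at once."""
    def __init__(self, num, den):
        self.num, self.den = num, den
    def __compare__(self, other):
        a, b = self.num * other.den, other.num * self.den
        return 'lt' if a < b else 'gt' if a > b else 'eq'
```

A sequence comparison built on compare() would call each element's __compare__ once, instead of probing < and == separately — which is exactly the double-comparison inefficiency being described.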
On Thursday 16 October 2008, Greg Ewing wrote:
It still bothers me that there is no longer a way to provide a single method that performs a three-way comparison. Not only because total ordering is the most common case, but because it makes comparing sequences for ordering very inefficient -- you end up comparing everything twice, once for < and once for =.
As a note, you can always implement <= as well, thereby reducing the overhead of the 'unimplemented' operations to simply negating their complement. I do this whenever == and < share non-trivial code.
On Thu, Oct 16, 2008 at 6:54 PM, Dillon Collins <dillonco@comcast.net> wrote:

On Thursday 16 October 2008, Greg Ewing wrote:
It still bothers me that there is no longer a way to provide a single method that performs a three-way comparison. Not only because total ordering is the most common case, but because it makes comparing sequences for ordering very inefficient -- you end up comparing everything twice, once for < and once for =.
As a note, you can always implement <= as well, thereby reducing the overhead of the 'unimplemented' operations to simply negating their complement. I do this whenever == and < share non-trivial code.
How does this help? If the result of "<=" is True, you still have to differentiate between < and == to know whether to stop or proceed with the next element in the sequence. George
On Thursday 16 October 2008, George Sakkis wrote:
On Thu, Oct 16, 2008 at 6:54 PM, Dillon Collins <dillonco@comcast.net> wrote:
On Thursday 16 October 2008, Greg Ewing wrote:
It still bothers me that there is no longer a way to provide a single method that performs a three-way comparison. Not only because total ordering is the most common case, but because it makes comparing sequences for ordering very inefficient -- you end up comparing everything twice, once for < and once for =.
As a note, you can always implement <= as well, thereby reducing the overhead of the 'unimplemented' operations to simply negating their complement. I do this whenever == and < share non-trivial code.
How does this help? If the result of "<=" is True, you still have to differentiate between < and == to know whether to stop or proceed with the next element in the sequence.
Very true. I spaced and was thinking more towards the general case, rather than list sorting specifically.
George Sakkis wrote:
I think we've done as much as I am comfortable with doing *by default* (i.e. when inheriting from object). The rest should be provided via mix-ins. But even those mix-ins should wait until 3.1.
After posting and then thinking some more, I came to the same conclusion, that anything more should be by explicit request.
Can you expand on how you reached this conclusion ?
Some related reasons/feelings:

3.0 perhaps completes a major revision of comparisons from default compare to default not compare and from __cmp__ based to 6 __xx__s based. But I suspect the full ramifications of this are yet to be seen. And there may or may not be refinements yet to be made. (Greg's point also.)

I am not yet convinced that anything more automatic would be completely safe. It seems like possibly one bit of magic too much. If there are multiple sensible magics, better to let the programmer import and choose.

It is hard to take things away once built in.

tjr
On Thu, Oct 16, 2008 at 8:27 PM, Terry Reedy <tjreedy@udel.edu> wrote:

George Sakkis wrote:
I think we've done as much as I am comfortable with doing *by default* (i.e. when inheriting from object). The rest should be provided via mix-ins. But even those mix-ins should wait until 3.1.
After posting and then thinking some more, I came to the same conclusion, that anything more should be by explicit request.
Can you expand on how you reached this conclusion ?
Some related reasons/feelings:
3.0 perhaps completes a major revision of comparisons from default compare to default not compare and from __cmp__ based to 6 __xx__s based. But I suspect the full ramifications of this are yet to be seen. And there may or may not be refinements yet to be made. (Greg's point also.)
I am not yet convinced that anything more automatic would be completely safe.
Neither is the current behavior, and I haven't seen a use case where the proposal makes things less safe than they already are.
It seems like possibly one bit of magic too much.
Having total ordering just work by default is too magic?

If there are multiple sensible magics, better to let the programmer import and choose.
It is hard to take things away once built in.
That's what I take from the whole skepticism, and there's nothing wrong with becoming more conservative as the language matures. Still, it's rather ironic that the main motivation for introducing rich comparisons in the first place was to support Numpy, a 3rd party package which most users will never need, while there is resistance to a feature that will benefit the majority. Let's focus then on putting out an explicit approach first (for 2.7 and 3.1) and make it automatic later if there are no side effects. My prediction is that this will follow a path similar to class decorators, which also didn't make it initially for 2.4 but were added eventually (yay!).

George
On Wednesday 15 October 2008, Terry Reedy wrote:
Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
Well, I believe that the suggestion comes down to having object's rich comparison operators try to use those of its subclass rather than just throwing an error. Something like:

```
class object():
    def __lt__(self, o): raise TypeError
    def __eq__(self, o): raise TypeError
    def __ne__(self, o): return not self.__eq__(o)
    def __le__(self, o): return self.__lt__(o) or self.__eq__(o)
    def __gt__(self, o): return not (self.__lt__(o) or self.__eq__(o))
    def __ge__(self, o): return not self.__lt__(o) and self.__eq__(o)
```

Of course, if it were to actually be implemented, it would make more sense if it could use any two non-complement ops (rather than lt and eq), but that would also make some trouble, I think.

BTW, rather than the class decorator, you could just inherit (multiply?) the above class for total ordering as well.
Dillon Collins wrote:
On Wednesday 15 October 2008, Terry Reedy wrote:
Since rich comparisons are defined on object and are inherited by all classes, it would be difficult to make them not defined.
Well, I believe that the suggestion comes down to having object's rich comparison operators try to use those of its subclass rather than just throwing an error.
Something like:
```
class object():
    def __lt__(self, o): raise TypeError
    def __eq__(self, o): raise TypeError
    def __ne__(self, o): return not self.__eq__(o)
    def __le__(self, o): return self.__lt__(o) or self.__eq__(o)
    def __gt__(self, o): return not (self.__lt__(o) or self.__eq__(o))
    def __ge__(self, o): return not self.__lt__(o) and self.__eq__(o)
```
Of course, if it were to actually be implemented, it would make more sense if it could use any two non complement ops (rather than lt and eq), but that would also make some trouble, I think.
BTW, rather than the class decorator, you could just inherit (multiply?) the above class for total ordering as well.
I do not understand this response. In 3.0, the actual definitions are the C equivalent of

```
class object():
    def __eq__(self, other): return id(self) == id(other)
    def __ne__(self, other): return id(self) != id(other)
    def __lt__(self, other): return NotImplemented
    # etcetera
```

tjr
Terry Reedy wrote:
In 3.0, the actual definitions are the C equivalent of
```
class object():
    def __eq__(self, other): return id(self) == id(other)
    def __ne__(self, other): return id(self) != id(other)
    def __lt__(self, other): return NotImplemented
```
Not exactly. For example, object().__eq__(object()) ==> NotImplemented, not False; and __ne__ calls __eq__, at least sometimes. The best I can do is:

```
def __eq__(self, other):
    if self is other:
        return True
    return NotImplemented

def __ne__(self, other):
    # calls PyObject_RichCompare(self, other, Py_EQ)...
    eq = (self == other)          # ...which is kinda like this
    if eq is NotImplemented:      # is this even possible?
        return NotImplemented
    return not eq

def __lt__(self, other):
    return NotImplemented
```

This behavior makes sense to me, except for __ne__ calling PyObject_RichCompare, which seems like a bug. If I understand correctly, object.__ne__ should be more like this:

```
def __ne__(self, other):
    eq = self.__eq__(other)
    if eq is NotImplemented:
        return NotImplemented
    return not eq
```

(When I write something like `self.__eq__(other)`, here and below, what I really mean is something more like what half_richcompare does, avoiding the instance dict and returning NotImplemented if the method is not found.)

The current behavior causes __eq__ to be called four times in cases where two seems like enough. Rather confusing.

So I think the proposal is to change the other three methods to try using __lt__ and __eq__ in a similar way:

```
def __le__(self, other):
    # Note: NotImplemented is truthy, so if either of these
    # returns NotImplemented, __le__ returns NotImplemented.
    return self.__lt__(other) or self.__eq__(other)

def __gt__(self, other):
    # 'and' isn't quite as convenient for us here as 'or'
    # was above, so spell out what we want:
    lt = self.__lt__(other)
    if lt is NotImplemented:
        return NotImplemented
    if lt:
        return False
    eq = self.__eq__(other)
    if eq is NotImplemented:
        return NotImplemented
    return not eq

def __ge__(self, other):
    lt = self.__lt__(other)
    if lt is NotImplemented:
        return NotImplemented
    return not lt
```

These methods never call __eq__ without first calling __lt__. That's significant: if __lt__ always returns NotImplemented--the default--then these methods should always return NotImplemented too: we don't want to get a bogus True or False result based on a successful call to __eq__.

It would also be nice to stop telling people that:

    x.__eq__(y) <==> x==y
    x.__ne__(y) <==> x!=y

and so forth, in the docstrings and the language reference, as that's an awfully loose approximation of the truth.

-j
participants (11)
- Andre Roberge
- Arnaud Delobelle
- Brandon Mintern
- Dillon Collins
- George Sakkis
- Greg Ewing
- Guido van Rossum
- Jason Orendorff
- Marcin 'Qrczak' Kowalczyk
- Raymond Hettinger
- Terry Reedy