
Sorting plays with mutability by working in-place, but for many uses it would be just as good if sorting returned a sorted copy instead -- the key thing here is the sorting, not the mutability.
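Both styles exist side by side in the standard library: list.sort() mutates in place and returns None, while the built-in sorted() (added in Python 2.4) returns a sorted copy and leaves its argument alone. A small sketch of the difference:

```python
# In-place sort: mutates the list and returns None.
data = [3, 1, 2]
result = data.sort()
assert result is None
assert data == [1, 2, 3]

# Copy-returning sort: sorted() leaves the original untouched.
original = [3, 1, 2]
copy = sorted(original)
assert original == [3, 1, 2]
assert copy == [1, 2, 3]
```

The None return from list.sort() is deliberate: it keeps callers from accidentally believing they got a copy when they actually mutated the original.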
And the key assumption for sorting things is that the things are sortable, which means there exists an order on the underlying set. Which again suggests that list elements usually have something in common.
If a list contains ONE complex number and no other number, then the list can be sorted.
But the order isn't meaningful.
But if the list contains elements that have something in common, by both being complex numbers, then it cannot be sorted.
So, lists whose elements have LESS in common (by being of widely different types) are more likely to be sortable than lists whose elements do have something in common, namely being numbers, if one or more of those numbers are complex.
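The root of the anomaly is that complex numbers define equality but no ordering, since there is no mathematically natural total order on them. A brief sketch of what this looks like under Python 3 (where, note, the arbitrary mixed-type fallback ordering described above was later removed entirely, so the first case no longer sorts either):

```python
# Complex numbers support equality comparisons...
assert (1 + 2j) == (1 + 2j)
assert (1 + 2j) != (3 + 4j)

# ...but not ordering: complex defines no __lt__, so sorting fails.
try:
    sorted([3 + 4j, 1 + 2j])
    raise AssertionError("expected TypeError")
except TypeError:
    pass

# A sort can still be forced with an explicit key, e.g. by magnitude.
assert sorted([3 + 4j, 1 + 2j], key=abs) == [1 + 2j, 3 + 4j]
```

The key= escape hatch shows that the objection is not to sorting complex numbers per se, but to pretending there is a default order where none meaningfully exists.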
Although unlikely to cause practical problems (after all, I suspect most Pythonistas never use complex numbers at all), this anomaly (introduced in 1.6, I think) makes conceptualization less uniform and thus somewhat harder to teach.
If I had to do it over again, I'd only implement == and != for objects of vastly differing types, and limit <, <=, >, >= to objects that are meaningfully comparable. I'd like to do this in Python 3.0, but that probably means we'd have to start deprecating default comparisons except (in)equality in Python 2.4. --Guido van Rossum (home page: http://www.python.org/~guido/)
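This is in fact what Python 3 eventually did: == and != are defined across arbitrary types (unequal by default), while the ordering comparisons raise TypeError when there is no meaningful order. A sketch of the resulting behaviour:

```python
# Equality across vastly differing types: always defined, False by default.
assert (1 == "one") is False
assert (1 != "one") is True

# Ordering across differing types: no longer defined.
try:
    1 < "one"
    raise AssertionError("expected TypeError")
except TypeError:
    pass

# Consequently, heterogeneous lists are no longer sortable by default.
try:
    sorted([1, "one", 2.0])
    raise AssertionError("expected TypeError")
except TypeError:
    pass
```

With this change, a successful sort once again implies that the elements really did have a common order, restoring the uniform conceptualization the passage above argues for.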