
Guido> The exception is when you need to do something different based
Guido> on the type of an object and you can't add a method for what
Guido> you want to do. But that is relatively rare.
Perhaps the reason it's rare is that it's difficult to do.
Perhaps... Is it the chicken or the egg?
One of the cases I was thinking of was the built-in * operator, which does something completely different if one of its operands is an integer.
Really? I suppose you're thinking of sequence repetition. I consider that one of my early mistakes (it didn't make it to my "regrets" list but probably should have). It would have been much simpler if sequences simply supported multiplication, and in fact repeated changes to the implementation (and subtle edge cases of the semantics) are slowly nudging in this direction.
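A minimal illustration of the dispatch being described here: the same * operator multiplies two numbers but repeats a sequence when one operand is an integer, with the choice made at run time from the operand types.

    # number * number multiplies; sequence * integer repeats the sequence.
    # The choice is made at run time from the types of the operands.
    print(3 * 4)        # 12
    print([0, 1] * 3)   # [0, 1, 0, 1, 0, 1]
    print("ab" * 2)     # abab
    print(2 * "ab")     # abab -- the integer may appear on either side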
Another one was the buffering iterator we were discussing earlier, which ideally would omit buffering entirely if asked to buffer a type that already supports multiple iteration.
How do you do that in C++? I guess you overload the function that asks for the iterator, and call that function in a template.

I think in Python we can ask the caller to provide a buffering iterator when a function needs one. Since we really have very little power at compile time, we sometimes need to do a little more work at run time. But the resulting language appears to be easier to understand (for most people anyway) despite the theoretical deficiency.

I'm not quite sure why that is, but I am slowly developing a theory, based on a remark by Samuele Pedroni; at least I believe it was he who remarked at some point, "Python has only run time", which got me thinking. My theory, partially developed though it is, is that it is much harder (again, for most people :-) to understand in your head what happens at compile time than it is to understand what goes on at run time. Or perhaps that understanding *both* is harder than understanding only one. But I believe that for most people, acquiring a sufficient mental model for what goes on at run time is simpler than the mental model for what goes on at compile time. Possibly this is because compilers really *do* rely on very sophisticated algorithms (such as deciding which overloaded function is called based upon type information and available conversions). Run time, on the other hand, is dead simple most of the time -- it has to be, since it has to be executed by a machine that has a very limited time to make its decisions.

All this reminds me of a remark that I believe is due to John Ousterhout at the VHLL conference in '94 in Santa Fe, where you & I first met. (Strangely, it was Perl's Tom Christiansen who was in large part responsible for the eclectic program.) You gave a talk about ML, and I believe it was in response to your talk that John remarked that ML was best suited for people with an IQ of over 150. That rang true to me, since the only other person besides you that I know who is a serious ML user definitely falls into that category. And ML is definitely a language that does more than the average language at compile time.
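To make the run-time alternative concrete, here is a hedged sketch (the helper name reiterable is invented for illustration): buffer only when the argument is a one-shot iterator, and skip the copy when the object already supports multiple iteration. The test used -- an iterator is an object whose iter() returns itself -- is one reasonable run-time convention, not the only possible one.

    def reiterable(obj):
        """Return something that can safely be iterated more than once.

        If obj is a one-shot iterator (iter(obj) is obj), buffer it into
        a list; otherwise assume it already supports multiple iteration
        and return it unchanged.  The decision happens entirely at run time.
        """
        if iter(obj) is obj:
            return list(obj)      # one-shot iterator: buffer it
        return obj                # list, tuple, range, ...: no buffering

    gen = (x * x for x in range(4))       # a generator is one-shot
    squares = reiterable(gen)
    assert list(squares) == [0, 1, 4, 9]
    assert list(squares) == [0, 1, 4, 9]  # still works the second time

    data = [1, 2, 3]
    assert reiterable(data) is data       # already multi-pass: untouched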
Some other categories:
    callable
    sequence
    generator
    class
    instance
    type
    number
    integer
    floating-point number
    complex number
    mutable
    tuple
    mapping
    method
    built-in
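Viewed from today, many of these categories eventually did acquire run-time counterparts in the standard library. The following sketch uses the collections.abc and numbers abstract base classes, which postdate this exchange:

    import collections.abc
    import numbers

    x = [1, 2, 3]
    print(callable(len))                                   # callable
    print(isinstance(x, collections.abc.Sequence))         # sequence
    print(isinstance(x, collections.abc.MutableSequence))  # mutable
    print(isinstance({}, collections.abc.Mapping))         # mapping
    print(isinstance(3, numbers.Integral))                 # integer
    print(isinstance(3.0, numbers.Real))                   # floating-point number
    print(isinstance(3j, numbers.Complex))                 # complex number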
Guido> You missed the two that are most commonly needed in practice:
Guido> string and file. :-)
Actually, I thought of them but omitted them to avoid confusion between a type and a category with a single element.
Can you explain? Neither string (which has Unicode and 8-bit, plus a few other objects that are sufficiently string-like to be regex-searchable, like arrays) nor file (at least in the "lore protocol" interpretation of file-like object) is a category with a single element.
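A small demonstration that the string category has more than one element, written in modern Python 3 spelling (where the 8-bit case is bytes, and other buffer-supporting objects such as bytearray are regex-searchable as well):

    import re

    print(re.search(r"spam", "lovely spam").group())               # spam
    print(re.search(rb"spam", b"lovely spam").group())             # b'spam'
    print(re.search(rb"spam", bytearray(b"lovely spam")).group())  # b'spam'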
Guido> I believe that the notion of an informal or "lore" (as Jim
Guido> Fulton likes to call it) protocol first became apparent when we
Guido> started to use the idea of a "file-like object" as a valid
Guido> value for sys.stdout.
OK. So what I'm asking about is a way of making notions such as "file-like object" more formal and/or automatic.
Yeah, that's the holy Grail of interfaces in Python.
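One later attempt at that grail, offered purely as a retrospective sketch (typing.Protocol arrived with PEP 544, long after this exchange): a structural "file-like object" that can be checked at run time. Note that the runtime check only verifies that a write method exists, not its signature.

    import io
    import sys
    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class Writable(Protocol):
        """A minimal 'file-like object': anything with a write() method."""
        def write(self, data: str) -> int: ...

    def report(out: Writable) -> None:
        if not isinstance(out, Writable):   # structural check, at run time
            raise TypeError("need a file-like object with write()")
        out.write("hello\n")

    report(sys.stdout)      # a real file object qualifies
    report(io.StringIO())   # so does an in-memory text buffer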
Of course, one reason for my interest is my experience with a language that supports compile-time overloading -- what I'm really seeing on the horizon is some kind of notion of overloading in Python, perhaps along the lines of ML's clausal function definitions (which I think are truly elegant).
Honestly, I hadn't read this far ahead when I brought up ML above. :-) I really hope that the holy grail can be found at run time rather than compile time. Python's compile time doesn't have enough information easily available, and to gather the necessary information is very expensive (requiring whole-program analysis) and not 100% reliable (due to Python's extreme dynamic side).
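For what it is worth, something in this spirit did eventually arrive at run time rather than compile time: functools.singledispatch (PEP 443, Python 3.4) chooses an implementation from the type of the first argument when the call is made. A sketch, again not something that existed at the time of this exchange:

    from functools import singledispatch

    @singledispatch
    def describe(obj):
        return "something else: %r" % (obj,)

    @describe.register
    def _(obj: int):
        return "an integer: %d" % obj

    @describe.register
    def _(obj: list):
        return "a list of %d items" % len(obj)

    print(describe(42))       # an integer: 42
    print(describe([1, 2]))   # a list of 2 items
    print(describe("spam"))   # something else: 'spam'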
Guido> Interestingly enough, Jim Fulton asked me to critique the Interface
Guido> package as it exists in Zope 3, from the perspective of adding
Guido> (something like) it to Python 2.3.

Guido> This is a descendant of the "scarecrow" proposal,
Guido> http://www.foretec.com/python/workshops/1998-11/dd-fulton.html (see
Guido> also http://www.zope.org/Members/jim/PythonInterfaces/Summary).

Guido> The Zope3 implementation can be viewed here:
Guido> http://cvs.zope.org/Zope3/lib/python/Interface/
I'll have a look; thanks!
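For readers who prefer not to dig through the CVS tree, a hedged sketch of what declaring and checking such an interface looks like, written in the modern zope.interface spelling (a third-party package; the 2002 Zope 3 code differs in detail):

    from zope.interface import Interface, implementer

    class IWritable(Interface):
        """Objects that data can be written to."""

        def write(data):
            """Write data and return the number of units written."""

    @implementer(IWritable)
    class LogSink:
        def write(self, data):
            print("logged:", data)
            return len(data)

    sink = LogSink()
    print(IWritable.providedBy(sink))        # True -- declared, not inferred
    print(IWritable.implementedBy(LogSink))  # True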
BTW, the original scarecrow proposal is at
http://www.foretec.com/python/workshops/1998-11/dd-fulton-sum.html

--Guido van Rossum (home page: http://www.python.org/~guido/)