[Python-3000] PEP 3124 - more commentary

Phillip J. Eby pje at telecommunity.com
Tue May 15 01:21:51 CEST 2007

At 03:43 PM 5/14/2007 -0700, Guido van Rossum wrote:
> > Chaining using the first argument can be implemented using a bound
> > method object, which gets performance bonuses from the C eval loop
> > that partial() objects don't.  (Of course, when RuleDispatch was
> > written, partial() objects didn't exist, anyway.)
>Sounds like premature optimization to me. We can find a way to do it
>fast later; let's first make it right.

As I said, when RuleDispatch was written, partial() didn't exist; 
using bound methods there was less a matter of performance than of convenience.
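To make the chaining idea concrete, here's a minimal sketch (the function names are illustrative, not from RuleDispatch): binding the "next" callable as an overload's first argument via the function's descriptor protocol (`func.__get__`) produces a bound method that behaves the same as `functools.partial(func, next)`, while taking the faster bound-method path in CPython's eval loop.

```python
from functools import partial

def overload(next_method, x):
    # The overload receives its successor in the chain as its first argument.
    return next_method(x) + 1

def fallback(x):
    # End of the chain.
    return x * 2

# Two equivalent ways to pre-bind the successor:
chained_via_method = overload.__get__(fallback)    # bound-method chaining
chained_via_partial = partial(overload, fallback)  # partial() chaining

assert chained_via_method(10) == chained_via_partial(10) == 21
```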

>True. So are you working with Tim Delaney on this? Otherwise he may
>propose a simpler mechanism that won't allow this re-use of the

PEP 367 doesn't currently propose a mechanism for the actual 
assignment; I was waiting to see what was proposed, to then suggest 
as minimal a tweak or generalization as necessary.  Also, prior to 
now, you hadn't commented on the first-argument-class rule and I 
didn't know if you were going to reject it anyway.

>super is going to be a keyword with magic properties. Wouldn't it be
>great if instead of
>def flatten(x: Mapping, nm: next_method):
>   ...
>   nm(x)
>we could write
>def flatten(x: Mapping):
>   ...
>   super.flatten(x)  # or super(x)
>or some other permutation of super?

Well, either we'd have to implement it using a hidden parameter, or 
give up on the possibility of the same function being added more than 
once to the same generic function (e.g., for both Mapping and some 
specific types).  There's no way for the code in the body of the 
overload to know in what context it was invoked.

The current mechanism works by creating bound methods for each 
registration of the same function object, in each "applicability chain".
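A toy illustration of why the binding has to be per-registration rather than per-function (the registry and handlers are hypothetical, not the PEP's machinery): one function object registered for two types needs a different successor in each chain, and the function body itself has no way to tell which chain it was invoked from.

```python
def log_and_continue(next_method, obj):
    # Same function body, used in two different chains; it only knows
    # its successor because a bound method was made per registration.
    return next_method(obj)

def handle_list(obj):
    return "list!"

def handle_dict(obj):
    return "dict!"

# One function object, two registrations, two distinct bound methods:
registry = {
    list: log_and_continue.__get__(handle_list),
    dict: log_and_continue.__get__(handle_dict),
}

assert registry[list]([1, 2]) == "list!"
assert registry[dict]({}) == "dict!"
```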

That doesn't mean it's impossible, just that I haven't given the 
mechanism any thought, and at first glance it looks really hairy to 
implement -- even if it were done using a hidden parameter.

>Or do you see the need to call
>both next-method and super from the same code?

Hm, that's a mind-bender.  I can't think of a sensible use case for 
that, though.  If you're a plain method, you'd just use super.  If 
you're a generic function or overloaded method, you'd just call the 
next method.

The only way I can see you doing that is if you needed to call the 
super of some *other* method, which doesn't make a lot of sense.  In 
any case, we could probably use super(...) for next-method and 
super.methodname() for everything else, so I wouldn't worry about 
it.  (Which means you'd have to use super.__call__() inside of a 
__call__ method, but I think that's OK.)

>Do note that e.g. in IronPython (and maybe also in Jython?)
>exec/eval/compile are 10-50x slower (relative to the rest of the
>system) than in CPython.

This compilation would only be done once, by @overloadable, and never 
again thereafter.

>It does look like a clever approach though.

Does that mean you dislike it?  ;-)

> > Hm.  I'll need to give some thought to that, but it seems to me that
> > it's sort of like having None defaults for the missing arguments, and
> > then treating the missing-argument versions as requiring type(None)
> > for those arguments.  Except that we'd need something besides None,
> > and that the overloads would need wrappers that drop the extra
> > arguments.  It certainly seems possible, anyway.
> >
> > I'm not sure I like it, though.
>C++ and Java users use it all the time though.

Right, but they don't have keyword arguments or defaults, 
either.  The part I'm not sure about has to do with interaction with 
Python-specific things like those.  When do you use each one?  One 
Obvious Way seems to favor default arguments, especially since you 
can always use defaults of None and implement overloads for 
type(None) to catch the default cases.  i.e., ISTM that cases like 
range() are more an exception than the rule.
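The None-default idiom described above can be sketched with the stdlib's later `functools.singledispatch` standing in for the PEP's overloading machinery (the function names are made up for illustration): give the optional parameter a default of None, then register an overload on `type(None)` to catch the "argument omitted" case.

```python
from functools import singledispatch

@singledispatch
def describe(width):
    # Base case: an explicit width was supplied.
    return "width=%r" % (width,)

@describe.register(type(None))
def _(width):
    # Overload for type(None): the caller omitted the argument.
    return "default width"

def draw(width=None):
    return describe(width)

assert draw() == "default width"
assert draw(42) == "width=42"
```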

> > It's not obvious from the first
> > function's signature that you can call it with fewer arguments, or
> > what that would mean.  For example, shouldn't the later signatures be
> > "range(stop)" and "range(start,stop)"?  Hm.
>I don't know if the arg names for overloadings must match those of the
>default function or not -- is that specified by your PEP?

It isn't currently, but that's because it's assumed that all the 
methods have the same signature.  If we were going to allow 
subset signatures (i.e., allow you to define methods whose signatures 
omit portions of the main function's signature), ISTM that the 
argument names should have meaning.

Of course, maybe a motivating example other than "range()" would help 
here, since not too many other functions have optional positional 
arguments in the middle of the argument list.  :)
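A rough sketch of what name-based matching of subset signatures might look like (the helper and function names are hypothetical; PEP 3124 specifies none of this): by comparing parameter names rather than positions, a one-argument range-like overload can be read as `range(stop)` rather than `range(start)`.

```python
import inspect

def range_like(start, stop, step):
    """Main function: the full signature."""

def one_arg_overload(stop):
    """Subset signature: only 'stop' is accepted."""

def omitted_params(main, sub):
    # Parameters of `main` that `sub`'s signature leaves out, by name.
    main_names = list(inspect.signature(main).parameters)
    sub_names = set(inspect.signature(sub).parameters)
    return [n for n in main_names if n not in sub_names]

# Matching by name identifies the omitted parameters unambiguously:
assert omitted_params(range_like, one_arg_overload) == ["start", "step"]
```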

>My own trivially simple overloading code (sandbox/overload, and now
>also added as an experiment to sandbox/abc, with slightly different
>terminology and using issubclass exclusively, as you recommended over
>a year ago :-) has no problem with this. Of course it only handles
>positional arguments and completely ignores argument names except as
>keys into the annotations dict.

Yeah, none of my GF implementations care about the target methods' 
signatures except for the next_method thingy.  But with variable 
argument lists, I think we *should* care.

Also, AFAIK, the languages that allow different-sized argument lists 
for the same function either don't have first class functions (e.g. 
Java) or else have special syntax to allow you to refer to the 
different variations, e.g. "x/1" and "x/2" to refer to the 1 and 2 
argument versions of function x.  That is, they really *are* 
different objects.  (And Java and C++ of course have less 
comprehensible forms of name mangling internally.)

Personally, though, I think that kind of overloading is a poor 
substitute for the parameter flexibility we already have in 
Python.  That is, I think those other languages should be envying 
Python here, rather than the other way around.  :)
