[Python-3000] PEP 3124 - more commentary

Phillip J. Eby pje at telecommunity.com
Mon May 14 23:50:56 CEST 2007

At 12:47 PM 5/14/2007 -0700, Guido van Rossum wrote:
>> >I realize that @overload is only a shorthand for @when(function). But
>> >I'd much rather not have @overload at all -- the frame inspection
>> >makes it really hard for me to explain carefully what happens without
>> >just giving the code that uses sys._getframe(); and this makes it
>> >difficult to reason about code using @overload.
>>This is why in the very earliest GF discussions here, I proposed a
>>'defop expr(...)' syntax, as it would eliminate the need for any
>>getframe hackery.
>But that would completely kill your "but it's all pure Python code so
>it's harmless and portable" argument.

Uh, wha?  You lost me completely there.  A 'defop' syntax simply 
eliminates the need to name the target function twice (once in the 
decorator, and again in the 'def').  I don't get what that has to do 
with stuff being harmless or portable or any of that.

Are you perhaps conflating this with the issue of marking functions 
as overloadable?  These are independent ideas, AFAICT.
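(For context, the frame inspection in question can be sketched as a toy 
single-dispatch stand-in -- the names and dispatch logic below are 
illustrative only, not the PEP's actual implementation:)

```python
import sys

def overload(func):
    # Toy sketch: locate the generic function of the same name in the
    # caller's namespace via sys._getframe() -- the frame inspection
    # being objected to -- and attach this definition to it.
    caller = sys._getframe(1)
    generic = caller.f_locals.get(func.__name__)
    if generic is not None and hasattr(generic, '_methods'):
        # Register by the annotated type of the first argument.
        first = func.__code__.co_varnames[0]
        generic._methods[func.__annotations__.get(first, object)] = func
        return generic
    # First definition: it becomes the default, wrapped in a dispatcher.
    def dispatcher(arg, *rest, **kw):
        for cls in type(arg).__mro__:
            if cls in dispatcher._methods:
                return dispatcher._methods[cls](arg, *rest, **kw)
        return func(arg, *rest, **kw)
    dispatcher._methods = {}
    dispatcher.__name__ = func.__name__
    return dispatcher

@overload
def describe(x):
    return "object"

@overload
def describe(x: int):
    return "int"

print(describe(3))     # -> "int"
print(describe("hi"))  # -> "object"
```

With 'defop', the target would be named once in the statement itself, so 
no such frame walking is needed.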

>It seems that you're really not interested at all in compromising to
>accept mandatory marking of the base overloadable function.

Uh, wha?  I already agreed to that a couple of weeks ago:


I just haven't updated the PEP yet -- any more than I've updated it 
with anything else that's been in these ongoing threads, like the 
:next_method annotation or splitting the PEP.

>>Anyway, with this, it could also be placed as a keyword
>>argument.  The main reason for putting it in the first position is
>>performance.  Allowing it to be anywhere, however, would let the
>>choice of where be a matter of style.
>Right. What's the performance issue with the first argument?

Chaining using the first argument can be implemented using a bound 
method object, which gets performance bonuses from the C eval loop 
that partial() objects don't.  (Of course, when RuleDispatch was 
written, partial() objects didn't exist, anyway.)
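(A minimal illustration of the two chaining styles; the function names 
here are made up for the example:)

```python
import types
from functools import partial

def specific(next_method, x):
    # A more-specific method that delegates to the next applicable one,
    # received as its first argument.
    return next_method(x) + 1

def fallback(x):
    return x * 2

# Chaining via the first argument with a bound-method object, which the
# C eval loop fast-paths:
chained_bound = types.MethodType(specific, fallback)

# The equivalent with partial(), which historically lacked those
# eval-loop optimizations:
chained_partial = partial(specific, fallback)

print(chained_bound(10))    # -> 21
print(chained_partial(10))  # -> 21
```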

>>However, since we're going to have to have some way for 'super' to
>>know the class a function is defined in, ISTM that the same magic
>>should be reusable for the first-argument rule.
>Perhaps. Though super only needs to know it once the method is being
>called, while your decorator (presumably) needs to know when the
>method is being defined, i.e. before the class object is constructed.

Not really; at some point the class object has to be assigned and 
stored somewhere for super to use, so if the same process of 
"assigning" can be used to actually perform the registration, we're 
good to go.

>Also, the similarities between next-method and super are overwhelming.
>It would be great if you could work with Tim Delaney on a mechanism
>underlying all three issues, or at least two of the three.

I'm not sure I follow you.  Do you mean, something like using :super 
as the annotation instead of next_method, or are you just talking 
about the implementation mechanics?

>> >Forgive me if this is mentioned in the PEP, but what happens with
>> >keyword args? Can I invoke an overloaded function with (some) keyword
>> >args, assuming they match the argument names given in the default
>> >implementation?
>>Yes.  That's done with code generation; PEAK-Rules uses direct
>>bytecode generation, but a sourcecode-based generation is also
>>possible and would be used for the PEP implementation (it was also
>>used in RuleDispatch).
>There's currently no discussion of this.

Well, actually there's this bit:

"""The use of BytecodeAssembler can be replaced using an "exec" or "compile"
workaround, given a reasonable effort.  (It would be easier to do this
if the ``func_closure`` attribute of function objects was writable.)"""

But the closure bit is irrelevant if we're using @overloadable.

>Without a good understanding
>of the implementation I cannot accept the PEP.

The mechanism is exec'ing of a string containing a function 
definition.  The original function's signature is obtained using 
inspect.getargspec(), and the string is exec'd to obtain a new 
function whose signature matches, but whose body contains the generic 
function lookup code.

In practice, the actual function definition has to be nested, so that 
argument defaults can be passed in without needing to convert them to 
strings, and so that the needed lookup tables can be seen via closure 
variables.  A string template would look something like:

     def make_the_function(__defaults, __lookup):
         def $funcname($accept_signature):
             return __lookup($type_tuple)($call_signature)
         return $funcname

The $type_tuple bit would expand to something like:

     type(firstargname), type(secondargname), ...

And $accept_signature would expand to the original function's 
signature, with default values replaced by "__defaults[0]", 
"__defaults[1]", etc. in order to make the resulting function have 
the same default values.

The function that would be returned from @overloadable would be the 
result of calling "make_the_function", passing in the original 
function's func_defaults and an appropriate value for __lookup.

A similar approach is used in RuleDispatch currently.
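(Here's a self-contained sketch of that mechanism, using 
inspect.getfullargspec() -- the modern spelling of getargspec() -- and 
exec.  The helper names are illustrative, not RuleDispatch's actual API:)

```python
import inspect

def make_dispatcher(func, lookup):
    # Generate a function with the same signature as `func` whose body
    # performs the generic-function lookup.  Defaults are passed in via
    # closure rather than converted to strings.
    spec = inspect.getfullargspec(func)
    args, defaults = spec.args, spec.defaults or ()
    # Build the $accept_signature: defaults become __defaults[i].
    pieces = []
    for i, name in enumerate(args):
        j = i - (len(args) - len(defaults))
        pieces.append("%s=__defaults[%d]" % (name, j) if j >= 0 else name)
    src = (
        "def make_the_function(__defaults, __lookup):\n"
        "    def %(name)s(%(accept)s):\n"
        "        return __lookup((%(types)s,))(%(call)s)\n"
        "    return %(name)s\n"
        % dict(name=func.__name__,
               accept=", ".join(pieces),
               types=", ".join("type(%s)" % a for a in args),
               call=", ".join(args))
    )
    ns = {}
    exec(src, ns)
    return ns["make_the_function"](defaults, lookup)

# Usage sketch: a trivial "lookup table" keyed on argument types.
def area(shape, scale=1):
    return "generic"

impls = {(str, int): lambda shape, scale: "str impl %d" % scale}
dispatcher = make_dispatcher(area, lambda types: impls.get(types, area))

print(dispatcher("circle"))  # -> "str impl 1" (default scale=1 applied)
print(dispatcher(3))         # -> "generic" (no (int, int) entry)
```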

>> >Also, can we overload different-length signatures (like in C++ or
>> >Java)? This is very common in those languages; while Python typically
>> >uses default argument values, there are use cases that don't easily
>> >fit in that pattern (e.g. the signature of range()).
>>I see a couple different possibilities for this.  Could you give an
>>example of how you'd *like* it to work?
>In the simplest case (no default argument values) overloading two-arg
>functions and three-arg functions with the same name should act as if
>there were two completely separate functions, except for the base
>(default) function. Example:
>def range(start:int, stop:int, step:int):
>  ...  # implement xrange
>def range(x): return range(0, x, 1)
>def range(x, y): return range(x, y, 1)

Hm.  I'll need to give some thought to that, but it seems to me that 
it's sort of like having None defaults for the missing arguments, and 
then treating the missing-argument versions as requiring type(None) 
for those arguments.  Except that we'd need something besides None, 
and that the overloads would need wrappers that drop the extra 
arguments.  It certainly seems possible, anyway.
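(A rough sketch of that sentinel idea, with a private marker type 
standing in for "argument not supplied"; purely illustrative, and the 
dispatch here is collapsed into explicit tests rather than real 
overload registration:)

```python
class _Missing:
    """Sentinel whose type the missing-argument overloads dispatch on."""

_missing = _Missing()

def my_range(start, stop=_missing, step=_missing):
    # Treat each "missing" pattern as if it were a separate shorter
    # signature; each wrapper drops the sentinel arguments and
    # delegates to the full three-argument implementation.
    if stop is _missing:    # one-arg form: my_range(stop)
        return my_range(0, start, 1)
    if step is _missing:    # two-arg form: my_range(start, stop)
        return my_range(start, stop, 1)
    return list(range(start, stop, step))  # stand-in for "implement xrange"

print(my_range(3))        # -> [0, 1, 2]
print(my_range(1, 4))     # -> [1, 2, 3]
print(my_range(0, 6, 2))  # -> [0, 2, 4]
```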

I'm not sure I like it, though.  It's not obvious from the first 
function's signature that you can call it with fewer arguments, or 
what that would mean.  For example, shouldn't the later signatures be 
"range(stop)" and "range(start,stop)"?  Hm.