[Python-3000] Draft pre-PEP: function annotations
Phillip J. Eby
pje at telecommunity.com
Fri Aug 11 23:11:00 CEST 2006
At 01:46 PM 8/11/2006 -0700, Josiah Carlson wrote:
>"Phillip J. Eby" <pje at telecommunity.com> wrote:
> > At 09:04 AM 8/11/2006 -0700, Josiah Carlson wrote:
> > >I think you misunderstood Talin. While it was a pain for him to work
> > >his way through implementing all of the loading/etc. protocols, I
> > >believe his point was that if we allow any and all arbitrary metadata to
> > >be placed on arguments to and from functions, then invariably there will
> > >be multiple methods of doing as much. That isn't a problem unto itself,
> > >but when there ends up being multiple metadata formats, with multiple
> > >interpretations of them, and a user decides that they want to combine
> > >the functionality of two metadata formats, they may be stuck due to
> > >incompatibilities, etc.
> > I was giving him the benefit of the doubt by assuming he was bringing up a
> > *new* objection that I hadn't already answered. This "incompatibility"
> > argument has already been addressed; it is trivially solved by overloaded
> > functions (e.g. pickle.dump(), str(), iter(), etc.).
>In effect, you seem to be saying "when user X wants to add their own
>metadata with interpretation, they need to overload the previously
>existing metadata interpreter".
No, they need to overload whatever *operation* is being performed *on* the
metadata.
For example, if I am using a decorator that adds type checking to the
function, then that decorator is an example of an operation that should be
overloadable.
More precisely, that decorator would probably have an operation that
generates type checking code for an individual type annotation -- and
*that* is the operation that would need overloading. The
"generate_typecheck_code()" operation would be an overloadable function.
Another possible operation: printing help for a function. You would need a
"format_type_annotation()" overloadable operation, and so on.
There is no *single* "metadata interpreter", in other words. There are
just operations you perform on metadata.
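A minimal sketch of one such operation, using today's
functools.singledispatch as a stand-in for the generic-function machinery
under discussion (the name generate_typecheck_code and its behavior are
invented for illustration):

```python
# Hypothetical sketch: an overloadable "operation on metadata".
# functools.singledispatch dispatches on the type of the annotation.
from functools import singledispatch

@singledispatch
def generate_typecheck_code(annotation, argname):
    # Default method: annotations we don't understand produce no check.
    return ""

@generate_typecheck_code.register(type)
def _(annotation, argname):
    # A plain class annotation becomes an isinstance() check.
    return f"assert isinstance({argname}, {annotation.__name__})"

print(generate_typecheck_code(int, "x"))
print(generate_typecheck_code("just a doc string", "x"))
```

Anyone can register new methods for new annotation types without touching
the original function or the annotated code.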
If multiple people define different variants of the same operation, let's
say "generate_typecheck_code()" and "generate_code_for_typecheck()", and
you have some code that defines methods for one overloadable function, but
you have code that wants to call the other, you just write some methods for
one that call the other, or make one be the default implementation for the
other.
There is no need for a *single* canonical operation *or* type. This is the
whole point of generic functions, really. They eliminate the need for One
Framework To Rule Them All, and tend to dissolve the "framework"ness right
out of frameworks. What you end up with are extensible libraries instead.
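Here is a rough sketch of that bridging trick, again with
functools.singledispatch standing in for the generic-function machinery,
and with both function names invented:

```python
# Two independently written overloadable functions for the "same"
# operation, reconciled by making one the default implementation of
# the other.
from functools import singledispatch

@singledispatch
def generate_typecheck_code(annotation):      # library A's spelling
    return ""                                 # default: no check

@singledispatch
def generate_code_for_typecheck(annotation):  # library B's spelling
    return ""

@generate_code_for_typecheck.register(type)
def _(annotation):
    # B knows how to handle plain class annotations.
    return f"isinstance check for {annotation.__name__}"

# The bridge: make B the default implementation for A, so methods
# registered on either spelling are reachable through A.
generate_typecheck_code.register(object, generate_code_for_typecheck)

print(generate_typecheck_code(int))
```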
>Since you brought up pickle.dump(), str(), iter(), etc., I'll point out
>that str(), iter(), etc., call special methods on the defined object
>(__str__, __iter__, etc.), and while pickle can have picklers be
>registered, it also has a special method interface. Because all of the
>metadata defined is (according to the pre-PEP) attached to a single
>__signature__ attribute of the function, interpretation of the metadata
>isn't as easy as calling str(obj), as you claim.
Actually, with overloadable functions, it is, since overloadable functions
can be extended by anybody, without needing to monkey with the
classes. Note that if Guido had originally created Python with
overloadable functions, it's rather unlikely that __special__ methods would
have arisen. Instead, it's much more likely that there would be syntax
sugar for easily defining overloads, like "defop str(self): ...".
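For what it's worth, that hypothetical can be approximated today with
functools.singledispatch; the render() operation below is an invented
str()-like stand-in, not real library API:

```python
# Extending an overloadable operation from the outside -- no __str__
# special method, no monkeypatching of the class.
from functools import singledispatch

@singledispatch
def render(obj):
    # Default method, analogous to object.__str__ falling back to repr.
    return repr(obj)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

@render.register(Point)
def _(p):
    return f"({p.x}, {p.y})"

print(render(Point(1, 2)))
```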
>Let us say that I have two metadata interpreters. One that believes that
>the metadata is types and wants to verify type on function call. The
>other believes that the metadata is documentation. Both were written
>without regards to the other. Please describe to me (in code preferably)
>how I would be able to use both of them without having a defined
>metadata interpretation chaining semantic.
See explanation above.
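To make that answer concrete, here is a rough sketch (decorator names and
behavior invented, not a proposal): each "interpreter" is just a decorator
performing its own operation on the same annotations, so the two combine
by stacking, with no chaining protocol needed.

```python
# Two independently written annotation consumers that compose freely.
import functools
import inspect

def typechecked(func):
    # Interpreter 1: treat class annotations as runtime type checks.
    sig = inspect.signature(func)
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = func.__annotations__.get(name)
            if isinstance(ann, type) and not isinstance(value, ann):
                raise TypeError(f"{name} must be {ann.__name__}")
        return func(*args, **kwargs)
    return wrapper

def documented(func):
    # Interpreter 2: treat annotations as documentation.
    notes = ", ".join(f"{k}: {v!r}" for k, v in func.__annotations__.items())
    func.__doc__ = (func.__doc__ or "") + f" [annotations: {notes}]"
    return func

@typechecked
@documented
def double(x: int) -> int:
    """Double x."""
    return x * 2
```

Each decorator reads the annotations through its own operation; neither
needs to know the other exists.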
> > Remember, PEAK already does this kind of openly-extensible metadata for
> > attributes, using a single-dispatch overloaded function (analogous to
> > pickle.dump). If you want to show that it's really possible to create
> > "incompatible" annotations, try creating some for attributes in PEAK.
>Could you at least provide a link to where it is documented how to
>create metadata attributes in PEAK? My attempts to delve into PEAK
>documentation have thus far failed horribly.
Here's the tutorial for defining new metadata (among other things):
The example defines a "Message()" metadata type whose sole purpose is to
print a message when the attribute is declared.
What's not really explained there is that all the 'addMethod' stuff is
basically adding methods to an overloaded function.
Anyway, PEAK uses this simple metadata declaration system to implement both
security permission declarations:
and command-line options:
In PEAK's case, a single overloaded operation is invoked when the metadata
is defined, and then that overloaded operation performs whatever actions
are relevant for the metadata. For function metadata, however, it's
sufficient to use distinct overloaded functions for distinct operations and
not actually "do" anything unless it's needed.
However, if we wanted things to be able to happen just by declaring
metadata (without using any decorators or performing any other operations),
then yes, the language would need some equivalent to PEAK's
"declareAttribute()" overloaded function. However, my understanding of the
proposal was that annotations were intended to be inert and purely
informational *unless* processed by a decorator or some other mechanism.
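A sketch of what such a declaration-time hook might look like, echoing the
PEAK Message() example above (declare_metadata is an invented analogue of
PEAK's declareAttribute, again using functools.singledispatch as the
overloading mechanism):

```python
# An overloadable function invoked at the moment metadata is declared,
# rather than annotations sitting inert until a decorator reads them.
from functools import singledispatch

class Message:
    """Metadata whose sole purpose is to emit a message when declared."""
    def __init__(self, text):
        self.text = text

log = []

@singledispatch
def declare_metadata(meta, owner, attr):
    pass  # default method: unknown metadata is silently ignored

@declare_metadata.register(Message)
def _(meta, owner, attr):
    log.append(f"{owner}.{attr}: {meta.text}")

# Invoked by whatever machinery processes declarations:
declare_metadata(Message("hello"), "SomeClass", "field")
```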