[Python-3000] Draft pre-PEP: function annotations

Phillip J. Eby pje at telecommunity.com
Sun Aug 13 19:28:42 CEST 2006


At 01:06 AM 8/13/2006 -0700, Paul Prescod wrote:
>There is something different about annotations compared to everything else
>in Python so far. Annotations are the first feature other than docstrings
>(which are proto-annotations) in core Python where third-party tools are
>supposed to go trolling through your objects FINDING STUFF that they may
>or may not decide is interesting to them.

You make it sound like we've never had documentation tools or web servers
before.  Zope has been trolling through Python objects "finding stuff"
since *1996*.  It's not at all a coincidence that the first 
interface/adaptation systems for Python (AFAIK) were built for Zope.

So some people in the Python community have had an entire *decade* of 
experience with this kind of thing.  It's just a guess, but some of them 
might actually know a thing or two about the subject by now.  ;-)


>Now I'm sure that with all of your framework programming you've run into 
>this many times and have many techniques for making these assertions 
>unambiguous. All we need to do is document them so that people who are not 
>as knowledgeable will not get themselves into trouble.

Sure.  Here are two nice articles that people can read to understand the 
basic ideas of "tell, don't ask".  One by the "Pragmatic Programmers":

     http://www.pragmaticprogrammer.com/articles/jan_03_enbug.pdf


And another by Allen Holub on the evils of getters and setters, that 
touches on the same principles:

     http://www.javaworld.com/javaworld/jw-09-2003/jw-0905-toolbox.html



>It isn't sufficient to say: "Only smart people will use this stuff so we 
>need not worry" -- which is what the original PEP said. Even if it is true, I
>don't understand why we would bother taking the risk when the alternative 
>is so low-cost.

There are so many other pitfalls to writing extensible and interoperable
code in Python; why focus so much effort on such an incredibly minor
one?  The truth is that hardly anybody cares about writing extensible or 
interoperable code except framework developers -- and they've already *got* 
solutions.  Twisted or Zope developers would see this as a trivial use case 
for adaptation, and PEAK developers would use either adaptation or generic 
functions, and keep on moving with nary a speedbump.
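
For concreteness, here is a minimal sketch of the adaptation idea.  The
names (register_adapter, adapt, AdaptationError, Documentable) are invented
for this example -- they are not Zope's or PEAK's actual APIs:

    class AdaptationError(TypeError):
        pass

    _adapters = {}  # maps (annotation class, protocol) -> adapter callable

    def register_adapter(cls, protocol, adapter):
        _adapters[(cls, protocol)] = adapter

    def adapt(obj, protocol):
        """Return obj viewed through `protocol`, else raise AdaptationError."""
        if isinstance(obj, protocol):
            return obj
        adapter = _adapters.get((type(obj), protocol))
        if adapter is None:
            raise AdaptationError("can't adapt %r to %r" % (obj, protocol))
        return adapter(obj)

A documentation tool would call adapt(annotation, Documentable) -- for some
hypothetical Documentable protocol -- and simply skip any annotation that
raises AdaptationError.  It never has to guess what another framework's
annotations mean.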

Nonetheless, I don't object to documenting best practices; I just don't 
want to mandate a *particular* solution -- with one exception.

If Py3K is going to include overloaded functions, then that should be 
considered the One Obvious Way to work with annotations, since it's an 
"included battery" (and none of the existing 
interface/adaptation/overloading toolkits are likely to work as-is in Py3K 
without some porting effort).  But if Py3K doesn't include overloading or 
adaptation, then the One Obvious Way will be "whatever a knowledgeable 
framework programmer wants to do."
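
To make the overloaded-function route concrete, here is a sketch using
functools.singledispatch, the single-dispatch overloading that eventually
landed in the stdlib in Python 3.4.  The TypeCheck annotation class is
invented for illustration:

    from functools import singledispatch

    class TypeCheck:
        """Hypothetical annotation: value must be of the expected type."""
        def __init__(self, expected):
            self.expected = expected

    @singledispatch
    def process_annotation(annotation, name, value):
        pass  # default: annotations this tool doesn't recognize are ignored

    @process_annotation.register(TypeCheck)
    def _(annotation, name, value):
        if not isinstance(value, annotation.expected):
            raise TypeError("%s must be %s, got %r"
                            % (name, annotation.expected.__name__, value))

Dispatch keys on the annotation's own class, so another tool's annotations
fall harmlessly into the default case.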


>Pickling works because of the underscores and magic like
>"__safe_for_unpickling__". len() works because of __len__. etc. There are
>reasons there are underscores there. You understand them, I understand 
>them, Talin understands them. That doesn't mean that they are 
>self-evident. A lesser inventor might have used a method just called 
>"safe_for_pickling" and some unlucky programmer at Bick's might have 
>accidentally triggered unexpected aspects of the protocol while 
>documenting the properties of cucumbers.

Note that you're pointing out a problem that already exists today in 
Python, and has for some time.  It's why the Zope folks use interfaces and 
adaptation, and why I use overloaded functions.  The problem has nothing to 
do with annotations as such, so if you want to solve that problem, you 
should be pushing for overloaded functions in the stdlib, and using 
annotations as an example of why they're good to have.


>Can we agree that the PEP should describe strategies that people should 
>use to make their annotation recognition strategies unambiguous and 
>failure-proof?

Absolutely -- and I recommended that the PEP endorse "tell, don't ask"
processing using one of the following techniques:

1. duck typing
2. adaptation
3. overloaded functions
4. type registries (sketched just below)
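
Here is a minimal sketch of technique 4, a type registry.  Recognition
keys on the annotation's class -- a type the framework itself defines --
so there is no guessing involved.  (The __annotations__ attribute name is
an assumption about where annotations will end up living.)

    _handlers = {}  # annotation class -> handler function

    def handles(annotation_cls):
        """Decorator: register a handler for one annotation class."""
        def decorator(func):
            _handlers[annotation_cls] = func
            return func
        return decorator

    def process(func):
        for name, ann in getattr(func, '__annotations__', {}).items():
            handler = _handlers.get(type(ann))
            if handler is not None:  # not one of ours: skip, don't guess
                handler(name, ann)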

You seem to be arguing that duck typing is inadequate because it is
name-based and names can conflict.  I agree, which is why I believe #2-4
are better: they don't rely on mere name matching.  However, duck typing is
still *adequate* as long as names are sufficiently descriptive, or at least
long enough to prevent collisions.  Including a package-specific namespace
prefix like "foo_printDocumentation" is a sufficient best practice to avoid
duck-typing name collisions in virtually all cases.
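
A sketch of that convention in practice: the processor "tells" the
annotation to act, via a name whose made-up "foo" package prefix makes an
accidental collision vanishingly unlikely (again, __annotations__ is an
assumption here):

    def foo_process(func):
        for name, ann in getattr(func, '__annotations__', {}).items():
            method = getattr(ann, 'foo_printDocumentation', None)
            if method is not None:
                method(name)  # tell, don't ask: the annotation does the work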

I'm just baffled by all this focus on such a minor issue, when Python has
far more pitfalls to interoperability than this.  If you see annotations as
the first time that objects might be implicitly used by something else, I
suppose the concern makes sense.  But it really isn't the first time, and
these are well-understood problems among developers of major Python
frameworks, especially Zope.


>I think that merely documenting appropriately defensive techniques might 
>be enough to make Talin happy. Note that it isn't the processing code that 
>needs to be defensive (in the sense of try/except blocks). It is the whole
>recognition strategy that the processing code uses. Whatever recognition 
>strategy it uses must be unambiguous. It seems like it would hurt nobody 
>to document this and suggest some unambiguous techniques.

I already recommended that we do this, and have repeated my recommendation 
above for your convenience.


