[Python-3000] Draft pre-PEP: function annotations

Talin talin at acm.org
Sun Aug 13 10:18:18 CEST 2006


Paul Prescod wrote:
>> And the interpretation of:
>>
>>     def cat(infile: [doc("input stream"), opt("i")] = sys.stdin,
>>             outfile: [doc("output stream"), opt("o")] = sys.stdout
>>     ):
>>
>> is likewise unambiguous, unless the creator of the documentation or option
>> features has defined some other interpretation for a list than
>> "recursively apply to contained items".
> 
> 
> If the meaning is "unambiguous unless...", then it is ambiguous. So, as per
> my previous proposal, I think that you and I agree that we should disallow
> the stupid interpretation by encoding the obvious one in the PEP.
> 
> In which case, you need only do something like:
>>
>>     def cat(infile: docopt("input stream", "i") = sys.stdin,
>>             outfile: docopt("output stream", "o") = sys.stdout
>>     ):
>>
>> with an appropriate definition of methods for the 'docopt' type.
> 
> 
> Given that there are an infinite number of tools in the universe that could
> be processing "doc" and "opt" annotations, how would the user KNOW that
> there is one out there with a stupid interpretation of lists? They might
> annotate thousands of classes before finding out that some hot tool that
> they were planning to use next year is incompatible. So let's please define
> a STANDARD way of attaching multiple annotations to a parameter. Lists seem
> like a no-brainer choice for that.
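> 
> A rough sketch of that "recursively apply to contained items" rule, as a
> tool-side helper (the function names here are made up for illustration, not
> taken from the draft PEP):
> 
>     def process_annotation(annotation, handle_single):
>         # A list is the standard "multiple annotations" container, so
>         # recurse into it; anything else goes to the tool's own handler.
>         if isinstance(annotation, list):
>             for item in annotation:
>                 process_annotation(item, handle_single)
>         else:
>             handle_single(annotation)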
> 
>> Since many people seem to be unfamiliar with overloaded functions, I would
>> just like to take this opportunity to remind you that the actual overload
>> mechanism is irrelevant.  If you gave 'doc' objects a 'printDocString()'
>> method and 'opt' objects a 'setOptionName()' method, the exact same logic
>> regarding extensibility applies.  The 'docopt' type would simply implement
>> both methods.
>>
>> This is normal, simple standard Python stuff; nothing at all fancy.
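> 
> To spell out what that would look like (a sketch only; the class bodies are
> illustrative, not from the draft PEP):
> 
>     class doc:
>         def __init__(self, text):
>             self.text = text
>         def printDocString(self):
>             print(self.text)
> 
>     class opt:
>         def __init__(self, name):
>             self.name = name
>         def setOptionName(self, options):
>             options.append("-" + self.name)
> 
>     class docopt(doc, opt):
>         # recognized by both tools, because it implements both methods
>         def __init__(self, text, name):
>             self.text = text
>             self.name = name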
> 
> 
> The context is a little bit different than standard duck typing.
> 
> Let's say I define a function like this:
> 
> def car(b):
>     "b is a list-like object"
>     return b[0]
> 
> Then someone comes along and does something I never expected. They invent a
> type representing a list of bits in a bitfield. They pass it to my function
> and everything works trivially. But there's something important that
> happened. The programmer ASSERTED, by passing the bitfield list to the
> function 'car', that it is a list-like object. My code wouldn't have tried
> to treat it as a list if the user hadn't passed it as one explicitly.
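> 
> For instance (BitField is a made-up stand-in for such a type):
> 
>     class BitField:
>         # A "list of bits" type; bit 0 is the least significant bit.
>         def __init__(self, value, width):
>             self.value, self.width = value, width
>         def __getitem__(self, i):
>             if not 0 <= i < self.width:
>                 raise IndexError(i)
>             return (self.value >> i) & 1
> 
>     car(BitField(11, 4))   # works: the caller chose to treat it as a list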
> 
> Now look at it from the point of view of function annotations. As we said
> before, the annotations are inert. They are just attached. There is some
> code like a type checker or documentation generator that comes along after
> the fact and scoops them up to do something with them. The user did not
> assert (at the language level!) that any particular annotation applies to
> any particular annotation processor. The annotation processor is just
> looking for stuff that it recognizes. But what if it thinks it recognizes
> something but does not?
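> 
> Concretely, "scooping them up" means something like the loop below; where
> the annotations actually live ('__annotations__' here) is my assumption,
> since the draft PEP still has to pin that down:
> 
>     def generate_docs(func):
>         for name, annotation in getattr(func, '__annotations__', {}).items():
>             if hasattr(annotation, 'printDocument'):   # the "recognition" step
>                 annotation.printDocument()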
> 
> Consider this potential case:
> 
> BobsDocumentationGenerator.py:
> 
> class BobsDocumentationGeneratorAnnotation:
>     def __init__...
>     def printDocument(self):
>         print self.doc
>     def sideEffect(self):
>         deleteHardDrive()
> 
> def BobsDocumentationGenerator(annotation):
>     if hasattr(annotation, "printDocument"):
>         annotation.printDocument()
> 
> SamsDocumentationGenerator.py:
> 
> class SamsDocumentationGeneratorAnnotation:
>     def __init__...
>     def printDocument(self):
>         return self.doc
>     def sideEffect(self):
>         email(self.doc, "python-dev at pytho...")
> 
> def SamsDocumentationGenerator(annotation):
>     if hasattr(annotation, "printDocument"):
>         print annotation.printDocument()
>         annotation.sideEffect()
> 
> These objects, _by accident_, have the same method signature but different
> side effects and return values. Nobody anywhere in the system made an
> incorrect assertion. They just happened to be unlucky in the naming of
> their methods. (Unbelievably unlucky, but you get the drift.)
> 
> One simple way to make it unambiguous would be to do a test more like:
> 
>   if hasattr(annotation, SamsDocumentationGenerator.uniqueObject): ...
> 
> The association of the unique object with an annotator object would be an
> explicit assertion of compatibility.
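> 
> One way to spell that out (the marker name is purely illustrative):
> 
>     class SamsDocumentationGeneratorAnnotation:
>         # An explicit, tool-specific assertion of compatibility; an
>         # accidental method-name collision can't be mistaken for it.
>         _sams_doc_generator_marker_ = True
>         def __init__(self, doc):
>             self.doc = doc
>         def printDocument(self):
>             return self.doc
> 
>     def SamsDocumentationGenerator(annotation):
>         if getattr(annotation, "_sams_doc_generator_marker_", False):
>             print(annotation.printDocument())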
> 
> Can we agree that the PEP should describe the strategies people should use
> to make their annotation recognition unambiguous and failure-proof?
> 
> I think that merely documenting appropriately defensive techniques might be
> enough to make Talin happy. Note that it isn't the processing code that
> needs to be defensive (in the sense of try/catch blocks). It is the whole
> recognition strategy that the processing code uses. Whatever recognition
> strategy it uses must be unambiguous. It seems like it would hurt nobody to
> document this and suggest some unambiguous techniques.

This says pretty much what I was trying to say, only better :)

I think I am going to chill out on this topic for a bit - it seems that
there are folks who have a better understanding of the issue than I do,
and the main reason I was commenting on the PEP at all was that comments
were what was asked for. I don't really have a big stake in the whole
annotation effort; there are other issues that I am more interested in.

-- Talin


