[Python-3000] Fwd: Conventions for annotation consumers

Ron Adam rrr at ronadam.com
Sat Aug 19 12:29:31 CEST 2006


Nick Coghlan wrote:

[Clipped other good points.]

> 3. The question has been raised as to whether or not there is a practical way 
> for a developer to use annotations that make sense to a *static* analysis tool 
> that doesn't actually execute the Python code
> 
> If someone figures out a way to handle the last point *without* compromising 
> the ease of use for annotations designed to handle point 1, all well and good. 
> Otherwise, I'd call YAGNI. OK, annotations wouldn't be useful for tools like 
> pychecker in that case. So be it - to be really useful for a tool like 
> pychecker they'd have to be ubiquitous, and that's really not Python any more.

Something I've been looking for is an alternative way to generate 
function signatures that are closer to those used in the documentation.

For example, help(str.find) gives:

     find(...)
         S.find(sub [,start [,end]]) -> int

         Return the lowest index in S where substring sub is found,
         such that sub is contained within s[start,end].  Optional
         arguments start and end are interpreted as in slice notation.

         Return -1 on failure.

But I am wondering whether annotations could help with both pydoc and 
pychecker.  Then function specifications could be generated that look 
more like ...

    str.find(sub:IsString [,start:IsInt [,end:IsInt]]) -> IsInt

instead of just...

    find(...)


[See below where I'm going with this.]
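As a rough sketch of the pydoc side: assuming Py3k stores the
annotations in a function's __annotations__ mapping as proposed, a
helper could render a doc-style signature line.  (format_signature and
the Is* validators below are hypothetical names, not anything that
exists today.)

```python
def IsString(arg):
    # Hypothetical validator used as annotation meta-data.
    return isinstance(arg, str)

def IsInt(arg):
    return isinstance(arg, int)

def format_signature(func):
    # Build a doc-style signature line from func's annotations,
    # assuming each annotation has a readable __name__.
    ann = getattr(func, '__annotations__', {})
    params = []
    for name in func.__code__.co_varnames[:func.__code__.co_argcount]:
        a = ann.get(name)
        params.append('%s:%s' % (name, a.__name__) if a else name)
    sig = '%s(%s)' % (func.__name__, ', '.join(params))
    ret = ann.get('return')
    if ret is not None:
        sig += ' -> %s' % ret.__name__
    return sig

def find(sub: IsString, start: IsInt = 0, end: IsInt = None) -> IsInt:
    pass

print(format_signature(find))
# find(sub:IsString, start:IsInt, end:IsInt) -> IsInt
```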



> All that said, I'm still not entirely convinced that function annotations are 
> a good idea in the first place - I'm inclined to believe that signature 
> objects providing a "bind" method that returns a dictionary mapping the method 
> call's arguments to the function's named parameters will prove far more 
> useful. With this approach, the 'annotations' would continue to be supplied as 
> arguments to decorator factories instead of as expressions directly in the 
> function header. IOW, I've yet to see any use case that is significantly 
> easier to write with function annotations instead of decorator arguments, and 
> several cases where function annotations are significantly worse.
>
> For one thing, function annotations are useless for decorating a function that 
> was defined elsewhere, whereas it doesn't matter where the function came from 
> when using decorator arguments. The latter also has a major benefit in 
> unambiguously associating each annotation with the decorator that is the 
> intended consumer.


I've been thinking about this also.  It seems there may be an effort 
to separate the "meta-data" and the "use of meta-data" a bit too 
finely.  What you then get is a lock-and-key effect: the decorators 
that use the meta-data and the meta-data itself are separate, yet at 
the same time strongly associated by location (module) and developer. 
That may be a bit overstated, but I do think it's a real concern. 
Then again, it's probably more a style-of-use issue than an issue 
with annotations themselves.

The meta-data can also *be* the validator.  So instead of just using 
Float, Int, Long, etc., and writing a smart validator to read and 
check each of those, you can call the meta-data directly with each 
related argument to validate it, modify it, or do whatever else is 
needed.
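A tiny sketch of that idea (this InRange closure is illustrative, not
anything that exists):

```python
def InRange(start, stop):
    # Meta-data that *is* the validator: a closure you can attach
    # as an annotation and also call directly on an argument.
    def inrange(arg):
        return start <= arg <= stop
    return inrange

check = InRange(3, 9)    # the annotation object itself...
print(check(5))          # ...is callable on the argument: True
print(check(12))         # False
```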

So this ...

  > @docstring
  > @typechecker
  > @constrain_values
  > def foo(a: [doc("frobination count"),
  >             type(Number),
  >             constrain_values(range(3,9))],
  >         b: [type(Number),
  >             # This can be only 4, 8 or 12
  >             constrain_values([4,8,12])]) -> type(Number):


could be reduced to ...    (removing redundant checks as well)

     from metalib import *

     @callmeta
     def foo(a: [SetDoc("frobination count"), InRange(3, 9)],
             b: InSet([4, 8, 12])) -> IsNumber:
         # code


Which isn't too bad. Or even as positional decorator arguments...

     from metalib import *

     @callmeta( [SetDoc("frobination count"), InRange(3,9)],
                InSet([4,8,12]),
                IsNumber )
     def foo(a, b):
         # code


Both of these are very similar.  The callmeta decorator would be 
implemented differently in each case, but using the validators as the 
meta-data makes both versions easier to read and use.  IMHO, of 
course.
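For the annotated version, a callmeta sketch might look like this
(assuming the proposed __annotations__ attribute; all the names here
are made up for illustration):

```python
import functools

def callmeta(func):
    # Sketch only: assumes each annotation is a callable validator
    # (or a list of them) that returns False to reject a value.
    ann = dict(getattr(func, '__annotations__', {}))
    names = func.__code__.co_varnames[:func.__code__.co_argcount]

    def check(name, value):
        metas = ann.get(name, [])
        if not isinstance(metas, list):
            metas = [metas]
        for meta in metas:
            if callable(meta) and meta(value) is False:
                raise ValueError('%s=%r rejected by %s' % (name, value, meta))

    @functools.wraps(func)
    def wrapper(*args, **kwds):
        for name, value in list(zip(names, args)) + list(kwds.items()):
            check(name, value)
        result = func(*args, **kwds)
        check('return', result)   # validate the result, too
        return result
    return wrapper

def InRange(start, stop):
    def inrange(arg):
        return start <= arg <= stop
    return inrange

def IsNumber(arg):
    return isinstance(arg, (int, float))

@callmeta
def foo(a: InRange(3, 9), b: IsNumber) -> IsNumber:
    return a + b

print(foo(5, 2.0))   # 7.0
# foo(42, 1) would raise ValueError, since 42 is outside InRange(3, 9)
```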


The metalib routines could be something (roughly) like...

     def IsNumber(arg):
         return type(arg) in (float, int, long)

     def IsString(arg):
         return type(arg) in (str, unicode)

     def InSet(list_):
         def inset(arg):
             return arg in list_
         return inset

     def InRange(start, stop):
         def inrange(arg):
             return start <= arg <= stop
         return inrange

     ... and so on.


(Or it might be better for them to be objects.)
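A minimal sketch of the object flavour (again, hypothetical):

```python
class InRange:
    # Object form of a validator: the instance carries the constraint,
    # is callable on the argument, and has a readable repr that a tool
    # like pydoc could show in a generated signature.
    def __init__(self, start, stop):
        self.start, self.stop = start, stop

    def __call__(self, arg):
        return self.start <= arg <= self.stop

    def __repr__(self):
        return 'InRange(%r, %r)' % (self.start, self.stop)

check = InRange(3, 9)
print(check(5), check(12))   # True False
print(check)                 # InRange(3, 9)
```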


Anyway, it's very late and I'm probably overlooking something, and I 
haven't actually tried any of these, so your mileage may vary.  ;-)

Cheers,
   Ron






