Hi everyone,
I have a question about the typing system that may make heads explode a bit. It's certainly messing with mine.
The mypy documentation gives a good summary of how you describe the type signature of second-order decorators. Let me stick in some names for these things to clarify the question:
- A decorable (or decorated) function is TypeVar('F', bound=Callable[..., Any])
- A zero-argument decorator is a Callable[[F], F]
- A multi-argument decorator is a Callable[..., ZeroArgumentDecorator] (it usually takes only kwargs, which plain Callable can't express, hence the ...)
- A polydecorator (cf the example in the mypy docs of a decorator that supports both bare calls and ones with arguments) is basically a union of the two: a callable which, if called with a single positional DecorableFunction argument, returns a DecorableFunction, and if called without one, returns a ZeroArgumentDecorator.
The documentation shows how you can type-annotate polydecorators using @overload, and this is all well and good.
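In case it helps frame things, here's roughly the shape of that documented pattern, hand-written for a single decorator (the names here are made up for illustration; the mypy docs use a very similar atomic-style example):

```python
from typing import Any, Callable, Optional, TypeVar, overload

F = TypeVar("F", bound=Callable[..., Any])

# A hand-written polydecorator: both bare @atomic and
# @atomic(savepoint=False) type-check.
@overload
def atomic(func: F) -> F: ...
@overload
def atomic(*, savepoint: bool = True) -> Callable[[F], F]: ...

def atomic(func: Optional[F] = None, *, savepoint: bool = True):
    def wrap(f: F) -> F:
        # Real wrapping logic elided; just pass the function through.
        return f
    if func is None:
        return wrap        # called with arguments: @atomic(...)
    return wrap(func)      # called bare: @atomic
```

This works fine when you write the overloads by hand on each decorator; the whole question is how to avoid doing that.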
But... writing nice polydecorators is a PITA, so as part of a tools library I'm working on, I'm writing a third-order decorator called @flex_decorator. This thing takes a "decorator spec" -- a function which takes a positional decorable function argument plus arbitrary kwargs and returns a decorated function -- and returns a polydecorator. This way, you can say something like
@flex_decorator
def my_decorator(target: Callable, *, arg1=foo, arg2=bar):
    ...  # do something and return a new Callable
and then you can either say
@my_decorator
def some_function(...)
which uses the default values for all the kwargs (and raises an error if some kwargs don't have defaults), or
@my_decorator(arg1=..., arg2=...)
def some_function(...)
which sets values. This frees the user of my_decorator from having to remember whether it requires a () or not, and frees the author of my_decorator from having to handle all that delicate branching and overload annotation.
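For reference, the runtime side of flex_decorator isn't the hard part. A simplified sketch (the names are hypothetical, and the Any annotations are exactly what I'm trying to get rid of) looks like:

```python
import functools
from typing import Any, Callable

def flex_decorator(spec: Callable[..., Any]) -> Callable[..., Any]:
    """Turn a decorator spec into a decorator usable with or without ()."""
    @functools.wraps(spec)
    def poly(*args: Any, **kwargs: Any) -> Any:
        if len(args) == 1 and callable(args[0]) and not kwargs:
            # Bare use: @my_decorator -- decorate with all-default kwargs.
            return spec(args[0])
        # Parameterized use: @my_decorator(arg1=..., arg2=...) -- bind the
        # kwargs now and return a zero-argument decorator.
        return lambda target: spec(target, **kwargs)
    return poly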
And so I come to the problem: I need to figure out how to manage type signatures in the definition of flex_decorator. Internally, that function constructs a polydecorator out of the decorator spec and returns it. And inside the definition of flex_decorator itself, it's easy enough to put the @overload instructions on the nested function decoration.
But what's the return type of flex_decorator? The problem I'm facing is that I'm not sure how to define an overloaded type signature on its own, as opposed to attaching @overload to a concrete function definition. And if I can't, then when I actually use a flex_decorator as in the example above, the parameterized form -- @my_decorator(arg1=..., arg2=...) on def some_function(...) -- causes mypy to conclude: 'Untyped decorator makes function "some_function" untyped'.
To give a concrete example, one of the call sites looks like this:
@flex_decorator
def cachemethod(
    function: WrappedFunctionType,
    *,
    cache: MethodCacheArgument = dict,
    lock: MethodLockArgument = False,
    key: Optional[Callable[..., KeyType]] = None,
    cache_exceptions: bool = False,
    **kwargs: Any,
) -> '_WrappedDescriptor':
The resulting cachemethod object should end up with the type signature
@overload
def cachemethod(function: WrappedFunctionType) -> _WrappedDescriptor: ...
@overload
def cachemethod(*, cache: MethodCacheArgument = dict, ....) -> Callable[[WrappedFunctionType], _WrappedDescriptor]: ...
But I'm not sure how flex_decorator can best cause that to be its type signature, because stuffing @overload defs into something else's def isn't quite well-specified.
Does anyone have any idea of how to handle this? I'd really like to be able to provide this functionality to users without completely breaking their ability to do type-checking!