
On 16 August 2014 04:38, Łukasz Langa <lukasz@langa.pl> wrote:
Ah, that’s right, I meant singledispatch. Sorry for the confusion! :)
I think "singledispatch" is still potentially relevant to Matthew's idea of a library specifically devoted to dispatch and signature matching. My rationale is that "functools.singledispatch" actually has some pretty neat funcationality behind it. I'm talking things like functools._c3_merge, functools._compose_mro, functools._find_impl - all the machinery that is needed to take the complexity of ABC registration and turn it into something that can be checked quickly at runtime. Even the trick with using abc.get_cache_token() to invalidate the dispatch caches whenever the object graph changes via explicit ABC registration is worth illuminating further. Decoupling the underlying dispatch machinery from the specific functools use case also opens up additional possibilities for type based dispatch. The way Julia uses multiple dispatch for binary operators is interesting (http://julia.readthedocs.org/en/latest/manual/methods/), although not directly applicable to Python's binary operators, since we use the "return NotImplemented" dance to decide which implementation to use. That's where I can see a possible fit for something like multiple dispatch support in the standard library: making it easier to write binary operator overloads correctly. A *lot* of folks make the mistake of raising TypeError or NotImplementedError directly in their operator overload implementations, rather than returning the NotImplemented singleton that tells the interpreter to try the other type. Even some of the CPython builtins get that wrong, since the sq_concat and sq_repeat slots in C don't properly support the type coercion dance, so you *have* to raise the exception yourself if you're only implementing those without implementing nb_add and nb_mul (types defined in Python automatically populate both sets of C level slots if you define __add__ or __mul__). Dealing with the "NotImplemented dance" properly is also why functools.total_ordering got substantially slower in Python 3.4 - it isn't at risk of blowing up with RecursionError in some cases any more, but it paid a hefty price in speed to get there. Notation wise, I strongly encourage going with the format defined in PEP 443: a "default implementation" that defines the namespace all the other implementations will hook into, along with explicit registration of additional overloads. If there's no sensible default implementation, it can be written to raise an appropriate exception (a dispatch library could even help with raising an appropriately formatted type error). For example, here's how a PEP 443 inspired notation would handle the task of defining a stricter version of sequence repetition than the default "*" binary operator: @functools.multidispatch(2) # Multiple dispatch on the first 2 positional arguments def repeat(lhs, rhs): raise BinaryDispatchError('repeat', lhs, rhs) # See below for possible definition @example.register(numbers.Integral, collections.abc.Sequence): @example.register(collections.abc.Sequence, numbers.Integral): def repeat(lhs, rhs): return lhs * rhs # Assume this useful helper class is defined somewhere... class BinaryDispatchError(TypeError): def __init__(self, fname, lhs, rhs): self.fname = fname self.lhs_kind = lhs_kind = lhs.__class__.__name__ self.rhs_kind = rhs_kind = rhs.__class__.__name__ msg = "Unsupported operand type(s) for {!r}: {!r} and {!r}".format(fname, lhs_kind, rhs_kind) super().__init__(msg) Regards, Nick. -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia