[Python-3000] What makes infix operators special? (was Re: Type parameterization (was: Re: Type annotations: annotating generators))

Nick Coghlan ncoghlan at gmail.com
Sun May 21 09:51:31 CEST 2006


Travis E. Oliphant wrote:
> Guido van Rossum wrote:
>> On 5/20/06, Talin <talin at acm.org> wrote:
>>> As far as how to pick the set of operators, well my notion on that was
>> I suggest you let this rest for now. We have plenty of existing
>> operators, and I'd rather wait for a more compelling reason to add
>> more.
>>
> 
> I'm sorry to interrupt the resting, but I just wanted to remind 
> everybody that Numerical computing people would still like a way to 
> distinguish between "element-by-element" multiplication and tensor 
> contraction (matrix multiplication).  A new operator could definitely 
> help the situation.
> 
> The existing dance between matrix objects and array objects to reuse the 
> single "*" operator can get confusing.

<This is a rambling thinking-out-loud kind of post. You have been warned ;)>

Something I've occasionally wondered about is just what it is that makes 
operators so much more convenient than methods or builtin functions for some 
operations.

The first benefit is the obvious one: they're a concise mnemonic for 
operations that may otherwise require a long method name in order to be clear 
(compare A.symmetric_difference(B) with A ^ B for sets). A related aspect is 
the ability to avoid repeating yourself for in-place operations (A ^= B 
instead of A = A.symmetric_difference(B)).
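For example, with builtin sets the operator spelling is both shorter and
avoids naming the target twice in the augmented form:

    A = set([1, 2, 3])
    B = set([2, 3, 4])

    # Method spelling vs operator spelling of the same operation
    assert A.symmetric_difference(B) == A ^ B    # set([1, 4])

    # The augmented form avoids repeating the target name
    A ^= B                                       # A is now set([1, 4])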

The second benefit I see is that operators embody a standard form of 
type-based dispatch used to implement support for mixed-type operations (the 
dance with special methods that return NotImplemented).
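Roughly, that dance looks like this (a minimal sketch; the 'Metres' type is
invented purely for illustration):

    class Metres(object):
        """Toy type showing the mixed-type dispatch protocol."""
        def __init__(self, value):
            self.value = value

        def __mul__(self, other):
            # Handle the operand types we understand...
            if isinstance(other, (int, float)):
                return Metres(self.value * other)
            # ...and let the other operand have a go at anything else
            return NotImplemented

        __rmul__ = __mul__    # so 3 * Metres(2) works too

    # For 3 * Metres(2), int.__mul__ returns NotImplemented, so the
    # interpreter falls back to Metres.__rmul__, giving Metres(6).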

The latter benefit is a question of semantics, while the former is a question 
of syntax. I think both are important, but it's the syntactic one that is the 
real driver for the use of operator overloading. (NumPy's ufuncs appear to be 
based on the idea of providing the semantic benefit for additional operations 
without any syntactic support).
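If I'm reading NumPy correctly, that means the mixed-type handling gets 
exposed through ordinary callables rather than through new syntax (a rough 
sketch):

    import numpy

    a = numpy.array([[1, 2], [3, 4]])
    b = numpy.array([[5, 6], [7, 8]])

    numpy.multiply(a, b)   # element-by-element, same as a * b for arrays
    numpy.dot(a, b)        # matrix multiplication, with no operator at all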

In the mathematical world of scalars and matrices, we have element-by-element 
multiplication, matrix multiplication, the vector scalar product and the 
vector cross product all competing for the same standard operator ('*').

Using '@' would now be fairly counterintuitive, given that symbol's 
association with decorators.

The doubling trick used to separate true division from integer division isn't 
available, since the doubled operator '**' is already used for exponentiation.
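For reference, here's the division precedent (with 'from __future__ import 
division' supplying the Python 3000 semantics in today's Python):

    from __future__ import division

    7 / 2     # true division:  3.5
    7 // 2    # floor division: 3
    2 ** 10   # the doubled '*' already means exponentiation: 1024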

I wonder if something could be done with the idea of using additional symbol 
pairs for operators. Then something like '*.' could be added for the vector 
scalar product (aka the dot product), and '*~' could be added for matrix 
multiplication. It might even be possible to permit '/%' for divmod.
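Purely as a thought experiment, I imagine those operators would desugar to 
new special methods in the usual way (the __dotmul__ and __tensormul__ names 
below are invented; only __divmod__ exists today, via the divmod() builtin):

    class Vector(object):
        """Hypothetical sketch - none of this syntax exists."""
        def __init__(self, data):
            self.data = list(data)

        # a *. b  ->  a.__dotmul__(b)       (invented name)
        def __dotmul__(self, other):
            return sum(x * y for x, y in zip(self.data, other.data))

        # a *~ b  ->  a.__tensormul__(b)    (invented name)
        # a /% b  ->  a.__divmod__(b), which divmod() already dispatches to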

However, I also wonder if that way lies madness ;)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
             http://www.boredomandlaziness.org

