[PEP draft 2] Adding new math operators

Tim Hochberg tim.hochberg at ieee.org
Thu Aug 10 16:16:58 EDT 2000


hzhu at localhost.localdomain (Huaiyu Zhu) writes:

I'm going to punt on the scalar issue, not because I don't think it's
important, but rather because I agree that it's a mess.

> 
> >The second issue, which I believe you've been referring to, is what
> >happens if f returns an array when you're in a matrix environment, or
> >vice versa. In this case, both notations are equally vulnerable
> >(actually the .E notation is slightly less vulnerable, but not enough
> >to really matter).
> >
> >a * f()
> >a ~* f()
> >
> >will both fail if f() doesn't have the prevailing type.
> 
> The point I was making was: if you use the prevailing-type approach, you can
> just write wrapper functions accepting given type and return given type, so
> you do know the return type.  This is not possible in mixed-type approach,
> because you don't know in what context the function will be used.  That's
> what I meant earlier by "not having enough information to decide".
> 
> In essence, I really don't like a programming language in which the
> semantics of an operation depend on far-away context.

This makes no sense to me; perhaps you need to clarify what you mean
by the prevailing-type approach. As far as I can tell, the effect of *
and ~* would be completely determined by the objects they operate
on. So the following oversimplified function:

def f(a, b):
    return a ~* b

will return:

(a.E * b.E).M if a and b are matrices,
(a.M * b.M).E if a and b are arrays.

and fail for mixed types.

Yes, this matches the return type to the input type, but since it
performs a completely different operation depending on the input type,
which is probably not the intent, I don't see how this helps. The ".E"
notation has the opposite problem: the operation is well defined, but
the return type cannot be easily matched.

There are two approaches that avoid this problem. One is the
shadow-type approach. The other would be to have the sense of the ~*
and * operations agree between MatPy and NumPy. There are, however,
other issues with both of these approaches.
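To make the point concrete, here is a minimal, self-contained sketch
(toy Arr and Mat classes invented for illustration, not the MatPy or
NumPy API) showing how the same expression inside f performs a
completely different operation depending on the operand types:

```python
class Arr:
    """Toy array type: * is elementwise multiplication."""
    def __init__(self, data):
        self.data = data  # list of rows
    def __mul__(self, other):
        return Arr([[x * y for x, y in zip(r1, r2)]
                    for r1, r2 in zip(self.data, other.data)])

class Mat:
    """Toy matrix type: * is the matrix product."""
    def __init__(self, data):
        self.data = data
    def __mul__(self, other):
        n = len(other.data)
        return Mat([[sum(row[k] * other.data[k][j] for k in range(n))
                     for j in range(len(other.data[0]))]
                    for row in self.data])

def f(a, b):
    # Which operation this performs depends entirely on the caller's types.
    return a * b

print(f(Arr([[1, 2], [3, 4]]), Arr([[1, 2], [3, 4]])).data)  # [[1, 4], [9, 16]]
print(f(Mat([[1, 2], [3, 4]]), Mat([[1, 2], [3, 4]])).data)  # [[7, 10], [15, 22]]
```

The same function body squares elementwise for one pair of arguments
and computes a matrix square for the other, which is exactly the
type-dependent behavior under discussion.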

> >You don't need to write it up. Just supply the code and how you would
> >write it up in ~* notation. I'll write it up in .E notation and point
> >out the relevant pitfalls in the ~* version and then you can return
> >the favor.
> 
> OK, you asked for it.  Here's what I got by rgrep'ing one of my neural
> network modules.  However, to really see the effects, keep in mind that:

This should shut me up for a while....


> - They occur in the middle of (substantially more lines of) matrix
>   computations,
> - Some of the variables can be pure numbers.
> 
> lambda y:(1-y.__dotpow__(2)),
> lambda y:2*y.__dotmul__(1-y.__dotpow__(2)),
> f1_2 = f1.__dotpow__(2)
> u = f1.__dotmul__(w)
> S = f1_2.__dotmul__(R) - f2.__dotmul__(v)
> dAA = S * xT.__dotpow__(2)
> rA = self.dA.__dotdiv__(self.dAA)
> rb = self.db.__dotdiv__(self.dbb)
> s = x.__dotmul__(o*a)
> y = r.__dotmul__(o*b)
> e = mean(v.__dotpow__(2))/2
> u = v.__dotmul__(o*b)
> S = 1-r.__dotpow__(2)
> w = S.__dotmul__(u*bet - s)
> dbb = mean(r.__dotpow__(2))
> dab = mean(x.__dotmul__(S).__dotmul__(v+y))
> daa = mean((x.__dotmul__(S).__dotmul__(b)).__dotpow__(2))
> db  = mean(r.__dotmul__(v))
> da  = mean(x.__dotmul__(w))
> 
> Does anybody consider this as very Pythonic?  :-)
> 
> Huaiyu
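For comparison, under a convention where * and ** are already
elementwise (as with NumPy arrays), a few of the quoted lines can be
written with plain operators. The small input values below are
hypothetical stand-ins for the neural-network variables, chosen only
to make the snippet runnable:

```python
import numpy as np

# Hypothetical stand-ins for the variables in the quoted module.
y = np.array([0.1, 0.5, 0.9])
v = np.array([0.2, -0.3, 0.4])
r = np.array([0.6, 0.1, -0.2])

g  = 1 - y ** 2           # lambda y: (1 - y.__dotpow__(2))
e  = np.mean(v ** 2) / 2  # e = mean(v.__dotpow__(2))/2
S  = 1 - r ** 2           # S = 1 - r.__dotpow__(2)
db = np.mean(r * v)       # db = mean(r.__dotmul__(v))
```

This is the readability gap the thread is arguing about: whichever
operation * is bound to in the prevailing type, the other operation is
pushed into method-call or special-operator notation.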

-tim
