[PEP draft 2] Adding new math operators

Huaiyu Zhu hzhu at localhost.localdomain
Thu Aug 10 14:34:42 EDT 2000


On Thu, 10 Aug 2000 01:26:19 GMT, Tim Hochberg <tim.hochberg at ieee.org> wrote:
>
>Why wouldn't it just be:
>
>z = (x.E + y.E).M
>
>given that you already know that x and y are vectors of matrix type.

There appears to be a problem with this discussion: the "conversion
approach" is actually several very different proposals, depending on:
- Is the flavor persistent after each operation?
- Is the .E applicable to all objects?

If .E is not applicable to pure numbers, there are further distinctions:
- do you always know that the objects are not pure numbers anywhere you
  want to use .E?  (The emphasis is on "always".)
- in functions, do you always check input type, or do you mandate that no
  pure number is passed in?

I don't think these are realistic proposals unless all these distinctions
are explicitly specified.  At least Konrad and you are talking about quite
different proposals (regarding .E on pure numbers).
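To make the "conversion approach" concrete, here is one possible reading of
it as a minimal sketch.  The class names (Matrix, Elementwise) and the exact
behavior are my assumptions, not part of any proposal's spec; in particular
this sketch picks one answer to each open question above: flavor is not
persistent (.M converts back), and .E is not defined on pure numbers.

```python
# Hypothetical sketch of the .E/.M conversion approach.  Nothing here is
# from an actual proposal's spec; it only illustrates the mechanism.

class Matrix:
    """Matrix flavor: * means matrix multiplication."""
    def __init__(self, rows):
        self.rows = [list(r) for r in rows]

    @property
    def E(self):
        """Convert to elementwise flavor."""
        return Elementwise(self.rows)

    def __mul__(self, other):
        # Ordinary matrix product.
        n = len(other.rows)
        return Matrix([[sum(a[k] * other.rows[k][j] for k in range(n))
                        for j in range(len(other.rows[0]))]
                       for a in self.rows])


class Elementwise:
    """Elementwise flavor: + and * act element by element."""
    def __init__(self, rows):
        self.rows = [list(r) for r in rows]

    @property
    def M(self):
        """Convert back to matrix flavor (flavor is not persistent here)."""
        return Matrix(self.rows)

    def __add__(self, other):
        return Elementwise([[a + b for a, b in zip(r, s)]
                            for r, s in zip(self.rows, other.rows)])


x = Matrix([[1, 2], [3, 4]])
y = Matrix([[5, 6], [7, 8]])
z = (x.E + y.E).M          # elementwise sum, back in matrix flavor
# z.rows == [[6, 8], [10, 12]]
```

Note that `(1 + x.E).M` fails in this sketch, because the pure number 1 has
no idea how to add an Elementwise object — which is exactly the pure-number
distinction at issue.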

>I don't think it has anything to do with the type/class split. It
>would be perfectly possible to add E and M methods to scalars. It's
>just unlikely that they are going to grow such methods to suit
>us. Heck even I don't think that would be a good idea.

I meant that if there were no type/class split, then application developers
could just write their own E and M methods and test whether they are
suitable.  But in the current situation a design has to come first, dealing
with all possible situations, before any code is written.

>Values can be scalars, just not arbitrary scalars. I consider rank 0
>arrays scalars and they would work fine. It's core python numeric
>types that would not work (ints, floats and complexes). This means
>that it's at least possible to design functions that always return
>matrix/array type objects, returning rank(0) arrays when they want to
>return a scalar. Objects returned from these functions would then 
>play nicely with above notation.

Not just output.  You effectively require that none of the inputs to your
formulas are pure Python numbers.  I'm not sure that is realistic.  (Also
see Konrad's note about NumPy history.)


>The second issue, which I believe you've been referring to, is what if
>f returns an array when you're in a matrix environment or vice-versa. In
>this case, both notations are equally vulnerable (actually the .E
>notation is slightly less vulnerable, but not enough to really matter).
>
>a * f()
>a ~* f()
>
>will both fail if f() doesn't have the prevailing type.

The point I was making was: if you use the prevailing-type approach, you can
just write wrapper functions that accept a given type and return a given
type, so you do know the return type.  This is not possible in the
mixed-type approach, because you don't know in what context the function
will be used.  That's what I meant earlier by "not having enough information
to decide".
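A minimal sketch of that wrapper point, assuming a prevailing-type module
where everything should have matrix flavor.  The names `asmatrix` and `wrap`
are hypothetical helpers, not from any real library:

```python
# Hedged sketch: in a prevailing-type scheme, a module can coerce every
# function result to its own flavor, so return types are known locally.
# Here the "matrix flavor" is just nested lists; a pure number is promoted
# to a 1x1 matrix.

def asmatrix(obj):
    """Coerce a pure number or nested sequence into matrix flavor."""
    if isinstance(obj, (int, float)):
        return [[obj]]
    return [list(r) for r in obj]

def wrap(f):
    """Return a version of f whose result always has matrix flavor."""
    def wrapped(*args, **kwargs):
        return asmatrix(f(*args, **kwargs))
    return wrapped

# A library function that returns a bare scalar...
def trace_raw(m):
    return sum(m[i][i] for i in range(len(m)))

# ...wrapped once at the module boundary, so callers never see a scalar:
trace = wrap(trace_raw)
# trace([[1, 2], [3, 4]]) == [[5]]
```

In the mixed-type scheme no such wrapper can be written, because the correct
return flavor depends on the caller's context, which the wrapper cannot see.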

In essence, I really don't like a programming language in which the
semantics of an operation depend on far-away context.

>> This would be a problem only if objects change flavor frequently.  Given the
>> convenience of using ~op there's really no need to do that, at least no more
>> than what is already the case now between NumPy and MatPy interfaces. 
>
>This is also an issue if you don't know what type (array or matrix)
>gets returned by a function. I see this as very similar to the scalar issue.

No.  If they don't change flavor, you can implement functions accordingly.

[ discussion about why both approaches fail snipped ]

You could say both of them fail equally under the assumption that in both
approaches you are unsure of the type of return values of functions.  But
this assumption is not true.  In one approach it is very easy to fix the
return type; in the other it is impossible.

>> The shadow type does introduce a lot of additional problems, but it at least
>> ensures that you know the flavors of _all_ objects in a piece of code.  
>
>As long as you check _all_ the objects at the boundaries and check
>_all_ values returned by functions this is true.

No checking is necessary at all if, in a given module, you only import
functions with a given flavor.

>You don't need to write it up. Just supply the code and how you would
>write it up in ~* notation. I'll write it up in .E notation and point
>out the relevant pitfalls in the ~* version and then you can return
>the favor.

OK, you asked for it.  Here's what I got by rgrep'ing one of my neural
network modules.  However, to really see the effects, keep in mind that:

- They occur in the middle of (substantially more lines of) matrix
  computations,
- Some of the variables can be pure numbers.

lambda y:(1-y.__dotpow__(2)),
lambda y:2*y.__dotmul__(1-y.__dotpow__(2)),
f1_2 = f1.__dotpow__(2)
u = f1.__dotmul__(w)
S = f1_2.__dotmul__(R) - f2.__dotmul__(v)
dAA = S * xT.__dotpow__(2)
rA = self.dA.__dotdiv__(self.dAA)
rb = self.db.__dotdiv__(self.dbb)
s = x.__dotmul__(o*a)
y = r.__dotmul__(o*b)
e = mean(v.__dotpow__(2))/2
u = v.__dotmul__(o*b)
S = 1-r.__dotpow__(2)
w = S.__dotmul__(u*bet - s)
dbb = mean(r.__dotpow__(2))
dab = mean(x.__dotmul__(S).__dotmul__(v+y))
daa = mean((x.__dotmul__(S).__dotmul__(b)).__dotpow__(2))
db  = mean(r.__dotmul__(v))
da  = mean(x.__dotmul__(w))

Does anybody consider this very Pythonic?  :-)
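(For readers without the module at hand: the `__dot*__` names above are
elementwise versions of `*`, `/` and `**`.  A minimal sketch of such a type,
with the class name `V` and list storage as illustrative assumptions only:

```python
# Hedged sketch: a tiny vector type giving the __dot*__ spellings in the
# listing above concrete semantics.  Pure numbers broadcast across the
# vector, as in the mixed scalar/matrix expressions shown.

class V:
    def __init__(self, data):
        self.data = list(data)

    def _pairs(self, other):
        # Pair up with another V, or broadcast a pure number.
        if isinstance(other, V):
            return zip(self.data, other.data)
        return ((a, other) for a in self.data)

    def __dotmul__(self, other):
        return V(a * b for a, b in self._pairs(other))

    def __dotdiv__(self, other):
        return V(a / b for a, b in self._pairs(other))

    def __dotpow__(self, other):
        return V(a ** b for a, b in self._pairs(other))

    def __rsub__(self, other):
        # So expressions like 1 - y.__dotpow__(2) work.
        return V(other - a for a in self.data)

    def __repr__(self):
        return 'V(%r)' % self.data

y = V([0.5, 2.0])
r = 1 - y.__dotpow__(2)
# r.data == [0.75, -3.0]
```

The point of the listing stands either way: spelling every elementwise
operation as a method call buries the formula.)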

Huaiyu


