[Numpy-discussion] Numpy and PEP 343

David M. Cooke cookedm at physics.mcmaster.ca
Tue Feb 28 13:48:02 EST 2006


Tim Hochberg <tim.hochberg at cox.net> writes:

> <pie-in-the-sky>
>
> An idea that has popped up from time to time is delaying evaluation of
> complicated expressions so that the result can be computed more
> efficiently. For instance, the matrix expression:
>
> a = b*c + d*e
>
> results in the creation of two, potentially large, temporary matrices
> and also performs a couple more loops at the C level than the
> equivalent expression implemented directly in C would.
>
> The general idea has been to construct some sort of pseudo-object
> when the numerical operations are indicated, and then do the actual
> numerical operations at some later time. This would be very
> problematic if implemented for all arrays since it would quickly
> become impossible to figure out what was going on, particularly with
> view semantics. However, it could result in large performance
> improvements without becoming incomprehensible if implemented in small
> enough chunks.
>
> A "straightforward" approach would look something like:
>
>    numpy.begin_defer()  # now all numpy operations (in this
>                         # thread) are deferred
>    a = b*c + d*e        # 'a' is a special object that holds
>                         # pointers to 'b', 'c', 'd' and 'e' and
>                         # knows what ops to perform
>    numpy.end_defer()    # 'a' performs the operations and now
>                         # looks like an array
>
> Since 'a' knows the whole series of operations in advance it can
> perform them more efficiently than would be possible using the basic
> numpy machinery. Ideally, the space for 'a' could be allocated up
> front, and all of the operations could be done in a single loop. In
> practice the optimization might be somewhat less ambitious, depending
> on how much energy people put into this. However, this approach has
> some problems. One is the syntax, which is clunky and a bit unsafe (a
> missing end_defer in a function could cause stuff to break very far
> away). The other is that I suspect that this sort of deferred
> evaluation makes multiple views of an array even more likely to bite
> the unwary.

This is a good idea, but probably a bit difficult to implement. I
don't like the global defer context, though; that could get messy,
especially if you start calling functions.
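
To make the pseudo-object idea concrete, here's a bare-bones sketch
(all names invented; no loop fusion, just the bookkeeping):

import numpy

class Deferred:
    """Records an operation instead of performing it."""
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __mul__(self, other):
        return Deferred(numpy.multiply, self, other)
    def __add__(self, other):
        return Deferred(numpy.add, self, other)
    def eval(self):
        # walk the recorded expression tree; a real implementation
        # would fuse this into a single loop instead of calling one
        # ufunc at a time
        args = [x.eval() if isinstance(x, Deferred) else x
                for x in self.args]
        return self.op(*args)

def defer(arr):
    # wrap a plain ndarray as a leaf of the expression tree
    return Deferred(lambda x: x, arr)

b = c = d = e = numpy.ones(1000000)
a = (defer(b)*c + defer(d)*e).eval()

The point is that 'a' is then an ordinary object with an explicit
lifetime, so no global defer state is needed.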

> The syntax issue can be cleanly addressed now that PEP 343 (the 'with'
> statement) is going into Python 2.5. Thus the above would look like:
>
> with numpy.deferral():
>    a = b*c + d*e
>
> Just removing the extra allocation of temporary variables can result
> in a 30% speedup for this case [1], so the payoff would likely be large.
> On the down side, it could be quite a can of worms, and would likely
> require a lot of work to implement.
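
The plumbing for a deferral() context manager could be as simple as a
thread-local flag that the array operations consult (a sketch only;
numpy has nothing like this, and the names are made up):

import threading
from contextlib import contextmanager

_state = threading.local()

@contextmanager
def deferral():
    # array ops would check _state.deferred and build expression
    # objects instead of computing immediately
    _state.deferred = True
    try:
        yield
    finally:
        # any pending expressions would be forced to evaluate here
        _state.deferred = False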

Alternatively, make some sort of expression type:

ex = VirtualExpression()

ex.a = ex.b * ex.c + ex.d * ex.e

then,

compute = ex.compile(a=(shape_of_a, dtype_of_a), etc.....)

This could return a function that would look like

def compute(b, c, d, e):
    a = empty(shape_of_a, dtype=dtype_of_a)  # preallocate the result
    multiply(b, c, a)            # a = b*c, written into 'a' in place
    # ok, I'm making this one up :-)
    fused_multiply_add(d, e, a)  # a += d*e in a single pass
    return a

a = compute(b, c, d, e)

Or, it could use some sort of numeric-specific bytecode that can be
interpreted quickly in C. With some sort of optimizing compiler for
that bytecode it could be really fun (it could use BLAS when
appropriate, for instance!).
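
For instance (purely invented), the "bytecode" could be nothing more
than a flat list of three-address ufunc instructions, which a C
interpreter would run while reusing preallocated temporaries:

import numpy

# hypothetical bytecode for a = b*c + d*e: (op, in1, in2, out),
# with strings standing in for registers
program = [
    (numpy.multiply, 'b', 'c', 't0'),
    (numpy.multiply, 'd', 'e', 't1'),
    (numpy.add,      't0', 't1', 'a'),
]

def run(program, env):
    # a C interpreter would allocate 't0'/'t1' once and reuse the
    # buffers; here we just step through the instruction list
    for op, in1, in2, out in program:
        env[out] = op(env[in1], env[in2])
    return env['a']

b = c = d = e = numpy.ones((100, 100))
a = run(program, dict(b=b, c=c, d=d, e=e))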

or ... use weave :-)

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca



