[Numpy-discussion] Correlate with small arrays

Robert Kern robert.kern at gmail.com
Thu Mar 20 15:33:26 EDT 2008


On Thu, Mar 20, 2008 at 5:44 AM, Peter Creasey
<p.e.creasey.00 at googlemail.com> wrote:
> > >  I'm trying to do a PDE style calculation with numpy arrays
> > >
> > >  y = a * x[:-2] + b * x[1:-1] + c * x[2:]
> > >
> > >  with a, b, c constants. I realise I could use correlate for this, i.e.
> > >
> > >  y = numpy.correlate(x, array((a, b, c)))
> >
>
> >  The relative performance seems to vary depending on the size, but it
> >  seems to me that correlate usually beats the manual implementation,
> >  particularly if you don't measure the array() part, too. len(x)=1000
> >  is the only size where the manual version seems to beat correlate on
> >  my machine.
>
>  Thanks for the quick response! Unfortunately, 1000 < len(x) < 20000 are
>  just the cases I'm using (they seem to be 1-3 times slower on my
>  machine).

Odd. What machine are you using? I have an Intel Core 2 Duo MacBook.
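
For anyone who wants to reproduce the comparison, something along these
lines should work (just a sketch; the length and constants below are
arbitrary examples, not the exact benchmark either of us ran):

import numpy
from timeit import Timer

n = 10000                        # somewhere in the 1000 < len(x) < 20000 range
x = numpy.random.rand(n)
a, b, c = 0.25, 0.5, 0.25        # arbitrary example constants
w = numpy.array((a, b, c))

def manual():
    # three scaled slices, summed elementwise
    return a * x[:-2] + b * x[1:-1] + c * x[2:]

def correlated():
    # mode='valid' (the default) gives len(x) - 2 points, same as manual()
    return numpy.correlate(x, w)

# the two should agree to rounding error
assert numpy.allclose(manual(), correlated())

print(Timer("manual()", "from __main__ import manual").timeit(1000))
print(Timer("correlated()", "from __main__ import correlated").timeit(1000))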

>  I'm just thinking that this is exactly the kind of problem that could
>  be done much faster in C, i.e. in the manual implementation the
>  processor goes through an array of length len(x) maybe 5 times (3
>  multiplications and 2 additions), yet in C I could put those constants
>  in the registers and go through the array just once. Maybe this is
>  flawed logic, but if not, I'm hoping someone has already done this?

The function is PyArray_Correlate() in
numpy/core/src/multiarraymodule.c. If you have suggestions for
improving it, we're all ears.
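
In the meantime, if the repeated passes over memory are the main concern,
one small pure-numpy change is to accumulate into a single output array
instead of building every intermediate sum separately (just a sketch; I
haven't timed it against correlate here):

y = a * x[:-2]
y += b * x[1:-1]
y += c * x[2:]

That still isn't the single pass a dedicated C loop would give you, but it
does drop two of the temporary arrays from the original expression.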

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco


