Sebastian Haase wrote:
On 10/26/07, David Cournapeau <david@ar.media.kyoto-u.ac.jp> wrote:
P.S: IMHO, this is one of the main limitations of numpy (or any language using arrays for speed), and it is really difficult to optimize: you need compilation, a JIT or something similar to handle those cases efficiently.
This is where the scipy sandbox numexpr project comes in - if I'm not mistaken ....
http://www.scipy.org/SciPyPackages/NumExpr
Description: The scipy.sandbox.numexpr package supplies routines for the fast elementwise evaluation of array expressions using a vector-based virtual machine. It's comparable to scipy.weave.blitz (in Weave), but doesn't require a separate compile step for C or C++ code.
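For example, a rough sketch of the kind of usage I have in mind (going from the wiki page; the exact import path differs between the sandbox version and the standalone numexpr package):

    import numpy as np
    import numexpr as ne  # in the sandbox layout: scipy.sandbox.numexpr

    a = np.random.rand(1000000)
    b = np.random.rand(1000000)

    # plain numpy: each operator is a separate pass over the data and
    # allocates a temporary array for the intermediate result
    c_numpy = 2*a + 3*b

    # numexpr: the whole expression is compiled for its virtual machine and
    # evaluated elementwise in a single pass, without Python-level temporaries
    c = ne.evaluate('2*a + 3*b')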
I hope that more noise around this will result in more interest and subsequently in more support. I think numexpr might be one of the most powerful ideas in numpy / scipy "recently". Did you know about numexpr, David?
I knew about it, but never really tried it. Numexpr still goes through Python for function calls, though, right? The big problem with Python for (recursive) numeric work is the function call cost. On my MacBook (Core 2 Duo @ 2 GHz), which I already consider quite beefy, I cannot make more than 0.5-1 million function calls per second (and that is for really simple ones, e.g. not member functions of objects, which are more costly). That's why I don't see much hope outside of a JIT for those cases (incidentally, the kind of things 'we' are doing seem like the simplest things to JIT).

cheers,

David
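P.S: to give an idea of where the 0.5-1 million figure comes from, the following is roughly the back-of-the-envelope measurement I have in mind (nothing rigorous; the exact number will depend on the machine and the Python version):

    import timeit

    def f(x):
        # trivial function: the body does almost nothing, so the timing is
        # dominated by the cost of the call itself
        return x + 1.0

    n = 1000000
    t = timeit.timeit('f(1.0)', setup='from __main__ import f', number=n)
    print('%.2f million calls per second' % (n / t / 1e6))

For a recursive numeric algorithm that cannot be rewritten as whole-array operations, this per-call overhead is what dominates, and neither numpy nor numexpr removes it.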