
On Thu, Oct 11, 2012 at 10:50 AM, Nicolas Rougier <Nicolas.Rougier@inria.fr> wrote:
I missed the original post, but I personally would find this addition especially useful for my work in computational neuroscience.
I did something vaguely similar in a small framework (http://dana.loria.fr/; see http://dana.loria.fr/doc/connection.html for details). Examples are available at http://dana.loria.fr/examples.html
The actual computation can be done in several ways depending on the properties of the kernel, but the idea is to compute an array "K" such that, given an array "A" and a kernel "k", A*K holds the expected result. This also works with sparse arrays, for example when the kernel is very small. I suspect the PR will be quite efficient compared to what I did.
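To make the idea concrete, here is a minimal sketch (not the actual dana API, and the names are illustrative): for a 1-D array and a small kernel, one can precompute a sparse matrix K so that a single matrix product K @ A reproduces the convolution.

```python
# Sketch: express a 1-D 'same'-mode convolution as a precomputed sparse
# matrix product, so that K @ a == np.convolve(a, k, 'same').
import numpy as np
import scipy.sparse as sp

def convolution_matrix(k, n):
    """Return a sparse (n, n) matrix K with K @ a == np.convolve(a, k, 'same')."""
    m = len(k)
    center = (m - 1) // 2
    # Each kernel tap k[j] contributes one diagonal, at offset center - j.
    offsets = [center - j for j in range(m)]
    return sp.diags(list(k), offsets, shape=(n, n), format='csr')

a = np.random.rand(100)
k = np.array([0.25, 0.5, 0.25])
K = convolution_matrix(k, len(a))

assert np.allclose(K @ a, np.convolve(a, k, mode='same'))
```

For a small kernel, K has only a few nonzero diagonals, which is why the sparse-matrix route stays cheap even for large arrays.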
Would the current PR be useful to you if merged as-is? A common pitfall with these sorts of contributions is that we realize only after merging that some tiny detail of the API makes it not quite usable for people with related problems, so it'd be awesome if you could take a closer look.
-n