[Numpy-discussion] question about optimizing

Robin robince at gmail.com
Sat May 17 16:59:27 EDT 2008


On Sat, May 17, 2008 at 7:22 PM, Brian Blais <bblais at bryant.edu> wrote:
> at least for me, that was the motivation.  I am trying to build a simulation
> framework for part of the brain, which requires connected layers of nodes.
>  A layer is either a 1D or 2D structure of nodes, with each node a
> relatively complex beast.  Rather than reinvent the indexing (1D, 2D,
> slicing, etc...), I just inherited from ndarray.  I thought, after the fact,
> that some numpy functions on arrays would help speed up the code, which
> consists mostly of calling an update function on all nodes, passing each
> them an input vector.  I wasn't sure if there would be any speed up for
> this, compared to
> for n in self.flat:
>    n.update(input_vector)
> From the response, the answer seems to be no, and that I should stick with
> the python loops for clarity.  But also, the words of Anne Archibald, makes
> me think that I have made a bad choice by inheriting from ndarray, although
> I am not sure what a convenient alternative would be.
>
>
> bb

Hello,

It depends on what you are doing, but to really exploit the performance
gains of numpy it can be better to restructure the model so that each
property of the nodes is stored separately in its own array, rather than
storing one complex object per node.

For example, instead of each node holding its own voltage and list of n
channel conductances (node.V, node.C), you have big arrays holding the
values for all nodes at once. V would be an array of length n_nodes, and
C an array of shape n_nodes x max_channels (zero-padded, since not all
nodes need have the same number of channels).
Another global array of shape n_nodes x n_nodes could hold all the
connection weights...
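As a rough sketch of the layout I mean (the sizes n_nodes and
max_channels here are just made-up placeholders):

```python
import numpy as np

n_nodes = 100        # hypothetical number of nodes in the layer
max_channels = 8     # hypothetical maximum channels per node

# One array per property, instead of one object per node:
V = np.zeros(n_nodes)                  # membrane voltage of every node
C = np.zeros((n_nodes, max_channels))  # channel conductances, zero-padded
W = np.zeros((n_nodes, n_nodes))       # connection weights between all pairs
```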

Then, instead of updating each node individually in a Python loop, you
can update all nodes together with vectorised operations, which use
optimised BLAS libraries such as ATLAS under the hood.
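For instance, the whole per-node update loop can collapse into a couple
of array expressions. This is only an illustrative leaky-integrator
update - the constants dt and tau and the weight scaling are invented
for the example, not taken from any particular model:

```python
import numpy as np

n_nodes = 100
dt, tau = 0.1, 10.0                 # hypothetical timestep and time constant
rng = np.random.default_rng(0)
V = rng.random(n_nodes)             # voltages for all nodes
W = rng.random((n_nodes, n_nodes)) * 0.01  # hypothetical connection weights

# One matrix-vector product computes every node's total input at once
# (this is where the BLAS/ATLAS speedup comes from):
I = W.dot(V)

# Leaky integration of all nodes in a single vectorised expression,
# replacing "for n in self.flat: n.update(...)":
V += dt * (-V / tau + I)
```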

The indexing can get a bit complicated, and it may not be possible
depending on how your nodes interact. But usually, even if a node
depends on the values of other nodes, it depends only on their values at
the previous timestep, so you can store the full previous state and
reference it in the update function.
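Concretely, keeping the previous state is just a copy before the update,
so every node sees values from the last timestep rather than a
half-updated mixture (again with made-up constants):

```python
import numpy as np

n_nodes = 100
dt, tau = 0.1, 10.0                 # hypothetical timestep and time constant
rng = np.random.default_rng(0)
V = rng.random(n_nodes)
W = rng.random((n_nodes, n_nodes)) * 0.01

# Snapshot the full previous state, then update all nodes from it:
V_prev = V.copy()
V = V_prev + dt * (-V_prev / tau + W.dot(V_prev))
```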

Just a suggestion - it was much more efficient for me to do it this
way with integrate-and-fire type neural networks... Also, I hope I've
expressed what I mean clearly - it's getting late here.

Cheers

Robin
