[Numpy-discussion] Openmp support (was numpy's future (1.1 and beyond): which direction(s) ?)

Gnata Xavier xavier.gnata at gmail.com
Mon Mar 24 20:08:32 EDT 2008


Robert Kern wrote:
> On Mon, Mar 24, 2008 at 12:12 PM, Gnata Xavier <xavier.gnata at gmail.com> wrote:
>
>   
>>  Well, it is not that easy. We have a lot of numpy code that goes like this:
>>  1) open a large data file to get a numpy array
>>  2) perform computations on this array (I'm only talking about the numpy
>>  part here; scipy is something else)
>>  3) write the result to another large file
>>
>>  It is so simple to write using numpy :)
>>  Now, if I want to run several executables in parallel, step 3 is often a problem.
>>     
>
> If that large file can be accessed by memory-mapping, then step 3 can
> actually be quite easy. You have one program make the empty file of
> the given size (f.seek(FILE_SIZE - 1); f.write('\0'); f.seek(0, 0)) and
> then make each of the parallel programs memory map the file and only
> write to their respective portions.
>
>   
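For concreteness, a rough sketch of the memmap scheme described above (one
possible reading of it; FILE_SIZE, N_CHUNKS, worker_id, result.dat and
compute_my_part are all made-up names). Each of the parallel programs would
run something like:

  import numpy as np

  FILE_SIZE = 8 * 10**6      # bytes of float64 output (made-up size)
  N_CHUNKS  = 4              # number of parallel programs
  worker_id = 0              # 0 .. N_CHUNKS-1, different in each program

  def compute_my_part(lo, hi):
      # Made-up stand-in for step 2 (the real computation).
      return np.arange(lo, hi, dtype=np.float64)

  # One program creates the empty file of the right size once:
  #   with open('result.dat', 'wb') as f:
  #       f.seek(FILE_SIZE - 1)
  #       f.write(b'\0')

  # Then every program maps the same file and writes only its own slice:
  n   = FILE_SIZE // 8                   # number of float64 elements
  out = np.memmap('result.dat', dtype=np.float64, mode='r+', shape=(n,))
  lo  = worker_id * (n // N_CHUNKS)
  hi  = n if worker_id == N_CHUNKS - 1 else lo + n // N_CHUNKS
  out[lo:hi] = compute_my_part(lo, hi)
  out.flush()

The only coordination the programs need is to agree on which slice belongs
to whom.
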
Yep, but that is the best case.
Our "standard" case is a fairly long sequence of simple computations on
arrays. Some parts are clear candidates for threading, but not all of them.
For instance, at step N+1 I have to multiply foo by the sum of a large
array computed at step N-1. I can split that sum computation over several
executables, but it is not convenient at all, and it is not that easy to
collect the partial sums at the end (I know ugly ways to do that; ugly
ways).
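
Roughly, the split-and-collect version of that sum looks like the sketch
below (the multiprocessing module stands in here for the separate
executables, and step_Nm1.dat / foo are made-up names); having to carry
this around at every step of the sequence is exactly what is not
convenient:

  import numpy as np
  from multiprocessing import Pool

  STEP_FILE = 'step_Nm1.dat'   # made-up name for the array from step N-1
  N_WORKERS = 4

  def partial_sum(bounds):
      # Each worker memory-maps the step N-1 array and sums its own slice.
      lo, hi = bounds
      a = np.memmap(STEP_FILE, dtype=np.float64, mode='r')
      return a[lo:hi].sum()

  if __name__ == '__main__':
      n = np.memmap(STEP_FILE, dtype=np.float64, mode='r').shape[0]
      slices = [(i * n // N_WORKERS, (i + 1) * n // N_WORKERS)
                for i in range(N_WORKERS)]
      with Pool(N_WORKERS) as pool:
          total = sum(pool.map(partial_sum, slices))  # collect partial sums
      foo = np.ones(3)             # placeholder for the real 'foo' array
      result = foo * total         # step N+1: multiply foo by the sum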

Large one-step computations can be split across several executables. Large
multi-step ones are another story :(

Xavier