
On 5/3/14, 11:56 PM, Siegfried Gonzi wrote:
Hi all
I noticed that IDL uses at least 400% CPU (4 processors or cores) out of the box for simple things like reading and processing files, calculating the mean, etc.
I have never seen this happen with numpy except for the linear algebra stuff (e.g. LAPACK).
Well, this might be because linear algebra is the place where using several cores makes the most sense. Normally, when you are reading files, the bottleneck is the I/O subsystem (at least if you don't have to convert from text to numbers), and when calculating the mean, the bottleneck is usually memory bandwidth, so extra cores buy little.

Having said this, there are several packages that work on top of NumPy and can use multiple cores when performing numpy-like operations, such as numexpr (https://github.com/pydata/numexpr) or Theano (http://deeplearning.net/software/theano/tutorial/multi_cores.html).

-- Francesc Alted
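For what it's worth, here is a minimal sketch of what using numexpr looks like; the array sizes and the expression are arbitrary examples, not anything from the thread:

import numpy as np
import numexpr as ne

# Use large arrays so the parallel evaluation is worth the overhead
a = np.random.rand(10_000_000)
b = np.random.rand(10_000_000)

# numexpr compiles the expression to its own bytecode and evaluates
# it in chunks across multiple threads (all detected cores by default)
result = ne.evaluate("2*a**2 + 3*b + 1")

# The thread count can also be capped explicitly if desired
ne.set_num_threads(4)

Note that the speedup shows up mainly for compound element-wise expressions like the one above, where numexpr avoids numpy's intermediate temporaries; a plain mean() is still limited by memory bandwidth, as described above.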