
On Wed, May 7, 2014 at 7:11 PM, Sturla Molden <sturla.molden@gmail.com> wrote:
On 03/05/14 23:56, Siegfried Gonzi wrote:
I noticed IDL uses at least 400% CPU (4 processors or cores) out of the box for simple things like reading and processing files, calculating the mean, etc.
The DMA controller is working at its own pace, regardless of what the CPU is doing. You cannot get data off the disk faster by burning the CPU. If you are seeing 100% CPU usage while doing file I/O, something very bad is going on. If you did this to an I/O-intensive server it would go up in a ball of smoke... The purpose of high-performance asynchronous I/O systems such as epoll, kqueue, and IOCP is actually to keep CPU usage to a minimum.
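For illustration, here is a minimal sketch (not part of the original mail) of that kind of event-driven I/O, using Python's selectors module, which wraps epoll on Linux and kqueue on the BSDs/macOS. The host and request are just placeholders; the point is that the process sleeps inside select() while the kernel and DMA engine move the data, so waiting costs essentially no CPU.

import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on the BSDs/macOS

# Placeholder host/request, just to have something to wait on.
sock = socket.create_connection(("example.org", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
sock.setblocking(False)
sel.register(sock, selectors.EVENT_READ)

chunks = []
while True:
    # The process sleeps here until the kernel reports data is ready;
    # there is no busy-waiting, so CPU usage stays near zero while
    # the bytes are in flight.
    if not sel.select(timeout=5):
        break  # timed out
    data = sock.recv(65536)
    if not data:
        break  # peer closed the connection
    chunks.append(data)

sel.unregister(sock)
sock.close()
print(len(b"".join(chunks)), "bytes received")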
That said, reading data stored in text files is usually a CPU-bound operation, and if someone wrote the code to make numpy's text file readers multithreaded, and did so in a maintainable way, then we'd probably accept the patch. The only reason this hasn't happened is that no-one's done it.

-n

--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
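For anyone curious what that could look like, here is a rough sketch at the Python level. The name parallel_loadtxt and the chunking parameters are made up for illustration, and a real numpy implementation would more likely do the chunked parsing in multithreaded C; worker processes are used here only because pure-Python parsing holds the GIL.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _parse_chunk(lines):
    # Parse one chunk of whitespace-delimited text lines into a 2-D float array.
    return np.array([[float(x) for x in line.split()] for line in lines])

def parallel_loadtxt(path, n_workers=4, chunk_size=100_000):
    # Read the file once, split it into chunks of lines, parse the chunks
    # in parallel worker processes, and stitch the results back together.
    with open(path) as f:
        lines = [ln for ln in f if ln.strip()]
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    if not chunks:
        return np.empty((0, 0))
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return np.concatenate(list(pool.map(_parse_chunk, chunks)))

Parallelizing helps here precisely because, as Sturla points out above, the parsing rather than the disk is the bottleneck.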