On 7/23/07, Lars Friedrich <lfriedri@imtek.de> wrote:
Hello everyone,

I am using array.tofile successfully in a data-acquisition streaming
application. That is, I do the following:

while True:  # runs for a long time
    temp = dataAcquisitionDevice.getData()  # read the next chunk from the device buffer
    temp.tofile(myDataFile)                 # append the raw samples to disk

temp is a numpy array used to store the data temporarily. The data
acquisition device acquires continuously and writes the data to a
buffer from which I can read with .getData(). This works fine, but of
course, when I raise the sample rate there comes a point at which
temp.tofile() is too slow: the dataAcquisitionDevice's buffer fills up
before I can fetch the data again.

(temp is on the order of a megabyte and the loop period is ~0.5
seconds, so increasing the chunk size won't help.)
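
A quick diagnostic (just a sketch; dataAcquisitionDevice and myDataFile stand in for my actual objects): timing the two calls separately shows whether getData() or tofile() is what eats the loop period:

import time

t_acq = t_io = 0.0
n = 100                                      # sample this many loop iterations
for i in range(n):
    t0 = time.time()
    temp = dataAcquisitionDevice.getData()   # your device object
    t1 = time.time()
    temp.tofile(myDataFile)                  # your open data file
    t2 = time.time()
    t_acq += t1 - t0
    t_io += t2 - t1
print("mean getData: %.4f s, mean tofile: %.4f s" % (t_acq / n, t_io / n))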

I have no idea how efficient array.tofile() is. Maybe it is terribly
efficient and what I see is just the limitation of my hardware (the
hard disk). Currently I can stream at roughly 4 MB/s, which is quite
fast, I guess. However, if anyone can point me to a way to write my
data to the hard disk faster, I would be very happy!
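
To get a feel for tofile() on its own, I could benchmark it in isolation, along these lines (a sketch; the 1 MB chunk, the repeat count, and the name bench.dat are arbitrary):

import time
import numpy

chunk = numpy.zeros(1024 * 1024, dtype=numpy.uint8)   # a 1 MB test chunk
n = 200                                               # 200 MB total, to get past OS caching
f = open('bench.dat', 'wb')
t0 = time.time()
for i in range(n):
    chunk.tofile(f)
f.close()                                             # flush before stopping the clock
elapsed = time.time() - t0
print("%.1f MB/s" % (n / elapsed))

If this reports much more than 4 MB/s, then tofile() and the disk are not the bottleneck and the slowdown must come from somewhere else in the loop.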

4 MB/s is extremely slow; these days most drives will do better than 50 MB/s on sustained writes. RAID 0 will roughly double that rate if you aren't terribly worried about drive failure. What operating system and hardware are you using?
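
As a baseline, you might time a plain sequential write with numpy out of the picture (a sketch; the block size, count, and path are arbitrary):

import time

block = '\0' * (1024 * 1024)        # 1 MB of raw bytes
n = 500                             # ~500 MB, enough to defeat the OS cache
f = open('raw_bench.dat', 'wb')
t0 = time.time()
for i in range(n):
    f.write(block)
f.close()
print("%.1f MB/s sustained" % (n / (time.time() - t0)))

If that also tops out around 4 MB/s, the limit is the disk or the OS/driver rather than array.tofile.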

Chuck