[Numpy-discussion] tofile speed

Lars Friedrich lfriedri at imtek.de
Mon Jul 23 03:57:45 EDT 2007


Hello everyone,

I am using array.tofile successfully for a data-acquisition-streaming 
application. By that I mean I do the following:

for a long time:
	temp = dataAcquisitionDevice.getData()
	temp.tofile(myDataFile)

temp is a numpy array used to store the data temporarily. The data 
acquisition device acquires continuously and writes the data to a 
buffer from which I can read with .getData(). This works fine, but of 
course, when I increase the sample rate, there comes a point at which 
temp.tofile is too slow: the dataAcquisitionDevice's buffer fills up 
before I can fetch the data again.

(temp has a size of ~1 MByte and the loop has a period of ~0.5 s, so 
increasing the chunk size won't help.)
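To make the pattern concrete, here is a minimal, self-contained sketch of 
what my loop looks like. The acquisition device is faked with numpy.random, 
so getData(), the chunk size, and the file name are just placeholders for my 
real hardware API, not the actual code:

import numpy as np
import time

CHUNK = 500000                     # int16 samples per chunk, ~1 MByte

def getData():
    # Placeholder for dataAcquisitionDevice.getData(): returns one
    # chunk of 16-bit samples (here just random numbers).
    return np.random.randint(-32768, 32767, CHUNK).astype(np.int16)

myDataFile = open('stream.dat', 'wb')
t0 = time.time()
written = 0
for i in range(20):                # stands in for "for a long time"
    temp = getData()
    temp.tofile(myDataFile)        # append the raw binary chunk to the file
    written += temp.nbytes
myDataFile.close()
print('%.1f MByte/s' % (written / (time.time() - t0) / 1e6))

(This times data generation plus writing together, which is good enough to 
see roughly what rate the loop sustains.)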

I have no idea how efficient array.tofile() is. Maybe it is already very 
efficient and what I see is simply the limitation of my hardware (hard 
disk). Currently I can stream at roughly 4 MByte/s, which is quite fast, 
I guess. However, if anyone can point me to a way to write my data to the 
hard disk faster, I would be very happy!
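In case it helps, this is the kind of comparison I have in mind to check 
whether tofile() itself adds overhead over a plain file.write() of the same 
bytes. Just a sketch: the file names and chunk size are made up, and because 
of the OS cache it measures buffered rather than sustained disk speed:

import numpy as np
import time

N_CHUNKS = 50
chunk = np.zeros(500000, dtype=np.int16)   # ~1 MByte per chunk
raw = chunk.tostring()                     # same data as a plain byte string
                                           # (newer numpy spells this tobytes())

def time_writes(write_one, filename):
    # Write N_CHUNKS chunks using write_one(f) and return the rate in MByte/s.
    f = open(filename, 'wb')
    t0 = time.time()
    for i in range(N_CHUNKS):
        write_one(f)
    f.flush()
    elapsed = time.time() - t0
    f.close()
    return N_CHUNKS * len(raw) / elapsed / 1e6

rate_tofile = time_writes(lambda f: chunk.tofile(f), 'test_tofile.dat')
rate_write  = time_writes(lambda f: f.write(raw),    'test_write.dat')
print('tofile: %.1f MByte/s   plain write: %.1f MByte/s'
      % (rate_tofile, rate_write))

If the two rates come out about the same, the bottleneck is the disk (or the 
OS) rather than tofile(); to see the sustained rate one would have to write 
more data than fits in RAM.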

Thanks

Lars


-- 
Dipl.-Ing. Lars Friedrich

Photonic Measurement Technology
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-Allee 102
D-79110 Freiburg
Germany

phone: +49-761-203-7531
fax:   +49-761-203-7537
room:  01 088
email: lfriedri at imtek.de


