Ivan Vilata i Balaguer (on 2007-11-30 at 19:19:38 +0100) said::
    Well, one thing you could do is dump your data into a PyTables_
    ``CArray`` dataset, which you can afterwards access as if it were a
    NumPy array, getting slices which are actually NumPy arrays.
    PyTables datasets have no problem working with datasets exceeding
    memory size.  [...]
I've put together the simple script I've attached, which dumps a binary
file into a PyTables ``CArray``, or loads it back to measure the time
taken to load each frame.  I've run it on my laptop, which has a not
very fast 4200 RPM hard disk, and I've reached average times of 16 ms
per frame, after dropping caches with::

    # sync && echo 1 > /proc/sys/vm/drop_caches

This was done with the default chunkshape and no compression.  Your data
may lend itself very well to bigger chunkshapes and compression, which
should lower access times even further.  Since (as David pointed out)
200 Hz may be a little exaggerated for the human eye, loading individual
frames from disk may prove more than enough for your problem.

HTH,

::

    Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
           Cárabos Coop. V.  V  V   Enjoy Data
                                ""
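The script itself was sent as an attachment, but the idea above can be
sketched along these lines (a minimal illustration, not the attached
script; the file name, frame geometry, and dtype are assumptions)::

    import numpy as np
    import tables

    # Assumed frame geometry and dtype -- adjust to your real data.
    FRAMES, HEIGHT, WIDTH = 100, 64, 64

    # Dump the frames into a chunked, on-disk CArray.
    with tables.open_file('frames.h5', 'w') as h5:
        frames = h5.create_carray(
            h5.root, 'frames',
            atom=tables.UInt8Atom(),
            shape=(FRAMES, HEIGHT, WIDTH))
        for i in range(FRAMES):
            # Stand-in data; a real script would read from the binary file.
            frames[i] = np.full((HEIGHT, WIDTH), i, dtype=np.uint8)

    # Load a single frame: slicing the CArray returns a NumPy array,
    # reading only that frame's chunks from disk.
    with tables.open_file('frames.h5', 'r') as h5:
        frame = h5.root.frames[42]

Passing e.g. ``filters=tables.Filters(complevel=1, complib='zlib')`` and
an explicit ``chunkshape`` to ``create_carray`` is how you would try the
bigger-chunkshape-plus-compression variant mentioned above.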