
Hello everyone,

thank you for the replies.

Sebastian, the chunk size is roughly 4*10^6 samples at two bytes per sample, so about 8 MB. I can vary this size, but increasing it only helps for much smaller values. For example, with a chunk size of 100 samples I am much too slow; it gets better at 1000 samples, 10000 samples, and so on. But since I have already reached a chunk size in the megabyte range, it is difficult to increase my buffer size further, and I have the feeling that increasing it does not help anymore in this size region. (Correct me if I am wrong...)

Chuck, I am using a Windows XP system with a new (a few months old) Maxtor SATA drive.

Lars
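The chunk-size effect described above can be measured with a small benchmark along these lines (a rough sketch; the file name, total size, and dtype are illustrative, not taken from the original setup):

import time
import numpy as np

# Write the same ~8 MB total with different chunk sizes and compare
# throughput (4*10^6 samples of 2 bytes each, as in the post above).
total = 4 * 10**6
for chunk in (100, 1000, 10000, 100000, total):
    data = np.zeros(chunk, dtype=np.uint16)
    t0 = time.time()
    f = open('bench.bin', 'wb')
    for _ in range(total // chunk):
        data.tofile(f)
    f.close()
    dt = time.time() - t0
    print('chunk %8d: %6.3f s  (%5.1f MB/s)' % (chunk, dt, 2.0 * total / 1e6 / dt))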

So you are saying that a given tofile() call returns only after 2 seconds!? Can you measure the time of the getData() call by itself (just comment the tofile() out for a while, assuming that doesn't use 100% CPU...)? The timeit module might be useful here, I think. Maybe multithreading would help, so that tofile() and getData() can overlap. But 2 s is really slow...

-S.
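A simple way to separate the two costs, along the lines Sebastian suggests (getData() here is just a stand-in for the real acquisition call):

import time
import numpy as np

def getData():
    # Stand-in for the real acquisition call (hypothetical).
    return np.zeros(4 * 10**6, dtype=np.uint16)

f = open('test.bin', 'wb')
t0 = time.time()
data = getData()        # acquisition only
t1 = time.time()
data.tofile(f)          # disk write only
t2 = time.time()
f.close()
print('getData: %.3f s   tofile: %.3f s' % (t1 - t0, t2 - t1))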
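Sebastian's multithreading idea could look roughly like this: a bounded queue between the acquisition loop and a writer thread, so getData() and tofile() can run concurrently (again only a sketch; getData() and the file name are placeholders):

import threading
import queue
import numpy as np

def getData():
    # Stand-in for the real acquisition call (hypothetical).
    return np.zeros(4 * 10**6, dtype=np.uint16)

buf = queue.Queue(maxsize=4)   # bounded, so acquisition cannot run ahead forever
n_chunks = 10

def writer():
    f = open('test.bin', 'wb')
    for _ in range(n_chunks):
        buf.get().tofile(f)    # blocks until a chunk is available
    f.close()

t = threading.Thread(target=writer)
t.start()
for _ in range(n_chunks):
    buf.put(getData())         # acquire the next chunk while the writer drains
t.join()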

You are not generating text files, right?
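For reference, the difference Sebastian is hinting at: tofile() writes raw bytes by default, but with a sep argument it formats every value as text, which is far slower (file names here are illustrative):

import time
import numpy as np

data = np.zeros(4 * 10**6, dtype=np.uint16)

t0 = time.time()
data.tofile('binary.dat')           # raw bytes: one big write
print('binary: %.3f s' % (time.time() - t0))

t0 = time.time()
data.tofile('text.dat', sep='\n')   # formats every sample as text
print('text:   %.3f s' % (time.time() - t0))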
participants (2)
- Lars Friedrich
- Sebastian Haase