Hi All,

I have tried the solutions proposed in the previous thread, and it looks like Chris's is the fastest for my purposes. Now I have a question that is probably more conceptual than implementation-related.

I started this little thread because my task is to read medium-to-large unformatted binary files written by another (black-box) piece of software, which is written in Fortran. These files range from roughly 10 MB to 200 MB, and I read them using an f2py-wrapped Fortran subroutine. I got a stupendous speed improvement when I switched from Compaq Visual Fortran to G95 with "STREAM" access (from 8% to 90% faster, depending on the infamous "indices" I was talking about).

Now I am thinking about using the multiprocessing module in Python, as we have 4-CPU PCs at work and I could call my subroutine from multiple Python processes. I *really* should do this directly in Fortran, but I haven't found any reference on how to do parallel file I/O in Fortran, and I haven't got any help from comp.lang.fortran on that front (only a warning that multiple processes may actually slow everything down).

Splitting the reading between 4 processes would require transferring 5-20 MB from each child process back to the main one: do you think my script would benefit from multiprocessing? Is there any drawback to using NumPy arrays in multiple processes?

If multiprocessing in Python creates too much overhead, does anyone have a suggestion/reference/link/code sample on how to handle parallel I/O directly in Fortran? Should I try another approach altogether?

Thank you a lot for your suggestions.

Andrea.

"Imagination Is The Only Weapon In The War Against Reality."
http://xoomer.alice.it/infinity77/
http://thedoomedcity.blogspot.com/
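P.S. To make the idea concrete, here is a rough sketch of the kind of thing I have in mind, done in pure Python/NumPy rather than through the f2py wrapper. It assumes the file is a flat stream of float64 values with no Fortran record markers (i.e. written with access="stream"); the file name, layout, and function names are made up for illustration:

```python
# Sketch: splitting one flat binary read across worker processes.
# ASSUMPTION: the file is a plain stream of float64 values written
# with STREAM access, so byte offsets map directly to value indices.
import os
import tempfile
import numpy as np
from multiprocessing import Pool

def read_chunk(args):
    """Worker: read `count` float64 values starting at byte `offset`."""
    fname, offset, count = args
    with open(fname, "rb") as f:
        f.seek(offset)
        return np.fromfile(f, dtype=np.float64, count=count)

def parallel_read(fname, n_values, n_procs=4):
    """Split the file into n_procs contiguous slices, read them in
    separate processes, then stitch the pieces back together."""
    counts = [n_values // n_procs] * n_procs
    counts[-1] += n_values % n_procs      # last worker takes the remainder
    offsets, pos = [], 0
    for c in counts:
        offsets.append(pos * 8)           # 8 bytes per float64
        pos += c
    tasks = [(fname, off, cnt) for off, cnt in zip(offsets, counts)]
    with Pool(n_procs) as pool:
        parts = pool.map(read_chunk, tasks)
    return np.concatenate(parts)

if __name__ == "__main__":
    # Tiny self-checking demo with a throwaway file.
    data = np.arange(1000, dtype=np.float64)
    fname = os.path.join(tempfile.mkdtemp(), "demo.bin")
    data.tofile(fname)
    result = parallel_read(fname, data.size, n_procs=4)
    assert np.array_equal(result, data)
    print("parallel read ok")
```

Note that each chunk comes back to the parent by pickling over a pipe, which is exactly the 5-20 MB transfer I was worried about; np.memmap on the same file would sidestep that copy entirely, at the cost of a different access pattern.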