Most efficient way to write data out to a text file?
bokr at oz.net
Thu Jun 27 04:01:35 CEST 2002
On Thu, 27 Jun 2002 01:28:22 GMT, candiazoo at attbi.com wrote:
>I assume some sort of block i/o or preallocating a large block of file space
>would be the best way to do it? I am outputting about 700,000 records to a text
>file and my program only spits out about 20 records per second. This is using
>standard file open and write...
>This is on a W2K machine... perhaps I should use the win32file interface? Any
I doubt that that is the solution. Where are your "records" coming from and/or
how are they being generated/modified, and how big are they?
If you are building string records by s='' followed by s+='chunk' or the like many times,
this is a typical newbie performance killer. Try accumulating your chunks in a list instead,
like sl=[] followed by sl.append('chunk'), and then do f.write(''.join(sl))
instead of the f.write(s) you would have done.
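A minimal sketch of the two patterns side by side (the record strings are just illustrative):

```python
# Sample records standing in for whatever the poster is generating.
records = ["record %d\n" % i for i in range(5)]

# Slow pattern: each += copies the whole accumulated string,
# so building n records this way is roughly O(n**2).
s = ""
for r in records:
    s += r

# Fast pattern: append chunks to a list (amortized O(1) each),
# then join once at the end.
sl = []
for r in records:
    sl.append(r)
joined = "".join(sl)

assert joined == s  # identical result, built far more cheaply
```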
Try timing generation of output data vs writing it, and see what's happening.
Import clock from the time module to get an accurate floating-point seconds reading. E.g.,

from time import clock
tstart = clock()
# ... generate an output record here
tendgen = clock()
# ... write the output, something like f.write(record), here
tendwrite = clock()
print """\
generating took %f sec
outputting took %f sec""" % (tendgen - tstart, tendwrite - tendgen)
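If the timing shows that the writes themselves are the bottleneck, one approach (my suggestion, not something the original poster tried) is to batch records and issue one large write per batch instead of 700,000 tiny ones. The generator and the batch size of 1000 below are illustrative:

```python
import os
import tempfile

# Hypothetical record generator standing in for the poster's data.
def make_records(n):
    for i in range(n):
        yield "record %06d\n" % i

path = os.path.join(tempfile.mkdtemp(), "out.txt")
f = open(path, "w")
batch = []
for rec in make_records(10000):
    batch.append(rec)
    if len(batch) >= 1000:
        f.write("".join(batch))  # one big write instead of 1000 small ones
        batch = []
if batch:
    f.write("".join(batch))      # flush any leftover partial batch
f.close()

# Sanity check: every record landed in the file.
count = len(open(path).read().splitlines())
```

The stdio layer already buffers somewhat, but joining in Python first keeps you from paying per-call overhead on every record.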