[Tutor] A further question about opening and closing files
Steven D'Aprano
steve at pearwood.info
Wed Sep 9 20:58:22 CEST 2015
On Wed, Sep 09, 2015 at 08:20:44PM +0200, Laura Creighton wrote:
> In a message of Wed, 09 Sep 2015 17:42:05 +0100, Alan Gauld writes:
> >You can force the writes (I see Laura has shown how) but
> >mostly you should just let the OS do its thing. Otherwise
> >you risk cluttering up the IO bus and preventing other
> >programs from writing their files.
>
> Is this something we have to worry about these days? I haven't
> worried about it for a long time, and write real time multiplayer
> games which demand unbuffered writes .... Of course, things
> would be different if I were sending gigabytes of video down the
> pipe, but for the sort of small writes I am doing, I don't think
> there is any performance problem at all.
>
> Anybody got some benchmarks so we can find out?
Good question!
There's definitely a performance hit, but it's not as big as I expected:
py> with Stopwatch():
...     with open("/tmp/junk", "w") as f:
...         for i in range(100000):
...             f.write("a")
...
time taken: 0.129952 seconds
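(The Stopwatch context manager used in these sessions isn't defined in the post and isn't in the standard library; a minimal sketch, assuming it just prints the elapsed wall-clock time on exit, might look like this.)

```python
import time

class Stopwatch:
    # Hypothetical reconstruction of the timer used in the sessions above;
    # the original definition is not shown in the post.
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc_info):
        elapsed = time.perf_counter() - self.start
        print("time taken: %f seconds" % elapsed)
```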
py> with Stopwatch():
...     with open("/tmp/junk", "w") as f:
...         for i in range(100000):
...             f.write("a")
...             f.flush()
...
time taken: 0.579273 seconds
What really gets expensive is doing a sync.
py> with Stopwatch():
...     with open("/tmp/junk", "w") as f:
...         fid = f.fileno()
...         for i in range(100000):
...             f.write("a")
...             f.flush()
...             os.fsync(fid)
...
time taken: 123.283973 seconds
Yes, that's right. From half a second to two minutes.
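(The cost above comes from forcing a physical disk sync on every single write. If durability only matters at the end of the run, a sketch like the following syncs once after the loop rather than 100000 times; the file path here is an assumption standing in for the /tmp/junk used above.)

```python
import os
import tempfile

# Write the same 100000 bytes, but flush and fsync only once at the end.
path = os.path.join(tempfile.gettempdir(), "junk")
with open(path, "w") as f:
    for i in range(100000):
        f.write("a")
    f.flush()              # push Python's buffer to the OS
    os.fsync(f.fileno())   # one disk sync instead of one per write
```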
--
Steve