writing large files quickly

Steven D'Aprano steve at REMOVETHIScyber.com.au
Fri Jan 27 18:14:28 EST 2006

On Fri, 27 Jan 2006 12:30:49 -0800, Donn Cave wrote:

> In article <drduvt$m6r$1 at solaris.cc.vt.edu>,
>  rbt <rbt at athop1.ath.vt.edu> wrote:
>> Won't work!? It's absolutely fabulous! I just need something big, quick 
>> and zeros work great.
>> How the heck does that make a 400 MB file that fast? It literally takes 
>> a second or two while every other solution takes at least 2 - 5 minutes. 
>> Awesome... thanks for the tip!!!
> Because it isn't really writing the zeros.   You can make these
> files all day long and not run out of disk space, because this
> kind of file doesn't take very many blocks.   The blocks that
> were never written are virtual blocks, inasmuch as read() at
> that location will cause the filesystem to return a block of NULs.

Isn't this a file-system-specific solution, though? Won't your file system
need to support "sparse files", or else it won't work?
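For reference, the sparse-file trick under discussion can be sketched like this: seek past the end of the file and write a single NUL, and a filesystem with sparse-file support records the untouched region as "holes" rather than real blocks. (The function name and sizes here are illustrative, not from the original thread.)

```python
import os

def make_sparse_file(path, size):
    """Create a `size`-byte file without writing `size` bytes of data.

    On filesystems that support sparse files, the seeked-over region
    takes no disk blocks; read() of those offsets returns NULs.
    """
    with open(path, "wb") as f:
        f.seek(size - 1)   # jump to the last byte of the desired file...
        f.write(b"\0")     # ...and write one NUL to set the file length
    return os.path.getsize(path)

# Apparent size is 400 MB either way; actual disk usage depends on
# whether the filesystem supports holes.
size = make_sparse_file("largefile.bin", 400 * 1024 * 1024)
os.remove("largefile.bin")
```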

Here is another possible solution: if you are running Linux, farm the real
work out to some C code optimised for writing blocks to the disk:

import os
# untested and, it goes without saying, untimed
os.system("dd if=/dev/zero of=largefile.bin bs=64K count=16384")

That should make a 1GB file (64K x 16384 blocks) as fast as possible; bump
count to 65536 if you really want 4GB. If you have lots and lots of memory,
you could try upping the block size (bs=...).
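If you can't rely on dd or on sparse-file support, the same block-at-a-time idea works in pure Python: write real zeros in large chunks rather than byte by byte. This is only a sketch (the function name and default block size are my own choices, and the right block size is something to tune):

```python
import os

def write_zeros(path, size, block_size=64 * 1024):
    """Write `size` literal zero bytes to `path`, `block_size` at a time."""
    block = b"\0" * block_size
    full, rem = divmod(size, block_size)
    with open(path, "wb") as f:
        for _ in range(full):
            f.write(block)         # whole blocks
        if rem:
            f.write(b"\0" * rem)   # tail remainder, if size isn't a multiple
    return os.path.getsize(path)
```

Unlike the seek trick, this allocates real disk blocks, so it is slower but behaves identically on every filesystem.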


More information about the Python-list mailing list