writing large files quickly
grante at visi.com
Fri Jan 27 18:38:51 EST 2006
On 2006-01-27, Steven D'Aprano <steve at REMOVETHIScyber.com.au> wrote:
>> Because it isn't really writing the zeros. You can make these
>> files all day long and not run out of disk space, because this
>> kind of file doesn't take very many blocks. The blocks that
>> were never written are virtual blocks, inasmuch as read() at
>> that location will cause the filesystem to return a block of NULs.
> Isn't this a file system specific solution though? Won't your file system
> need to have support for "sparse files", or else it won't work?
If your filesystem doesn't support sparse files, then you'll end up
with a file that really does contain 400MB of 0x00 bytes, which is
what the OP really needed in the first place.
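A minimal sketch of how such a sparse file can be created from Python
(the filename and the 400MB size are illustrative, taken from the
discussion above): seek past the end of an empty file and write a
single byte. On filesystems with sparse-file support the skipped region
occupies no disk blocks, yet read() there returns NULs; on filesystems
without it, the OS fills in real zero bytes.

```python
import os

size = 400 * 1024 * 1024  # 400 MB, the size mentioned above

with open("largefile.bin", "wb") as f:
    f.seek(size - 1)   # jump past the region we never actually write
    f.write(b"\x00")   # one real byte sets the file's logical length

# The logical size is the full 400MB either way; only the on-disk
# block usage differs between sparse and non-sparse filesystems.
print(os.path.getsize("largefile.bin"))
```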
> Here is another possible solution, if you are running Linux, farm the real
> work out to some C code optimised for writing blocks to the disk:
> # untested and, it goes without saying, untimed
> os.system("dd if=/dev/zero of=largefile.bin bs=64K count=16384")
> That should make a 1GB file (64KB x 16384 blocks) as fast as possible.
> If you have lots and lots of memory, you could try upping the block
> size (bs=...).
I agree; that's probably the optimal solution for Unix boxes.
I messed around with something like that once, and block sizes
bigger than 64KB didn't make much difference.
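For portability, the same block-at-a-time approach can be sketched in
pure Python without shelling out to dd (filename and sizes here are
illustrative, mirroring the bs=64K count=16384 invocation above; this
is a sketch, not a benchmark):

```python
BLOCK = 64 * 1024        # 64KB blocks, mirroring bs=64K
COUNT = 16384            # number of blocks, mirroring count=16384

zeros = b"\x00" * BLOCK  # build the zero buffer once and reuse it

with open("largefile.bin", "wb") as f:
    for _ in range(COUNT):
        f.write(zeros)
```

Reusing one pre-built buffer keeps the loop cheap; most of the time is
then spent in the filesystem rather than in Python.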
Grant Edwards
grante at visi.com
Yow! As President I have to go vacuum my coin