Creating huge data in very less time.
sjmachin at lexicon.net
Tue Mar 31 09:47:00 CEST 2009
On Mar 31, 4:44 pm, "venutaurus... at gmail.com"
<venutaurus... at gmail.com> wrote:
> Hello all,
> I've a requirement where I need to create around 1000
> files under a given folder with each file size of around 1GB. The
> constraints here are each file should have random data and no two
> files should be unique even if I run the same script multiple times.
> Moreover the filenames should also be unique every time I run the
> script. One possibility is that we can use Unix time format for the
> file names with some extensions. Can this be done within few minutes
> of time.
You should be able to write a simple script to create 1000 files with
unique names and each containing 1GB of doesn't-matter-what data and
find out for yourself how long that takes. If it takes much longer
than a "few" (how many is a few?) minutes, then it's pointless
worrying about other constraints like "no two files should be
unique" (whatever that means) and "random data" (why do you want to
create 1000GB of random data??) because imposing them certainly won't
make it run faster.
> Is it possble only using threads or can be done in any other
> way. This has to be done in Windows.
> Please mail back for any queries you may have,
This looks VERY SIMILAR to a question you asked about 12 days ago ...