
Even one tar file per simulation would be OK, at least for the smaller ones. The enzo_tiny_cosmology simulation is designed to showcase time series and other features that use multiple datasets, so having a tarfile for each individual dataset is probably unnecessary. Perhaps we could just evaluate which files are meant to be used in groups and put those together in single tarfiles.
Britton
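As a rough sketch of what packing a group of outputs into a single tarfile could look like (assuming the usual Enzo layout of one DD???? dump directory per output; the file names here are just for illustration, not from download.py):

    # Minimal sketch: pack all outputs of one simulation into a single tarball.
    # The DD???? glob for the data dumps is an assumption about the on-disk layout.
    import glob
    import tarfile

    with tarfile.open("enzo_tiny_cosmology.tar.gz", "w:gz") as tar:
        for output_dir in sorted(glob.glob("enzo_tiny_cosmology/DD????")):
            tar.add(output_dir)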
On Tue, Jul 24, 2012 at 1:55 PM, Casey W. Stark caseywstark@gmail.com wrote:
I would be in favor of one tarball for simplicity. Are the example files that large?
- Casey
On Tue, Jul 24, 2012 at 10:48 AM, Matthew Turk matthewturk@gmail.com wrote:
Hi John,
Hm, that's puzzling. Stephen, any ideas?
After Stephen's email, I went from +0 on keeping the downloader to +1, because I think having it available from the command line is a much simpler solution than what we tried before, which was downloading by hand. So let's see if we can address this, and then check it in to scripts/.
-Matt
On Tue, Jul 24, 2012 at 1:45 PM, John ZuHone jzuhone@gmail.com wrote:
Hi all,
What I don't like about the downloader is the directory structure it creates. At least on my machine, if I download only the sloshing dataset, I get:
GasSloshing/GasSloshing/sloshing_nomag2*
as the location of the files. Is there any reason why it ended up this way?
John
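For what it's worth, that kind of doubling usually happens when the downloader first creates a GasSloshing/ directory and then extracts a tarball whose members already start with GasSloshing/. A minimal sketch of stripping the leading path component on extraction (assuming that tarball layout; this is not taken from download.py itself):

    import tarfile

    def extract_stripped(archive, dest):
        # Drop the leading path component of every member so a tarball whose
        # members start with "GasSloshing/" extracts straight into dest
        # instead of dest/GasSloshing/.
        with tarfile.open(archive, "r:*") as tar:
            for member in tar.getmembers():
                parts = member.name.split("/", 1)
                if len(parts) < 2:
                    continue  # skip the bare top-level directory entry
                member.name = parts[1]
                tar.extract(member, path=dest)

    extract_stripped("GasSloshing.tar.gz", "GasSloshing")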
On Jul 24, 2012, at 11:26 AM, Stephen Skory wrote:
Hi Matt,
One question that's come up is: do we want to continue using download.py? The burden on uploaders is moderately higher, in that the files all have to be tarred up in a particular way, and they have to be added to download.py, but it does provide a measure of robustness. If we do this, can we move download.py into the main distribution, under scripts/? Stephen, John, others who have used the script, what do you think about this?
I will not be insulted if we do away with the download script. If a web page of download links is easier for everyone, that's fine by me. The best argument I can think of for keeping it, or something similar, is that it forces some kind of uniformity so that the datasets are in an expected layout. Also, downloading tens of data dumps one at a time from a webpage is kind of tedious. That could be solved by having both the separate data dumps and a big ol' tarball of the whole thing, so people could get exactly what they want, but that doubles the disk space on someone's computer.
But this isn't HIPAA medical data, so we can be loosey-goosey if that makes things easier for everyone!
-- Stephen Skory s@skory.us http://stephenskory.com/ 510.621.3687 (google voice)