Is it unreasonable to expect yt on a workstation with 4x 3 GHz cores and ~20 GB of RAM to handle a ~100 GB dataset? I'm trying to select a subset of the dataset using cut_region(), but I still run into hangs or run out of RAM. For example, when I try to do a write_out() on the cut region, yt sucks up all 20 GB of RAM and I have to kill the process. Is there a preferred method for loading in a subset of the data? I need nowhere near the full 100 GB.
I'm using the yt dev branch and looking at BoxLib/Maestro data.
--
Adam Jacobs
PhD Candidate
Department of Physics and Astronomy
Stony Brook University