Hi Sam,

On Fri, Nov 29, 2013 at 11:11 AM, Sam Geen <samgeen@astro.ox.ac.uk> wrote:
OK, no problem (I'm guessing you guys had better things to do than us Europeans these past few days :) ). I can't guarantee I'll work through this list in the very near future, but I thought I'd enumerate what I think needs to be done, in case someone else decides to tackle some of it first or has better ideas about how to proceed. I'll jump onto IRC before I make any big changes like 2) or 7), in any case.
Awesome! A bunch of us idle in IRC, and that's a good way to have some faster-turnaround conversations, too.
Thanks for the reminder of the AMR-on-demand discussion; one obvious thing we could do is create a separate class for each file (or segment of a file) in the RAMSES domain, and read it on first access. Since we already know the structure of the RAMSES data, skipping around a file to the bits we need shouldn't be too hard. It would be a case of checking a member-variable "read" flag every time an accessor function is called, assuming that doesn't degrade performance hugely.
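As a minimal sketch of the flag idea (the class and method names here, RAMSESFileSegment, _read_from_disk, get_field, are made up, not the actual yt API):

    import numpy as np

    class RAMSESFileSegment:
        def __init__(self, filename, offset, field_names):
            self.filename = filename
            self.offset = offset
            self.field_names = field_names
            self._read = False          # the "read" flag checked on every access
            self._data = {}

        def _read_from_disk(self):
            # Placeholder: a real version would seek to self.offset and parse the
            # known RAMSES record structure; here it just fills dummy arrays.
            for name in self.field_names:
                self._data[name] = np.zeros(0, dtype="float64")
            self._read = True

        def get_field(self, name):
            # One boolean test per call, so the overhead should be negligible
            # next to the cost of the disk read itself.
            if not self._read:
                self._read_from_disk()
            return self._data[name]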
That's an interesting idea. Right now we just have one object per domain file (i.e., a 1024-CPU job will have 1024 RAMSESDomainFile objects). An LRU cache has been on the todo list for a while but isn't implemented yet; I'd like to see it somewhere fundamental inside the BaseIOHandler class, since it will be useful for the N-body data as well. For IO within a RAMSES file we do skip around to the different fluid variables, but we read an entire fluid variable from a single domain file at a time, which we then fill into our in-memory octree structure.

I have a sketch of doing on-demand loading, but it's far from ready. The next few weeks are a bit tricky for me, but I will try to push it up somewhere visible so you can give some feedback. The on-demand stuff will definitely help with scaling; typically the parallelism occurs at the level of chunking, which for RAMSES is done on the set of domain files. So if we move to on-demand, the hydro files will only be read when they are actually accessed by a specific processor. Adding the LRU cache will then help in the case of multiple identical load-balancing operations, and if we make LRU caching an integral part of how the load balancing occurs, we'll be pretty set, I think. This overlaps a bit with some work that needs to happen for the particle codes, which I'm transitioning from a single-root octree to a forest of octrees.

Sorry for the short reply, it's still officially "vacation" here today, and I'm out for the day. :)

-Matt
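To make the LRU idea concrete, here's roughly the shape I have in mind; nothing like this exists in BaseIOHandler yet, and the names (DomainFileLRU, read_func) are invented for illustration:

    from collections import OrderedDict

    class DomainFileLRU:
        def __init__(self, read_func, maxsize=32):
            self.read_func = read_func   # reads one fluid variable from one domain file
            self.maxsize = maxsize
            self._cache = OrderedDict()

        def __call__(self, domain_id, field):
            key = (domain_id, field)
            if key in self._cache:
                # Re-insert to mark this entry as most recently used.
                self._cache[key] = self._cache.pop(key)
                return self._cache[key]
            data = self.read_func(domain_id, field)
            self._cache[key] = data
            if len(self._cache) > self.maxsize:
                self._cache.popitem(last=False)   # evict the least recently used entry
            return data

The idea would be to wrap whatever function currently reads one fluid variable from one domain file, e.g. cached_read = DomainFileLRU(read_fluid_variable) and then cached_read(12, "Density"), so that repeated identical load-balancing passes hit the cache instead of the disk.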
Another idea might be to create a bridge class between Pymses and YT, if it turns out that Pymses is better optimised for RAMSES data, or if it turns out we're both expending effort writing the same code. Either way, though, if you have a huge dataset it'll take an age to run your code. The only thing to do here is to cache processed data and refer to that, though this has problems if you update the code and the cache becomes outdated.
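One way to handle the staleness problem would be to key the cache on a hash of the processing code itself; this is just a rough sketch, and everything in it (cache_processed, the cache directory name) is hypothetical:

    import hashlib
    import inspect
    import os
    import pickle

    def cache_processed(process, output_path, cache_dir="yt_cache"):
        """Run process(output_path) once and reuse the pickled result; the cache
        key includes the source of `process`, so editing the code invalidates it."""
        if not os.path.isdir(cache_dir):
            os.makedirs(cache_dir)
        key = hashlib.sha1(
            (inspect.getsource(process) + output_path).encode("utf-8")
        ).hexdigest()
        cache_file = os.path.join(cache_dir, key + ".pkl")
        if os.path.exists(cache_file):
            with open(cache_file, "rb") as f:
                return pickle.load(f)
        result = process(output_path)
        with open(cache_file, "wb") as f:
            pickle.dump(result, f)
        return result

Any change to the processing function changes the hash, so stale results are simply never found rather than having to be explicitly invalidated.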
On 29/11/13 16:35, Matthew Turk wrote:
Hi Sam,
Thanks for the awesome list! Sorry I didn't reply before.
A couple quick things -- the first one is *thank you*! These all look really great, and I'm keen to do what I can to support you. A few things I had on my medium-term list were to enable AMR structures to be constructed on demand (as noted in an exchange with Nick Moeckel here a few weeks ago) rather than in advance, and to ease the particle IO process.
Points 8 and 9: thank you! Number 9 has long been a problem because of how the anti-aliasing works; I think we could actually fix it in the display layer just by mandating at least 1e-13 in dynamic range between min/max. ;-)
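Something like this in the display layer is what I mean; clamp_dynamic_range is just an illustrative name and the exact clamping rule is up for debate:

    import numpy as np

    def clamp_dynamic_range(vmin, vmax, min_ratio=1e-13):
        # Widen (vmin, vmax) symmetrically about the midpoint whenever the spread
        # is smaller than min_ratio times the magnitude of the values.
        scale = max(abs(vmin), abs(vmax), np.finfo("float64").tiny)
        if (vmax - vmin) < scale * min_ratio:
            center = 0.5 * (vmin + vmax)
            half = 0.5 * scale * min_ratio
            vmin, vmax = center - half, center + half
        return vmin, vmax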
-Matt
On Fri, Nov 29, 2013 at 10:29 AM, Sam Geen <samgeen@astro.ox.ac.uk> wrote:
A couple more things I just noticed:
8) Some of the units (pressure & temperature) seem not to be implemented, or don't show up in the plot axes.
9) Projection plotting does some strange things if the values are uniform to near floating-point precision (example: http://i.imgur.com/kWHQ7dl.png)
On 27/11/13 18:28, Nathan Goldbaum wrote:
Glad you're excited to customize the Ramses frontend! All of the things you suggest sound like excellent improvements.
On Wednesday, November 27, 2013, Sam Geen wrote:
Hi,
I intend to try to fiddle with the RAMSES frontend when I have time/need, and thought it would be good to collate a list of tasks that need to be completed so we have a consensus on what needs to be fixed. Feel free to suggest things or tell me that they're already implemented if I missed them:
1) Add support for RT and ATON files, which are now part of the default RAMSES (I assume from the code that the cooling and grav files are already read).
2) Via 1), it might be nice to refactor the RAMSESDomainFile class a bit to provide a more generic RAMSES file-reading routine/class, since the formats of the files are fairly similar and in doing 1) we might otherwise end up with copy-paste bloat (see the sketch after this list).
3) Allow for RAMSES runs that only contain AMR & particles (i.e. pure N-body runs with no hydro).
4) Refactor the inputs to fit YT default field names (for MHD, RT and ATON).
5) Allow YT to interpret non-cosmological simulations in RAMSES, or, if it already does, remove the warning suggesting otherwise.
6) Romain Teyssier suggested allowing users to specify their own default field names for user-modified versions of RAMSES. I don't know if YT caches data that would allow this, but I thought I'd pass the suggestion along. Another option could be to allow users to expose the RAMSES namelist files to YT (i.e. the parameter files for starting up a run), since these contain a lot more information on the physics included, etc. I'd put this on a low priority unless someone thinks of something clever that solves it cleanly.
7) It could be worthwhile to implement read-on-demand if it's not already implemented; sometimes users won't query the ATON/RT/hydro/particle file, or certain fluid fields in each file, so we wouldn't need to read those files in that case. This could be folded into 2).
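For point 2, the shared piece I have in mind is a small base class that knows how to walk Fortran unformatted records, with thin subclasses describing each file layout. All names here are hypothetical and this is only a sketch of the direction, not how the current frontend is written:

    import struct
    import numpy as np

    class RAMSESRecordFile:
        """Walks sequential Fortran unformatted records (4-byte length markers)."""

        def __init__(self, filename):
            self.f = open(filename, "rb")

        def read_record(self, dtype="float64"):
            nbytes, = struct.unpack("i", self.f.read(4))
            data = np.frombuffer(self.f.read(nbytes), dtype=dtype)
            self.f.read(4)               # skip the trailing length marker
            return data

        def skip_record(self):
            nbytes, = struct.unpack("i", self.f.read(4))
            self.f.seek(nbytes + 4, 1)   # jump over payload and trailing marker

    class RTFile(RAMSESRecordFile):
        # A subclass would only describe its own header/field layout (e.g. the
        # number of photon groups) and reuse read_record/skip_record from above.
        pass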
Cheers,
Sam