Wow, a parallel gdf writer? I was planning to add that sometime next week or so. I'd like to know how you approached it. I was simply thinking of having a flag to only write a data-only "sidecar" file that would use HDF5 links in the master file for grids on other cores. Please let me know if I can help integrate what you've done into this framework.
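To make the sidecar idea concrete, here is a minimal h5py sketch of what I had in mind; the filenames and group layout below are illustrative only, not part of the GDF spec:

```python
import h5py
import numpy as np

# Each rank writes its own grids into a data-only "sidecar" file.
# (File and group names here are hypothetical.)
with h5py.File("sidecar_rank1.h5", "w") as f:
    f.create_dataset("/data/grid_0000000001/density",
                     data=np.ones((8, 8, 8)))

# The master file holds the metadata plus HDF5 external links that
# resolve to the grids stored in the sidecar files on other cores.
with h5py.File("master.gdf", "w") as f:
    f["/data/grid_0000000001/density"] = h5py.ExternalLink(
        "sidecar_rank1.h5", "/data/grid_0000000001/density")

# Reading through the master file transparently follows the link.
with h5py.File("master.gdf", "r") as f:
    print(f["/data/grid_0000000001/density"].shape)
```

A reader that only opens master.gdf never needs to know which rank wrote which grid, which is the main appeal of the approach.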


On Thu, Jan 30, 2014 at 11:09 AM, Stuart Mumford <> wrote:

I will have a look over this. I wrote a parallel-compatible version of
this code a while back; I will try and get it working within your
framework.


On 29 January 2014 22:22, j s oishi <> wrote:
> Hi all,
> I've adapted yt's gdf writer into a standalone class called pygdf.
> I've done this so that any Python code can now save data in the gdf
> format.
> While doing this, I came across something in the yt gdf frontend that I'm
> not quite sure I understand. In the grid metadata, yt expects
> /grid_particle_count to be a 2-D array, but the gdf standard clearly states
> this should be a 1-D (int64, N) array, where N is the number of grids.
> Further complicating the issue is the fact that if I create a 1-D array for
> /grid_particle_count, the failure point in the following script
> from yt.mods import *
> pf = load("/tmp/blah.gdf")
> sp = SlicePlot(pf, 2, "Density")
> is actually in data_objects/, not in the gdf frontend itself.
> The file /tmp/blah.gdf can be generated by running the script in
> pygdf.
> The obvious workaround is to simply make /grid_particle_count a 2-D
> array (especially since I'm not actually using particles at the
> moment), but I'm not sure why this should be so. The total count of
> particles in N grids seems to me a better fit for a 1-D array. However,
> I'd like to understand better what's going on. Any pointers or clarification
> would be very helpful.
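To spell out the workaround in question, here is a small numpy sketch; the (N, 1) shape is what the yt frontend currently accepts, while the flat (N,) shape is what the GDF standard describes (the variable names are mine):

```python
import numpy as np

n_grids = 4

# What the GDF standard describes: one int64 particle count per grid.
counts_1d = np.zeros(n_grids, dtype="int64")   # shape (4,)

# The workaround: reshape to the (N, 1) column that yt's
# data_objects machinery currently indexes as a 2-D array.
counts_2d = counts_1d.reshape(-1, 1)           # shape (4, 1)

print(counts_1d.shape, counts_2d.shape)
```

The reshape is cheap (it is a view, not a copy), so papering over the mismatch this way costs nothing; the open question is which shape the standard and the frontend should agree on.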
> Also, any feedback on pygdf would be most welcome!
> thanks,
> j
> _______________________________________________
> yt-dev mailing list