I have a work-in-progress PR in the hopper regarding GDF reading and writing:
It implements direct writing of covering grids to GDF
files (something I've been using in my research, to work with the
resulting files directly in yt), as well as clobbering existing files
at the same path if the user allows it.
However, while fiddling around with this I realized that it was
necessary to make GDF more unit-aware. Ultimately, I determined that
changes were needed that would break the standard we had set for the
files (http://yt-project.org/gdf.txt).
The two main changes are:
1) Remove "field_to_cgs", and allow the fields to be in whatever units
we wish them to be in the file (which are specified as HDF5
2) Add a new top-level group containing the information for
"length","mass", and "time" units. These will be used when the file is
opened up in yt to determine the units.
Since (for now) we do automatic conversion to cgs, that part is
still left unimplemented, but otherwise I think this is all there is to
do for now.
I'm writing this email in case anyone has any suggestions or objections
to the format changes--particularly Jeff or Kacper. We'll obviously
need to document them if they are accepted.
NASA/Goddard Space Flight Center
Does anyone know what happened to the mapserver in 3.0? It is no longer in
command_line.py, and the docs are in conflict about whether it works. I'll
fix this, but I wanted to know if it had been removed on purpose.
University of Arizona
Thank you to everybody for pushing *so hard* on everything the last
few days. I have now finished up the drafts of my remaining
contributions to the documentation, which I *believe* is the last
remaining area of focus. I would like to take Cameron up on his
offer, which I previously voted against, to delay until Friday. This is
not so that we can all keep working, but because *I* misjudged the
level of my contributions and what they would require, and *I* need a
few days to let them settle. Mea culpa. Cameron, Nathan, Britton,
Mike, Hilary, Aaron -- everyone has just been absolutely amazing. So
I will be submitting mine, they can get reviewed over the next couple
days, and then on Friday maybe we can "release" it? Does that work?
I'm looking over the docs right now and am blown away by the
improvement relative to where the docs were before the push.
Thank you so much to everyone who has contributed to this push.
There is still some room for proofreading, so please take a look at the
docs if you have some free time today.
Right now there is inconsistency in the docs between using YT, ``yt`` (i.e.
code-text), and yt. What do people prefer here? I think we should avoid YT,
but I could go either way on the remaining two options.
Also, section headers are of two formats: "Capitalize all Important Words"
and "Only capitalize the first word of the section". I'm more of a +1 on
the first option.
There is a suggestion made by Michael Zingale about moving the parallel
docs from the analysis section to being a top-level section in the docs,
since it applies to viz, analysis, etc. I'm -1 on this move as I like
having the top-level docs be relatively few in number.
Lastly, there is no real "introduction" as to what the different sections
mean. There is the bootcamp and cookbook which give lots of usage
examples, but no true introduction that tells people why to look at
"fields" or "objects" or whatever. We have tried to lay out the docs in an
order and with labels that is logical and gives people an idea of their
contents, but does anyone think we should have a short top-level
introduction? Or even something on the front page? Not sure on this, but
new users might have more of an opinion.
Keep up the good work!
University of Arizona
I'm looking over the cookbook docs and trying the recipes on my own dataset
in prep for the release.
I don't really understand the "Image Background Colors" recipe at the end.
This just shows various gradients, but is the purpose to show that one can
plot one's data over one of these gradients? If so, perhaps we can add
some data to one of these?
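For instance, something along these lines; this is just a sketch of what I have in mind (a toy array standing in for real data, alpha-blended over a gradient), not the recipe's actual code:

```python
import numpy as np

# Assumed sketch: composite a toy "data" image over a background
# gradient with a constant alpha, the kind of thing the recipe
# could demonstrate with real data.
def blend_over(background, data, alpha):
    """Composite data over background with a constant alpha."""
    return alpha * data + (1.0 - alpha) * background

ny, nx = 64, 64
background = np.tile(np.linspace(0.0, 1.0, nx), (ny, 1))  # horizontal gradient
data = np.zeros((ny, nx))
data[16:48, 16:48] = 1.0  # bright square standing in for real data
image = blend_over(background, data, alpha=0.6)
```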
Dept. of Physics & Astronomy • Stony Brook University • Stony Brook, NY
Does anyone, perhaps Kacper, know if we can do any answer testing in
parallel? I would like to add a test for the rockstar halo finder, but it
has to be run with a minimum of 3 MPI tasks. This would not require 3
actual CPUs to run on, since the tasks sort of take turns. Is there a
way to do this?
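Something like the following is what I have in mind; this is a sketch assuming Open MPI and nose-based answer tests, and both the oversubscription flag and the test path are illustrative, not confirmed to work with our test runner:

```shell
# Sketch: oversubscription lets 3 MPI tasks share fewer physical
# cores, since the rockstar tasks mostly take turns.  Flag names
# differ between MPI implementations; the test path is illustrative.
mpirun -np 3 --oversubscribe \
    python -m nose yt/analysis_modules/halo_finding/tests/test_rockstar.py
```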
First I want to thank everyone for their hard work at getting 3.0 where it
is in terms of functionality, bug fixes, and documentation. However, I'm
concerned that there are still several things that need doing in the
documentation prior to release of 3.0.
Beyond that, I think once those things are done, we need some proofreading
of the docs, because I'm not convinced there aren't still sections that are
out of date and reflecting 2.x versions of the code. Proofreading (and
subsequent correction) may take a while.
I think it behooves us to push back the release a few more days until we
make sure this is where we want it to be. This is a major release with
major API breakages, and I want to make sure the documentation actually
reflects the codebase, so new users and new converts to 3.0 don't get
confused. I certainly was confused when I first moved over because there
are a lot of significant changes that it's easy to forget after using it
for a while and being as tied into the community as we all are.
What do people think?
University of Arizona