Hi folks,
I posted this as a comment on #2172, but I wanted to note it here.
Right now the holdup for merging yt-4 into master is the answer
testing. We would not actually expect the answers to be the same
between the two (at the very least because the ordering of values will
be different), so we need to do some kind of manual inspection.
My plan for addressing answer-testing differences, which I will start
by carrying out manually:
* All grid and octree data should have either the same answers or
justifiably different ones. In my view, "justifiably different"
includes unit changes, and also includes changes resulting from
particles being selected on the edges of grid boundaries. These will
be documented. (See #2195.)
* Results for all data selections in SPH and particle datasets should
be identical in count and values, although the order will likely be
different. To address this, we will have a "sorting order" field that
is the Morton index of the particles, and the values will be sorted
and compared (see the sketch after this list). This should help to
identify situations where different numbers of particles are being
selected (typically they should not be, except in situations related
to smoothing length).
* Manual inspection of visualizations, which avoid meshing in 4.0 but
utilize meshing in 3.x.
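To illustrate the sorting-and-comparison step in the second item,
here is a rough NumPy sketch. The Morton-key computation and the
function names are placeholders of mine, not an existing yt API:

```python
import numpy as np

def _spread_bits(x):
    # Spread the lower 21 bits of x so two zero bits sit between each
    # original bit (standard magic-number interleaving for 64-bit keys).
    x = x.astype(np.uint64) & np.uint64(0x1FFFFF)
    x = (x | (x << np.uint64(32))) & np.uint64(0x1F00000000FFFF)
    x = (x | (x << np.uint64(16))) & np.uint64(0x1F0000FF0000FF)
    x = (x | (x << np.uint64(8))) & np.uint64(0x100F00F00F00F00F)
    x = (x | (x << np.uint64(4))) & np.uint64(0x10C30C30C30C30C3)
    x = (x | (x << np.uint64(2))) & np.uint64(0x1249249249249249)
    return x

def morton_keys(pos, left_edge, right_edge, bits=21):
    # Quantize positions onto a 2**bits grid per dimension and
    # interleave the bits into a single 64-bit Morton key.
    scaled = np.clip((pos - left_edge) / (right_edge - left_edge), 0.0, 1.0)
    q = (scaled * (2**bits - 1)).astype(np.uint64)
    return (_spread_bits(q[:, 0])
            | (_spread_bits(q[:, 1]) << np.uint64(1))
            | (_spread_bits(q[:, 2]) << np.uint64(2)))

def selections_match(pos_a, vals_a, pos_b, vals_b, left_edge, right_edge):
    # Counts must agree; if they do, sort both sets of values by Morton
    # key (ties broken by value) and compare them elementwise.
    if vals_a.size != vals_b.size:
        return False
    key_a = morton_keys(pos_a, left_edge, right_edge)
    key_b = morton_keys(pos_b, left_edge, right_edge)
    order_a = np.lexsort((vals_a, key_a))
    order_b = np.lexsort((vals_b, key_b))
    return np.allclose(vals_a[order_a], vals_b[order_b])
```

A count mismatch points at selection differences (e.g., smoothing-length
handling), while a value mismatch after sorting points at the values
themselves.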
I believe this should cover most of the cases, and will take us to the
next step of verification. To conduct these tests, I'm going to work
on a script that outputs the appropriate values into an HDF5 file, and
then compare the results from both versions. This will be somewhat
distinct from the answer testing and designed for ease of exploration.
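As a rough sketch of what that could look like, assuming h5py for the
output file and yt's standard yt.load / all_data() interface (the field
list, file layout, and function names here are placeholders):

```python
import h5py
import numpy as np
import yt

# Placeholder field list; the real script would cover many more fields.
FIELDS = [("gas", "density"), ("gas", "temperature")]

def dump_answers(dataset_path, out_path, fields=FIELDS):
    # Load a dataset, select everything, and write each field into the
    # HDF5 file, sorted so ordering differences between versions wash out.
    ds = yt.load(dataset_path)
    ad = ds.all_data()
    with h5py.File(out_path, "w") as f:
        for ftype, fname in fields:
            f.create_dataset(f"{ftype}/{fname}",
                             data=np.sort(np.asarray(ad[ftype, fname])))

def compare_answers(path_a, path_b, rtol=1e-7):
    # Walk two dump files and report any fields whose values differ.
    mismatches = []
    with h5py.File(path_a, "r") as fa, h5py.File(path_b, "r") as fb:
        for ftype in fa:
            for fname in fa[ftype]:
                a = fa[ftype][fname][...]
                b = fb[ftype][fname][...]
                if a.shape != b.shape or not np.allclose(a, b, rtol=rtol):
                    mismatches.append((ftype, fname))
    return mismatches
```

The idea would be to run the dump once under a yt-3.x environment and
once under yt-4, then compare the two output files.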
Once I have a system prepared for this, I will post back here, and I
will likely put the results online to view.
-Matt