Hi all,
I'd like to open up the discussion of "fixing" the yt coordinate
systems as we move nearer and nearer to 3.0. Jeff, Nathan, and
Britton have brought up a couple of times that the x/y/z ordering
is not consistent with what they expect, and I'd like to figure out if
we can fix that now -- it's as good a time as any to rip off the band-aid.
It may just be a matter of transposing buffers and the x_dict and
y_dict mappings, or it might be more complex, although I suspect it
won't be much more than fixing those two items.
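For reference, the mapping I think people expect is the cyclic,
right-handed one, something like this (an illustrative sketch of the
convention only, not what is currently in the code):

x_dict = {0: 1, 1: 2, 2: 0}  # image x-axis for slices along x, y, z
y_dict = {0: 2, 1: 0, 2: 1}  # image y-axis for slices along x, y, z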
-Matt
New issue 769: In-place operators of arrays/floats and YTArrays/YTQuantities don't result in unitified structures
https://bitbucket.org/yt_analysis/yt/issue/769/in-place-operators-of-arrays…
John ZuHone:
For example, this:
```
#!python
import numpy as np
from yt.units.yt_array import YTQuantity

x = np.arange(10)
y = YTQuantity(1.0, "cm")
x *= y
```
Doesn't return a `YTArray`. Should it?
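One likely explanation: `x *= y` dispatches to plain ndarray's in-place multiply, which writes into `x`'s existing buffer and keeps the plain-ndarray type, so the unit information is dropped. Rebinding the name instead does produce a unit-carrying array (a minimal sketch, assuming `YTQuantity` is importable from `yt.units.yt_array`):
```
#!python
import numpy as np
from yt.units.yt_array import YTQuantity

x = np.arange(10)
y = YTQuantity(1.0, "cm")
x = x * y                # rebind rather than operate in place
print type(x), x.units   # expected: a YTArray carrying cm
```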
Responsible: MatthewTurk
New issue 768: Units and exponents
https://bitbucket.org/yt_analysis/yt/issue/768/units-and-exponents
Matthew Turk:
Exponents don't seem to change the units of a YTArray.
```
#!python
from yt.units import cm
import numpy as np

cm_arr = np.array([1.0, 1.0]) * cm

print cm**3             # exponent applied to a unit quantity
print cm_arr * cm_arr   # repeated multiplication: units should come out as cm**2
print cm_arr**3         # exponent applied to a YTArray: units should come out as cm**3
```
Responsible: ngoldbaum
New issue 767: Unitrefactor: Stream datasets are always setting code_length == cm
https://bitbucket.org/yt_analysis/yt/issue/767/unitrefactor-stream-datasets…
John ZuHone:
I can't get any length units to be returned in anything other than code units:
```
#!python
from yt.frontends.stream.api import load_uniform_grid

# data, ddims, R, and cm_per_kpc are defined earlier in the reporter's setup
pf = load_uniform_grid(data, ddims, length_unit=2*R*cm_per_kpc)
pf.domain_right_edge
pf.domain_right_edge.in_units("cm")
pf.domain_right_edge.in_units("kpc")
```
gives:
```
#!python
YTArray([ 1., 1., 1.]) code_length
YTArray([ 1., 1., 1.]) cm
YTArray([ 3.24077929e-22, 3.24077929e-22, 3.24077929e-22]) kpc
```
which is what happens regardless of what goes in for `length_unit`.
Responsible: MatthewTurk
Hi all (particularly Matt, I suspect),
I'm having an issue doing isocontour flux extraction on a disk in
yt-3.0. I'm using this script:
from yt.mods import *
pf = load('HiResIsolatedgalaxy/DD0044/DD0044')
dd = pf.h.disk('c', [0,0,1], (20, 'kpc'), (3, 'kpc'))
radii = (np.arange(100)+1)/100.*20
fluxes = []
for r in radii:
    iso = pf.h.surface(dd, 'Radiuskpc', r)
    print iso['Density']
This produces the following traceback: http://paste.yt-project.org/show/4263/
The issue seems to be that the "center" field parameter isn't being
passed down to the grid object that is generating the covering grid.
Right now it looks as if the `retrieve_ghost_zones` method attached to
the grid object expects to find entries in the grid patch object's
field_parameters dict. It seems as if I could update the grid patch
with the field parameters from the disk object before the call to
`_extract_isocontours_from_grid`.
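Something like this, perhaps (untested, and assuming data objects still
expose a blocks iterator over (grid, mask) pairs):

for g, mask in dd.blocks:
    g.field_parameters.update(dd.field_parameters)
iso = pf.h.surface(dd, 'Radiuskpc', r)
print iso['Density']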
This makes me think that there might be a reason why it's not already
set up like that. Has the way we handle field parameters changed much
in the 3.0 codebase?
Thanks for your help,
Nathan
Hi all,
I've suddenly started having issues with cython trying to process
grid_traversal.pyx in the yt branch.
See: http://paste.yt-project.org/show/4262/
This might be related to Cython 0.20, as I had that installed
initially. That said, I've tried with 0.19 and 0.18, and both fail in
the same way.
This is strange, since I'm pretty sure I built 2.7dev sometime last
week, and I don't think anything has changed since then.
Is anyone else seeing the same thing?
Thanks!
-Nathan
Hi all,
I've adapted yt's gdf_writer.py into a standalone class called pygdf (
https://bitbucket.org/jsoishi/pygdf). I've done this so that any python
code can now save data in the gdf format.
While doing this, I came across something in the yt gdf frontend that I'm
not quite sure I understand. In the grid metadata, yt expects
/grid_particle_count to be a 2-D array, but the gdf standard clearly states
this should be a 1-D (int64, N) array, where N is the number of grids.
Further complicating the issue is the fact that if I create a 1-D array for
/grid_particle_count, the failure point in the following script
from yt.mods import *
pf = load("/tmp/blah.gdf")
sp = SlicePlot(pf, 2, "Density")
is actually in data_objects/grid_patch.py, not in the gdf frontend itself.
The file /tmp/blah.gdf can be generated by running the test.py script in
pygdf.
The obvious workaround is to simply make /grid_particle_count a 2-D array
(especially since I'm not actually using particles at the moment), but I'm
not sure why it needs to be 2-D; the total count of particles in N grids
seems to me to be a better fit for a 1-D array. However,
I'd like to understand better what's going on. Any pointers or
clarification would be very helpful.
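To make the shape question concrete, here is what the two layouts look
like when written with h5py (illustrative file names and values only):

import h5py
import numpy as np

N = 4  # number of grids

# What the GDF standard describes: a 1-D (int64, N) dataset
with h5py.File("standard.gdf", "w") as f:
    f.create_dataset("grid_particle_count", data=np.zeros(N, dtype="int64"))

# What yt's grid_patch.py currently appears to expect: an (N, 1) dataset
with h5py.File("workaround.gdf", "w") as f:
    f.create_dataset("grid_particle_count", data=np.zeros((N, 1), dtype="int64"))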
Also, any feedback on pygdf would be most welcome!
thanks,
j
Hi all,
We were having some mailing list issues earlier in the day, but I have
been assured they are now fixed. So if you're seeing this, and got a
bounce earlier, the clog seems to have been resolved. Sorry about
that.
-Matt
Hi all,
I've spent a bit of time today comparing the current tip of 3.0 to the
current unitrefactor, in an attempt to see whether field accesses on
data objects will return answers that are consistent with what we were
returning before.
The specific comparison I'm doing is based on the IsolatedGalaxy
dataset. This dataset has a large number of on-disk fields and is a
good exercise of yt's field detection and derived fields machinery.
I'm looking at whether field accesses on data objects return results
that are identical to the results we get in the current 3.0 tip.
The full testing script has been pasted here:
http://paste.yt-project.org/show/4250
The results (just what the script printed to my terminal) are pasted
here: http://paste.yt-project.org/show/4251
There appear to be three things that happened in this test:
1. The field access returned bitwise identical results.
2. The field access returned results that are slightly different (at the ~1e-6 level).
3. The field access returned results that are order-unity different.
Obviously the first case isn't an issue. I believe all instances of
the third case are due to comparing quantities that have different
units (for example, particle_position_[x,y,z] used to return data in
code units, but now does so in CGS).
Case 2 is a little bit more involved and I'm not sure what to do about
it in general -- thus this message ;)
I'll take the density field as an example to illustrate what is happening.
In the parameter file for this dataset, I see the following:
MassUnits = 8.11471e+43
DensityUnits = 2.76112e-30
TimeUnits = 2.32946e+18
LengthUnits = 3.086e+24
Right now, unitrefactor is using the values of MassUnits and
LengthUnits to construct the YTArray that contains the field data for
the ('enzo', 'Density') field since this field has units of
`code_mass/code_length**3`. The conversion factors from `code_mass`
and `code_length` to CGS are exactly the MassUnits and LengthUnits
variables from the dataset parameter file. So, the CGS conversion
factor for the array is:
MassUnits / LengthUnits**3 = 2.761197257954595e-30 g / cm**3
A careful reader will note that this is only equal to the DensityUnits
in the parameter file up to rounding in the fifth decimal place. This
difference, it turns out, is the source of the differences I saw in
the comparison.
I haven't gone through the rest of the fields in detail, but I suspect
that all the remaining fields showing small differences have issues of
this sort. My guess is that the fields that compare exactly either have
a CGS conversion factor of unity, or have a conversion factor that was
derived directly from MassUnits, LengthUnits, and TimeUnits in the
data file.
I'm not sure what the proper way to handle this is.
Right now unitrefactor is ignoring the on-disk CGS conversion factor
for Density. I don't think this choice should matter in principle,
since mass and density units should be algebraically related via the
LengthUnits. Unfortunately, rounding errors in the parameter file
mean there will likely be a small amount of disagreement.
Should we attempt to be more fastidious about adopting the proper
conversion factors in the enzo frontend, if a conversion factor is
available on disk? Should we not worry about these minor differences
in field accesses that unitrefactor is introducing? Should we have
units like `code_density` for all on-disk fields with available CGS
conversion factors? If so, how do we deal with the fact that the
conversion from code_density to code_mass/code_length**3 is not
straightforward?
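For concreteness, the `code_density` option might look roughly like this
(a sketch only, assuming the unitrefactor UnitRegistry keeps an
add(symbol, cgs_value, dimensions) interface; the dataset path is
hypothetical):

from yt.mods import *
from yt.units import dimensions

pf = load("IsolatedGalaxy/galaxy0030/galaxy0030")  # hypothetical path
dd = pf.h.all_data()

# Register a code_density symbol using the on-disk DensityUnits quoted above
density_units = 2.76112e-30
pf.unit_registry.add("code_density", density_units,
                     dimensions.mass / dimensions.length**3)

# If the frontend then tagged the on-disk field with code_density units,
# converting to CGS would use the on-disk factor directly...
rho = dd["enzo", "Density"]
print rho.in_units("g/cm**3")
# ...while converting through code_mass/code_length**3 would expose the
# rounding mismatch described above.
print rho.in_units("code_mass/code_length**3")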
Thanks for any help or advice anyone who has gotten this far can provide :)
Cheers,
Nathan
Hi all,
In the interest of promoting more discussion, easier forums for
chatting and raising of issues and so on, I'm going to start
experimenting with holding low-intensity, no-frills hangouts on Friday
mornings at 11AM Eastern time. I'll set up the hangout and if anyone
shows up, we can talk about where things are, or specific issues.
No pressure to attend, and if you do, no pressure to bring any issues.
The idea is to keep it more true to the name "hangout" than
"meeting." :)
I'm not sure if I can schedule this in advance on G+, but I'm going to
try to, so keep an eye on the plus page at
http://plus.google.com/+ytprojectorg to see if the event shows up
there.
We'll start out this Friday, and see how it goes for a few weeks.
-Matt