New issue 1034: halo callback docstring issue
The docstring for the virial_quantities halo callback is incorrect. The docstring references a `critical_density` keyword argument, but the actual function signature contains a `critical_overdensity` keyword argument. While fixing this, it would probably be worth giving the rest of the docstrings in this file a once-over.
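A sketch of what the corrected docstring/signature pairing might look like (this stub is purely illustrative; the real implementation lives in yt's halo callback module):

```python
def virial_quantities(halo, fields, critical_overdensity=200.0):
    """Calculate virial quantities for a halo.

    Parameters
    ----------
    critical_overdensity : float, optional
        Overdensity threshold, relative to the critical density,
        used to define the virial radius.  (Previously misdocumented
        as ``critical_density``.)
    """
    # Illustrative stub: echo the keyword so the docstring and the
    # signature can be checked against each other.
    return {"critical_overdensity": critical_overdensity}
```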
Not sure if it is better to request features here or on Bitbucket, so I
am doing both. I am doing some code comparisons between Enzo and
Gadget, and it would be nice if I could map an Enzo data set to Gadget
and vice versa. I am sure it would also be nice if yt users could
translate data sets between all the codes yt can read.
yt already reads in and transforms these data sets to match its
internal structures. It would be a nice feature to be able to map one
data set to another for code comparison purposes using the
already-existing internal structures.
Joseph Smidt <josephsmidt(a)gmail.com>
P.O. Box 1663, Mail Stop B283
Los Alamos, NM 87545
New issue 1033: Create data set format mapper
I am doing some code comparisons between Enzo and Gadget, and it would be nice if I could map an Enzo data set to Gadget and vice versa. I am also sure it would be nice if I could translate data sets between all the codes yt can read.
YT already reads in and transforms these data sets. It would be a nice feature to be able to map one data set to another for code comparison purposes.
New issue 1032: Projection Plot Image Size Inconsistency
I use projection plots to generate images of my simulation marching through time, then combine the images into a video using mencoder. Attached is a sample script I use to generate the images and a sample video. You will notice in the video that the frame "oscillates" as it progresses, which is very distracting from the actual simulation. This oscillation is due to some of the images being rendered with 360 pixels in the vertical direction and others rendered with 359 pixels.
Nothing changes in the window height or vertical position of the plot center, as you will see in the attached script. I have tried using different figure sizes, but the result is the same. For the vertical position of the plot center, I tried both "height/2." and hardcoding the coordinate (0.16 in this case), but neither corrected the issue.
I have uploaded ten consecutive timesteps which can be used to run the attached script. You will notice that steps 308350, 308400 and 308650 are rendered with 360 pixels in the vertical direction, and all others are rendered with 359 pixels.
Here is the tar file for the dataset: http://use.yt/upload/679cc306
I ran this script using the latest development version of yt as well as the 3.1 stable release, with the same result.
Perhaps there is a way to control the number of pixels used, but I could not find this in the yt documentation.
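I can't point to the exact spot in yt's internals, but the 359-vs-360 symptom is consistent with floor-rounding a nearly-integer pixel height computed from a floating-point aspect ratio. A minimal, yt-free illustration of that mechanism (all numbers here are made up to show the effect):

```python
# An image height computed from a width and a floating-point aspect
# ratio, the way a plotting pipeline might do it:
def pixel_height(width_px, window_width, window_height):
    return int(width_px * (window_height / window_width))  # floor-rounds

# Two window shapes that are equal up to floating-point noise:
h1 = pixel_height(640, 1.0, 360.0 / 640.0)          # exactly 360.0 -> 360
h2 = pixel_height(640, 1.0, 360.0 / 640.0 - 1e-9)   # 359.999... -> 359

# A one-pixel oscillation from rounding noise.  Rounding to the
# nearest integer instead is stable against such perturbations:
def pixel_height_stable(width_px, window_width, window_height):
    return round(width_px * (window_height / window_width))
```

If something like this is the cause, a user-side workaround may be to pin the image buffer explicitly (e.g. `plot.set_buff_size((800, 800))` on the `ProjectionPlot`), so the pixel dimensions no longer depend on the window-aspect computation.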
Any help is appreciated!
Andrew, Kacper, Allyson and I had a hangout to go over the current
state of the VR refactor that Sam started.
Where we're at:
* Docs need to be written and updated. I have committed to taking
this on, and I am starting today.
* There are two cameras that still need to be ported: the stereo
spherical camera and the perspective camera. Suoqing, is there any
chance you might be able to take a look at doing this and issuing a PR
to Sam's repository? I ask only because you've been so good at camera
porting in the past. :)
Once that's done, since there is a backwards-compatible camera
interface, I suggest we strongly consider simply pulling in the code
and beginning the migration.
New issue 1031: Bug Report: Memory leak when rendering projection plots in parallel
I emailed this bug report out to yt-users before finding this repository, sorry for the double post.
I believe I have found a memory leak when iterating through a DatasetSeries in parallel to render projection plots of each data file in the series. I work in a CFD research group and our code is built on BoxLib, so I have been using yt to generate images of my simulations. The DatasetSeries objects I work with are fairly large, on the order of 6000 to 7000 data sets per series. To expedite the image rendering process, I have been using yt's parallelism to iterate through the series with Open MPI.
The RAM usage per MPI process increases steadily throughout the script's runtime. When running my attached script with 16 MPI processes on a 16-core machine (1 process per core), the RAM usage starts at about 2.56 GB per process and gradually increases to over 25 GB per process, at which point the script typically crashes because the machine has insufficient RAM to continue.
My simulations do involve data sets whose memory requirements grow with simulation time, but not on the scale of the RAM increases described above. I determined that the crashing is due to a memory leak by restarting the script after a crash, using only the data set files that had not yet been processed; since the restart resumes at the same simulation time as the crash, the memory required to render each data set's projection is unchanged, yet memory usage starts low again rather than immediately exhausting the machine.
I also ran this script on my university's cluster, using several different Open MPI implementations, and always got the same result. Additionally, the crashes always occur at approximately the same script runtime.
Attached to this email is a copy of a script I'm using. I run the script on my 16 core mini-cluster using the command:
mpirun -np 16 --cpus-per-proc 1 br0.05_temp.py
I also attached a sample image of the projection plots I'm generating.
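I can't pinpoint where the leak is, but a common mitigation when looping over many data sets is to drop every reference to the plot and data set at the end of each iteration and force a collection, so that cyclic garbage cannot pile up across iterations. A yt-free sketch of the pattern (FakeDataset is a stand-in for a real per-timestep data set):

```python
import gc
import weakref

class FakeDataset:
    """Stand-in for a large per-timestep data set."""
    def __init__(self, n):
        self.payload = list(range(n))
        self.self_ref = self  # a reference cycle, as real objects often have

dead = []
for step in range(5):
    ds = FakeDataset(10_000)
    dead.append(weakref.ref(ds))  # track whether the object is freed
    # ... render and save the projection plot here ...
    del ds        # drop the only strong reference
    gc.collect()  # break the cycle so the memory is actually reclaimed

# At this point every per-step object has been freed, so peak memory
# stays bounded by one iteration's working set.
```

If yt itself is holding references internally, this won't fully cure it, but it rules out the user script as the source of growth.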
Any help with this bug is appreciated!
New issue 1030: Document the enable_plugins function
This function parses the plugins file and allows the fields and functions defined there to be used in a script that imports yt using `import yt`. It needs docstrings, to be present in the API docs, and needs to be mentioned in the documentation section on the plugins file.
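For the docs, it may help to illustrate the mechanism: `enable_plugins` essentially executes the plugins file so that its definitions become available after `import yt`. A minimal, yt-free sketch of that style of loading (the file contents and names below are hypothetical):

```python
# Contents a user might put in their plugins file (hypothetical):
plugin_source = """
def _my_custom_helper(value):
    return 2 * value
"""

# A plugin loader executes the file's source into a namespace,
# making its definitions available to later code.
plugin_namespace = {}
exec(plugin_source, plugin_namespace)

result = plugin_namespace["_my_custom_helper"](21)
```

The documentation section on the plugins file could then show both the file contents and the `yt.enable_plugins()` call that activates them.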
I've discovered something that I think could maybe be a bug (though maybe
not) in the Ramses front end but am unsure about the motivation behind the
code so thought I'd check in and try to learn a bit.
In frontends/ramses/data_structures.py, I think the length_unit is set to
be the length_unit that comes from the info_xxx.txt file multiplied by the
box length. For the example yt data sets, this doesn't impact anything
because the box lengths are unity.
In examining some AGORA snapshots, however, the box lengths are not
unity; one that I'm looking at, for example, has a boxlen of 600. This
results in length units that do not match what the info_xxx.txt file
says and, more dramatically, mass units that are very different. For
example, in one such snapshot, I read from the info file:
boxlen = 0.600000000000000E+03
time = 0.500035810875167E+00
aexp = 0.100000000000000E+01
H0 = 0.100000000000000E+01
omega_m = 0.100000000000000E+01
omega_l = 0.000000000000000E+00
omega_k = 0.000000000000000E+00
omega_b = 0.000000000000000E+00
unit_l = 0.308656802500000E+22
unit_d = 0.157339152800000E-25
unit_t = 0.308687241173998E+17
but if I load this snapshot in yt, I get:
>In : print ds.mass_unit
>In : print ds.mass_unit.in_units("Msun")
>In : print ds.length_unit
which consequently results in derived quantities from the simulation (like
stellar mass) that are totally off (by a factor boxlen**3).
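The size of that discrepancy follows directly from how the mass unit is derived: mass_unit = density_unit * length_unit**3, so folding boxlen into length_unit inflates masses by boxlen**3. Checking with the values from the info file quoted above:

```python
# Values copied from the info_xxx.txt file quoted above
unit_l = 0.3086568025e22    # length unit in cm
unit_d = 0.1573391528e-25   # density unit in g/cm^3
boxlen = 600.0

mass_plain = unit_d * unit_l**3               # mass unit without boxlen
mass_scaled = unit_d * (unit_l * boxlen)**3   # mass unit if boxlen is folded in

ratio = mass_scaled / mass_plain              # boxlen**3, i.e. 2.16e8
```

So if the frontend multiplies length_unit by boxlen, derived masses would indeed come out a factor of about 2.16e8 too large for this snapshot.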
So, I'm writing (as a total Ramses novice) to try to understand the
motivation behind multiplying the length unit by the boxlen in the
frontend, and if I should be accounting for this factor of boxlen**3 in my
analysis, or if this is a bona fide bug.
P.S. I updated yt this morning:
thundersnow:yt desika$ hg id
8d3663ac217d+ (yt) tip