Dear yt
Can the current version of yt calculate 3-D mass power spectra? I checked
the website but didn't find any information. I think calculating 3-D mass
power spectra would be very useful for cosmological simulations, so I guess
maybe yt supports this now...?
Thanks in advance
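For what it's worth, this kind of power spectrum can be computed by hand from a uniform-resolution density grid (e.g. one extracted with ds.covering_grid) plus numpy's FFT. The sketch below is an illustration under that assumption, not a built-in yt feature; the function name and the shell-binning choices are mine.

```python
import numpy as np

def mass_power_spectrum(density, box_size):
    """Spherically averaged 3-D power spectrum of a density field.

    density  : 3-D array of mass density on a uniform (cubic) grid
    box_size : physical side length of the box
    """
    n = density.shape[0]
    delta = density / density.mean() - 1.0        # overdensity field
    delta_k = np.fft.fftn(delta) / n**3           # discrete Fourier modes
    power = np.abs(delta_k) ** 2
    # |k| for every mode on the FFT grid
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    # average the power in spherical shells of |k|
    k_edges = np.linspace(0.0, k_mag.max(), n // 2 + 1)
    shell = np.digitize(k_mag, k_edges)
    p_k = np.array([power.ravel()[shell == i].mean()
                    for i in range(1, len(k_edges))])
    k_centres = 0.5 * (k_edges[:-1] + k_edges[1:])
    return k_centres, p_k
```

To feed it simulation data, one would fill `density` from a covering grid, e.g. `ds.covering_grid(level, ds.domain_left_edge, ds.domain_dimensions)["density"]`.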
Hi all,
I am trying to find clumps in a few wind-tunnel runs of a galaxy with ram
pressure stripping, so that I can measure the mass, volume, etc. I want to
make sure, then, that I know how clump finder is selecting its contours.
Unfortunately, I am not getting the minimum densities that I am
expecting--although not all the time!
I have used IsolatedGalaxy and run the same clump finder routine, but with
different maximum densities for the contours. To reproduce my error I had
to make the step size smaller--here I am using step = 1.1.
Basically, I am not always stepping up my minimum density contour, even
when the maximum density in the clump makes it clear I *should* be.
I am attaching the script where I have cmax = 9.0e-24, which shows the
error. You can see that in the figure I attach, looking at the green
triangles. The y-axis is the maximum density in a clump, the x-axis is the
minimum density. To guide your eye I have drawn lines at each step.
Basically, no point should be above a black line and to the left of the
connected vertical line (because then the max value is above the next
possible minimum in the steps) *unless* the cmax value was reached. To
make this really clear if you look at the upper-left green triangle, the
max density is ~7e-24 while the minimum density is ~3.6e-24--there are a
lot of steps between those values--why is the minimum density not 6.68e-24?
Strangely enough, this problem does *not* show up when I have cmax =
6.7e-24 (cyan circles).
I write out the max and min values in a text file at the end of the script,
so I also attach the little script I use to make the figure.
Any help would be *much* appreciated!
Thank you all,
Stephanie
--
Dr. Stephanie Tonnesen
Associate Research Scientist
CCA, Flatiron Institute
New York, NY
stonnes(a)gmail.com
Hello all,
I have a question about the ParticlePlot function. I am currently using the ParticlePlot function to plot the coordinates of all the particles. I was wondering if it was possible to plot a subset of the particles only, like only the particles of a certain mass or of a certain type in multimass or star/dark matter simulations. I am aware that it is possible to apply a filter to the yt.ProjectionPlot function, but do not see anything like this in the yt.ParticlePlot docs.
Thanks!
The yt community is proud to announce the release of yt 3.5.1. This is a
patch release and includes a number of minor improvements and bugfixes.
yt (https://yt-project.org) is an open source, community developed toolkit
for the analysis and visualization of volumetric data. Development is
hosted on GitHub (https://github.com/yt-project/yt).
See the changelog below for a summary of new features, changes, and
bugfixes.
For more information, including installation instructions, links to
community resources, and information on contributing to yt’s development,
please see the yt homepage at http://yt-project.org and the documentation
for yt 3.5.1 at http://yt-project.org/docs/3.5.1.
Binaries for yt 3.5.1 are available via pip and conda. If you installed via
the install script or use conda to manage your python installation, you can
update yt via:
$ conda update -c conda-forge yt
And via pip if you manage your python installation with pip:
$ pip install -U yt
As always, if you have any questions, concerns, or run into any trouble
updating please don’t hesitate to send a message to the mailing list or
stop by our Slack or IRC channel.
yt is the product of a large community of developers and users and we are
extraordinarily grateful for and proud of their contributions. Please
forward this announcement on to any interested parties.
Best,
The yt development team
*Changelog*
- Avoid use of deprecated field names internally, silencing some
user-visible deprecation warnings. See PR 2073
<https://github.com/yt-project/yt/pull/2073>.
- Fix crash using clump.save_as_dataset when field_data is not defined.
See PR 2079 <https://github.com/yt-project/yt/pull/2079/>. Thank you to
Adam McMaster (@adammcmaster) for the fix.
- Fix for ramses RT and group reading under Python3. See PR 2092
<https://github.com/yt-project/yt/pull/2092>.
- The line integral convolution annotation no longer erroneously
includes noise in regions where the vector field under consideration has a
value of zero. See PR 2094 <https://github.com/yt-project/yt/pull/2094>.
   - Explicitly set language_level to 2 in yt's cython code to avoid breakage
with a future Cython release that changes the default language_level. See PR
2100 <https://github.com/yt-project/yt/pull/2100>.
   - Add support for matplotlib 3.0, fixing an issue where colorbar values
   were sometimes incorrect. See PR 2101
<https://github.com/yt-project/yt/pull/2101>.
   - Fix incorrect weighting function used in the definition of the
   mazzotta_weighting field. See PR 2102
<https://github.com/yt-project/yt/pull/2102>.
- Fix an issue that caused reading a particle field from a cut_region to
produce a crash. See PR 2106 <https://github.com/yt-project/yt/pull/2106>
.
- Fix unit issues leading to incorrect angular momentum calculations.
See PR 2114 <https://github.com/yt-project/yt/pull/2114>.
- Fix incorrect behavior when creating a covering_grid or arbitrary_grid
with a left_edge or right_edge that has specified units. See PR 2119
<https://github.com/yt-project/yt/pull/2119>.
- Avoid reading big-endian Gadget binary data. See PR 2120
<https://github.com/yt-project/yt/pull/2120>.
- Convert radius to appropriate unit in the sphere plot callback. See PR
2128 <https://github.com/yt-project/yt/pull/2128>.
- Fix issues doing CIC deposition on a chunk containing only one oct.
See PR 2130 <https://github.com/yt-project/yt/pull/2130>.
- Fix FutureWarning generated by NumPy 1.16. See PR 2133
<https://github.com/yt-project/yt/pull/2133>.
- Fix issues reading ARTIO multiresolution N-body particles. See PR 2140
<https://github.com/yt-project/yt/pull/2140>. Thank you to Gillen Brown
(@gillenbrown) for the fix.
- Fix crashes when a plugins file has a function that uses global
variables defined in the file. See PR 2144
<https://github.com/yt-project/yt/pull/2144>.
- Fix several bugs in the Enzo-p frontend. See PR 2150
<https://github.com/yt-project/yt/pull/2150>.
- Fix issues reading ramses data on Windows. See PR 2152
<https://github.com/yt-project/yt/pull/2152>.
Hi, Everybody!
I'm trying to run a parallel projection on a 1024^3 simulation, but I keep
getting out-of-memory errors. Serial works fine. This is all on Stampede, on an
interactive node. My script is direct from the website, and is below. Is
there something dumb I'm doing?
Thanks!
I do
$ibrun -np 2 python parallel_test.py
and parallel_test.py contains
"""
from mpi4py import MPI
import yt
yt.enable_parallelism()
ds = yt.load("/scratch/00369/tg456484/Paper49d_moresims/ze01_M10_MA1_1024_quan/DD0095/data0095")
p = yt.ProjectionPlot(ds, "x", "density")
p.save()
"""
my output then looks like:
000 yt : [INFO ] 2019-02-22 13:00:41,879 Gathering a field list (this
may take a moment.)
File "parallel_test.py", line 13, in <module>
p = yt.ProjectionPlot(ds, "x", "density")
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/visualization/plot_window.py", line 1480, in __init__
    max_level=max_level)
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/data_objects/construction_data_containers.py", line 270, in __init__
    self.get_data(field)
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/data_objects/construction_data_containers.py", line 334, in get_data
    self._initialize_chunk(chunk, tree)
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/geometry/geometry_handler.py", line 271, in cached_func
    tr = func(self)
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/geometry/geometry_handler.py", line 332, in icoords
    ci = np.empty((self.data_size, 3), dtype='int64')
P006 yt : [ERROR ] 2019-02-22 13:03:36,145 MemoryError:
File "parallel_test.py", line 13, in <module>
p = yt.ProjectionPlot(ds, "x", "density")
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/visualization/plot_window.py", line 1480, in __init__
    max_level=max_level)
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/data_objects/construction_data_containers.py", line 270, in __init__
    self.get_data(field)
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/data_objects/construction_data_containers.py", line 334, in get_data
    self._initialize_chunk(chunk, tree)
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/data_objects/construction_data_containers.py", line 401, in _initialize_chunk
    icoords = chunk.icoords
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/data_objects/data_containers.py", line 1555, in icoords
    return self._current_chunk.icoords
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/geometry/geometry_handler.py", line 271, in cached_func
    tr = func(self)
  File "/home1/00369/tg456484/local-yt-2019-02-22-py3/yt-conda/lib/python3.7/site-packages/yt/geometry/geometry_handler.py", line 332, in icoords
    ci = np.empty((self.data_size, 3), dtype='int64')
P007 yt : [ERROR ] 2019-02-22 13:03:36,145 MemoryError:
P002 yt : [ERROR ] 2019-02-22 13:03:36,200 Error occured on rank 2.
P004 yt : [ERROR ] 2019-02-22 13:03:36,200 Error occured on rank 4.
P000 yt : [ERROR ] 2019-02-22 13:03:36,200 Error occured on rank 0.
P001 yt : [ERROR ] 2019-02-22 13:03:36,200 Error occured on rank 1.
P003 yt : [ERROR ] 2019-02-22 13:03:36,200 Error occured on rank 3.
P005 yt : [ERROR ] 2019-02-22 13:03:36,200 Error occured on rank 5.
P006 yt : [ERROR ] 2019-02-22 13:03:36,200 Error occured on rank 6.
P007 yt : [ERROR ] 2019-02-22 13:03:36,200 Error occured on rank 7.
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 2
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 3
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 5
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 6
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 7
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 4
TACC: MPI job exited with code: 1
TACC: Shutdown complete. Exiting.
--
-- Sent from a computer.
Hello all!
I am trying to find the highest-resolution regions in an ART simulation. I have tried using the method outlined here http://yt-project.org/doc/analyzing/filtering.html under cut regions. I have attempted to use a filter on the 'grid_level' field to select only regions where the grid level is 12, and apply it. I have a feeling I am making a mistake with the yt plotting functions.
Here is my sample code I am using, which I just modified from the filtering page. https://bpaste.net/show/1c58d3137d7f
And here is the error I am getting: https://bpaste.net/show/8cb6c1c61f49
Now the grid_ad is filled with a bunch of level-12 grids, so I am not sure how the result ends up full of NaNs. Is it something I am doing wrong? I can attach the generated images as well, but am not sure of the best way to do that.
Thanks!
Hi,
I am using YT to process the outputs of a non-standard RAMSES-RT simulation (non-standard in the sense that it uses custom particle/hydro variables), and it works excellently for almost everything. However I am now trying to make plots of the surface star formation rate density, and have hit an obstacle which I can't seem to surmount.
Specifically I want to estimate the SFRD by looking at all star particles that formed recently, which requires correctly processing the particle birth times. This seems to be a tricky business in RAMSES, as depending on the type of the simulation run (cosmological/RT/etc) the particle time units are different. Luckily for this particular case I have separate output files which contain the star particle formation times in the correct physical units (e.g. Gyr). What I would like to do is load these in, override the "star_age" field of the star particles, and then use that to filter out the star particles that formed recently. I have tried this and it seems to work well, up until I want to deposit this filtered particle field. At this point the code fails with the NeedsGridType and YTIllDefinedFilter exceptions.
Here's a minimal working example:
################################################################################
#!/usr/bin/env python3
import yt
from yt.data_objects.particle_filters import add_particle_filter
from yt import YTArray
import numpy as np
from scipy.io import FortranFile

# Load outputs of non-standard RAMSES-RT run
particles = [("particle_birth_time", "d"), ("particle_metallicity", "d"),
             ("particle_initial_mass", "d")]
fields = ["Density", "x-velocity", "y-velocity", "z-velocity", "nontp",
          "Pressure", "Metallicity", "HI", "HII", "HeII", "HeIII",
          "ivar_ref"]
ds = yt.load("output_00050/info_00050.txt",
             fields=fields, extra_particle_fields=particles)

# Add initial filter to distinguish star particles
def _stars(pfilter, data):
    return data[("io", "conformal_birth_time")] != 0  # star particles

add_particle_filter("stars", function=_stars, filtered_type="all",
                    requires=["conformal_birth_time"])
ds.add_particle_filter("stars")

# Get star formation times in Gyr
# (Based on pynbody's get_tform, read in star formation time from converted
# output)
def convert_times():
    ages = np.array([])
    base = "output_00050/birth_00050."
    ncpu = 1024
    nstar = 0
    for i in range(ncpu):
        fname = base + "out{:05d}".format(i + 1)
        with FortranFile(fname) as birth_file:
            read_ages = birth_file.read_reals(np.float64)
            new = np.where(read_ages > 0)[0]  # star particles only
            nstar += len(new)
            ages = np.append(ages, read_ages[new])
    return nstar, ages

nstar, stellar_ages = convert_times()

# Override existing ("stars", "star_age") field with correct values
def _stellar_age(field, data):
    return data.ds.current_time - YTArray(stellar_ages, "Gyr")

ds.add_field(("stars", "star_age"), function=_stellar_age,
             units="Myr", particle_type=True, force_override=True)

# Filter stars born within past 10 Myr
def _new_stars(pfilter, data):
    age = data[(pfilter.filtered_type, "star_age")].in_units("Myr")
    mask = np.logical_and(age.in_units("Myr") <= 10, age > 0.0)
    return mask

add_particle_filter("new_stars", function=_new_stars, filtered_type="stars",
                    requires=["star_age"])
ds.add_particle_filter("new_stars")

# Create new deposit field with estimate of recent star formation rate density
def _star_formation(field, data):
    stellar_density = data[("deposit", "new_stars_density")]
    stellar_age = ds.quan(10, "Myr")
    return stellar_density / stellar_age

ds.add_field(("deposit", "sfr"), function=_star_formation,
             units="Msun/kpc**3/yr", sampling_type="cell")

# Define a region of interest to project over
centre = ds.arr([0.5, 0.5, 0.5], "code_length")
fwidth = ds.quan(0.1, "code_length")
left_edge = centre - (fwidth/2)*1.1
right_edge = centre + (fwidth/2)*1.1
box = ds.box(left_edge=left_edge, right_edge=right_edge)

proj = ds.proj(field=("deposit", "sfr"), axis="z", weight_field=None,
               center=centre, data_source=box)
################################################################################
The short summary of the error output is as follows:
################################################################################
yt.fields.field_exceptions.NeedsGridType: (0, None)
During handling of the above exception, another exception occurred:
yt.utilities.exceptions.YTIllDefinedFilter: Filter '<yt.data_objects.particle_filters.ParticleFilter object at 0x7f04591b2390>' ill-defined. Applied to shape (0, 3) but is shape (14150626,).
################################################################################
I was wondering if I am making a mistake somewhere, or if perhaps I'm simply going about this in the wrong way? The issue seems to revolve around using data[("deposit", "new_stars_density")]; if for example I instead use data[("deposit", "stars_density")] in the _star_formation() function then it all works fine.
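One possible contributor, offered as a guess rather than a diagnosis: yt evaluates field functions chunk by chunk, and _stellar_age above always returns an array with one entry per star in the whole simulation, regardless of the chunk it is handed. A pure-numpy sketch of the resulting shape mismatch:

```python
import numpy as np

# One age per star in the *whole* simulation, loaded externally
global_ages = np.linspace(0.0, 13.0, 100)

def stellar_age_field(chunk):
    # Mimics _stellar_age above: ignores the chunk it was handed and
    # returns the full-length global array
    return 13.8 - global_ages

chunk = np.empty(0)                 # a chunk containing zero particles
ages = stellar_age_field(chunk)
mask = ages < 10.0                  # filter mask built from the field
# mask has 100 entries but the chunk has 0, much like the (0, 3) vs
# (14150626,) shapes in the YTIllDefinedFilter message
print(mask.shape, chunk.shape)
```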
Many thanks in advance for any help!
Lewis
Hey everyone,
I would like to calculate the average metallicity between two
temperature values, i.e. the average metallicity between T = 200 K and T
= 1000 K. Is there a way to do this? Thanks!
--
------------------------------------------------------------------------
Joseph Smidt <josephsmidt(a)gmail.com>
Theoretical Division
P.O. Box 1663, Mail Stop B283
Los Alamos, NM 87545
Office: 505-665-9752
Fax: 505-667-1931
Matt, Nathan,
Thank you so much! However, we would like something more adaptive, to
preserve the simulation's high resolution.
Along these lines, I thought it may help to compute something like
3*v + Lap[v]*dx^2
where Lap[] is the Laplacian operator. I noticed that it is possible to
compute a gradient easily:
>>> g = d.add_gradient_fields(('deposit', 'N-BODY_density'))
>>> g
[('deposit', 'N-BODY_density_gradient_x'), ('deposit',
'N-BODY_density_gradient_y'), ('deposit', 'N-BODY_density_gradient_z'),
('deposit', 'N-BODY_density_gradient_magnitude')]
However, when I try to differentiate the gradient one more time, I get
an error:
>>> ggx = d.add_gradient_fields(('deposit', 'N-BODY_density_gradient_x'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gnedin/ART/yt/yt/data_objects/static_output.py", line
1391, in add_gradient_fields
units = self.field_info[ftype, input_field].units
File "/home/gnedin/ART/yt/yt/fields/field_info_container.py", line
330, in __missing__
raise KeyError("No field named %s" % (key,))
KeyError: "No field named ('deposit', 'N-BODY_density_gradient_x')"
What am I doing wrong?
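Not an answer to the KeyError, but the quantity above can also be computed directly with numpy once the density has been deposited onto a uniform grid (e.g. a covering_grid or arbitrary_grid). A sketch, with the central-difference stencil choice mine:

```python
import numpy as np

def laplacian(field, dx):
    """Central-difference Laplacian of a 3-D field on a uniform grid."""
    lap = np.zeros_like(field)
    for axis in range(3):
        lap += np.gradient(np.gradient(field, dx, axis=axis), dx, axis=axis)
    return lap

def smoothed(field, dx):
    """The 3*v + Lap[v]*dx^2 combination suggested above."""
    return 3.0 * field + laplacian(field, dx) * dx**2
```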
n
On 2/1/19 11:19 AM, Nathan Goldbaum wrote:
> I would also suggest looking into the arbitrary_grid data object, which
> will allow you to create a grid specifically for doing the deposit
> operation at the level of resolution you desire, decoupling the deposit
> operation from the AMR grid:
>
> Create a 20 kpc^3 grid centered on the center of the simulation, with a
> resolution of 64^3, and deposit all of the particles onto it:
>
> In [8]: grid = ds.arbitrary_grid(ds.domain_center - 10*yt.units.kpc,
> ds.domain_center + 10*yt.units.kpc, [64, 64, 64])
>
> In [9]: grid['deposit', 'all_density'].shape
> Out[9]: (64, 64, 64)
>
> On Fri, Feb 1, 2019 at 11:13 AM Matthew Turk <matthewturk(a)gmail.com
> <mailto:matthewturk@gmail.com>> wrote:
>
> Hi Nick and Hanjue,
>
> From any given selector object, you can specify the max level data
> will be drawn from; this should work with particle data in the ARTIO
> frontend, but because it may not overlap explicitly with the
> indexing system you should double check.
>
> An example:
>
> dd = ds.all_data()
> dd.max_level = 5
>
> That will restrict up to and including level 5 data.
>
> On Fri, Feb 1, 2019 at 11:11 AM Nick Gnedin <gnedin(a)fnal.gov
> <mailto:gnedin@fnal.gov>> wrote:
>
>
> Folks,
>
> We would like to use dark matter density in one of our simulations,
>     however the built-in derived field ('deposit', 'N-BODY_density')
> is too
> noisy for our purposes. We can think of two ways to make it
> smoother: 1)
> to reduce the max refinement level of the underlying grid or 2) use
> SPH-like averaging on particles first before depositing them on
> the grid.
>
> Since we are newbies, could someone give us a few hints on how to
> proceed with one or both of these approaches?
>
> Many thanks,
>
> Nick
> Hanjue
> _______________________________________________
> yt-users mailing list -- yt-users(a)python.org
> <mailto:yt-users@python.org>
> To unsubscribe send an email to yt-users-leave(a)python.org
> <mailto:yt-users-leave@python.org>
>
> _______________________________________________
> yt-users mailing list -- yt-users(a)python.org
> <mailto:yt-users@python.org>
> To unsubscribe send an email to yt-users-leave(a)python.org
> <mailto:yt-users-leave@python.org>
>
Folks,
We would like to use dark matter density in one of our simulations,
however the built-in derived field ('deposit', 'N-BODY_density') is too
noisy for our purposes. We can think of two ways to make it smoother: 1)
to reduce the max refinement level of the underlying grid or 2) use
SPH-like averaging on particles first before depositing them on the grid.
Since we are newbies, could someone give us a few hints on how to
proceed with one or both of these approaches?
Many thanks,
Nick
Hanjue