Hi, Everybody!
Does anyone out there have a technique for getting the variance out of
a profile object? A profile object is good at getting <X> vs. B; I'd
then like to get <(X - <X>)^2> vs. B. Matt and I had spitballed the
possibility some time ago, but I was wondering if anyone out there had
successfully done it.
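To be concrete, the quantity I'm after is the weighted second moment. With the raw binned data in hand it would just be the following (plain-numpy stand-in, not a real profile API call; I've also seen mention of a standard_deviation attribute on weighted profiles in newer yt, but I haven't verified that):

```python
import numpy as np

# Stand-ins for the contents of one profile bin: field values X and
# the profile's weights w (e.g. cell masses).
X = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 1.0, 2.0, 2.0])

mean = np.sum(w * X) / np.sum(w)               # <X>
var = np.sum(w * (X - mean) ** 2) / np.sum(w)  # <(X - <X>)^2>

# Equivalently, one could build two weighted profiles, of X and of X**2,
# and combine them afterward: <(X - <X>)^2> = <X^2> - <X>^2
assert np.isclose(var, np.sum(w * X * X) / np.sum(w) - mean ** 2)
```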
Thanks,
d.
--
Sent from my computer.
Dear yt
Can current yt calculate 3-D mass power spectra? I checked the website but
I didn't find any information. I think calculating 3-D mass power
spectra would be very useful for cosmological simulations, so I guess maybe
yt supports this function now?
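To clarify what I mean, here is the calculation sketched with plain numpy on a uniform grid: FFT the density contrast and average |delta_k|^2 in spherical shells of k. (In yt I imagine the grid would come from ds.covering_grid; that part is my assumption.)

```python
import numpy as np

def power_spectrum_3d(rho, nbins=8):
    """Spherically averaged power spectrum of delta = rho/<rho> - 1."""
    delta = rho / rho.mean() - 1.0
    dk = np.fft.fftn(delta) / delta.size          # normalized Fourier modes
    power = np.abs(dk) ** 2                       # 3-D power
    k = np.fft.fftfreq(rho.shape[0])              # per-axis frequencies
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    bins = np.linspace(0.0, kmag.max(), nbins + 1)
    idx = np.clip(np.digitize(kmag, bins) - 1, 0, nbins - 1)
    pk = np.bincount(idx, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    return bins, pk / np.maximum(counts, 1)      # shell-averaged P(k)

# A uniform field has zero density contrast, hence zero power everywhere.
rho = np.ones((8, 8, 8))
bins, pk = power_spectrum_3d(rho)
```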
Thanks in advance
Dear yt-users,
Hi, I want to know about the function "extract_connected_sets". I want to
extract supernova-enriched bubbles (which I define as Z > -4 or something)
and analyze their size, mass, or other values. I'm working with a
cosmological simulation run with Enzo, and there are many SN-enriched
bubbles. What I need is a set of "bubble regions", probably as YTRegions.
For this purpose, "extract_connected_sets" seemed like the perfect
function, but I don't understand what the returned object is or how I can
analyze it, because when I tried a projection plot of the objects,
nothing appeared.
Could you tell me how I can use "extract_connected_sets"? Or is there any
other way to extract SN-enriched regions?
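For reference, here is roughly what I tried, sketched from the docstring as I understand it (the field name, thresholds, and the structure of the return value are my guesses):

```python
import yt

ds = yt.load("DD0040/data0040")  # placeholder Enzo output name
ad = ds.all_data()

# Ask for 3 contour levels between two thresholds of the chosen field;
# my understanding is that each connected set comes back as its own data
# container, so its mass and extent can be queried like any yt object.
levels, sets = ad.extract_connected_sets(("gas", "metallicity"),
                                         3, 1e-4, 1e-1)
for level, bubbles in sets.items():
    for bubble_id, bubble in bubbles.items():
        mass = bubble["gas", "cell_mass"].sum().in_units("Msun")
        print(level, bubble_id, mass)
```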
Best,
YT
Dear yt users,
Hi, I want to ask about star particles in Enzo simulations. I want to know
whether a "star particle filter" only gives me "alive" star particles or
also counts "dead" particles (which have exploded in the past).
I defined the "star particle" with a particle filter like this:
import yt
from yt.data_objects.particle_filters import add_particle_filter

def stars(pfilter, data):
    filter = data[(pfilter.filtered_type, "particle_type")] == 2
    return filter

def DMparticles(pfilter, data):
    filter = data[(pfilter.filtered_type, "particle_type")] == 1
    return filter

yt.add_particle_filter("Stars", function=stars, filtered_type='io',
                       requires=["particle_type"])
yt.add_particle_filter("DMparticles", function=DMparticles,
                       filtered_type='io', requires=["particle_type"])

ds = yt.load("~/IsolatedGalaxy/galaxy0030/galaxy0030")
ds.add_particle_filter('Stars')
ds.add_particle_filter('DMparticles')
ad = ds.all_data()
stellar_mass = ad[("Stars", "particle_mass")].in_units('Msun')
print(stellar_mass)
Does this "stellar_mass" contain both alive and dead stars?
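As a related experiment, I also tried the creation_time-based filter from the yt docs, which (if I understand Enzo's particle fields correctly) selects everything that ever formed as a star, regardless of whether it has since died. Here it is with a fake data container just to show the masking; whether this matches your outputs is worth double-checking:

```python
import numpy as np
from types import SimpleNamespace

def stars_formed(pfilter, data):
    # Enzo star particles have creation_time > 0; this tags every
    # particle that was ever formed as a star, alive or dead
    # (my understanding).
    return data[(pfilter.filtered_type, "creation_time")] > 0

# Quick check with a fake filter object and a dict standing in for a
# yt data container: only the positive creation_time passes.
fake_filter = SimpleNamespace(filtered_type="io")
fake_data = {("io", "creation_time"): np.array([-1.0, 0.0, 2.5])}
mask = stars_formed(fake_filter, fake_data)
```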
Best,
Y.T.
Hi all,
I'm trying to annotate BH particles on top of a regular yt SlicePlot.
I'm loading an Enzo dataset and using yt version 3.3.1; the plot
field and width in parsecs are given as command-line arguments (I tried
density and 3.0). Here is the script I'm working with:
import yt
import sys
from yt.units import pc, Msun, Myr
import numpy as np

yt.enable_parallelism()

# get command line inputs, syntax is [field] [size]
field = sys.argv[1]
try:
    size = float(sys.argv[2])
except (TypeError, IndexError):
    print("Error, argument syntax is 'plot.py [field] [size]'")
    quit(0)

ds = yt.load("DD0121/output_0121")
time = '{:.3f}'.format(round(ds.current_time.in_units('Myr') - 194.608*Myr, 4))
val, loc = ds.find_max('density')
plt = yt.SlicePlot(ds, 'z', field, center=loc, width=(size, 'pc'))
plt.set_zlim(field, 1e-21, 1e-12)
plt.annotate_particles(ds, size*pc, ptype=8)  # produces 1st error
#plt.annotate_particles(ds, size*pc, p_size=2.0, col='k', marker='o',
#    stride=1, ptype=8, minimum_mass=None, alpha=1.0)  # produces 2nd error
plt.annotate_text((0.05, 0.05), 't=' + time + 'Myr', coord_system='axis')
plt.save("test.png")
The script seems to run ok until attempting to save the plot, when I get
the following error:
Traceback (most recent call last):
  File "plot_yt.py", line 35, in <module>
    plt.save("test.png")
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_container.py", line 78, in newfunc
    args[0]._setup_plots()
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_window.py", line 949, in _setup_plots
    self.run_callbacks()
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_window.py", line 1005, in run_callbacks
    sys.exc_info()[2])
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_window.py", line 999, in run_callbacks
    callback(cbw)
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_modifications.py", line 53, in _check_geometry
    return func(self, plot)
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_modifications.py", line 1575, in __call__
    reg = self._get_region((x0,x1), (y0,y1), plot.data.axis, data)
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_modifications.py", line 1639, in _get_region
    LE[zax] = data.center[zax].ndarray_view() - self.width*0.5
yt.utilities.exceptions.YTPlotCallbackError: annotate_particles callback failed with the following error: unsupported operand type(s) for *: 'EnzoDataset' and 'float'
I've tried adding all the keyword arguments and their values to the
annotate_particles call (switching to the commented line in my script),
but this produces:
Traceback (most recent call last):
  File "plot_yt.py", line 35, in <module>
    plt.save("test.png")
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_container.py", line 78, in newfunc
    args[0]._setup_plots()
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_window.py", line 949, in _setup_plots
    self.run_callbacks()
  File "/opt/apps/yt/3.3.1/yt-conda/lib/python2.7/site-packages/yt/visualization/plot_window.py", line 997, in run_callbacks
    callback = CallbackMaker(*args[1:], **kwargs)
TypeError: __init__() got multiple values for keyword argument 'p_size'
Any help on getting this to work would be vastly appreciated. I was also
wondering whether it is possible to use the annotate_particles callback
on OffAxisSlicePlots and OffAxisProjectionPlots; I've tried both and
neither seems to support this functionality.
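Rereading the tracebacks, my best guess at the intended call is the following (a sketch, not verified; the particle type name is a placeholder):

```python
import yt

ds = yt.load("DD0121/output_0121")
val, loc = ds.find_max('density')
plt = yt.SlicePlot(ds, 'z', 'density', center=loc, width=(3.0, 'pc'))

# The first traceback shows self.width being multiplied by a float inside
# the callback, and the first thing I passed was ds, so I suspect the
# first positional argument should be the width (the callback presumably
# gets the dataset from the plot itself). Passing ds *and* p_size would
# then raise the "multiple values" TypeError, since ds fills the p_size
# slot positionally. ptype also seems to want a particle type *name*;
# selecting particle_type == 8 would need a particle filter.
plt.annotate_particles((3.0, 'pc'), p_size=2.0, col='k', marker='o',
                       ptype='all')
plt.save("test.png")
```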
Many thanks,
Sam Patrick
Dear yt-users,
I was making a plot with "radial_velocity" and noticed that it is always
positive, which is odd (there should be both inflow and outflow). The
source code is:
>>> print(ds.field_info["gas","radial_velocity"].get_source())
def _radial(field, data):
    return data[ftype, "%s_spherical_radius" % basename]
I find that quite confusing. It also looks the same as ('gas',
'radial_magnetic_field') according to
http://yt-project.org/doc/reference/field_list.html
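To check my own understanding of the convention: radial velocity should be (v - v_bulk) dotted with the unit radius vector relative to the chosen center, which can certainly be negative. (In yt I believe the sign depends on the center and bulk_velocity field parameters of the data object; that part is my speculation.)

```python
import numpy as np

# v_r = (v - v_bulk) . r_hat, relative to a chosen center (origin here).
pos = np.array([[1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0]])
vel = np.array([[-1.0, 0.0, 0.0],   # moving toward the center
                [0.0, 3.0, 0.0]])   # moving away from the center
v_bulk = np.zeros(3)

r_hat = pos / np.linalg.norm(pos, axis=1)[:, None]
v_r = np.einsum("ij,ij->i", vel - v_bulk, r_hat)
# The first cell should come out negative (inflow), the second positive.
```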
Thank you!
Hi yt-users!
I am trying to add a field that is radius/rvir. This is an idealized
galaxy sim with a static DM potential (no live halo), so I was planning
on putting the virial radius in by hand. I am not totally sure what is
causing yt to choke -- does it not like that I am putting in a number?
See below for details.
Thanks in advance for any help!
Best,
Stephanie
The lines of code:
def rrvir(field, data):
    return data['radius'].in_units('kpc')/(218., 'kpc')

i = 0
while i < len(loop):
    ds = yt.load("blahblah/DD"+loop[i]+"/sb_"+loop[i])
    ds.add_field(('gas', 'r_rvir'), function=rrvir)
    ....many unimportant lines....
I get this error message:
yt : [INFO ] 2018-06-14 09:02:48,783 Gathering a field list (this may take a moment.)
yt_slices_allouts_mli.py:19: UserWarning: Because 'sampling_type' not specified, yt will assume a cell 'sampling_type'
  ds.add_field(('gas','r_rvir'),function=rrvir)
Traceback (most recent call last):
  File "yt_slices_allouts_mli.py", line 19, in <module>
    ds.add_field(('gas','r_rvir'),function=rrvir)
  File "/home/stonnesen/yt-conda/src/yt-git/yt/data_objects/static_output.py", line 1221, in add_field
    deps, _ = self.field_info.check_derived_fields([name])
  File "/home/stonnesen/yt-conda/src/yt-git/yt/fields/field_info_container.py", line 366, in check_derived_fields
    fd = fi.get_dependencies(ds = self.ds)
  File "/home/stonnesen/yt-conda/src/yt-git/yt/fields/derived_field.py", line 210, in get_dependencies
    e[self.name]
  File "/home/stonnesen/yt-conda/src/yt-git/yt/fields/field_detector.py", line 108, in __missing__
    vv = finfo(self)
  File "/home/stonnesen/yt-conda/src/yt-git/yt/fields/derived_field.py", line 250, in __call__
    dd = self._function(self, data)
  File "yt_slices_allouts_mli.py", line 13, in rrvir
    return data['radius'].in_units('kpc')/(300.,'kpc')
  File "/home/stonnesen/yt-conda/src/yt-git/yt/units/yt_array.py", line 1372, in __array_ufunc__
    out=out, **kwargs)
TypeError: ufunc 'true_divide' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
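Looking at the traceback, I suspect the tuple is the problem: data['radius'] is a YTArray, and numpy cannot divide that by the tuple (218., 'kpc'). A plain-numpy stand-in for the fix is below; in the actual field I would divide by data.ds.quan(218., 'kpc') instead, and probably pass sampling_type='cell' to add_field given the warning (both of those are my guesses, not verified):

```python
import numpy as np

# Stand-in for data['radius'].in_units('kpc'):
radius = np.array([109.0, 218.0, 436.0])

# Dividing by the tuple (218., 'kpc') is what the ufunc refuses; dividing
# by a plain scalar (or, in yt, by data.ds.quan(218., 'kpc')) works and
# yields the dimensionless r/rvir:
r_rvir = radius / 218.0
```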
--
Dr. Stephanie Tonnesen
Associate Research Scientist
CCA, Flatiron Institute
New York, NY
stonnes(a)gmail.com
Hi yt-users,
I am putting yt on a new machine, and it looks like I will have to use the
all-in-one install script. I would like to install yt-dev. To do that,
what do I need to set BRANCH= to?
Thanks!
Stephanie
Hi everyone,
I'm working on an analysis of the ~500 outputs I have from a simulation
run, so naturally I want to do it in parallel. The data lives on Pleiades,
where a single node has 32/64 GB depending on the machine you pick.
The general code structure is to take a dataset and compute the fluxes of
multiple quantities binned in radius. Because the outputs are large, I'd
like to load one dataset per node but then use all 16 cores on the node
for the radial flux calculations.
To test my code, I'm using 2 smaller outputs of ~4.5 GB each, so they
should easily fit on one node, but I keep getting memory errors from
Pleiades. The code does run correctly on my laptop. I'm fairly certain
I'm not setting up the code correctly with the different num_proc
keywords, so it's trying to do the calculation on a single core instead
of half the node.
I've posted a pared-down example of my code to pastebin that uses two
outputs from the enzo_cosmology_plus dataset. The code is named
"flux_test_parallel.py" and should run if put inside that dataset
directory. The parallel portion of the code is preceded by a line of #'s. (
http://paste.yt-project.org/show/21/)
Any advice for how to force the parallel structure to use the machine
memory correctly or general pointers for this kind of script would be
really appreciated!
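For context, the structure I've been attempting is something like the following (a sketch; my understanding of the parallel keyword, which may be exactly what I have wrong, is that it sets how many datasets are worked on at once, with the MPI ranks split into that many workgroups):

```python
import yt
yt.enable_parallelism()

# Placeholder output names; with 32 ranks and parallel=2, each dataset
# would (I believe) get a workgroup of 16 cores for its calculation.
fns = ["DD0010/DD0010", "DD0020/DD0020"]
ts = yt.DatasetSeries(fns, parallel=2)
for ds in ts.piter():
    sp = ds.sphere("c", (1.0, "Mpc"))
    # ... radial flux calculation over the sphere ...
```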
Thanks!
Lauren
Britton,
Thank you - I will experiment and report performance. I think requiring
the same number of bins in all profiles is a reasonable restriction, even
if it increases the output size significantly. If someone is concerned
with keeping the output size minimized, they can always write halos
within several mass ranges into separate outputs, with the number of bins
fixed within a single output but varied between mass ranges, and that can
be accomplished straightforwardly with the existing functionality.
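For what it's worth, once the bin count is fixed, the combining step is simple bookkeeping; a plain-numpy sketch (yt.save_as_dataset would then write the stacked arrays in one go, as in the paste):

```python
import numpy as np

# With n_bins fixed, each halo's profile is a 1-D array of length n_bins,
# so all halos stack into one (n_halos, n_bins) array that can be written
# to a single file instead of one file per halo.
n_halos, n_bins = 100, 32
profiles = [np.linspace(0.0, float(i), n_bins) for i in range(n_halos)]
stacked = np.vstack(profiles)
```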
n
On 06/11/2018 11:31 AM, Britton Smith wrote:
> Hi Nick,
>
> Great question, I can see how this could become an issue when halo
> catalogs get large.
>
> I've sketched out a way to do this below that may be slightly hacky, but
> will get the job done.
> http://paste.yt-project.org/show/24/
>
> In the above, I create a new callback that attaches the profiles hanging
> off the halo object to the halo catalog itself, then combine them into
> single arrays at the end and save using yt.save_as_dataset. This will
> work fine if the profiles all have the same number of bins. If not, it
> would probably be better to write to an hdf5 file by hand with a single
> hdf5 group per profile. That said, HDF5 performance is known to degrade
> when the number of groups in a file gets large. In any case, hopefully,
> something like the above will work.
>
> Perhaps in the future, we can make some modifications to the actual code
> to do something like this if it seems to work well.
>
> Britton
>
> On Wed, Jun 6, 2018 at 4:02 AM Nick Gnedin <gnedin(a)fnal.gov
> <mailto:gnedin@fnal.gov>> wrote:
>
>
> I would like to save halo profiles for a large simulation. I understand
> that I can use a simple callback like this:
>
> hc.add_callback("save_profiles", storage="virial_profiles",
> output_dir="profiles")
>
> The problem, however, is that the callback saves each profile as a
> separate h5 file, and for a large simulation the number of files may be
> prohibitive. Instead, I would like to save all of the profiles in a
> single fileset, ideally as attributes of the halo objects.
>
> For example, inside my profiling callback I can do the following:
>
> def hprof_function(halo):
>     ...
>     pv = yt.create_profile(...)
>     setattr(halo, "profs", pv)
>
> However, the hc.create() function does not save all the attributes of
> halo objects, so halo.profs would not be saved.
>
> What would be a proper yt way of storing all the profiles as a single
> dataset?
>
> Thank you,
>
> n
>