Hi, Everybody!
Does anyone out there have a technique for getting the variance out of
a profile object? A profile object is good at getting <X> vs. B; I'd
then like to get < (X - <X>)^2 > vs. B. Matt and I had spitballed the
possibility some time ago, but I was wondering if anyone out there had
successfully done it.
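One possible workaround, if no built-in variance support exists: use the identity <(X - <X>)^2> = <X^2> - <X>^2 and build two weighted profiles with the same weight field, one of X and one of X^2. Here is a minimal numpy sketch of the binning arithmetic only (the field values, bins, and weights are synthetic stand-ins, not yt API calls):

```python
import numpy as np

# Synthetic stand-ins for a bin field B, a profiled field X, and weights
# (e.g. cell mass); in yt these would come from a data object.
rng = np.random.default_rng(0)
b = rng.uniform(0.0, 1.0, 10000)       # bin field B
x = b + rng.normal(0.0, 0.1, 10000)    # profiled field X
w = rng.uniform(0.5, 1.5, 10000)       # profile weights

bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(b, bins) - 1

def wmean(vals):
    # Weighted mean of vals in each B bin -- what a weighted profile returns
    return np.array([np.average(vals[idx == i], weights=w[idx == i])
                     for i in range(len(bins) - 1)])

mean_x = wmean(x)                # <X> vs. B
mean_x2 = wmean(x ** 2)          # <X^2> vs. B
var_x = mean_x2 - mean_x ** 2    # <(X - <X>)^2> vs. B
```

In yt terms, the <X^2> profile could come from a derived field that squares X, profiled with the same weight field as the <X> profile.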
Thanks,
d.
--
Sent from my computer.
Hi all,
I'm trying to calculate the total number of photons crossing a spherical
surface using FLASH simulation data. I noticed yt has a surface extraction
method, and I used this to extract the spherical surface. I only have the
flux of photons in each cell and therefore need to multiply by the
cross-sectional area of each cell to get what I need.
I have a couple questions about this surface extraction method:
- What exactly does it return?
- Can I get the cross-sectional area of each cell on the surface (as seen
from the center of the simulation volume)?
- If not, is there a better way to do this using yt?
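I'm not certain what per-facet information the extracted surface object exposes beyond the triangulation itself, but if you can get the triangle vertices out (the surface is returned as a triangulated mesh), the cross-sectional area each facet presents to the domain center is just the triangle's oriented area projected onto the radial direction. A hedged numpy sketch, assuming you already have an (N, 3, 3) vertex array:

```python
import numpy as np

def projected_areas(triangles, center):
    """Per-triangle area projected perpendicular to the radial direction
    from `center`, i.e. the cross-section each facet presents to it.

    triangles : (N, 3, 3) array of vertex coordinates
    center    : (3,) coordinates of the viewpoint
    """
    v0, v1, v2 = triangles[:, 0], triangles[:, 1], triangles[:, 2]
    # Oriented area vector of each triangle: 0.5 * (v1 - v0) x (v2 - v0)
    area_vec = 0.5 * np.cross(v1 - v0, v2 - v0)
    # Unit radial direction from the center to each triangle centroid
    centroid = triangles.mean(axis=1)
    r = centroid - np.asarray(center)
    r_hat = r / np.linalg.norm(r, axis=1, keepdims=True)
    # Projected (cross-sectional) area is |A . r_hat|
    return np.abs(np.einsum("ij,ij->i", area_vec, r_hat))

# Example: a unit square (two triangles) in the z=1 plane, viewed from origin
tris = np.array([[[0, 0, 1], [1, 0, 1], [0, 1, 1]],
                 [[1, 0, 1], [1, 1, 1], [0, 1, 1]]], dtype=float)
areas = projected_areas(tris, center=[0.0, 0.0, 0.0])
```

Summing flux-per-cell times these projected areas would then approximate the total crossing rate; check the yt docs for the exact attribute names on the surface object.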
Cheers and thanks for the help,
Corey
Hi all,
I'm getting a weird failure in the annotate particles callback that I'm
having trouble reproducing on another machine.
Here is the error message output I'm getting:
http://paste.yt-project.org/show/7048/ . I'm just doing a simple:
>>> sp = yt.SlicePlot(ds, 'z', 'density')
>>> sp.annotate_particles(0.9)
>>> sp.save()
This error occurs only on a machine where I have yt installed from source
(version 3.3.4, changeset 6e21d8723012 of the stable branch; this also
occurred on changeset 7f76a9ccfcca). I could not reproduce the error on a
local machine where I've installed yt using anaconda (yt version 3.3.4). It
fails on a ProjectionPlot as well.
Any thoughts on what could be causing this?
Thanks,
Andrew
--
NSF Graduate Fellow
Columbia University
Department of Astronomy
Dear All,
I am trying to create a HaloCatalog with the following code.
import yt
from yt.analysis_modules.halo_analysis.api import HaloCatalog
data_ds = yt.load('../RD0057/RedshiftOutput0057')
hc = HaloCatalog(data_ds=data_ds, finder_method='hop',
finder_kwargs={"threshold": 500.0, "dm_only": False, "ptype": "all",
"padding": 0.02})
hc.add_filter("quantity_value", "particle_mass", ">", 5e12, "Msun")
hc.create()
With this, I am able to create the catalog. But when I try to load it, I
am unable to get almost all of the fields. Here I am copying the error
message.
File "tra_flux_on_earth.py", line 161, in <module>
hc = HaloCatalog(halos_ds=halos)
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/analysis_modules/halo_analysis/halo_catalog.py",
line 126, in __init__
halos_ds.index
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/data_objects/static_output.py",
line 424, in index
self.create_field_info()
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/data_objects/static_output.py",
line 481, in create_field_info
self.field_info.load_all_plugins()
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/field_info_container.py",
line 279, in load_all_plugins
self.find_dependencies(loaded)
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/field_info_container.py",
line 292, in find_dependencies
deps, unavailable = self.check_derived_fields(loaded)
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/field_info_container.py",
line 362, in check_derived_fields
fd = fi.get_dependencies(ds = self.ds)
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/derived_field.py",
line 178, in get_dependencies
e[self.name]
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/field_detector.py",
line 99, in __missing__
vv = finfo(self)
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/derived_field.py",
line 204, in __call__
dd = self._function(self, data)
File "tra_flux_on_earth.py", line 72, in _fact
tempo = Obs_freq/(1.6e6*data["MagField"]/for_mag)
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/field_detector.py",
line 99, in __missing__
vv = finfo(self)
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/derived_field.py",
line 204, in __call__
dd = self._function(self, data)
File "tra_flux_on_earth.py", line 36, in _magfeed
mag = (0.05*4.0*pie*data["density"]*(data["TurbVel"]**2))**0.5
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/fields/field_detector.py",
line 89, in __missing__
finfo = self.ds._get_field_info(*field)
File
"/home/john/anaconda3/lib/python3.6/site-packages/yt/data_objects/static_output.py",
line 666, in _get_field_info
raise YTFieldNotFound((ftype, fname), self)
yt.utilities.exceptions.YTFieldNotFound: Could not find field '('all',
'density')' in catalog.0.h5.
What could be the problem?
--
Reju Sam John
Dear All,
I am very new to yt 3.3's HaloCatalog functionality. With the following
piece of code, I am able to get particle mass and virial radius.
import yt
from yt.analysis_modules.halo_analysis.api import HaloCatalog
data_ds =
yt.load('/run/media/john/Seagate_Expansion_Drive/cosmo-sim_20/RD0057/RedshiftOutput0057')
hc = HaloCatalog(data_ds=data_ds, finder_method='hop')
hc.create()
ad = hc.halos_ds.all_data()
pm = ad['particle_mass'][:]
vr = ad['virial_radius'][:]
I would like to calculate the center of mass of the halo. I tried the
following line
com = ad['center_of_mass'][:]
since, like virial_radius(), center_of_mass() was available in yt 2.6.
But this is not working.
Could you please suggest how to calculate the center of mass of a halo
in yt 3.3.3?
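If center_of_mass is no longer provided as a catalog field in yt 3.3, one fallback is to compute it directly from the member particles: com = sum(m_i * x_i) / sum(m_i). A minimal numpy sketch of that arithmetic (the yt field names in the comment are assumptions for illustration, not verified against yt 3.3):

```python
import numpy as np

# Placeholder particle data; in yt these could come from a sphere around
# the halo, e.g. sp = data_ds.sphere(halo_center, virial_radius), with
# m = sp['all', 'particle_mass'] and pos = sp['all', 'particle_position']
# (hypothetical names for this sketch).
rng = np.random.default_rng(1)
m = rng.uniform(1.0, 2.0, 500)           # particle masses
pos = rng.normal(0.5, 0.05, (500, 3))    # particle positions

# Center of mass: sum_i m_i x_i / sum_i m_i, computed component-wise
com = (m[:, None] * pos).sum(axis=0) / m.sum()
```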
--
Reju Sam John
Hello YT-community,
I have been trying to use yt to analyze the output of a GADGET-2
simulation stored as unformatted Fortran binary. It consists of a snapshot
named snapshot_068 which is divided into 1024 subfiles named
snapshot_068.0, snapshot_068.1, and so on. Whenever I load this with
yt I get the following error message, and I have no idea how to fix it.
I have also tried reading only one subfile of this multi-part snapshot,
but with no success. I am relying on the community to help me in this
regard.
My code:
fname = 'snapdir_068/snapshot_068'
ds = yt.load(fname)
Error displayed:
yt : [ERROR ] 2017-02-22 21:55:34,587 None of the arguments provided to
load() is a valid file
yt : [ERROR ] 2017-02-22 21:55:34,587 Please check that you have used a
correct path
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File
"/home/alankar/anaconda3/lib/python3.5/site-packages/yt/convenience.py",
line 76, in load
raise YTOutputNotIdentified(args, kwargs)
yt.utilities.exceptions.YTOutputNotIdentified: Supplied ('snapshot_068',)
{}, but could not load!
# Trying to read only one subfile of the multi-part snapshot
My code:
fname = 'snapdir_068/snapshot_068.0'
ds = yt.load(fname)
Error displayed:
yt : [ERROR ] 2017-02-22 21:57:17,625 Couldn't figure out output type
for /media/alankar/Seagate Expansion
Drive/mb2/snapshots/snapdir_068/snapshot_068.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File
"/home/alankar/anaconda3/lib/python3.5/site-packages/yt/convenience.py",
line 98, in load
raise YTOutputNotIdentified(args, kwargs)
yt.utilities.exceptions.YTOutputNotIdentified: Supplied ('snapshot_068.0',)
{}, but could not load!
Cheers,
Alankar
Hi yt users!
For my master research, I'm working on virtual observations from the EAGLE
(gadget based) simulations. Currently I'm trying out pyXSIM for this, which
uses yt to load the data, but I'm having some problems.
First, yt only loads the non-smoothed element abundances, as fields
("PartType0", (element)), while the EAGLE data has two fields:
PartType0/SmoothedElementAbundance/(element) and
PartType0/ElementAbundance/(element). Does anyone have any idea how to
get the smoothed abundances out?
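While yt only exposes the unsmoothed abundances, one workaround could be to read PartType0/SmoothedElementAbundance/(element) directly with h5py, bypassing yt's field detection (this assumes h5py is available and the group layout is exactly as described above). A self-contained sketch that writes a tiny file with that layout and reads it back:

```python
import h5py
import numpy as np

# Build a tiny file mimicking the layout described above (group and
# dataset names are taken from the question, not verified against EAGLE)
with h5py.File("mini_eagle.hdf5", "w") as f:
    grp = f.create_group("PartType0/SmoothedElementAbundance")
    grp.create_dataset("Oxygen", data=np.full(4, 0.006))

# Read the smoothed abundances back directly, bypassing yt's field list
with h5py.File("mini_eagle.hdf5", "r") as f:
    smoothed_oxygen = f["PartType0/SmoothedElementAbundance/Oxygen"][:]
```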
Also, I'm wondering: I have a script that selects the subhalos you want
from the EAGLE data, and then saves the particle data you need
(temperature, density, etc.) of only the particles in those subhalos to an
hdf5 file. However, when I try to load this data with yt, it recognises it
as gadget data and then expects more than one file to be present, I guess
because usually there is more than one snapshot file. E.g. if my file is
called partialdata.0.hdf5, yt starts to look for partialdata.1.hdf5 (if I
just call it partialdata.hdf5, it still looks for partialdata.0.hdf5).
However, in GADGET the number of output files can be chosen freely, so you
could also choose to save your entire snapshot in one hdf5 file. So it
seems that in yt there should be a way to specify the number of files
you're dealing with... Does anyone know how to solve this problem? I think
yt recognises the EAGLE files as OWLS data (they are very similar); it may
have something to do with that.
The reason I want to select my subhalos before loading the data is that
when I load the entire dataset (a 100 Mpc box, 1504^3 particles) and then
select e.g. a slice, I think that when creating a derived field yt
allocates memory (or something like that) for the entire dataset, and not
just for the slice I have selected. Is this true?
Cheers and thanks in advance,
Charlotte Brand
Hi Nathan,
For a single profile plot, it's straightforward to save it as a yt dataset
and reload it later:
#### Begin ####
ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150')
ad = ds.all_data()
prof = yt.create_profile(ad,["z"],fields=["velocity_z"], n_bins=100,
logs={"z":False})
fn = prof.save_as_dataset()
# restore...
prof_ds = yt.load(fn)
prof_ds_plot = yt.ProfilePlot.from_profiles([prof_ds.profile], y_log=
{"velocity_z":False})
#### End ####
The above workflow breaks down when one has several different profiles to
generate from a time series. One can manage it via the filenames, but this
seems prone to error and inefficient.
Here's a sketch of what I'd like to do, with the time series processing
removed for clarity:
#### Begin ####
import yt
import gzip
import cPickle
ds = yt.load('GasSloshing/sloshing_nomag2_hdf5_plt_cnt_0150')
ad = ds.all_data()
prof = yt.create_profile(ad,["z"],fields=["velocity_z"], n_bins=100,
logs={"z":False})
fname = 'test_pkl.gz'
## Here's where things break down ##
# The following does NOT work because prof is a function object
with gzip.open(fname, 'wb') as f:
    cPickle.dump(prof, f)
# TypeError: can't pickle function objects
# Resuming...
# From here, I'd like to load the profile object from the pickle file.
# If the profile prof were a regular object, I would be able to do the following:
with gzip.open(fname, 'rb') as f:
    prof_pkl = cPickle.load(f)
# With the time series processing, there would be several profiles available
# for plotting on the same set of axes, but for the purposes of this example,
# I'm only including the one profile loaded above:
prof_plot = yt.ProfilePlot.from_profiles([prof_pkl],
y_log={"velocity_z":False},
labels=['aa'])
#### End ####
As a stopgap, I'll just save all the critical bits of the profile object
(e.g. prof.field_data, prof.x, prof.x_bins, etc.) to a dictionary,
then pickle the dictionary.
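A sketch of that dictionary-of-arrays approach, which sidesteps the unpicklable profile object entirely (the numpy arrays here are stand-ins for prof.x, prof.x_bins, and prof["velocity_z"]; the snapshot names are placeholders):

```python
import gzip
import pickle  # cPickle on Python 2
import numpy as np

# One dict of plain arrays per profile, keyed by snapshot name
profiles = {}
for snap in ["plt_cnt_0150", "plt_cnt_0151"]:
    profiles[snap] = {
        "x": np.linspace(0.05, 0.95, 10),     # stand-in for prof.x
        "x_bins": np.linspace(0.0, 1.0, 11),  # stand-in for prof.x_bins
        "velocity_z": np.zeros(10),           # stand-in for prof["velocity_z"]
    }

# All profiles from the whole time series go into one compressed file
with gzip.open("profiles_pkl.gz", "wb") as f:
    pickle.dump(profiles, f)

# Later: restore every profile from the single file
with gzip.open("profiles_pkl.gz", "rb") as f:
    restored = pickle.load(f)
```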
Is there a better way?
Thanks,
Jason
On Sat, Feb 18, 2017 at 2:05 PM, <yt-users-request(a)lists.spacepope.org>
wrote:
> ---------- Forwarded message ----------
> From: Jason Galyardt <jason.galyardt(a)gmail.com>
> To: Discussion of the yt analysis package <yt-users(a)lists.spacepope.org>
> Cc:
> Date: Fri, 17 Feb 2017 15:58:27 -0500
> Subject: [yt-users] Profile data
> Dear yt Users,
>
> Does anyone know of a way to export the data of a profile object to a byte
> stream? Specifically, I'd like to be able to store several byte streams
> from several different simulation files into a single dictionary object
> (for example), and then pickle the dictionary. I'd like to then unpickle
> the dictionary object at a later time for plotting, etc.
>
> I have figured out a way to do this, but it involves saving each profile
> to a temporary file, reading the temp file into a byte stream, and then
> pickling the byte stream; then to reload, I would have to unpickle the byte
> stream, write it to a temporary file, then use yt.load() to get the profile
> object back. This works, but boy, it's inefficient. Dealing with a whole
> bunch of separate files generated by profile.save_as_dataset() is
> inefficient in a different way. Any ideas on streamlining this workflow?
>
> Thanks,
> Jason
>
> ------
> Jason Galyardt
> University of Georgia
>
>
>
> ---------- Forwarded message ----------
> From: Nathan Goldbaum <nathan12343(a)gmail.com>
> To: Discussion of the yt analysis package <yt-users(a)lists.spacepope.org>
> Cc:
> Date: Fri, 17 Feb 2017 15:01:21 -0600
> Subject: Re: [yt-users] Profile data
>
>
> On Fri, Feb 17, 2017 at 2:58 PM, Jason Galyardt <jason.galyardt(a)gmail.com>
> wrote:
>
>> Dear yt Users,
>>
>> Does anyone know of a way to export the data of a profile object to a
>> byte stream? Specifically, I'd like to be able to store several byte
>> streams from several different simulation files into a single dictionary
>> object (for example), and then pickle the dictionary. I'd like to then
>> unpickle the dictionary object at a later time for plotting, etc.
>>
>> I have figured out a way to do this, but it involves saving each profile
>> to a temporary file, reading the temp file into a byte stream, and then
>> pickling the byte stream; then to reload, I would have to unpickle the byte
>> stream, write it to a temporary file, then use yt.load() to get the profile
>> object back. This works, but boy, it's inefficient. Dealing with a whole
>> bunch of separate files generated by profile.save_as_dataset() is
>> inefficient in a different way. Any ideas on streamlining this workflow?
>>
>
> I'm not sure I fully understand what you're trying to do. A code example,
> or an outline of a code example with the part you're confused about left to
> be filled in, would help.
>
> Why not just save the raw profile data for the fields you're interested
> in, e.g. profile[field], which will be a numpy array? Do you need other
> data that's defined on the profile object?
>
>
>> Thanks,
>> Jason
>>
>> ------
>> Jason Galyardt
>> University of Georgia
>>
>>
>> _______________________________________________
>> yt-users mailing list
>> yt-users(a)lists.spacepope.org
>> http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
>>
>>
>
>
>
Dear yt Users,
Does anyone know of a way to export the data of a profile object to a byte
stream? Specifically, I'd like to be able to store several byte streams
from several different simulation files into a single dictionary object
(for example), and then pickle the dictionary. I'd like to then unpickle
the dictionary object at a later time for plotting, etc.
I have figured out a way to do this, but it involves saving each profile to
a temporary file, reading the temp file into a byte stream, and then
pickling the byte stream; then to reload, I would have to unpickle the byte
stream, write it to a temporary file, then use yt.load() to get the profile
object back. This works, but boy, it's inefficient. Dealing with a whole
bunch of separate files generated by profile.save_as_dataset() is
inefficient in a different way. Any ideas on streamlining this workflow?
Thanks,
Jason
------
Jason Galyardt
University of Georgia
Hi All,
My simulation output seems to have no detailed unit information, so I want
to assign cgs units to it. However, when I do the following
---------------------------------------------------------------------
import yt
units_override = {"length_unit":(1.0,"cm"),
"time_unit":(1.0,"s"),
"mass_unit":(1.0,"g"),
'nele_unit': (1.0, "cm**-3"),
"magnetic_unit":(1.0,"gauss")}
ds = yt.load('flash_hdf5_plt_cnt_0000', units_override=units_override)
print ds.point([0,0,0])['magz']
print ds.point([0,0,0])['nele']
---------------------------------------------------------------------
The output is:
---------------------------------------------------------------------
[ 0.] code_magnetic
[ 2.87595814e+15] code_length**(-3)
---------------------------------------------------------------------
How can I get:
---------------------------------------------------------------------
[ 0.] gauss
[ 2.87595814e+15] cm**(-3)
---------------------------------------------------------------------
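For what it's worth, the conversion itself is simple arithmetic once the code units are known. Assuming units_override really does map 1 code_length to 1 cm, and noting (as an assumption, not verified) that a custom key like 'nele_unit' may not be honored by yt at all, a field stored in code_length**-3 converts as:

```python
# Convert a value stored in code_length**-3 to cm**-3, given the length
# unit from units_override above (1 code_length = length_unit_cm cm).
length_unit_cm = 1.0            # units_override["length_unit"] = (1.0, "cm")
nele_code = 2.87595814e+15      # the value printed in code_length**-3

# 1 code_length**-3 == length_unit_cm**-3 cm**-3
nele_cgs = nele_code * length_unit_cm ** -3
```

With length_unit_cm = 1.0 the numerical value is unchanged; only the label differs.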
Thanks,
Yingchao