Hi, Everybody!
Does anyone out there have a technique for getting the variance out of
a profile object? A profile object is good at getting <X> vs. B; I'd
then like to get <(X - <X>)^2> vs. B. Matt and I had spitballed the
possibility some time ago, but I was wondering if anyone out there had
successfully done it.
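(I believe newer yt profiles expose a standard_deviation attribute, which would be worth checking; failing that, the quantity falls out of two weighted profiles via the identity <(X - <X>)^2> = <X^2> - <X>^2 applied per bin. A plain-numpy sketch of that identity, where B, X, and w are synthetic stand-ins for the profile's binning field, profiled field, and weight field:)

```python
import numpy as np

# Binned variance from two weighted averages: Var[X] = <X^2> - <X>^2.
# B is the binning field, X the profiled field, w the weight (e.g. cell_mass).
rng = np.random.default_rng(0)
B = rng.uniform(0.0, 1.0, 10000)
X = B + 0.1 * rng.standard_normal(10000)
w = np.ones_like(X)

bins = np.linspace(0.0, 1.0, 11)
idx = np.clip(np.digitize(B, bins) - 1, 0, 9)

wsum = np.bincount(idx, weights=w, minlength=10)
mean = np.bincount(idx, weights=w * X, minlength=10) / wsum        # <X> per bin
mean_sq = np.bincount(idx, weights=w * X**2, minlength=10) / wsum  # <X^2> per bin
variance = mean_sq - mean**2                                       # <(X - <X>)^2> per bin
```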
Thanks,
d.
--
Sent from my computer.
Hi all,
I have trouble running yt in parallel on Blue Waters. I installed yt using
miniconda; the version of yt is:
~/miniconda/lib $yt version
yt module located at:
/u/sciteam/madcpf/miniconda/lib/python2.7/site-packages/yt-3.3.dev0-py2.7-linux-x86_64.egg
The current version and changeset for the code is:
---
Version = 3.3-dev
Changeset = 90f900be7a36+ yt
Then, with miniconda/bin in PATH, I installed mpi4py-2.0.0. But when I try
to run the following simple script in parallel, I get:
import yt
yt.enable_parallelism()
from yt.utilities.parallel_tools.parallel_analysis_interface import\
parallel_objects, communication_system
comm = communication_system.communicators[-1]
print comm.rank, comm.size
0 1
0 1
0 1
0 1
0 1
0 1
0 1
0 1
...
When I run a similar script with yt-2.x, also on Blue Waters, I get what I
expect:
7 16
15 16
6 16
9 16
11 16
8 16
0 16
4 16
...
I'm confused by this. Could anyone give me some suggestions, please?
Thanks,
Pengfei
Hi,
I wanted to use the photon simulator for mock X-ray observations of
single cells and of lines of sight through single cells. For this I wanted
to select the cells using a covering_grid, but then I get the following error:
photon_simulator.py is doing
delta_min, delta_max = data_source.quantities.extrema("d%s"%ax)
and the result is
YTDataSelectorNotImplemented: Data selector 'covering_grid' not implemented.
But maybe covering_grid is not the right thing for what I had in mind.
How would you select the cells to pass as data_source to the photon
simulator?
Cheers,
Markus
Hi all,
I'm trying to load in some data from an ASCII VTK file (attached for
reference) that I created following the guidelines here:
http://www.vtk.org/wp-content/uploads/2015/04/file-formats.pdf. I can load
this file into VisIt, and I was hoping that loading it into yt would be as
simple as:
ds = yt.load("lid_driven_cavity_20.vtk")
When I try that, however, I get the error message:
Traceback (most recent call last):
  File "visualization.py", line 3, in <module>
    ds = yt.load("lid_driven_cavity_20.vtk")
  File "C:\Users\Dave\Anaconda2\lib\site-packages\yt\convenience.py", line 84, in load
    return candidates[0](*args, **kwargs)
  File "C:\Users\Dave\Anaconda2\lib\site-packages\yt\frontends\athena\data_structures.py", line 470, in __init__
    Dataset.__init__(self, filename, dataset_type, units_override=units_override)
  File "C:\Users\Dave\Anaconda2\lib\site-packages\yt\data_objects\static_output.py", line 190, in __init__
    self._parse_parameter_file()
  File "C:\Users\Dave\Anaconda2\lib\site-packages\yt\frontends\athena\data_structures.py", line 528, in _parse_parameter_file
    self.domain_left_edge = grid['left_edge']
KeyError: 'left_edge'
Is it possible to load such a file into yt? If not, since I'm writing my
own data file, how could I change it to make it readable by yt?
Thanks in advance,
Lukas
Hello all,
This is a followup to my question here (
http://lists.spacepope.org/pipermail/yt-users-spacepope.org/2016-May/007830…).
Apologies, but I can't seem to figure out how to reply since I was never
sent an e-mail for either my question or Nathan's response.
I'm trying to load a legacy VTK file into YT (see format description
http://www.vtk.org/wp-content/uploads/2015/04/file-formats.pdf), but when
I try:
ds = yt.load("file.vtk")
I get the error:
Traceback (most recent call last):
  File "visualization.py", line 3, in <module>
    ds = yt.load("file.vtk")
  File "C:\Users\Dave\Anaconda2\lib\site-packages\yt\convenience.py", line 84, in load
    return candidates[0](*args, **kwargs)
  File "C:\Users\Dave\Anaconda2\lib\site-packages\yt\frontends\athena\data_structures.py", line 470, in __init__
    Dataset.__init__(self, filename, dataset_type, units_override=units_override)
  File "C:\Users\Dave\Anaconda2\lib\site-packages\yt\data_objects\static_output.py", line 190, in __init__
    self._parse_parameter_file()
  File "C:\Users\Dave\Anaconda2\lib\site-packages\yt\frontends\athena\data_structures.py", line 528, in _parse_parameter_file
    self.domain_left_edge = grid['left_edge']
KeyError: 'left_edge'
Nathan Goldbaum pointed out in his answer that yt can read in VTK
files from Athena. I looked into this, and it seems the difference
between my file and Athena's files is that mine are ASCII while
Athena's are binary. Is that right?
On another note, Nathan also pointed out that there are people working on a
frontend for the VTK format I'm using. Could I provide any help there?
Thanks again, and sorry for the double posting.
Lukas
Hi, Everybody--
I just got the following error:
"""
RuntimeError: Error: yt attempted to read outside the boundaries of a
non-periodic domain along dimension 0.
Region left edge = -0.0625 code_length, Region right edge = 1.0625
code_length
Dataset left edge = 0.0 code_length, Dataset right edge = 1.0 code_length
This commonly happens when trying to compute ghost cells up to the domain
boundary. Two possible solutions are to load a smaller region that does not
border the edge or override the periodicity for this dataset.
"""
This is awesome. I did what it suggested and it worked. Whoever wrote
this error message is awesome. Thanks a ton!
That's all.
d.
--
-- Sent from a computer.
Dear Yt users.
Last week I asked a question about the function HaloMassFcn, as I can't get
the fitting function it produces to match the data from the Halo Finder
Comparison Project and from the DEUS simulations. After chatting with
Nathan Goldbaum about the problem (here and on IRC), I still haven't found
an explanation. The only one I have come up with so far is that there is
a bug in the unit handling in HaloMassFcn, so that the masses of the
fitting function are output in Msun/h**2 rather than in Msun as they
should be -- this would explain why I need to divide the masses by h
(which I set to 0.7) to match the data, which use Msun/h as their mass
unit. However, it is entirely possible that I have simply misunderstood
something; I just really can't figure out what it is.
Here is the script in which I plot the fitting function from
HaloMassFcn and compare it to data from the Halo Finder Comparison
Project (https://arxiv.org/pdf/1104.0949v1.pdf) and from one of the DEUS
simulations (http://www.deus-consortium.org/deuvo/#download), as well as
the resulting plot:
script: http://paste.yt-project.org/show/6530/
image: http://imgur.com/otWW3QJ
I'm using this version of yt:
Version = 3.2.3
Changeset = bdea84d95099 (stable) @
I'm also filing a bug on this issue:
https://bitbucket.org/yt_analysis/yt/issues/1228/bug-in-halomassfcn-output-…
Best regards, Io.
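(For reference, the conversion Io describes, written out as plain arithmetic with h = 0.7, the value used above; the sample numbers are made up:)

```python
# Io's stated conversion, with h = 0.7. Since 1 Msun/h equals (1/h) Msun,
# a mass of M Msun is (M * h) in units of Msun/h; likewise a number density
# of n Mpc^-3 is (n / h**3) in units of h^3 Mpc^-3.
h = 0.7
mass_msun = 1.0e14                 # example mass in Msun (yt's documented unit)
mass_msun_per_h = mass_msun * h    # the same mass expressed in Msun/h
n_per_mpc3 = 1.0e-5                # example number density in 1/Mpc^3
n_h3_per_mpc3 = n_per_mpc3 / h**3  # the same density in h^3/Mpc^3
```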
> Message: 1
> Date: Mon, 23 May 2016 22:43:09 +0200
> From: Io Odderskov <io.odderskov(a)gmail.com>
> To: yt-users(a)lists.spacepope.org
> Subject: [yt-users] HaloMassFunction
> Message-ID: <CAC0EgOA-tpD5-0nZY9Ssyyh94zhRcXm1vYM6suyOzkucgOAjqg(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear Yt-users.
>
> I have just started using yt, which I find to be very useful. However, I am
> having trouble converting the units of the output of the
> HaloMassFcn function to match other data sets.
>
> I have tried comparing the Tinker 2008 fitting function produced by
> HaloMassFcn to the halo mass function from the Halo Finder Comparison
> Project (https://arxiv.org/pdf/1104.0949v1.pdf) and to a halo catalogue
> from the DEUS consortium (http://www.deus-consortium.org/deuvo/#download).
> In these cases, the halo masses are given in Msun/h, and the cumulative
> halo number density in h^3/Mpc^3. According to the yt documentation, the
> masses output by the HaloMassFcn function should be in Msun, and the number
> densities in 1/Mpc^3. To convert to Msun/h and h^3/Mpc^3, I multiply the
> masses by h and divide the number densities by h^3. In the attached
> figure, this corresponds to the line labelled "Tinker2008 - masses
> multiplied by h", which does not match the other data sets very well. If I
> instead divide the masses by h, I get the line labelled "Tinker2008 - masses
> divided by h", which looks much better. But I really can't see how to
> justify this conversion.
>
> I hope you can help me figure out what I'm doing wrong :-)
> Thanks in advance!
>
> Best regards, Io.
>
> script: http://paste.yt-project.org/show/6530/
> image http://imgur.com/otWW3QJ
>
> version:
>
> Version = 3.2.3
> Changeset = bdea84d95099 (stable) @
>
> P.S. When I tried to do "yt upload_image", I got a 404 error. Perhaps imgur
> changed their API?
>
> ------------------------------
>
> Message: 2
> Date: Mon, 23 May 2016 15:51:20 -0500
> From: Nathan Goldbaum <nathan12343(a)gmail.com>
> To: Discussion of the yt analysis package
> <yt-users(a)lists.spacepope.org>
> Subject: Re: [yt-users] HaloMassFunction
> Message-ID: <CAJXewOn9Eq5f6j_xdJJ-du6it1g4aaAOMiBiz=0-4sifsupk2w(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Mon, May 23, 2016 at 3:43 PM, Io Odderskov <io.odderskov(a)gmail.com>
> wrote:
>
> > Dear Yt-users.
> >
> > I have just started using yt, which I find to be very useful. However, I am
> > having trouble converting the units of the output of the
> > HaloMassFcn function to match other data sets.
> >
> > I have tried comparing the Tinker 2008 fitting function produced by
> > HaloMassFcn to the halo mass function from the Halo Finder Comparison
> > Project (https://arxiv.org/pdf/1104.0949v1.pdf) and to a halo catalogue
> > from the DEUS consortium (http://www.deus-consortium.org/deuvo/#download).
> > In these cases, the halo masses are given in Msun/h, and the cumulative
> > halo number density in h^3/Mpc^3. According to the yt documentation, the
> > masses output by the HaloMassFcn function should be in Msun, and the number
> > densities in 1/Mpc^3. To convert to Msun/h and h^3/Mpc^3, I multiply the
> > masses by h and divide the number densities by h^3. In the attached
> > figure, this corresponds to the line labelled "Tinker2008 - masses
> > multiplied by h", which does not match the other data sets very well. If I
> > instead divide the masses by h, I get the line labelled "Tinker2008 - masses
> > divided by h", which looks much better. But I really can't see how to
> > justify this conversion.
> >
>
> To convert to Msun/h, you're trying to multiply by h? I don't think that's
> right --- dividing by h is the correct thing to do.
>
>
> >
> > I hope you can help me figure out what I'm doing wrong :-)
> > Thanks in advance!
> >
> > Best regards, Io.
> >
> > script: http://paste.yt-project.org/show/6530/
> > image http://imgur.com/otWW3QJ
> >
> > version:
> >
> > Version = 3.2.3
> > Changeset = bdea84d95099 (stable) @
> >
> > P.S. When I tried to do "yt upload_image", I got a 404 error. Perhaps
> > imgur changed their API?
> >
>
> Yeah, imgur changed their API. This is fixed in the development branch of
> yt but hasn't been backported to the stable branch. Perhaps we should do a
> 3.2.4 release, given that 3.3 is taking a bit longer than anticipated to
> release.
>
>
> >
> > _______________________________________________
> > yt-users mailing list
> > yt-users(a)lists.spacepope.org
> > http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
> >
> >
Hi all,
I am trying to make a light ray using a few redshift outputs from z=0.4 to
z=0.1. Right now I just have two commands in my code taken basically
directly from the docs:
lr = LightRay("CBOX.enzo", simulation_type="Enzo", near_redshift=0.1,
              far_redshift=0.4, time_data=False, redshift_data=True,
              find_outputs=True)
lr.make_light_ray(seed=8934876,
                  fields=['temperature', 'density', 'velocity'],
                  use_peculiar_velocity=True)
My current problem is that the first command, LightRay, doesn't seem to be
using any of my outputs between 0.4 and 0.1! I am attaching the error
file, and you can see that it finds 9 outputs, 7 of which lie between
0.4 and 0.1, inclusive. However, it then claims that I am asking it to go
straight from 0.399996 to 0.1.
I only have redshift dumps to work with, but I think those should
be used, since redshift_data=True.
Does anyone have any advice for making this work?
Thanks,
Stephanie
--
Dr. Stephanie Tonnesen
Alvin E. Nashman Postdoctoral Fellow
Carnegie Observatories, Pasadena, CA
stonnes(a)gmail.com
Hi people,
I am not sure if this is the right place to ask, but: I submitted a
development-queue job to the Stampede supercomputer, using the yt toolkit
in my script, and I got the following error after around half an hour of
running the job.
[c557-702.stampede.tacc.utexas.edu:mpispawn_0][child_handler] MPI process
(rank: 0, pid: 26868) terminated with signal 9 -> abort job
[c557-702.stampede.tacc.utexas.edu:mpirun_rsh][process_mpispawn_connection]
mpispawn_0 from node c557-702 aborted: MPI process error (1)
TACC: MPI job exited with code: 1
Can anyone please shed light on the error here? With the development queue,
the maximum runtime is 2 hours.
Thanks in advance.
-Turhan
Hi all,
I am simulating a single idealized galaxy and select a disk region to get
the rotational velocity (see attached code below). My problem is that
changing my angular momentum vector doesn't change the velocity profile!
Can anyone tell me what is going on here?
Thanks,
Stephanie
--
Dr. Stephanie Tonnesen
Alvin E. Nashman Postdoctoral Fellow
Carnegie Observatories, Pasadena, CA
stonnes(a)gmail.com