Hi, Everybody!
Does anyone out there have a technique for getting the variance out of
a profile object? A profile object is good at getting <X> vs. B; I'd
then like to get <(X - <X>)^2> vs. B. Matt and I had spitballed the
possibility some time ago, but I was wondering if anyone out there had
successfully done it.
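Concretely, the combination I have in mind is Var(X) = <X^2> - <X>^2 per bin: build two profiles over the same bins and weight field, one of X and one of X^2 (via a derived field for the square), and subtract. Here is an untested numpy sketch of that reduction outside of yt, just to pin down the math (the function name and arguments are mine, not a yt API):

```python
import numpy as np

def binned_variance(b, x, weights, bin_edges):
    """Weighted variance of x in bins of b, via Var = <x^2> - <x>^2."""
    idx = np.digitize(b, bin_edges) - 1   # which bin each point falls in
    nbins = len(bin_edges) - 1
    var = np.zeros(nbins)
    for i in range(nbins):
        sel = idx == i
        if weights[sel].sum() > 0:
            mean = np.average(x[sel], weights=weights[sel])
            mean_sq = np.average(x[sel] ** 2, weights=weights[sel])
            var[i] = mean_sq - mean ** 2
    return var
```

If newer versions of yt expose a standard deviation directly on profile objects, that would of course be preferable to rolling it by hand.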
Thanks,
d.
--
Sent from my computer.
Fellow yt users:
We at the yt development team want to make yt better for you, the user. We
have prepared a short survey to gain your valuable feedback to help us
understand what is working in yt and what is not. Could you please spare 2
minutes to fill out our survey below?
We plan to collect responses and make a selection of the results
available by December 10, so please let us know what you think at the
following URL:
http://goo.gl/forms/hRNryOWTPO
On behalf of the yt development team,
Cameron
--
Cameron Hummels
Postdoctoral Researcher
Steward Observatory
University of Arizona
http://chummels.org
Hello all,
We're hoping to use yt's parallel volume rendering on a very large generic
brick - it's a simple rectangular unigrid slab, but it contains something
like 1.5e11 points, far too large for load_uniform_grid() to fit into
memory on a single machine.
I imagine it wouldn't be hard to do the domain decomposition by hand,
loading a different chunk of grid into each MPI process. But then
what? What would it take to invoke the volume renderer on each piece
and composite them together? Would it help if the chunks were stored
in a KDTree? Is there some example (one of the existing data loaders?)
which I could follow?
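For concreteness, the hand decomposition I have in mind is a balanced, contiguous split of the slab along one axis, with each MPI rank calling load_uniform_grid() on just its piece (whether the renderer can then composite such independently loaded chunks is exactly my question). A small untested sketch of the bookkeeping, where the helper is hypothetical and not a yt function:

```python
def slab_bounds(rank, nprocs, nz, z0, z1):
    """Index range and physical z-extent of `rank`'s slab when an
    nz-cell unigrid spanning [z0, z1] is split along z over nprocs.
    Distributes any remainder cells to the lowest-numbered ranks."""
    base, extra = divmod(nz, nprocs)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    dz = (z1 - z0) / nz
    return start, stop, z0 + start * dz, z0 + stop * dz
```

Each rank would then read only cells [start:stop] along z and pass the matching bbox to load_uniform_grid().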
Hi.
I ran a 2D simulation with Athena and exported the data to vtk files. I was
even able to load the file with yt and inspect some of its elements. What I
would like to do is to compare the density from the simulation to an
analytic solution (which I can evaluate at the grid points). The problem is
I cannot figure out how to extract the relevant information from the
yt.frontends.athena.data_structures.AthenaDataset object. What I expect are
2D arrays of the x coordinates, the y coordinates, and the density.
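In case it helps clarify: my (possibly wrong) understanding is that a level-0 covering grid, e.g. cg = ds.covering_grid(level=0, left_edge=ds.domain_left_edge, dims=ds.domain_dimensions), would give me the density as cg["density"] (with a singleton z-axis to squeeze out), and the coordinate arrays would then follow from the domain edges and dimensions like this (untested numpy sketch, the helper name is mine):

```python
import numpy as np

def cell_centers_2d(left_edge, right_edge, dims):
    """x and y cell-center coordinate arrays for a uniform 2D grid."""
    left = np.asarray(left_edge, dtype=float)
    right = np.asarray(right_edge, dtype=float)
    dims = np.asarray(dims)
    d = (right - left) / dims                      # cell widths [dx, dy]
    x = left[0] + (np.arange(dims[0]) + 0.5) * d[0]
    y = left[1] + (np.arange(dims[1]) + 0.5) * d[1]
    return np.meshgrid(x, y, indexing="ij")        # both shaped (nx, ny)
```

Is the covering grid the intended route here, or is there a more direct accessor?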
Thanks in advance,
Almog
Hi everyone. Is it normal for slices to take substantially longer to be
created if they're made from a derived field rather than an intrinsic one?
Specifically, I'm having an issue creating an axis-aligned slice using the
divergence of the velocity field. It's taking around 6.5 minutes just to
make the slice, whereas if I use temperature or density, it takes around 10
seconds or so for the same dataset.
I also notice that the amount of time it takes is not dependent on the
number of processors I'm using. I've used 1, 12, and 24 processors, with
identical results, even though I'm calling enable_parallelism(), and I can
see that all the processes are running.
I read in the docs that slice operations aren't generally done in parallel,
but in this case it seems that maybe it would be beneficial. A similar
operation in VisIt completes much faster, so I'm wondering if I've
misconfigured something, or if there is something I can do to speed things
up.
I'd appreciate any thoughts anyone has on the subject.
Thanks,
Dan
Dear all,
I would like to ask if there has been any progress on the volume rendering refactoring since late October. In particular, I would like to know whether the volume rendering and the new camera implementation allow for plotting field lines together with the rendering. If so, are there any examples on the web? Unfortunately, matplotlib contour plots with field lines are not good enough for me.
Thanks in advance,
Kiki
Kyriaki Dionysopoulou
=======================================================
Mathematical Sciences
University of Southampton
Southampton, SO17 1BJ, UK
K.Dionysopoulou(a)soton.ac.uk
Is it unreasonable for me to expect yt on a workstation with 4x3GHz cores
and ~20 GB of RAM to be able to handle a ~100 GB dataset? I'm trying to
select a subset of the dataset using cut_region(), but I still run into
hangs or exhaust the RAM. For example, when I try to do a write_out() on
the cut region, yt sucks up all 20 GB of RAM and I have to kill it. Is
there a preferred method for loading in a subset of the data? I don't
need anywhere near the full 100 GB.
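For what it's worth, the behavior I was hoping for is streaming: apply the cut one chunk at a time so peak memory stays at roughly one chunk rather than the full dataset, or restrict to a spatial subregion first with something like ds.box(left_edge, right_edge), if that's the preferred route. The generic idea, as an untested numpy sketch (the helper is mine, not a yt API):

```python
import numpy as np

def filtered_pieces(data, predicate, chunk_size):
    """Apply a selection one chunk at a time, so peak memory stays at
    roughly chunk_size elements instead of the full array."""
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        yield chunk[predicate(chunk)]   # write each piece out as it comes
```

Is there machinery in yt that already iterates a cut region this way instead of materializing it all at once?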
I'm using yt dev and looking at BoxLib/Maestro data.
--
Adam Jacobs
Department of Physics and Astronomy, PhD Candidate
Stony Brook University
http://astro.sunysb.edu/amjacobs/
Hi All,
I need to set the "center" field parameter for making a projection of the
"radial_velocity".
I see from the docs how to do that for a data container, but it's not
clear what to do for a projection.
The error I am getting at the moment when the "center" is not set is:
  File "/homeappl/home/regan/appl_taito/YT/Dev-3.0/yt/yt/data_objects/data_containers.py", line 249, in __getitem__
    self.get_data(f)
  File "/homeappl/home/regan/appl_taito/YT/Dev-3.0/yt/yt/data_objects/data_containers.py", line 656, in get_data
    finfo.check_available(self)
  File "/homeappl/home/regan/appl_taito/YT/Dev-3.0/yt/yt/fields/derived_field.py", line 146, in check_available
    validator(data)
  File "/homeappl/home/regan/appl_taito/YT/Dev-3.0/yt/yt/fields/derived_field.py", line 235, in __call__
    raise NeedsParameter(doesnt_have)
yt.fields.field_exceptions.NeedsParameter: (['center'])
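In case it matters, my (possibly wrong) understanding is that the parameter has to live on a data container that the projection then uses, something like dd = ds.all_data(); dd.set_field_parameter("center", ds.domain_center); prj = ds.proj("radial_velocity", "z", data_source=dd) - though I haven't gotten that to work. The quantity itself should just be the velocity component along the unit vector from "center", i.e. (untested numpy sketch, the helper is mine):

```python
import numpy as np

def radial_velocity(pos, vel, center):
    """Component of vel along the unit vector from `center` to each pos."""
    r = np.asarray(pos, dtype=float) - np.asarray(center, dtype=float)
    rhat = r / np.linalg.norm(r, axis=-1, keepdims=True)  # unit radial vectors
    return np.sum(np.asarray(vel, dtype=float) * rhat, axis=-1)
```

So really I just need to know where yt wants the "center" attached for a projection.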
Any tips much appreciated!
Cheers,
John
Hey All,
I'm trying to get started with basic Rockstar halo finding in yt (I
typically use HOP, though current analysis requires Rockstar). I'm
having trouble with the following line:
>from yt.analysis_modules.halo_finding.rockstar.api import RockstarHaloFinder
which gives the error: http://paste.yt-project.org/show/5224/
I'm running yt 3.0 (updated this morning). I just installed Rockstar (I
think) by switching the INST_ROCKSTAR flag in the install script to 1 and
re-running the script. To be sure, I am running in an 'activated' yt,
with the following LD_LIBRARY_PATH:
>(yt-x86_64)thundersnow:~ desika$ echo $LD_LIBRARY_PATH
/Users/desika/yt-x86_64/lib:/opt/gsl/lib:
Though I note that manually adding the directory:
/Users/desika/yt-x86_64/src/yt-hg/yt/analysis_modules/halo_finding/rockstar/
to my LD_LIBRARY_PATH doesn't fix the problem. Is this symptomatic of an
erroneous installation on my part?
thanks
-desika