Hi,
I just installed the yt development version
(http://hg.yt-project.org/yt/raw/yt/doc/install_script.sh)

Version = 3.1-dev
Changeset = dc2ade033502 (yt) tip

on Ubuntu 14.04, with

INST_ZLIB=0
INST_FTYPE=0
yt seemed to install properly, and it works otherwise, but it doesn't
open any windows when I try to show an image, e.g., when running the
script
http://yt-project.org/doc/quickstart/3)_Simple_Visualization.py
line by line in ipython:

p = yt.ProjectionPlot(ds, "y", "density")
p.show()  # doesn't open a window
In [35]: p.show()
Out[35]: <yt.visualization.plot_window.ProjectionPlot at 0x7fe1897941d0>
However, saving the image works:
p.save()
When I run that script directly with python, I get the following error:
Traceback (most recent call last):
  File "3)_Simple_Visualization_short.py", line 28, in <module>
    p.show()
  File "/home/.../yt/yt-x86_64/src/yt-hg/yt/visualization/plot_container.py", line 622, in show
    raise YTNotInsideNotebook
yt.utilities.exceptions.YTNotInsideNotebook: This function only works from within an IPython Notebook.
I'm using

ipython --matplotlib
from pylab import *

and, e.g., plot() and figure() open a window normally. Is there a way to
open a window with yt's show() and similar functions from plain ipython
or python, or does that really require the IPython Notebook?
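In the meantime I'm working around it by saving the plot and displaying
the resulting PNG with matplotlib, which does open a window. A minimal
sketch, continuing from the script above (I'm assuming save() returns
the list of written filenames, which is what it looks like in my session):

import matplotlib.image as mpimg
import matplotlib.pyplot as plt

p = yt.ProjectionPlot(ds, "y", "density")
fname = p.save()[0]        # save() appears to return the written filenames
img = mpimg.imread(fname)  # load the saved PNG back as an array

plt.figure()
plt.imshow(img)
plt.axis("off")
plt.show()                 # this does open a normal matplotlib window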
Cheers,
Johanna
Hi folks,
I'm trying to use the isosurface tools (export_sketchfab &
export_obj; http://yt-project.org/doc/visualizing/sketchfab.html) to
make renderings of multiple line species from ALMA data cubes. I'm
doing this using both the yt-fits interface and spectral-cube
(spectral-cube.rtfd.org).
I've had some success in getting multicolored and transparent
contours to show up (e.g.,
https://sketchfab.com/models/8cddb94df8024c28ab3e4e5a43c2b069), but I
still don't have complete control over the colors. To make that
model, I created pure red/green/blue colormaps and added them to the
colormap registry, e.g.:
import numpy as np
import matplotlib as mpl
from yt.visualization._colormap_data import color_map_luts

# ramps from pure blue at the bottom to white at the top
blue = mpl.colors.LinearSegmentedColormap('blue',
    {'red':   [(0, 0, 0), (1, 1, 1)],
     'green': [(0, 0, 0), (1, 1, 1)],
     'blue':  [(0, 1, 1), (1, 1, 1)]})
color_map_luts['blue'] = blue(np.linspace(0, 1, 256)).T
but that was fairly awkward, and the contours are still showing up
white-ish. I think this is because I have set the color_field to the
flux field (I only have one field to work with!):

surf = dataset.surface(dataset.all_data(),
                       dataset.field_list[0],  # the flux field
                       level)
surf.export_obj(filename, transparency=transparency,
                color_field=dataset.field_list[0],
                color_map='blue',
                plot_index=jj)
but I can't figure out an alternative way to specify color, except
perhaps to define a dummy field that is all ones, which I also don't
know how to do (though I'm sure I could figure it out). Is there a
better way?
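For concreteness, the dummy-field idea I have in mind is something like
the sketch below; the field name "ones", the add_field call, and the
empty units string are my guesses rather than tested code:

import numpy as np

flux_field = dataset.field_list[0]

def _ones(field, data):
    # dimensionless field that is exactly 1 everywhere
    return np.ones(data[flux_field].shape)

dataset.add_field("ones", function=_ones, units="")

surf.export_obj(filename, transparency=transparency,
                color_field="ones",   # constant field -> one color from the map
                color_map='blue',
                plot_index=jj)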
Also, when I try importing my meshes into MeshLab, I get the error:

Error details: Some materials definitions were not found, a default
white material is used where no material was available

and then the contours all show up gray and opaque, even if I change
"render->color" to "per face" as suggested in the docs. That said, I've
never used MeshLab before, so I might just be misusing the program.
Thanks,
--
Adam Ginsburg
Fellow, European Southern Observatory
http://www.adamgginsburg.com/
Hello all,
We're hoping to use yt parallel volume rendering on a very large generic
brick - it's a simple rectangular unigrid slab, but containing something
like 1.5e11 points, so much too large for load_uniform_grid() to fit
into memory on a single machine.
I imagine it wouldn't be hard to do the domain decomposition by hand,
loading a different chunk of the grid into each MPI process. But then
what? What would it take to invoke the volume renderer on each piece
and composite the results together? Would it help if the chunks were
stored in a KDTree? Is there some example (one of the existing data
loaders, perhaps?) that I could follow?
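To make the by-hand decomposition concrete, this is roughly what I'm
imagining per rank; the file name, dtype, and global dimensions below are
placeholders, and whether a per-rank load_uniform_grid like this can then
feed the parallel renderer is exactly the part I don't know:

from mpi4py import MPI
import numpy as np
import yt

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# hypothetical global dimensions and on-disk layout (~1.4e11 points)
nx, ny, nz = 8192, 8192, 2048
full = np.memmap("brick.f32", dtype="float32", mode="r",
                 shape=(nx, ny, nz))

# each rank takes one slab along x
x0 = rank * nx // size
x1 = (rank + 1) * nx // size
chunk = full[x0:x1, :, :]  # still a memmap view; read lazily

# a per-rank dataset covering just this slab; bbox places the slab
# inside the full domain (in index coordinates)
bbox = np.array([[x0, x1], [0, ny], [0, nz]], dtype="float64")
ds = yt.load_uniform_grid({"density": chunk}, chunk.shape, bbox=bbox)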
Hi.
I ran a 2D simulation with Athena and exported the data to vtk files. I was
even able to load the file with yt and inspect some of its elements. What I
would like to do is to compare the density from the simulation to an
analytic solution (which I can evaluate at the grid points). The problem is
I cannot figure out how to extract the relevant information from the
yt.frontends.athena.data_structures.AthenaDataset object. What I'm after
are 2D arrays of the x coordinates, y coordinates, and density.
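From reading around, I suspect something like a covering grid is what I
want, along these lines (a sketch only; the filename is a placeholder and
I haven't verified the field names against the Athena frontend):

import yt

ds = yt.load("Sim.0010.vtk")  # placeholder filename

# sample the whole domain onto a regular array; for a 2D run the
# third dimension has extent 1
cg = ds.covering_grid(level=0,
                      left_edge=ds.domain_left_edge,
                      dims=ds.domain_dimensions)

x = cg["x"][:, :, 0]          # 2D array of cell-center x coordinates
y = cg["y"][:, :, 0]          # 2D array of cell-center y coordinates
rho = cg["density"][:, :, 0]  # density on the same grid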
Thanks in advance,
Almog
Hi everyone. Is it normal for slices to take substantially longer to
create when they're made from a derived field rather than an intrinsic one?
Specifically, I'm having an issue creating an axis-aligned slice using the
divergence of the velocity field. It's taking around 6.5 minutes just to
make the slice, whereas if I use temperature or density, it takes around 10
seconds or so for the same dataset.
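For reference, what I'm doing is essentially this (a sketch; the dataset
name is a placeholder, and I'm using the standard velocity_divergence
derived field):

import yt
yt.enable_parallelism()

ds = yt.load("plt0100")  # placeholder dataset

# fast: intrinsic field, ~10 s for me
slc = yt.SlicePlot(ds, "z", "density")

# slow: derived field, ~6.5 min, regardless of process count
slc = yt.SlicePlot(ds, "z", "velocity_divergence")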
I also notice that the amount of time it takes is not dependent on the
number of processors I'm using. I've used 1, 12, and 24 processors, with
identical results, even though I'm calling enable_parallelism(), and I can
see that all the processes are running.
I read in the docs that slice operations aren't generally done in parallel,
but in this case it seems like it would be beneficial. A similar
operation in VisIt completes much faster, so I'm wondering if I've
misconfigured something, or if there is something I can do to speed things
up.
I'd appreciate any thoughts anyone has on the subject.
Thanks,
Dan
Dear all,
I would like to ask whether there has been any progress on the volume rendering refactoring since late October. In particular, I would like to know whether the volume renderer and the new camera implementation allow plotting field lines together with the rendering. If so, are there any examples on the web? Unfortunately, matplotlib contour plots with field lines are not good enough for my purposes.
Thanks in advance,
Kiki
Kyriaki Dionysopoulou
=======================================================
Mathematical Sciences
University of Southampton
Southampton, SO17 1BJ, UK
K.Dionysopoulou@soton.ac.uk