Are you running in parallel? Which fields are you profiling? Do they have
lots of dependencies, or require ghost zones? Is yt using lazy_reader? Does
your data have many grids?
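For context on the lazy_reader question: the point of lazy accumulation is that the profile is built up one grid at a time, so peak memory is one grid's worth of data rather than every field concatenated at once. A stdlib-only sketch of that pattern (the function and its arguments are made up for illustration, not yt API):

```python
# Illustrative only: accumulate a binned sum grid-by-grid, so peak memory
# is one grid's worth of values rather than the whole dataset.
def lazy_binned_sum(grids, n_bins, lo, hi):
    width = (hi - lo) / n_bins
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for grid in grids:          # 'grids' yields one grid's cell values at a time
        for value in grid:      # only this grid needs to be resident in memory
            i = min(int((value - lo) / width), n_bins - 1)
            if i >= 0:          # skip values below the lower bound
                sums[i] += value
                counts[i] += 1
    return sums, counts
```

The eager alternative concatenates every grid's field into one array before binning, which is where memory blowups like the one reported above tend to come from.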
On Mar 14, 2012 12:37 AM, "david collins" <antpuncher(a)gmail.com> wrote:
> I should add that this was done on 64 cores-- in serial it works fine,
> just slow.
On Tue, Mar 13, 2012 at 9:35 PM, david collins <antpuncher(a)gmail.com> wrote:
> > Hi, all--
> > I have an old dataset that I'm trying to make profiles on. It's a
> > 512^3 root grid, but was written with ParallelRootGridIO off. I find
> > that it's using strange amounts of memory, more than 12 Gb. Is this a
> > known problem with a straight forward work-around?
> > d.
> > --
> > Sent from my computer.
I'm getting seg faults on a simple projection:
pc = PlotCollection(pf)
I used 'hg bisect' and tracked it down to this changeset:
If I replace yt/data_objects/static_output.py with the parent version
bbb5aeca876f, the seg fault goes away. I see the seg fault, and the
fix, on both Mac and Linux.
Does anyone else see this issue?
I have another odd issue, again stemming from this vintage dataset.
It's a 512^3 root grid with 4 levels and a refinement factor of 4.
When the parallel analysis gave me problems yesterday, I tried
re-gridding the data by restarting and forcing parallel root grid IO
on (including removing the rebuild-hierarchy call to ensure the grids
stay the same, and returning before any hydro is done). The odd bit
is that the projections now look different. Attached are two images:
sphere_serial is through the old serial root grid run, and
sphere_parallel is through the output with parallel root grid IO.
Both are Density projections centered on the max, 0.02 in radius,
made using

proj = pf.h.proj(ax, field, center=center, source=sphere)

You can see that less structure seems to be captured in the parallel
dataset.
I've done two checks to make sure it's not a glitch in the
re-gridding. The first is on the actual data: the grids are in a
different order between the parallel and serial runs, but once that's
taken into account, all the data in the subgrids is identical (at
least in Density, the field in question). Additionally, the new tiles
match the corresponding positions in the old tiles. So the data
itself is fine.
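The order-independent comparison described above can be sketched like this (stdlib only; the (left_edge, data) pair representation is an assumption, not the script actually used):

```python
# Match grids between two runs by left-edge position, then compare data,
# so a different grid ordering does not produce a false mismatch.
def grids_match(run_a, run_b, tol=0.0):
    """Each run is a list of (left_edge_tuple, data_list) pairs.
    Return True if every grid in run_a has a positionally matching
    grid in run_b with identical data, regardless of ordering."""
    by_edge = {edge: data for edge, data in run_b}
    if len(by_edge) != len(run_a):
        return False
    for edge, data in run_a:
        other = by_edge.get(edge)
        if other is None or len(other) != len(data):
            return False
        if any(abs(x - y) > tol for x, y in zip(data, other)):
            return False
    return True
```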
The other test was to regrid but leave serial root grid IO on: the
new serial set is identical to the old (again, except for grid
order), and the projections match.
Differences also show up in profiles.
Again, not a major problem, but it's pretty counter-intuitive, so I
was wondering if anyone had run into this.
Hi, I guess this question mainly goes to John Wise.
I currently have two columns of data that I am plotting with
matplotlib calls, but if I save the figure as EPS the file size is
~1.5 MB, versus ~35 KB as PNG. Is there a simple way in yt to save it
as EPS, with the graph as an embedded PNG but LaTeX text for the axes?
During your latest yt workshop talk, you showed how to create EPS
images from a PlotCollection PNG phase plot with LaTeX text, which is
exactly what I want for my publication to bring down the image file
size. Except my graphs were created from two columns of data (derived
using yt, but plotted with matplotlib) and not with the
PlotCollection engine. Can the PNG be inserted into PlotCollection to
take advantage of the EPS writer?
Other suggestions and comments are welcome (such as not bothering
about the file size at all, or this being beyond the scope of yt).
PS. I've tried using cmap='gray' in the imshow call, and that cuts
the file size down to 0.5 MB, but colored plots are so much nicer!
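One possibility, since the plots are made directly with matplotlib: matplotlib's mixed-mode rendering rasterizes individual artists inside an otherwise vector EPS, and mathtext gives LaTeX-style labels without needing yt's EPS writer or an external LaTeX install. A minimal sketch with fake data (whether this matches the workshop workflow is a guess on my part):

```python
import matplotlib
matplotlib.use("Agg")              # headless backend for scripted use
import matplotlib.pyplot as plt
import random

# Fake 2-D data standing in for the real phase plot.
img = [[random.random() for _ in range(64)] for _ in range(64)]

fig, ax = plt.subplots()
# rasterized=True embeds this one artist as a bitmap inside the vector
# EPS, which is what keeps the file size down.
ax.imshow(img, rasterized=True)
ax.set_xlabel(r"$\log\,\rho$")     # mathtext: LaTeX-style, no usetex needed
ax.set_ylabel(r"$\log\,T$")
fig.savefig("phase.eps", dpi=150)
```

The dpi argument sets the resolution of the rasterized layer, so it is the knob that trades file size against image quality; the axis text stays vector either way.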
What's the current state of the art with regard to parallel clump
finding? Out of curiosity, I did a clump find blocked into octants,
and found super-linear scaling (non-scientific: my 512^3 wouldn't
finish in 24 hours, but each octant ran in 30 minutes). Now I have
these clumps, and I'd like to sew them together. So I have a couple
of questions:
1.) What's the easiest way to join two extracted sets? I figure I
can take these 8 sets and sew them together pretty easily ex post
facto. If there's a tool that does
new_joined_thing = yt.join(clump1.data, clump2.data)
then writing the machinery to sort out which clumps hit which faces
is pretty easy, as is sewing the various joined things into a single
merged set.
2.) Is there anything about the contour finder that would not
block-decompose like this?
3.) Has anyone made any progress on things like this?
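For question 1, absent a ready-made yt.join, the sewing step itself can be sketched as a union-find over the clumps, assuming each octant was extracted with one cell of overlap so that clumps crossing a face share cell ids (stdlib only; the set-of-global-cell-ids clump representation is my assumption, not an existing yt structure):

```python
def sew_clumps(clumps):
    """clumps: list of sets of global cell ids, possibly from different
    octants. Any two clumps sharing a cell id (e.g. along an overlapping
    face) are merged. Returns the merged clump list."""
    parent = list(range(len(clumps)))

    def find(i):                         # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    owner = {}                           # cell id -> first clump seen owning it
    for i, clump in enumerate(clumps):
        for cell in clump:
            j = owner.setdefault(cell, i)
            if j != i:
                parent[find(i)] = find(j)   # shared cell: union the clumps

    merged = {}
    for i, clump in enumerate(clumps):
        merged.setdefault(find(i), set()).update(clump)
    return list(merged.values())
```

With this in hand, "which clumps hit which faces" reduces to extracting each octant with a one-cell pad; clumps that never touch a face simply come back unchanged.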
I'm trying to add support for FLASH particle files. However, my particle files are nonstandard and so my particle reader may not work with the default particle types that come with FLASH. I'm curious if anyone would like to share a more vanilla particle file dataset so I can test my yt modifications.
Thanks very much,
Astronomy & Astrophysics, UCSC
I was giving Reason a spin, and everything went smoothly. I was able
to launch Reason on Nautilus and Lens, and I tried it out with Safari
5.1.1, Firefox 7.0.1, and Chrome 17.0.963.65 installed under OS X
10.7.2. I didn't want to bog down the login nodes doing a projection
of an 800^3 Enzo dataset, so dd = pf.h.all_data() was as far as I got
with the 800 cube. I get
channel 3: open failed: connect failed: Connection refused
when I tried it on an interactive node, using a browser to connect to
the compute node. Is Reason limited to the login nodes due to
supercomputer security on the compute nodes?
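That "channel 3: open failed" message is SSH reporting that the forwarded connection was refused at the far end, which points at the tunnel setup rather than at Reason itself. A quick stdlib check, run from wherever the tunnel terminates, to see whether Reason's port is reachable at all (host and port here are placeholders):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:            # covers connection refused and timeouts
        return False
```

If the port is closed when checked from the login node but open on the compute node itself, a second forwarding hop (login node to compute node) is likely what's missing.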