Hi guys,
Stephen is having issues with the wiki not allowing edits and the
tickets resisting creation. He is an authenticated user, which is
all that's required. Do any of you guys have a second to test if it
works for you?
-Matt
Hi guys,
Just got off a conference call with Britton and Dave. The highlights
of the discussion were that the contouring is going to get rewritten,
in two phases.
1. Dave is going to check in the ability to demand bifurcation of
clumps during clump finding; this is an extension of Britton's
Clump.py.
2. I'm going to start (some day) on rewriting the contouring algorithm
to use one from the paper "Efficient Computation of Topology of Level
Sets" by Pascucci and Cole-McLaughlin 2002. This is longer term, but
it's faster than the one I wrote a year ago and it gives the entire
tree in a single pass.
Britton has some good ideas about other improvements to the clump
finder, as well; the upshot of point #2 is that this tree will be
trimmable in exactly the same way as the current method, but it will
be smaller, faster, and give you more information. It's also
parallelizable.
-Matt
Hi all,
I need to calculate the virial properties (mass in particular) of 80,000+ haloes for perhaps up to 25 datasets. Actually, it's that many haloes at z=0, so fewer for higher z, but you get the idea. Can you guys suggest the best way to optimize this? My only good idea is to split up the entries in HopAnalysis.out into multiple files so each run of HaloProfiler has roughly the same amount of work to do. Of course, not making projections will speed things up too.
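A quick sketch of the splitting idea, assuming HopAnalysis.out is plain text with one halo per line (the function name and round-robin scheme here are hypothetical, just one way to even out the load):

```python
def split_halo_list(lines, n_pieces):
    # Deal halo entries out round-robin so each piece carries a
    # similar number of haloes, and thus a similar amount of work.
    pieces = [[] for _ in range(n_pieces)]
    for i, line in enumerate(lines):
        pieces[i % n_pieces].append(line)
    return pieces

# Toy demonstration with fake halo entries instead of a real file:
halos = ["halo %d" % i for i in range(10)]
pieces = split_halo_list(halos, 3)
sizes = [len(p) for p in pieces]  # [4, 3, 3]
```

Each piece could then be written to its own file (say HopAnalysis_00.out and so on) and handed to a separate HaloProfiler run.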
Thanks!
_______________________________________________________
sskory(a)physics.ucsd.edu o__ Stephen Skory
http://physics.ucsd.edu/~sskory/ _.>/ _Graduate Student
________________________________(_)_\(_)_______________
Hi,
I'm having some trouble with the HaloProfiler. I'm running with the trunk r1147 sample_halo_profile.par and runHaloProfiler.py. Here's the error I'm getting:
File "runHaloProfiler.py", line 8, in <module>
q.makeProfiles()
File "/share/home/00649/tg457850/yt/lib/python2.6/site-packages/yt-1.5dev.dev_r1147-py2.6-linux-x86_64.egg/yt/extensions/HaloProfiler.py", line 107, in makeProfiles
virial = self._CalculateVirialQuantities(profile)
File "/share/home/00649/tg457850/yt/lib/python2.6/site-packages/yt-1.5dev.dev_r1147-py2.6-linux-x86_64.egg/yt/extensions/HaloProfiler.py", line 276, in _CalculateVirialQuantities
if (overDensity[1] <= self.haloProfilerParameters['VirialOverdensity']):
IndexError: list index out of range
For some of the haloes, profile['CellVolume'] and profile['TotalMassMsun'] are being set to all zeros. If I put the 'if' below into HaloProfiler.py before the call to _CalculateVirialQuantities(), say at line 104, I don't get the error above and the script runs to completion, making profiles and images.
if profile['CellVolume'][0] == 0.0: continue
profile['CellVolume'] being zero gives nans in _AddActualOverdensity() in HaloProfiler.py.
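The failure mode can be reproduced in miniature (this is a toy sketch, not the actual HaloProfiler code; the plain dict stands in for the real profile object):

```python
import numpy as np

# An all-zero profile: both mass and volume are zero in every bin.
profile = {"CellVolume": np.zeros(4), "TotalMassMsun": np.zeros(4)}

# Dividing zero mass by zero volume gives nan in every bin, so no bin
# ever crosses the virial overdensity threshold and the overDensity
# list comes back empty -- hence the IndexError.
with np.errstate(invalid="ignore"):
    density = profile["TotalMassMsun"] / profile["CellVolume"]

# The guard skips such haloes before any virial quantities are computed:
skip = profile["CellVolume"][0] == 0.0
```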
If I run this on an unaltered enzo dataset (with no .yt, HopAnalysis.out, etc. files), I get this problem for every dataset I've tried. I've tried two 64^3 root grid cosmology runs with 3 levels of refinement; one is 1 Mpc/h and the other 64 Mpc/h.
I've done some sleuthing, but I don't understand enough of how Profiles.py works to make sense of it. 'args' in _lazy_add_fields() is returning empty from _get_bins(), and _get_bins() is returning empty because self._get_field() is returning nothing.
Those of you who understand this better, how can I help you?
Thanks!
_______________________________________________________
sskory(a)physics.ucsd.edu o__ Stephen Skory
http://physics.ucsd.edu/~sskory/ _.>/ _Graduate Student
________________________________(_)_\(_)_______________
Hey guys,
Just as a fun note. winpdb.org has the coolest debugger I've ever
seen. It's basically a wrapper around pdb, which is already pretty
nice, but with additional features of remote debugging as well as a
GUI for setting breakpoints and examining the stack. You can start
the GUI -- even on your local machine -- and then on the local/remote
machine run your script through rpdb2. Then, just attach and go.
Pretty useful for debugging -- it's how I found the bugs in the spin
parameter & particle vectors earlier today.
-Matt
Is anybody here an expert on the way enzo_anyl calculates spin
parameter? I was hoping to convince myself that:
a) I understand how enzo_anyl does it, and
b) that this is being replicated in DerivedQuantities.py.
I have gone back and forth on this with Brian and Britton, but that
was almost a year ago. I think it needs another look. Here is how yt
works right now:
am = data["SpecificAngularMomentum"]*data["CellMassMsun"]
j_mag = am.sum(axis=1)
m_enc = data["CellMassMsun"].sum() + data["ParticleMassMsun"].sum()
e_term_pre = na.sum(data["CellMassMsun"]*data["VelocityMagnitude"]**2.0)
weight=data["CellMassMsun"].sum()
so we get the sum of Lx, Ly, Lz, the total mass, the total kinetic
energy in the baryons, and the total *baryon* mass.
then during the combination step:
W = weight.sum()
M = m_enc.sum()
J = na.sqrt(((j_mag.sum(axis=0))**2.0).sum())/W
E = na.sqrt(e_term_pre.sum()/W)
G = 6.67e-8 # cm^3 g^-1 s^-2
spin = J * E / (M*1.989e33*G)
What this does is combine all the weights from the individual grids or
processors to get the total baryon mass in the entire region, the
entire *enclosed mass* (which includes the particles), and then the
magnitude of the angular momentum vector for all the enclosed baryons,
which gets divided by the enclosed *baryon mass* to get the average
specific angular momentum for the region. E is then the total kinetic
energy divided by the total enclosed mass, which gives a
characteristic baryon velocity. Finally, we take the average specific
angular momentum, multiply that by the characteristic velocity, and
divide by the total enclosed (baryon+particle) mass.
Does this make sense? Should the characteristic velocity and angular
momentum include the particles? Or does this not matter?
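For concreteness, the combination step can be run standalone on made-up per-grid partial sums (the numbers below are fake and purely illustrative; `na` in the snippet above is numpy):

```python
import numpy as np

# Fake per-grid partial sums, standing in for what each grid or
# processor would report during the combination step.
j_mag = np.array([[1.0e68, 2.0e68, 0.5e68],   # per-grid (Lx, Ly, Lz) sums
                  [0.5e68, 1.0e68, 0.2e68]])
m_enc = np.array([2.0e10, 1.5e10])            # enclosed mass (gas + particles), Msun
e_term_pre = np.array([1.0e25, 0.8e25])       # per-grid sum(m * v^2)
weight = np.array([1.5e10, 1.0e10])           # baryon mass per grid, Msun

W = weight.sum()                                   # total baryon mass
M = m_enc.sum()                                    # total enclosed mass
J = np.sqrt(((j_mag.sum(axis=0))**2.0).sum()) / W  # mean specific ang. momentum
E = np.sqrt(e_term_pre.sum() / W)                  # characteristic velocity
G = 6.67e-8                                        # cm^3 g^-1 s^-2
spin = J * E / (M * 1.989e33 * G)
```

Note that the (Lx, Ly, Lz) components are summed across grids *before* taking the magnitude, so cancellation between grids is handled correctly.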
-Matt
All,
I know Matt has delayed formally releasing and promoting 1.5 due to his graduation, but I'm wondering if we should start documenting some of the new things in 1.5. For example, I think I might have a use for Britton's halo profiling tools right now, but there's no documentation on how to use them (although I'm sure I could figure it out myself, I'm making a point here). Similarly, I have knowledge to contribute about running parallel HOP which isn't documented anywhere, and Dave has added some scripts of his own for yt which I'm aware of but don't know anything about.
I don't know if I want to debate the pros and cons of Sphinx and the Trac wiki, but I am just wondering what people think we should be doing about this? 1.5 will be promoted eventually and these documents need to be written.
_______________________________________________________
sskory(a)physics.ucsd.edu o__ Stephen Skory
http://physics.ucsd.edu/~sskory/ _.>/ _Graduate Student
________________________________(_)_\(_)_______________
Hi Stephen,
I was wondering how parallel HOP is going? I've got some big datasets
I'd like to run it on, but I haven't heard if it's working as expected
or not. What's the current status? Are the results converged? Do we
have a good idea of how to pad the tiles?
I'll probably use this information to update ticket #163.
-Matt
Hi guys,
I've been working on a new branch:
http://svn.enzotools.org/yt/grid-optimization/
to pull down overhead on large hierarchies, specifically for Kraken.
If you could give it a checkout and run some simple analysis with it,
that'd be awesome -- right now it passes all the unit tests, so any
failures *you* run into should be made into *new* unit tests. It's
substantially faster and uses substantially less memory (40% less for the
L7 run), so I'd like to make sure it works, merge it back, and kill the
branch. It also features a fairly important fix to the Covering Grids.
-Matt