Just FYI, if you need fast kd-trees in C with a Python interface,
scipy.spatial has them; additionally, scipy.spatial is largely
separable from scipy as a whole, so any compilation problems with scipy
will likely not apply to spatial. (
http://docs.scipy.org/doc/scipy/reference/spatial.html ) This could be
extremely useful for a number of problems with generating subsets of
data in yt, clustering algorithms, and ray tracing algorithms.
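As a quick illustration of what scipy.spatial provides, here is a minimal sketch of building and querying a cKDTree; the array shapes and data are made up for the example, not taken from any yt data structure.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical particle positions in the unit cube
points = np.random.random((1000, 3))
tree = cKDTree(points)

# Nearest-neighbor distances and indices for a handful of query points;
# k=2 returns each point itself (distance 0) plus its nearest other point.
dists, idx = tree.query(points[:10], k=2)
```

Queries like this are the building block for the subset-generation and clustering uses mentioned above.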
Sent to you by Matt via Google Reader: "Implementation of a parallel
cKDTree", via gmane.comp.python.scientific.user, by Sturla Molden on
2/27/09:

I have fiddled a bit with scipy.spatial.cKDTree for better
performance on multicore CPUs. I have used threading.Thread instead of
OpenMP, so no special compilation or compiler is required. The number
of threads defaults to the number of processors if it can be
determined. The performance is not much different from what I get with
OpenMP. It is faster than using cKDTree with multiprocessing and shared
memory. Memory handling is also improved. There are checks for NULL
pointers returned by malloc or realloc. setjmp/longjmp is used for
error handling if malloc or realloc fail. A memory pool is used to make
sure all complex data structures are cleaned up properly. I have
assumed that the CRT functions malloc, realloc and free are thread
safe. This is usually the case. If they are not, they must be wrapped
with calls to PyGILState_Ensure and PyGILState_Release. I have not done
this, as it could impair scalability.

Regards, Sturla Molden
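Sturla's patch does its threading inside cKDTree's C code, which is not shown here. Purely as an illustration of the threading.Thread idea, the sketch below splits a batch of queries across worker threads at the Python level; whether this actually scales depends on the extension releasing the GIL during the query, which is exactly what the patch arranges.

```python
import threading
import numpy as np
from scipy.spatial import cKDTree

points = np.random.random((4000, 3))
tree = cKDTree(points)

def query_chunk(chunk, out, i):
    # Each thread queries its own slice of the points
    out[i] = tree.query(chunk)  # returns a (distances, indices) tuple

nthreads = 4
chunks = np.array_split(points, nthreads)
results = [None] * nthreads
threads = [threading.Thread(target=query_chunk, args=(c, results, i))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Stitch the per-thread distance arrays back together in order
dists = np.concatenate([r[0] for r in results])
```

Since we query the tree's own points, every nearest-neighbor distance comes back zero, which makes this easy to sanity-check.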
_______________________________________________
SciPy-user mailing list
SciPy-user@scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user
Could you summarize quickly for us the following two points? I want
-- very much -- to eliminate the SS_HopOutput.py file and place its
contents in the old location, but we *cannot* do so until these are
answered satisfactorily. You are the one in the best position to
answer them.
1. What is the status of determining, automatically, the necessary
padding? I have suggested that this could be some small multiple of
the root grid dx, but I don't recall hearing back from you. (I
believe you have been using on the order of 6 root grid cells as
padding.)
2. Are there any outstanding bugs in SS_HopOutput that you have found
that would prevent it from replacing the old one and working in both
serial and
parallel? (I have found none.)
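The heuristic suggested in point 1 is simple enough to sketch; the variable names below are generic placeholders, not yt's actual attributes.

```python
# Hedged sketch of the suggested heuristic: padding as a small multiple
# of the root grid cell width.  Names here are illustrative only.
domain_width = 1.0        # domain extent in code units (assumed)
root_dimensions = 128     # root grid cells per edge (assumed)
root_dx = domain_width / root_dimensions

n_cells = 6               # "on the order of 6 root grid cells"
padding = n_cells * root_dx
```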
Thanks very much!
I'm officially declaring open season on the yt wiki. Please feel free
to make any changes or additions you think are appropriate --
sometimes I forget that things are out of date, or wrong, or whatever.
So please feel free to modify or change things, make them simpler,
(And go ahead and add anything you think needs to be added!)
I'm making a bunch of plots with a grayscale colormap. When I set the
zlim, things get rescaled properly, but the colors on the colorbar are
RGB, not grayscale.
I noticed by accident that the following works:
pf = lagos.TenPCStaticOutput('/lustre/scratch/collins/128_amr/OK1/DD2030/data2030')
pc = raven.PlotCollection(pf)
because of the first 'save' call. Removing it yields an RGB
colorbar. What does save do that forces the colorbar map?
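I can't speak to what raven's save does internally, but for comparison, here is how plain matplotlib pins both an image and its colorbar to a grayscale map; everything in this snippet is standard matplotlib, not yt code.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

data = np.random.random((64, 64))
fig, ax = plt.subplots()
# cmap="gray" with explicit limits; the colorbar built from this image
# inherits the same grayscale map.
im = ax.imshow(data, cmap="gray", vmin=0.0, vmax=1.0)
fig.colorbar(im)
fig.savefig("gray_example.png")
```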
Hi guys, I'm moving this to -dev.
This should be in the halos themselves; unfortunately we want to avoid
double-communicating, both between procs and into the file system. I
will work on that, using Stephen's script as a template.
However, I've also set up a simple little way to add things to wrap,
to get rid of the get_indices, get_velocities, etc. functions. I think
we can use this to avoid any getter/setter methods.
Stephen, this is just a simple little mod, but if you could test that
this appropriately wraps the __getitem__ method, that'd be awesome.
Then we can ditch the getters.
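The actual mod lives in the yt source and isn't shown here, but the general shape of replacing get_X()-style getters with __getitem__ dispatch looks something like the sketch below; the class and field names are hypothetical, not yt's.

```python
class HaloProxy:
    """Illustrative stand-in: one __getitem__ instead of many getters."""

    def __init__(self, data):
        # e.g. {"indices": ..., "velocities": ...}
        self._data = data

    def __getitem__(self, field):
        # Single access path replacing get_indices(), get_velocities(), ...
        try:
            return self._data[field]
        except KeyError:
            raise KeyError("unknown field: %s" % field)

halo = HaloProxy({"indices": [1, 2, 3]})
```

Testing that halo["indices"] round-trips (and that a bogus field raises KeyError) is the kind of check that would confirm the wrapping works.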
On Fri, Feb 13, 2009 at 10:27 AM, Stephen Skory <stephenskory(a)yahoo.com> wrote:
>> If you want to try it again, you might
>> consider manually setting (and then unsetting) the _processing
>> attribute of your HopGroup to True; this should set it to turn off the
> To follow up, your suggestion works, and I've pasted the full script below for everyone's edification.
> Please note that I haven't fixed some things in the version I pasted, in particular "grouped_particles" will be incorrect.
Currently, I have yt checkouts on several different machines, and in
these I have been having some issues with keeping things in sync. For
a while I was using bzr with an svn plugin to manage this, but in the
last couple of days I've been having problems with it, and it has
generally been unreliable lately.
However, mercurial is substantially more reliable -- and furthermore,
the existing (and easy_install-able) hgsvn is much less fancy in what
it seeks to do. I have created a mercurial mirror on bitbucket.org.
I'll be using this as my primary dumping point for experimental and
possibly non-functional stuff. To merge changes back upstream, I will
be using the workflow outlined here:
Basically you check out using the special hgsvn command, and then
branch all you like, etc etc, and then update and then merge into a
*separate* subversion repository using a patch command. This is
ideal, I think, as it requires no additional metadata.
However, the reason I'm bringing this up is that I'll be doing all of
this on bitbucket.org, which is designed to foster and encourage
social software development. It includes things like wikis in the
checkout, issue tracking, and *extremely* simple forking and patching
between users. It's designed for collaborations, as well as passing
patches back and forth.
This will NOT replace the wiki, source control, or issue tracking on
yt.enzotools.org. Mercurial is FAR too niche and FAR too rapidly
developed for that. (Although I would like to note that one of the
coolest things about bitbucket is that you can get a tarball in the
flavor of your choice right on the main project page.) But for
experimental stuff, for feature hacking, for all of that, mercurial
and bitbucket are useful tools.
The things I hope to address here, in the BitBucket repo, would be things like:
* int math covering grid (almost done!)
* better vertex-centering of data (trickier)
* GUI re-working
and so on and so forth.
Anyway, if this sort of thing appeals to you -- repositories of your
changes that you can pass around more easily, scripts and whatnot -- I
encourage you to go to BitBucket, sign up with an OpenID account (I
used my GMail account) and then "fork" yt. (There's a button for
that.) If you make changes you think I should have -- and vice versa!
-- clicking on the "pull request" button helps communicate them back
and forth. I think this will lead to more experimentation and
possibly a better end product.
I have implemented the integer-based covering grid. It's not really
much faster; the interpolated one should be, but the primary one is
not. (This was disappointing to me.)
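The core int-math idea can be sketched briefly: convert floating-point cell edges into global integer indices at a fixed level, so that overlap tests between grids become exact integer comparisons. The variable names below are illustrative, not yt's covering-grid code.

```python
import numpy as np

level = 2
root_dims = np.array([32, 32, 32])
dims_at_level = root_dims * 2**level
dx = 1.0 / dims_at_level[0]  # cell width at this level (unit domain assumed)

left_edge = np.array([0.25, 0.5, 0.125])
# Integer index of the cell whose edge sits at left_edge; rint guards
# against float round-off for edges exactly on cell boundaries.
left_index = np.rint(left_edge / dx).astype(np.int64)
```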
Anyway, this exposed a slight bug in the covering grid generation of
ghost zones. I have attached an example plot. Upper left is the new,
upper right is the old, lower left is the relative difference.
It looks to me like an off-by-one in the grid values. This should not
change substantially any of the calculations, as it still gives
*correct* values, so contouring and whatnot should still work exactly
correctly. We just get a little bit of bonus data on one side and a
little less on the other. It's *possible* that if your contour join
was exclusively in the ghost zone of a single grid that you now have
too many; this seems kind of unlikely to me, but it still should be
addressed. I'm going to fix it, but probably by inserting the new
routine rather than by fixing the old one. The only thing remaining
(you can see for yourself at my bzr repo:
http://bzr.enzotools.org/integer_cgrid/ ) is to handle periodic
domains and smoothing. For smoothing I will be implementing the
(int-math-friendly) algorithm from Ralf Kaehler.