
Hi all,

I just wanted to announce that the new kd-Tree rendering framework is now in the 'yt' branch of the repository. There are a couple of things I wanted to point to if you are interested:

The changeset itself: http://yt.enzotools.org/changeset/c7947fef16ac/
A post on blog.enzotools.org highlighting some recent successes: http://blog.enzotools.org/amr-kd-tree-rendering-added-to-yt
A simple script, where you should just have to change the parameter file name: http://paste.enzotools.org/show/1367/
A more advanced script that exposes a few new options: http://paste.enzotools.org/show/1368/

Both of these scripts should run transparently in parallel (as long as N is a power of 2 for now) with:

mpirun -np N python script.py --parallel

Parallel performance will depend on the structure of your data, but the docs for the Camera object have some suggestions. If you find any problems or have any thoughts, let me know!

Best,
Sam
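For orientation, here is a minimal sketch of what such a script looks like in the yt 2.x style. This is not the linked paste itself: the parameter file name, field, bounds, and camera settings below are placeholders, and the call signatures may differ slightly between yt revisions.

    from yt.mods import *
    import numpy as na

    # Load the dataset -- replace this with your own parameter file.
    pf = load("DD0010/DD0010")
    field = "Density"

    # Pick transfer function bounds from the (log) field extrema.
    dd = pf.h.all_data()
    mi, ma = na.log10(dd.quantities["Extrema"](field)[0])

    # A simple layered transfer function over that range.
    tf = ColorTransferFunction((mi, ma))
    tf.add_layers(4, w=0.01)

    c = [0.5, 0.5, 0.5]   # camera center (code units; assumes a unit box)
    L = [0.5, 0.2, 0.7]   # line-of-sight vector
    W = 1.0               # width of the view in code units
    N = 512               # image resolution (N x N pixels)

    cam = pf.h.camera(c, L, W, N, tf, fields=[field])
    image = cam.snapshot("my_rendering.png")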

Hi Sam,

Great work! I'm really happy to see this make it into the primary trunk.

I'd like to encourage people to try this out, particularly on large datasets, and write to the list or Sam if you run into problems. This is a big increase in functionality, and everyone wants to make sure it works out alright.

I've been using the volume rendering capabilities of yt quite extensively, in kind of an unconventional way, to calculate off-axis average values, and I'm very excited about the performance improvements that this new subsystem will bring.

Congrats, Sam!

-Matt

Hi all,

I recently did some volume renders of a 50 Mpc box unigrid simulation with 1024^3 grid cells on kraken. I used exactly 64 cores and did not have to use fewer than the full number of cores available per node. I was making 1024^2 images that took roughly 5-10 seconds each to render; some 2048^2 images took around 30-40 seconds.

I was rendering baryon overdensity with a transfer function that had 2000 narrow gaussians. The number was high because I am combining this with a movie in which I render only one of those gaussians at a time and build the box up from low overdensity to high. I didn't go to a lower number of processors, so I'm not exactly sure at what point this would have run out of RAM.

I consider this an overwhelming success. I've attached some sample images, one with the full transfer function and a sample frame from the movie where I do them one at a time while spinning. Very, very nice job!

Britton
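For anyone curious how a transfer function like that gets built up, here is a rough sketch of the approach. This is not Britton's actual script: the parameter file, bounds, widths, colormap, and camera settings are made up for illustration, and the exact call signatures may vary between yt revisions.

    import matplotlib.cm
    import numpy as na
    from yt.mods import *

    pf = load("DD0100/DD0100")             # placeholder parameter file
    mi, ma = -1.0, 4.0                     # illustrative log10 field bounds
    n_gauss = 2000
    centers = na.linspace(mi, ma, n_gauss)
    width = 0.5 * (ma - mi) / n_gauss      # keep each gaussian narrow

    # Full transfer function: all the gaussians at once, colored by position.
    tf = ColorTransferFunction((mi, ma))
    for i, v in enumerate(centers):
        r, g, b, a = matplotlib.cm.jet(float(i) / (n_gauss - 1))
        tf.add_gaussian(v, width, [r, g, b, 0.05])

    cam = pf.h.camera([0.5, 0.5, 0.5], [0.5, 0.3, 0.8], 1.0, 1024, tf)
    cam.snapshot("full_tf.png")

    # Movie frames: one gaussian per frame, rotating a little between frames,
    # so the box assembles from low values to high while spinning.
    for i, v in enumerate(centers):
        frame_tf = ColorTransferFunction((mi, ma))
        r, g, b, a = matplotlib.cm.jet(float(i) / (n_gauss - 1))
        frame_tf.add_gaussian(v, width, [r, g, b, 0.05])
        cam.transfer_function = frame_tf
        cam.rotate(2.0 * na.pi / n_gauss)
        cam.snapshot("frame_%04d.png" % i)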

Hi again everyone,

For those who are interested, I just finished the final version of the movie I referred to in my post to this thread. I have uploaded it to the yt group on vimeo, and it will be available shortly here: http://vimeo.com/groups/ytgallery/videos/17095494

If you can't wait, I put it here as well: http://www.pa.msu.edu/people/britton/whim_movies/evolve_assemble.mp4

Thanks again to Matt and Sam for making this whole thing possible.

Britton

Pardon my insanity, everyone. I just uploaded a slightly better version of this movie to vimeo. It's here: http://vimeo.com/17100442

You can get it here as well: http://www.pa.msu.edu/people/britton/whim_movies/evolve_assemble.mp4

Or here for a higher resolution version: http://www.pa.msu.edu/people/britton/whim_movies/evolve_assemble_hd.mp4

Britton

Hi Britton+everyone,

I've been trying to make a similar movie/image of my data and am running into a problem. I used the basic script here, http://paste.enzotools.org/show/1367/, mentioned earlier in this thread, and it works up until I get to the cam.snapshot line, where I receive the error at the end of this email.

Here is my yt instinfo in case it matters:

The current version of the code is:
---
9ee32ec8db42 (yt) tip

Any ideas? Thanks,

-Mike

In [15]: image=cam.snapshot(fn='my_rendering.png')
Field table 0 corresponds to 0 (Weighted with -1)
Field table 1 corresponds to 0 (Weighted with -1)
Field table 2 corresponds to 0 (Weighted with -1)
Field table 3 corresponds to 0 (Weighted with -1)
Channel 0 corresponds to 0
Channel 1 corresponds to 1
Channel 2 corresponds to 2
Channel 3 corresponds to 4
Channel 4 corresponds to 4
Channel 5 corresponds to 4
yt INFO 2010-11-23 15:01:48,336 About to cast
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)

/home/butler85/yt-x86_64/src/yt-hg/scripts/iyt in <module>()
----> 1
      2
      3
      4
      5

/home/butler85/yt-x86_64/src/yt-hg/yt/visualization/volume_rendering/camera.pyc in snapshot(self, fn)
    328             self.volume.reset_cast()
    329             image = self.volume.kd_ray_cast(image, tfp, vector_plane,
--> 330                                             self.back_center, self.front_center)
    331         else:
    332             pbar = get_pbar("Ray casting",

/home/butler85/yt-x86_64/src/yt-hg/yt/utilities/amr_kdtree/amr_kdtree.pyc in kd_ray_cast(self, image, tfp, vector_plane, back_center, front_center)
    819         pbar = get_pbar("Ray casting",self.total_cost)
    820         total_cells = 0
--> 821         for brick in self.traverse(back_center, front_center, start_id):
    822             brick['brick'].cast_plane(tfp, vector_plane)
    823             total_cells += brick['cost']

/home/butler85/yt-x86_64/src/yt-hg/yt/utilities/amr_kdtree/amr_kdtree.pyc in traverse(self, back_center, front_center, start_id)
    698         while(True):
    699
--> 700             current_node = tree[current_id]
    701
    702             if head_node['cast_done'] is 1:

KeyError: -1

Hi Mike,

After talking off list, I've realized that this was due to my failure to make sure that the renderer works on a single-grid dataset. This should be fixed in 222589d1ed61. Let me know if that doesn't work.

Also, just to note here that parallelization of a single-grid dataset is not currently supported, because we don't break up a single grid into multiple bricks. If anyone would really like to see this, please email me off list and we can put it in the new feature requests.

I'm also going to note here that I've been having issues with parallel analysis in general with the new config file stuff, so those of you doing that may want to hold off on updating the 'yt' branch for a few updates. I'll bring it up again on a new thread.

Best, and sorry for the trouble,
Sam
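For what it's worth, a quick way to tell whether you are in that single-grid case before reaching for --parallel is just to count the grids in the hierarchy. This is a sketch against the yt 2.x interface, and the parameter file name is a placeholder:

    from yt.mods import *

    pf = load("DD0010/DD0010")          # your parameter file here
    n_grids = len(pf.h.grids)           # number of grids in the hierarchy
    print "number of grids:", n_grids   # 1 means the kd-tree gets only a single brick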
participants (4)
- Britton Smith
- Matthew Turk
- Mike Butler
- Sam Skillman