Memory leaks and volume rendering

Hi all,

I'm volume rendering a ~1 GB dataset on an 8-CPU machine with 32 GB of RAM. I keep exhausting the memory on the machine with the following script, which creates a time series object and runs through the pfs to create each image. Is there other memory I should be freeing up at each iteration of the loop?

http://paste.yt-project.org/show/2137/

Best,
John
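The script itself is at the paste link above; the general pattern being described looks roughly like the sketch below, in which the glob pattern, field, transfer-function bounds, and camera parameters are all placeholders (assuming the yt 2.x TimeSeriesData and camera API):

    import glob
    from yt.mods import *

    # Build a time series from all the parameter files (placeholder glob).
    fns = sorted(glob.glob("DD*/DD*.hierarchy"))
    ts = TimeSeriesData.from_filenames(fns)

    # Placeholder transfer function over an assumed log10(Density) range.
    tf = ColorTransferFunction((-28.0, -24.0))
    tf.add_layers(4)

    for i, pf in enumerate(ts):
        c = pf.domain_center            # camera focus
        L = [0.5, 0.5, 0.5]             # viewing direction
        W = 1.0                         # image width in code units
        cam = pf.h.camera(c, L, W, 512, tf, fields=["Density"])
        image = cam.snapshot("frame_%04i.png" % i)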

Hi John,

I've found that calling the garbage collector via gc.collect() at the end of a loop over filenames fixes issues like this. Probably worth a try, anyway.

-Nathan
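In sketch form (render_one here is a hypothetical helper standing in for the loop body):

    import gc

    filenames = ["DD0000/DD0000", "DD0001/DD0001"]   # placeholder paths

    for fn in filenames:
        render_one(fn)   # hypothetical helper: load fn, render, save image
        gc.collect()     # force a full garbage-collection pass each iteration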

Hi John,

Additionally, adding 'del cam, image' at the end of each loop should help considerably. The camera object contains the homogenized volume, which is roughly equal in size to the entire dataset for one field, so I'm guessing that is where most of it is coming from.

Let us know if that doesn't help.

Sam
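Combined with the gc.collect() suggestion, the end of the loop body would look something like this (a sketch only; ts and tf are the time series and transfer function from the first sketch above):

    import gc

    for i, pf in enumerate(ts):
        cam = pf.h.camera(pf.domain_center, [0.5, 0.5, 0.5], 1.0, 512, tf)
        image = cam.snapshot("frame_%04i.png" % i)
        del cam, image   # release the camera and its homogenized volume
        gc.collect()     # then sweep up whatever those references held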

Hi guys,

Neither of these seemed to help (which surprised me a little); it ran out of memory at the same point as before.

Best,
John

Hi John,

The nuclear option is to call a separate Python interpreter to evaluate the interior of your loop. The OS generally does a much better job of cleaning up after Python than the interpreter itself. You can do this relatively straightforwardly via the subprocess module, passing parameters as command-line arguments via the argparse module.

Also, is it possible that the file you are trying to volume render is too big? Can you do one iteration of the loop for just that plotfile?

-Nathan
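A sketch of that approach, split into two files, a driver and a per-plotfile worker (render_one.py is a hypothetical script name; the camera setup inside it is elided):

    # driver.py -- one fresh interpreter per plotfile, so the OS reclaims
    # everything when each child process exits
    import glob
    import subprocess

    for fn in sorted(glob.glob("DD*/DD*.hierarchy")):   # placeholder glob
        subprocess.check_call(["python", "render_one.py", fn])

    # render_one.py -- renders the single plotfile passed on the command line
    import argparse
    from yt.mods import load

    parser = argparse.ArgumentParser()
    parser.add_argument("plotfile")
    args = parser.parse_args()

    pf = load(args.plotfile)
    # ... build the camera and call cam.snapshot(...) as in the original loop ...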

Hi all,

Sorry for the short email (I am doing my best to take the weekend off), but I think what's happening is that the time series is initialized by loading all the pfs and retaining references to them inside ts.outputs. So no amount of deleting them is going to work, and it's also possible that the grids all retain their data. (At the very minimum, the hierarchy will still exist for them.)

I'll take a look at this on Monday and come up with a "free when I'm done with it" option, but you can probably work around it for now by iterating just over the file names and loading those one by one.

-Matt
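In sketch form, the workaround skips the time series object entirely (the glob pattern and camera setup are placeholders again):

    import gc
    import glob
    from yt.mods import load

    for i, fn in enumerate(sorted(glob.glob("DD*/DD*.hierarchy"))):
        pf = load(fn)    # load one pf directly; nothing like ts.outputs
                         # holds a second reference to it
        # ... build the camera and write frame_%04i.png as before ...
        del pf           # drop the only remaining reference
        gc.collect()     # and collect before loading the next one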

Thanks to everyone for the help. The files are not very big, so I think this is an issue of memory not being freed up as it should be, as per Matt's suggestion. I'm rewriting the script as he suggested and will then try that.

John

Sorry to butt in here, but could this be related to the same issue I had when trying to iterate over many different subvolumes for the halo profiler in a loop, where I saw the peak memory grow with each iteration? The behavior described seems awfully familiar to me.

The way I got around the problem was to start a new Python instance for each subvolume from bash, passing in the subvolume number via sys.argv. In bash I have:

    for j in {50..50}
    do
        mpirun -n 256 python <script name>.py $j --parallel 2>&1 | tee `printf %i $j`progress.txt
    done

The above will analyze piece number 50 of my data. In the yt script I have:

    import sys
    inputsv = int(sys.argv[1])

But of course this is a temporary fix of the symptoms, not the cure, since I wasn't able to narrow the problem down further.

From G.S.
participants (5)
- Geoffrey So
- John ZuHone
- Matthew Turk
- Nathan Goldbaum
- Sam Skillman