
Hi all,
If anyone has some time to check, I'd appreciate it if you could let me know if you can reproduce the memory leak described by Rixin Li in this issue:
https://github.com/yt-project/yt/issues/1776
To check, just do the following:
$ curl -JO http://use.yt/upload/33b7e323
$ tar xzvf yt_load_memory_leak_issue1776.tar.gz
$ cd yt_load_memory_leak_issue1776
$ python encapsulated_yt.py
(the tarball is only half a megabyte)
On my machine that script has a peak memory usage of roughly 180 MB, and the garbage collector is able to destroy the dataset objects that get created. However, Rixin reports memory usage that increases linearly with time, which suggests that the dataset objects, or something they create, cannot be garbage collected.
Unfortunately, it's difficult to debug issues like this remotely, so it would be very helpful to find someone else who can trigger the memory leak.
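For anyone instrumenting this locally, a weakref check is a quick way to confirm whether the objects really are collected. Here is a minimal sketch; the `Dataset` class below is just a stand-in for whatever the script creates in its loop, not yt's actual API:

```python
import gc
import weakref

class Dataset:
    """Placeholder for a yt dataset; any Python object works the same way."""
    pass

refs = []
for _ in range(5):
    ds = Dataset()
    refs.append(weakref.ref(ds))
    del ds          # drop the strong reference, as the script's loop does
    gc.collect()    # force a collection so reference cycles are cleaned up too

# If every weakref is now dead, the objects were garbage collected.
leaked = [r for r in refs if r() is not None]
print(f"{len(leaked)} of {len(refs)} objects still alive")
```

On a machine that reproduces the leak, some of the weakrefs should stay alive after `gc.collect()`, which would also let you inspect `gc.get_referrers()` on the surviving objects to see what is holding them.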
Thanks for your help,
Nathan

Hi Nathan,
I tried this out a little blindly, but I don't think I can reproduce this memory leak either.
I have run the commands you shared; I got the following, just in case:
$ python encapsulated_yt.py
/home/marianne/miniconda3/envs/yt/lib/python3.6/site-packages/yt/frontends/athena/data_structures.py:521: RuntimeWarning: invalid value encountered in true_divide
  self.domain_dimensions = np.round(self.domain_width/grid['dds']).astype('int32')
I monitored memory usage manually (by eye) in `htop`; it went up a little and then leveled off. There was no linear increase.
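As an alternative to eyeballing `htop`, the standard library can report memory usage programmatically. A sketch with `tracemalloc` (note the caveat: it only sees allocations made through Python's allocator, so buffers allocated directly by C extensions may not be counted; the workload below is just a stand-in for the script's load loop):

```python
import tracemalloc

tracemalloc.start()

# Stand-in workload; replace with the dataset-loading loop from the script.
data = [list(range(1000)) for _ in range(100)]

current, peak = tracemalloc.get_traced_memory()
print(f"current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")

tracemalloc.stop()
```

For process-wide peak RSS (which does include extension-module memory), `resource.getrusage(resource.RUSAGE_SELF).ru_maxrss` works on Unix-like systems.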
Here is my setup info:
Ubuntu 16.04.4 LTS
python=3.6.2
yt=3.4.1 (running in a conda env)
Best, Marianne
On Mon, May 7, 2018 at 11:07 AM, Nathan Goldbaum <nathan12343@gmail.com> wrote:
yt-dev mailing list -- yt-dev@python.org
To unsubscribe send an email to yt-dev-leave@python.org

Hi Nathan,
I've tried it, and my peak memory usage is ~200 MB, so the GC is working on my machine.
Here is my setup info:
macOS 10.13.4
python=3.6.2
yt=3.4.1
Best, Bili
On Mon, May 7, 2018 at 9:38 AM, Marianne Corvellec <marianne.corvellec@gmail.com> wrote:

I just ran this. The peak memory usage was smaller than Chrome's or Slack's, so it didn't really register for me :)
python 3.6.5
yt 3.5.dev0
Fedora 28 / gcc 8.0.1
On Mon, May 7, 2018 at 1:23 PM, Bili Dong <qobilidop@gmail.com> wrote:
participants (4)
- Bili Dong
- Marianne Corvellec
- Michael Zingale
- Nathan Goldbaum