If anyone has some time to check, I'd appreciate it if you could let me know if you can reproduce the memory leak described by Rixin Li in this issue:
To check, just do the following:
$ curl -JO http://use.yt/upload/33b7e323
$ tar xzvf yt_load_memory_leak_issue1776.tar.gz
$ cd yt_load_memory_leak_issue1776
$ python encapsulated_yt.py
(the tarball is only half a megabyte)
On my machine that script has a peak memory usage of roughly 180 MB, and the garbage collector is able to destroy the dataset objects that get created. However, Rixin reports memory usage that increases linearly with time, which seems to indicate that the dataset objects, or some objects they create, cannot be garbage collected.
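If it helps anyone digging into this, here is a minimal sketch of the kind of check I mean: use a weakref to see whether an object is actually freed once the last strong reference is dropped. Note this is not yt-specific; the `Dataset` class below is just a stand-in for whatever `yt.load` returns.

```python
import gc
import weakref

class Dataset:
    """Stand-in for a yt dataset; a real check would hold a weakref
    to the object returned by yt.load()."""
    pass

def load_and_release():
    ds = Dataset()
    ref = weakref.ref(ds)
    del ds        # drop the only strong reference
    gc.collect()  # force a full collection pass
    # True means the object was freed; False would indicate
    # something is still holding a reference (i.e. a leak).
    return ref() is None

print(load_and_release())  # prints True on a leak-free run
```

If the weakref stays alive after `gc.collect()`, tools like `gc.get_referrers()` or `objgraph` can help find what is still holding on to the dataset.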
Unfortunately it's difficult to remotely debug issues like this so it would be very nice to find someone else who can trigger the memory leak.
Thanks for your help,