[Numpy-discussion] Memory leak?
jtaylor.debian at googlemail.com
Fri Jan 31 12:31:28 EST 2014
On 31.01.2014 18:12, Nathaniel Smith wrote:
> On Fri, Jan 31, 2014 at 4:29 PM, Benjamin Root <ben.root at ou.edu> wrote:
>> Just to chime in here about the SciPy Superpack... this distribution tracks
>> the master branch of many projects, and then puts out releases, on the
>> assumption that master contains pristine code, I guess. I have gone down
>> strange rabbit holes thinking that a particular bug was fixed already and
>> the user telling me a version number that would confirm that, only to
>> discover that the superpack actually packaged matplotlib about a month prior
>> to releasing a version.
>> I will not comment on how good or bad of an idea it is for the Superpack to
>> do that, but I just wanted to make other developers aware of this to keep
>> them from falling down the same rabbit hole.
> Wow, that is good to know. Esp. since the web page:
> simply advertises that it gives you things like numpy 1.9 and scipy
> 0.14, which don't exist. (With some note about dev versions buried in
> prose a few sentences later.)
> Empirically, development versions of numpy have always contained bugs,
> regressions, and compatibility breaks that were fixed in the released
> version; and we make absolutely no guarantees about compatibility
> between dev versions and any release versions. And it sort of has to
> be that way for us to be able to make progress. But if too many people
> start using dev versions for daily use, then we and downstream
> dependencies will have to start adding compatibility hacks and stuff
> to support those dev versions. Which would be a nightmare for
> developers and users both.
> Recommending this build for daily use by non-developers strikes me as
> dangerous for both users and the wider ecosystem.
While probably not good for the users, I think it's very good for us.
This is the second bug of mine that Superpack users have found.
This one might have gone unnoticed into the next release, as it is
pretty much impossible to find via tests. Even in valgrind reports it
is hard to spot, since it gets lumped in with the hundreds of reports
from Python's own memory arenas.
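As a rough illustration of why this kind of leak is hard to catch in a
test suite, here is a crude RSS-based heuristic. Everything here is an
assumption for illustration (the function names, the iteration count,
the threshold); it is not anything from NumPy's test suite, and the
resource module is Unix-only:

```python
import resource

def rss_kib():
    """Peak resident set size in KiB (on Linux, ru_maxrss is in KiB)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def leaks_memory(fn, iterations=200_000, threshold_kib=10_000):
    """Crude heuristic: call fn many times and check whether peak RSS
    grew by more than an arbitrary threshold. Noisy by construction --
    allocator arenas, caches, and GC all move RSS around, which is
    exactly why a leak of a few small scalar objects per call is so
    hard to assert on reliably."""
    before = rss_kib()
    for _ in range(iterations):
        fn()
    return rss_kib() - before > threshold_kib

# On an affected build one would try something like:
#   leaks_memory(lambda: np.float64(1) + 1)
# but the threshold needed to avoid false positives also hides
# small real leaks.
```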
Concerning the fix: it seems that if Python sees tp_free ==
PyObject_Del/PyObject_Free, it replaces it with the tp_free of the base
type, which is int_free in this case. int_free uses a special free-list
allocator for even lower overhead, so we start leaking.
We either need to find the right flag to set for our scalars so it
stops doing that, add an indirection so the function pointers don't
match, or stop using the object allocator, as we are apparently digging
too deep into Python's internal implementation details by doing so.