No longer enable Py_TRACE_REFS by default in debug build

Hi,

When Python is built in debug mode, PyObject gets two new fields: _ob_prev and _ob_next. These fields change the offset of the following fields in the PyObject structure and so break the ABI. I propose to modify the debug build (Py_DEBUG) to no longer imply Py_TRACE_REFS. Antoine Pitrou proposed this idea when the C API was discussed to get a stable ABI.

https://bugs.python.org/issue36465
https://github.com/python/cpython/pull/12615

This change makes the debug build ABI closer to the release build ABI, but I am not sure how to compare these two ABIs. Technically, C extensions still need to be recompiled.

What do you think?

--

I also wrote a PR to remove all the code related to Py_TRACE_REFS: https://github.com/python/cpython/pull/12614. I don't think that it's the right approach: I prefer to keep this special build around to see if anyone needs it, and wait one or two Python releases to decide what to do with it.

Victor
--
Night gathers, and now my watch begins. It shall not end until my death.
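[Editor's note] The Py_DEBUG vs. Py_TRACE_REFS distinction Victor describes can be probed from Python itself. This is a hedged sketch using the usual folklore checks, not an official API: sys.gettotalrefcount() is only compiled in under Py_DEBUG, and sys.getobjects() only under Py_TRACE_REFS (today the first implies the second; under the proposal it no longer would).

```python
import sys

def build_flavor():
    # sys.gettotalrefcount() exists only on Py_DEBUG builds;
    # sys.getobjects() exists only on Py_TRACE_REFS builds.
    py_debug = hasattr(sys, "gettotalrefcount")
    trace_refs = hasattr(sys, "getobjects")
    return py_debug, trace_refs

print(build_flavor())  # (False, False) on a typical release build
```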

On 09Apr2019 0925, Victor Stinner wrote:
This change makes the debug build ABI closer to the release build ABI, but I am not sure how to compare these two ABI. Technically, C extensions still need to be recompiled.
What do you think?
What are the other changes that would be required? And is there another way to get the same functionality without ABI modifications? I think it's worthwhile if we can really get to debug and non-debug builds being ABI compatible. Getting partway there in this case doesn't seem to offer any benefits. Cheers, Steve

Le mar. 9 avr. 2019 à 22:16, Steve Dower <steve.dower@python.org> a écrit :
What are the other changes that would be required?
I don't know.
And is there another way to get the same functionality without ABI modifications?
Py_TRACE_REFS is a doubly linked list of *all* Python objects. To get this functionality, you need to store the list somewhere. I don't know how to maintain such a list outside the PyObject structure. One solution would be to enable Py_TRACE_REFS in release mode. Does anyone want to add 16 bytes to every PyObject? I don't want that :-)
I think it's worthwhile if we can really get to debug and non-debug builds being ABI compatible. Getting partway there in this case doesn't seem to offer any benefits.
Disabling Py_TRACE_REFS by default in debug mode reduces the Python memory footprint. Py_TRACE_REFS costs 2 pointers per PyObject: 16 bytes on 64-bit platforms.

I don't think that I ever used sys.getobjects(), whereas many projects use gc.get_objects(), which is also available in release builds (not only in debug builds). I'm quite sure that almost nobody uses debug builds because the ABI is incompatible.

The main question is whether anyone ever used Py_TRACE_REFS. Does someone use sys.getobjects() or the PYTHONDUMPREFS environment variable? Using PYTHONDUMPREFS=1 on a debug build (with Py_TRACE_REFS) simply crashes Python 3.7 at exit, so I don't think that anyone uses it :-)

I wrote PR 12614 to remove all code related to Py_TRACE_REFS. I wrote it to see which code depends on it:

commit 63509498761a0e7f72585a8cd7df325ea2abd1b2 (HEAD -> remove_trace_refs, origin/remove_trace_refs)
Author: Victor Stinner <vstinner@redhat.com>
Date: Thu Mar 28 23:26:58 2019 +0100

    WIP: bpo-36465: Remove Py_TRACE_REFS special build

    Remove _ob_prev and _ob_next fields of PyObject when Python is
    compiled in debug mode to make the debug ABI closer to the release
    ABI.

    Remove:
    * sys.getobjects()
    * PYTHONDUMPREFS environment variable
    * _PyCoreConfig.dump_refs
    * PyObject._ob_prev and PyObject._ob_next fields
    * _PyObject_HEAD_EXTRA and _PyObject_EXTRA_INIT macros
    * _Py_AddToAllObjects()
    * _Py_PrintReferenceAddresses()
    * _Py_PrintReferences()

Victor
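[Editor's note] Victor's point that gc.get_objects() covers the common use case can be illustrated with a small sketch; top_types is a hypothetical helper, not part of any API. It tallies live GC-tracked objects by type, and works on release builds, unlike sys.getobjects():

```python
import gc
from collections import Counter

def top_types(n=5):
    # gc.get_objects() returns the objects currently tracked by the
    # garbage collector; no Py_TRACE_REFS build is needed.
    counts = Counter(type(o).__name__ for o in gc.get_objects())
    return counts.most_common(n)

print(top_types())
```

Comparing snapshots of such counts before and after a workload is the usual poor man's leak hunt on a release build.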

On 10Apr2019 0401, Victor Stinner wrote:
Le mar. 9 avr. 2019 à 22:16, Steve Dower <steve.dower@python.org> a écrit :
What are the other changes that would be required?
I don't know.
And is there another way to get the same functionality without ABI modifications?
Py_TRACE_REFS is a doubly linked list of *all* Python objects. To get this functionality, you need to store the list somewhere. I don't know how to maintain such a list outside the PyObject structure.
There's certainly no more convenient way to do it. Maybe if we had detached reference counts it would be easier, but it would likely still result in ABI compatibility issues between debug builds of extensions and release builds of Python (the most common scenario, in my experience).
One solution would be to enable Py_TRACE_REFS in release mode. Does anyone want to add 16 bytes to every PyObject? I don't want that :-)
Yeah, nobody suggested that anyway :)
I think it's worthwhile if we can really get to debug and non-debug builds being ABI compatible. Getting partway there in this case doesn't seem to offer any benefits.
Disabling Py_TRACE_REFS by default in debug mode reduces the Python memory footprint. Py_TRACE_REFS costs 2 pointers per PyObject: 16 bytes on 64-bit platforms.
Right, except it's debug mode.
I don't think that I ever used sys.getobjects(), whereas many projects use gc.get_objects() which is also available in release builds (not only in debug builds).
I'm quite sure that almost nobody uses debug builds because the ABI is incompatible.
There were just over 250,000 downloads of the prebuilt debug binaries for Windows (which are optional in the installer and turned off by default) in March. Whether they are being used is another question, but I know for sure at least a few people who use them. When you want to use a debug build of your extension module, using a debug build of CPython is the only way to do it. So unless we can get rid of *all* the ABI incompatibilities, a debug build of CPython is still going to be necessary and disabling/removing reference tracking doesn't provide any benefit.
The main question is whether anyone ever used Py_TRACE_REFS. Does someone use sys.getobjects() or the PYTHONDUMPREFS environment variable?
Using PYTHONDUMPREFS=1 on a debug build (with Py_TRACE_REFS) simply crashes Python 3.7 at exit. So I don't think that anyone uses it :-)
How do we track reference leaks in the buildbots? Can/should we be using this? It doesn't crash on Python 3.8, so I suspect fixing the bug is a better option than using it as an excuse to remove the feature. From a quick test, it seems that a tuple element is being freed but not removed from the tuple, so it's probably a double-decref bug somewhere in 3.7. Cheers, Steve

On 10Apr2019 1109, Steve Dower wrote:
On 10Apr2019 0401, Victor Stinner wrote:
I think it's worthwhile if we can really get to debug and non-debug builds being ABI compatible. Getting partway there in this case doesn't seem to offer any benefits.
Disabling Py_TRACE_REFS by default in debug mode reduces the Python memory footprint. Py_TRACE_REFS costs 2 pointers per PyObject: 16 bytes on 64-bit platforms.
Right, except it's debug mode.
I left this comment unfinished :) It's debug mode, and so you should expect less efficient memory and CPU usage. That's why we have two modes - so that it's easier to debug issues. Now, if debug mode was unusably slow or had way too much overhead, we'd want to fix that. But it isn't unusable, so reducing memory usage at the cost of making debugging harder is not compelling. Cheers, Steve

I recall finding memory leaks using this. (E.g. I remember a leak in Zope due to a cache that was never pruned.) But presumably gc.get_objects() would have been sufficient. (IIRC it didn't exist at the time.) On Wed, Apr 10, 2019 at 11:48 AM Steve Dower <steve.dower@python.org> wrote:
On 10Apr2019 1109, Steve Dower wrote:
On 10Apr2019 0401, Victor Stinner wrote:
I think it's worthwhile if we can really get to debug and non-debug builds being ABI compatible. Getting partway there in this case doesn't seem to offer any benefits.
Disabling Py_TRACE_REFS by default in debug mode reduces the Python memory footprint. Py_TRACE_REFS costs 2 pointers per PyObject: 16 bytes on 64-bit platforms.
Right, except it's debug mode.
I left this comment unfinished :)
It's debug mode, and so you should expect less efficient memory and CPU usage. That's why we have two modes - so that it's easier to debug issues.
Now, if debug mode was unusably slow or had way too much overhead, we'd want to fix that. But it isn't unusable, so reducing memory usage at the cost of making debugging harder is not compelling.
Cheers, Steve
--
--Guido van Rossum (python.org/~guido)
Pronouns: he/him/his

On 4/10/2019 2:45 PM, Steve Dower wrote:
It's debug mode, and so you should expect less efficient memory and CPU usage.
On my Windows machine, 'python -m test -ugui' takes about twice as long.
That's why we have two modes - so that it's easier to debug issues.
-- Terry Jan Reedy

Le mer. 10 avr. 2019 à 20:09, Steve Dower <steve.dower@python.org> a écrit :
The main question is whether anyone ever used Py_TRACE_REFS. Does someone use sys.getobjects() or the PYTHONDUMPREFS environment variable?
Using PYTHONDUMPREFS=1 on a debug build (with Py_TRACE_REFS) simply crashes Python 3.7 at exit. So I don't think that anyone uses it :-)
How do we track reference leaks in the buildbots? Can/should we be using this?
Ah, maybe there is a misunderstanding. You don't need Py_TRACE_REFS to track memory leaks: "python3 -m test -R 3:3" works without it. test_regrtest contains a unit test for reference leaks (I know because I wrote the test :-)), and you can see that the test passes on my PR. I also checked manually by adding a memory leak into a test: it is still detected :-)

regrtest uses sys.gettotalrefcount(), sys.getallocatedblocks() and support.fd_count() to track reference, memory and file descriptor leaks. None of these functions are related to Py_TRACE_REFS.

Again, the question is who relies on Py_TRACE_REFS. If nobody relies on it, I don't see the point of keeping this expensive feature (at least, not by default).
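[Editor's note] The regrtest approach Victor describes can be sketched roughly as follows. This is a simplification, not regrtest's actual code, and the fallback to sys.getallocatedblocks() is an assumption for release builds, where sys.gettotalrefcount() does not exist:

```python
import gc
import sys

def leak_delta(func, runs=5):
    # Run func several times and report how a global counter moved,
    # in the spirit of "python3 -m test -R 3:3".
    counter = getattr(sys, "gettotalrefcount", sys.getallocatedblocks)
    func()              # warm-up run to populate caches
    gc.collect()
    before = counter()
    for _ in range(runs):
        func()
    gc.collect()
    return counter() - before

cache = []
# A deliberately leaky function: each run keeps one object alive.
print(leak_delta(lambda: cache.append(object())))
```

A delta that keeps growing with the number of runs is the leak signal; small constant noise from interpreter caches is expected.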
It doesn't crash on Python 3.8, so I suspect fixing the bug is a better option than using it as an excuse to remove the feature.
It's not what I said. I only said that it seems that nobody uses PYTHONDUMPREFS, since it has been broken for a long time. It's just a hint about the usage of Py_TRACE_REFS. I don't propose to remove the feature, but to disable it by default.

Victor

On Wed, Apr 10, 2019, 04:04 Victor Stinner <vstinner@redhat.com> wrote:
Le mar. 9 avr. 2019 à 22:16, Steve Dower <steve.dower@python.org> a écrit :
What are the other changes that would be required?
I don't know.
And is there another way to get the same functionality without ABI modifications?
Py_TRACE_REFS is a double linked list of *all* Python objects. To get this functionality, you need to store the list somewhere. I don't know how to maintain such list outside the PyObject structure.
I assume these pointers get updated from some generic allocation/free code. Could that code instead overallocate by 16 bytes, use the first 16 bytes to hold the pointers, and then return the PyObject* as (actual allocated pointer + 16)? Basically the "container_of" trick.
I don't think that I ever used sys.getobjects(), whereas many projects use gc.get_objects() which is also available in release builds (not only in debug builds).
Can anyone explain what pydebug builds are... for? Confession: I've never used them myself, and don't know why I would want to. (I have to assume that most of Steve's Windows downloads are from folks who thought they were downloading a python debugger.) -n
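[Editor's note] Nathaniel's overallocation idea can be sketched with ctypes as a toy model; tracked_alloc and header_of are hypothetical helpers, and a real implementation would live in C inside the object allocator:

```python
import ctypes

# Room for the two hidden list pointers (_ob_next/_ob_prev): 16 bytes
# on a 64-bit platform.
HEADER = 2 * ctypes.sizeof(ctypes.c_void_p)

def tracked_alloc(size):
    # Overallocate by HEADER bytes; the caller only ever sees the
    # address just past the hidden prefix.
    raw = ctypes.create_string_buffer(HEADER + size)
    return ctypes.addressof(raw) + HEADER, raw  # keep raw alive!

def header_of(user_addr):
    # The "container_of" trick: step back over the hidden prefix to
    # recover the tracking header from a user-visible pointer.
    return user_addr - HEADER
```

The point of the trick is that the user-visible struct layout (and hence the ABI) stays identical whether or not the prefix exists.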

On Wed, Apr 10, 2019 at 12:30 PM Nathaniel Smith <njs@pobox.com> wrote:
On Wed, Apr 10, 2019, 04:04 Victor Stinner <vstinner@redhat.com> wrote:
Le mar. 9 avr. 2019 à 22:16, Steve Dower <steve.dower@python.org> a écrit :
What are the other changes that would be required?
I don't know.
And is there another way to get the same functionality without ABI modifications?
Py_TRACE_REFS is a double linked list of *all* Python objects. To get this functionality, you need to store the list somewhere. I don't know how to maintain such list outside the PyObject structure.
I assume these pointers get updated from some generic allocation/free code. Could that code instead overallocate by 16 bytes, use the first 16 bytes to hold the pointers, and then return the PyObject* as (actual allocated pointer + 16)? Basically the "container_of" trick.
I don't think that I ever used sys.getobjects(), whereas many projects use gc.get_objects() which is also available in release builds (not only in debug builds).
Can anyone explain what pydebug builds are... for? Confession: I've never used them myself, and don't know why I would want to.
There is a bunch of extra things done in a debug build, e.g. all freed memory is blanked out with a known pattern so it's easy to tell when you're reading from freed memory (and thus probably messed up your refcounts). And then various extras are tossed on to the sys module to help with things. Basically, anything people have found useful that requires being compiled in typically gets clumped in under the debug build. -Brett
(I have to assume that most of Steve's Windows downloads are from folks who thought they were downloading a python debugger.)
-n

Le mer. 10 avr. 2019 à 21:45, Brett Cannon <brett@python.org> a écrit :
Can anyone explain what pydebug builds are... for? Confession: I've never used them myself, and don't know why I would want to.
There is a bunch of extra things done in a debug build, e.g. all freed memory is blanked out with a known pattern so it's easy to tell when you're reading from freed memory (and thus probably messed up your refcounts).
Since the debug build ABI is incompatible, it's not easy to use a debug build. For that reason, I have been working for a few years to add such debugging features into the regular release build. For example, since Python 3.6 you can enable debug hooks on the memory allocators using the PYTHONMALLOC=debug environment variable.

Since such a debug feature is not easy to discover (especially if you don't read closely What's New In Python 3.x), I added a generic "-X dev" command line option to enable a "development mode". It enables various similar features to debug code: https://docs.python.org/dev/using/cmdline.html#id5

Effect of the developer mode:

* Add the default warning filter, as -W default.
* Install debug hooks on memory allocators: see the PyMem_SetupDebugHooks() C function.
* Enable the faulthandler module to dump the Python traceback on a crash.
* Enable asyncio debug mode.
* Set the dev_mode attribute of sys.flags to True.

See also https://pythondev.readthedocs.io/debug_tools.html where I started to document these debug tools and how to use them.
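[Editor's note] The dev_mode flag Victor mentions can be observed from a child interpreter; a minimal check, assuming Python 3.7+ for both sys.flags.dev_mode and subprocess.run(capture_output=...):

```python
import subprocess
import sys

# Run a child interpreter with "-X dev" and confirm that development
# mode is reported via sys.flags.dev_mode.
out = subprocess.run(
    [sys.executable, "-X", "dev", "-c",
     "import sys; print(sys.flags.dev_mode)"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # "True" on Python 3.7 and later
```

The same effect is available via the PYTHONDEVMODE=1 environment variable in later releases.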
And then various extras are tossed on to the sys module to help with things. Basically anything people have found useful and require being compiled in typically get clumped in under the debug build.
The debug build still contains many features which are useful to debug C extensions. For example, it adds sys.gettotalrefcount(), which is a convenient way to detect reference leaks. This function requires Py_REF_DEBUG, which modifies Py_INCREF() to add "_Py_RefTotal++;". It has an impact on overall Python performance and should not be enabled in release builds.

Victor

On 10Apr2019 1227, Nathaniel Smith wrote:
On Wed, Apr 10, 2019, 04:04 Victor Stinner <vstinner@redhat.com> wrote: I don't think that I ever used sys.getobjects(), whereas many projects use gc.get_objects() which is also available in release builds (not only in debug builds).
Can anyone explain what pydebug builds are... for? Confession: I've never used them myself, and don't know why I would want to.
(I have to assume that most of Steve's Windows downloads are from folks who thought they were downloading a python debugger.)
They're for debugging :)

In general, debug builds are meant for faster inner-loop development. They generally do incremental builds properly and much faster by omitting most optimisations, which also enables source mapping to be more accurate when debugging. Assertions are typically enabled so that you are notified when a precondition is first violated rather than when it causes the crash (compiling these out later means you don't pay a runtime cost once you've got the inputs correct - generally these are used for developer-controlled values, rather than user-provided ones).

So the idea is that you can quickly edit, build, debug, fix your code in a debug configuration, and then use a release configuration for the actual released build. Full release builds may take 2-3x longer than full debug builds, given the extra effort they make at optimisation, and very often can't do minimal incremental builds at all (so they may be 10-100x slower if you only modified one source file). But because the builds behave functionally equivalently, you can iterate with the faster configuration and get more done.

(Disclaimer: I do most of my work on Windows where this has been properly developed. What I hear from non-Windows developers is that other tools can't actually handle this kind of workflow properly. Sorry.)

The reason we ship debug Python binaries is because debug builds use a different C Runtime, so if you do a debug build of an extension module you're working on, it won't actually work with a non-debug build of CPython.

While it's possible that people misread "Download debug binaries" (the text in the installer) and think that it's an actual debugger, I'd suggest that your total lack of context here means you should avoid making assumptions about users you know nothing about.

Cheers, Steve

On Wed, Apr 10, 2019 at 1:50 PM Steve Dower <steve.dower@python.org> wrote:
On 10Apr2019 1227, Nathaniel Smith wrote:
On Wed, Apr 10, 2019, 04:04 Victor Stinner <vstinner@redhat.com> wrote: I don't think that I ever used sys.getobjects(), whereas many projects use gc.get_objects() which is also available in release builds (not only in debug builds).
Can anyone explain what pydebug builds are... for? Confession: I've never used them myself, and don't know why I would want to.
(I have to assume that most of Steve's Windows downloads are from folks who thought they were downloading a python debugger.)
They're for debugging :)
In general, debug builds are meant for faster inner-loop development. They generally do incremental builds properly and much faster by omitting most optimisations, which also enables source mapping to be more accurate when debugging. Assertions are typically enabled so that you are notified when a precondition is first identified rather than when it causes the crash (compiling these out later means you don't pay a runtime cost once you've got the inputs correct - generally these are used for developer-controlled values, rather than user-provided ones).
So the idea is that you can quickly edit, build, debug, fix your code in a debug configuration, and then use a release configuration for the actual released build. Full release builds may take 2-3x longer than full debug builds, given the extra effort they make at optimisation, and very often can't do minimal incremental builds at all (so they may be 10-100x slower if you only modified one source file). But because the builds behave functionally equivalently, you can iterate with the faster configuration and get more done.
Sure, I'm familiar with the idea of debug and optimization settings in compilers. I build python with custom -g and -O flags all the time. (I do it by setting OPT when running configure.) It's also handy that many Linux distros these days let you install debug metadata for all the binaries they ship – I've used that when debugging third-party extension modules, to get a better idea of what was happening when a backtrace passes through libpython.

But --with-pydebug is a whole other thing beyond that: it changes the ABI, has its own wheel tags, requires special cases in packages that use ctypes to access PyObject* internals, and appears to be almost entirely undocumented.

It sounds like --with-pydebug has accumulated a big grab bag of unrelated features, mostly stuff that was useful at some point for some CPython dev trying to debug CPython itself? It's clearly not designed with end users as the primary audience, given that no-one knows what it actually does and that it makes third-party extensions really awkward to run. If that's right then I think Victor's plan to sort through what it's actually doing makes a lot of sense, especially if we can remove the ABI-breaking stuff, since that causes a disproportionate amount of trouble.
The reason we ship debug Python binaries is because debug builds use a different C Runtime, so if you do a debug build of an extension module you're working on it won't actually work with a non-debug build of CPython.
...But this is an important point. I'd forgotten that MSVC has a habit of changing the entire C runtime when you turn on the compiler's debugging mode. (On Linux, we generally don't bother rebuilding the C runtime unless you're actually debugging the C runtime, and anyway if you do want to switch to a debug version of the C runtime, it's ABI compatible so your program binaries don't have to be rebuilt.)

Is it true that if the interpreter is built against ucrtd.lib, and an extension module is built against ucrt.lib, then they'll have incompatible ABIs and not work together? And that this detail is part of what's been glommed together into the "d" flag in the soabi tag on Windows?

Is it possible for the Windows installer to include PDB files (/Zi /DEBUG) to allow debuggers to understand the regular release executable? (That's what I would have expected to get if I checked a box labeled "Download debug binaries".)

-n
--
Nathaniel J. Smith -- https://vorpus.org

On 10Apr2019 1917, Nathaniel Smith wrote:
It sounds like --with-pydebug has accumulated a big grab bag of unrelated features, mostly stuff that was useful at some point for some CPython dev trying to debug CPython itself? It's clearly not designed with end users as the primary audience, given that no-one knows what it actually does and that it makes third-party extensions really awkward to run. If that's right then I think Victor's plan to sort through what it's actually doing makes a lot of sense, especially if we can remove the ABI-breaking stuff, since that causes a disproportionate amount of trouble.
Does it really cause a "disproportionate" amount of trouble? It's definitely not meant for anyone who isn't working on C code, whether in CPython, an extension or a host application. If you want to use third-party extensions and are not able to rebuild them, that's a very good sign that you probably shouldn't be on the debug build at all. Perhaps the "--with-pydebug" option is too attractive? (Is it the default?) That's easily fixed.
The reason we ship debug Python binaries is because debug builds use a different C Runtime, so if you do a debug build of an extension module you're working on it won't actually work with a non-debug build of CPython.
...But this is an important point. I'd forgotten that MSVC has a habit of changing the entire C runtime when you turn on the compiler's debugging mode.
Technically they are separate options, but most project files are configured such that *their* Debug/Release switch affects both the compiler options (optimization) and the linker options (C runtime linkage).
Is it true that if the interpreter is built against ucrtd.lib, and an extension module is built against ucrt.lib, then they'll have incompatible ABIs and not work together? And that this detail is part of what's been glommed together into the "d" flag in the soabi tag on Windows?
Yep, except it's not actually in the soabi tag, but it's the "_d" suffix on module/executable names.
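[Editor's note] The build-specific extension suffix Steve and Nathaniel are discussing is visible via sysconfig; a minimal check (the exact value is platform- and version-dependent, e.g. a "_d" marker on Windows debug builds versus something like ".cpython-38-x86_64-linux-gnu.so" on Linux):

```python
import sysconfig

# EXT_SUFFIX is the filename suffix the interpreter expects on
# extension modules; it encodes the ABI the module was built for.
suffix = sysconfig.get_config_var("EXT_SUFFIX")
print(suffix)
```

A debug extension whose filename carries the debug marker simply will not be found or loaded by a release interpreter, which is the import-machinery side of the ABI split.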
Is it possible for the Windows installer to include PDB files (/Zi /DEBUG) to allow debuggers to understand the regular release executable? (That's what I would have expected to get if I checked a box labeled "Download debug binaries".)
That box is immediately below one labelled "Download debug symbols", so hopefully seeing it in context would have set the right expectation. (And since I have them, there were 1.3 million downloads of the symbol packages via this option in March, but we also enable it by default via Visual Studio and that's responsible for about 1 million of those.) Cheers, Steve

On Thu, Apr 11, 2019 at 8:26 AM Steve Dower <steve.dower@python.org> wrote:
On 10Apr2019 1917, Nathaniel Smith wrote:
It sounds like --with-pydebug has accumulated a big grab bag of unrelated features, mostly stuff that was useful at some point for some CPython dev trying to debug CPython itself? It's clearly not designed with end users as the primary audience, given that no-one knows what it actually does and that it makes third-party extensions really awkward to run. If that's right then I think Victor's plan to sort through what it's actually doing makes a lot of sense, especially if we can remove the ABI-breaking stuff, since that causes a disproportionate amount of trouble.
Does it really cause a "disproportionate" amount of trouble? It's definitely not meant for anyone who isn't working on C code, whether in CPython, an extension or a host application. If you want to use third-party extensions and are not able to rebuild them, that's a very good sign that you probably shouldn't be on the debug build at all.
Well, here's what I mean by "disproportionate". Some of the costs of the ABI divergence are:

- The first time I had to debug a C extension, I wasted a bunch of time trying to figure out how I was supposed to use Debian's 'python-dbg' package (the --with-pydebug build), before eventually figuring out that it was a red herring and what I actually wanted was the -dbgsym package (their equivalent of MSVC's /Zi /DEBUG files).

- The extension loading machinery has extra code and complexity to track the two different ABIs. The package ecosystem does too, e.g. distutils needs to name extensions appropriately, and we need special wheel tags, and pip needs code to handle these tags: https://github.com/pypa/pip/blob/54b6a91405adc79cdb8a2954e9614d6860799ccb/sr...

- If you want some of the features of --with-pydebug that don't change the ABI, then you still have to rebuild third-party extensions to get at them, and that's a significant hassle. (I could do it if I had to, but my time has value.)

- Everyone who uses ctypes to access a PyObject* has to include some extra hacks to handle the difference between the regular and debug ABIs. There are a few different versions that get copy/pasted around as folklore, and they're all pretty obscure. For example:
  https://github.com/pallets/jinja/blob/fd89fed7456e755e33ba70674c41be5ab222e1...
  https://github.com/johndpope/sims4-ai-engine/blob/865212e841c716dc4364e0dba2...
  https://github.com/python-trio/trio/blob/862ced04e1f19287e098380ed8a0635004c...
  And then if you want to test this code, it means you have to add a --with-pydebug build to your CI infrastructure...

I don't know how many people use Py_TRACE_REFS, but if we can't find anyone on python-dev who uses it then it must be pretty rare. If dropping Py_TRACE_REFS would let us converge the ABIs and get rid of all the stuff above, then that seems like a pretty good trade! But maybe the Windows C runtime issue will foil this...
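[Editor's note] The ctypes hacks Nathaniel links to generally follow a pattern like this sketch (an approximation of the folklore, not code quoted from those projects): when the interpreter exposes sys.getobjects(), i.e. was built with Py_TRACE_REFS, the struct layout must prepend the two hidden pointer fields.

```python
import ctypes
import sys

# On Py_TRACE_REFS builds every PyObject starts with _ob_next/_ob_prev,
# shifting ob_refcnt and ob_type by two pointers.
_prefix = (
    [("_ob_next", ctypes.c_void_p), ("_ob_prev", ctypes.c_void_p)]
    if hasattr(sys, "getobjects")
    else []
)

class PyObjectStruct(ctypes.Structure):
    _fields_ = _prefix + [
        ("ob_refcnt", ctypes.c_ssize_t),
        ("ob_type", ctypes.c_void_p),
    ]

def raw_refcount(obj):
    # In CPython, id() is the object's address, so we can read
    # ob_refcnt straight out of memory at the computed offset.
    return PyObjectStruct.from_address(id(obj)).ob_refcnt
```

Getting the `_prefix` branch wrong silently reads the wrong memory, which is exactly why this ABI split needs a --with-pydebug build in CI to test both layouts.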
The reason we ship debug Python binaries is because debug builds use a different C Runtime, so if you do a debug build of an extension module you're working on it won't actually work with a non-debug build of CPython.
...But this is an important point. I'd forgotten that MSVC has a habit of changing the entire C runtime when you turn on the compiler's debugging mode.
Technically they are separate options, but most project files are configured such that *their* Debug/Release switch affects both the compiler options (optimization) and the linker options (C runtime linkage).
So how do other projects handle this? I guess historically the main target audience for Visual Studio was folks building monolithic apps, where you can just rebuild everything with whatever options you want, and compared to that Python extensions are messier. But Python isn't the only project in this boat. Do ruby, nodejs, R, etc., all provide separate debug builds with incompatible ABIs on Windows, and propagate that information throughout their module/package ecosystem? -n -- Nathaniel J. Smith -- https://vorpus.org

On 12Apr.2019 1643, Nathaniel Smith wrote:
On Thu, Apr 11, 2019 at 8:26 AM Steve Dower <steve.dower@python.org> wrote:
On 10Apr2019 1917, Nathaniel Smith wrote:
I don't know how many people use Py_TRACE_REFS, but if we can't find anyone on python-dev who uses it then it must be pretty rare. If dropping Py_TRACE_REFS would let us converge the ABIs and get rid of all the stuff above, then that seems like a pretty good trade! But maybe the Windows C runtime issue will foil this...
The very first question I asked was whether this would let us converge the ABIs, and the answer was "no". Otherwise I'd have said go for it, despite the C runtime issues.
The reason we ship debug Python binaries is because debug builds use a different C Runtime, so if you do a debug build of an extension module you're working on it won't actually work with a non-debug build of CPython.
...But this is an important point. I'd forgotten that MSVC has a habit of changing the entire C runtime when you turn on the compiler's debugging mode.
Technically they are separate options, but most project files are configured such that *their* Debug/Release switch affects both the compiler options (optimization) and the linker options (C runtime linkage).
So how do other projects handle this? I guess historically the main target audience for Visual Studio was folks building monolithic apps, where you can just rebuild everything with whatever options you want, and compared to that Python extensions are messier. But Python isn't the only project in this boat. Do ruby, nodejs, R, etc., all provide separate debug builds with incompatible ABIs on Windows, and propagate that information throughout their module/package ecosystem?
Mostly I hear complaints about those languages *not* providing any help here. Python is renowned for having significantly better Windows support than any of them, so they're the wrong comparison to make in my opinion. Arguing that we should regress because other languages haven't caught up to us yet makes no sense.

The tools that are better than Python typically don't ship debug builds either, unless you specifically request them. But they also don't leak their implementation details all over the place. If we had a better C API, we wouldn't have users who needed to match ABIs.

For the most part, disabling optimizations in your own extension but using the non-debug ABI is sufficient, and if you're having to deal with other people's packages then maybe you don't have any choice (though I do know of people who have built debug versions of numpy before - turns out Windows developers are often just as capable as non-Windows developers when it comes to building things ;) ). And yes, they could also build CPython from source as well to get the debug ABI, or get the debug symbols, but I saw enough need that I decided it was worth the effort to just solve that problem. 250k downloads a month is enough to justify it for me.

Not to bring the packaging discussions to another venue, but maybe this is yet another area we need to stop pretending that we're able to solve every single problem with just the tools we already have available? People who want debug builds of packages can build them themselves, even numpy and scipy; they don't need us to preemptively do all their work for them. But we can (and should) help short-cut unnecessary effort or research by providing helpful tools and instruction.

Cheers, Steve

On Fri, Apr 12, 2019 at 5:05 PM Steve Dower <steve.dower@python.org> wrote:
On 12Apr.2019 1643, Nathaniel Smith wrote:
On Thu, Apr 11, 2019 at 8:26 AM Steve Dower <steve.dower@python.org> wrote:
On 10Apr2019 1917, Nathaniel Smith wrote:
I don't know how many people use Py_TRACE_REFS, but if we can't find anyone on python-dev who uses it then it must be pretty rare. If dropping Py_TRACE_REFS would let us converge the ABIs and get rid of all the stuff above, then that seems like a pretty good trade! But maybe the Windows C runtime issue will foil this...
The very first question I asked was whether this would let us converge the ABIs, and the answer was "no".
Otherwise I'd have said go for it, despite the C runtime issues.
I don't see that in the thread... just Victor saying he isn't sure whether there might be other ABI incompatibilities lurking that he hasn't found yet. Did I miss something? I'm mostly interested in this because of the possibility of converging the ABIs. If you think that the C runtime thing isn't a blocker for that, then that's useful information. Though obviously we still need to figure out whether there are any other blockers :-).
The reason we ship debug Python binaries is because debug builds use a different C Runtime, so if you do a debug build of an extension module you're working on it won't actually work with a non-debug build of CPython.
...But this is an important point. I'd forgotten that MSVC has a habit of changing the entire C runtime when you turn on the compiler's debugging mode.
Technically they are separate options, but most project files are configured such that *their* Debug/Release switch affects both the compiler options (optimization) and the linker options (C runtime linkage).
So how do other projects handle this? I guess historically the main target audience for Visual Studio was folks building monolithic apps, where you can just rebuild everything with whatever options you want, and compared to that Python extensions are messier. But Python isn't the only project in this boat. Do ruby, nodejs, R, etc., all provide separate debug builds with incompatible ABIs on Windows, and propagate that information throughout their module/package ecosystem?
Mostly I hear complaints about those languages *not* providing any help here. Python is renowned for having significantly better Windows support than any of them, so they're the wrong comparison to make in my opinion. Arguing that we should regress because other languages haven't caught up to us yet makes no sense.
The tools that are better than Python typically don't ship debug builds either, unless you specifically request them. But they also don't leak their implementation details all over the place. If we had a better C API, we wouldn't have users who needed to match ABIs.
Do you happen to have a list of places where the C API leaks details of the underlying CRT? (I'm mostly curious because whenever I've looked my conclusion was essentially: "Well....... I don't see any places that are *definitely* broken, so maybe mixing CRTs is fine? but I have zero confidence that I caught everything, so probably better to play it safe?". At least on py3 – I know the py2 C API was definitely broken if you mixed CRTs, because of the exposed FILE*.)
For the most part, disabling optimizations in your own extension but using the non-debug ABI is sufficient, and if you're having to deal with other people's packages then maybe you don't have any choice (though I do know of people who have built debug versions of numpy before - turns out Windows developers are often just as capable as non-Windows developers when it comes to building things ;)
I'm not sure why you think I was implying otherwise? I'm sorry if you thought I was attacking your users or something. I did say that I thought most users downloading the debug builds were probably confused about what they were actually getting, but I didn't mean because they were stupid Windows users, I meant because the debug builds are so confusing that even folks on the Python core team are confused about what they're actually getting. -n -- Nathaniel J. Smith -- https://vorpus.org

The very first question I asked was whether this would let us converge the ABIs, and the answer was "no".
Otherwise I'd have said go for it, despite the C runtime issues.
I don't see that in the thread... just Victor saying he isn't sure whether there might be other ABI incompatibilities lurking that he hasn't found yet. Did I miss something?
I'm mostly interested in this because of the possibility of converging the ABIs. If you think that the C runtime thing isn't a blocker for that, then that's useful information. Though obviously we still need to figure out whether there are any other blockers :-).
The reason we ship debug Python binaries is because debug builds use a different C Runtime, so if you do a debug build of an extension module you're working on it won't actually work with a non-debug build of CPython.
...But this is an important point. I'd forgotten that MSVC has a habit of changing the entire C runtime when you turn on the compiler's debugging mode.
Technically they are separate options, but most project files are configured such that *their* Debug/Release switch affects both the compiler options (optimization) and the linker options (C runtime linkage).
The answer is yes and it's my primary goal. See my first email: "This change makes the debug build ABI closer to the release build ABI". To be honest, I am now lost in this long thread :-) I don't recall why I started to argue so much about the memory footprint, it's not really the main point here. Victor
So how do other projects handle this? I guess historically the main target audience for Visual Studio was folks building monolithic apps, where you can just rebuild everything with whatever options you want, and compared to that Python extensions are messier. But Python isn't the only project in this boat. Do ruby, nodejs, R, etc., all provide separate debug builds with incompatible ABIs on Windows, and propagate that information throughout their module/package ecosystem?
Mostly I hear complaints about those languages *not* providing any help here. Python is renowned for having significantly better Windows support than any of them, so they're the wrong comparison to make in my opinion. Arguing that we should regress because other languages haven't caught up to us yet makes no sense.
The tools that are better than Python typically don't ship debug builds either, unless you specifically request them. But they also don't leak their implementation details all over the place. If we had a better C API, we wouldn't have users who needed to match ABIs.
Do you happen to have a list of places where the C API leaks details of the underlying CRT?
(I'm mostly curious because whenever I've looked my conclusion was essentially: "Well....... I don't see any places that are *definitely* broken, so maybe mixing CRTs is fine? but I have zero confidence that I caught everything, so probably better to play it safe?". At least on py3 – I know the py2 C API was definitely broken if you mixed CRTs, because of the exposed FILE*.)
For the most part, disabling optimizations in your own extension but using the non-debug ABI is sufficient, and if you're having to deal with other people's packages then maybe you don't have any choice (though I do know of people who have built debug versions of numpy before - turns out Windows developers are often just as capable as non-Windows developers when it comes to building things ;)
I'm not sure why you think I was implying otherwise? I'm sorry if you thought I was attacking your users or something. I did say that I thought most users downloading the debug builds were probably confused about what they were actually getting, but I didn't mean because they were stupid Windows users, I meant because the debug builds are so confusing that even folks on the Python core team are confused about what they're actually getting.
-n
-- Nathaniel J. Smith -- https://vorpus.org
-- Night gathers, and now my watch begins. It shall not end until my death.

On 12Apr2019 1819, Nathaniel Smith wrote:
On Fri, Apr 12, 2019 at 5:05 PM Steve Dower <steve.dower@python.org> wrote:
On Thu, Apr 11, 2019 at 8:26 AM Steve Dower <steve.dower@python.org> wrote:
On 12Apr.2019 1643, Nathaniel Smith wrote:
The very first question I asked was whether this would let us converge the ABIs, and the answer was "no".
Otherwise I'd have said go for it, despite the C runtime issues.
I don't see that in the thread... just Victor saying he isn't sure whether there might be other ABI incompatibilities lurking that he hasn't found yet. Did I miss something?
"I don't know" means we can't say the APIs are converged, which is a no. I don't think you missed anything, but just read it through a different filter.
I'm mostly interested in this because of the possibility of converging the ABIs. If you think that the C runtime thing isn't a blocker for that, then that's useful information. Though obviously we still need to figure out whether there are any other blockers :-). [SNIP] Do you happen to have a list of places where the C API leaks details of the underlying CRT?
(I'm mostly curious because whenever I've looked my conclusion was essentially: "Well....... I don't see any places that are *definitely* broken, so maybe mixing CRTs is fine? but I have zero confidence that I caught everything, so probably better to play it safe?". At least on py3 – I know the py2 C API was definitely broken if you mixed CRTs, because of the exposed FILE*.)
Not since the discussions about migrating to VS 2015, but a few off the top of my head:

* locale
* file descriptors
* stream buffers
* thread locals
* exception [handler] state (yes, there are exceptions used within the CRT, and they occasionally intentionally leak out past the C code)
* atexit handlers
* internal callbacks (mostly debug handlers, but since we're talking about debugging...)

I'm pretty sure if I did some digging I'd be able to figure out which of these come from vcruntime140.dll vs ucrtbase.dll, and then come up with some far-too-clever linker options to make some of these more consistent, but there's no complete solution other than making sure you've got a complete debug or complete release build.
For the most part, disabling optimizations in your own extension but using the non-debug ABI is sufficient, and if you're having to deal with other people's packages then maybe you don't have any choice (though I do know of people who have built debug versions of numpy before - turns out Windows developers are often just as capable as non-Windows developers when it comes to building things ;)
I'm not sure why you think I was implying otherwise? I'm sorry if you thought I was attacking your users or something. I did say that I thought most users downloading the debug builds were probably confused about what they were actually getting, but I didn't mean because they were stupid Windows users, I meant because the debug builds are so confusing that even folks on the Python core team are confused about what they're actually getting.
"Our users", please :)

In my experience, Windows developers just treat debug and release builds as part of the normal development process. The only confusion I've seen has been related to CPython's not-quite-Windows-ish approach to debug builds, and in practically every case it's been enough to explain "release CPython uses a different CRT to your debug extension, but once you align those it'll be fine".

I definitely *do not* want to force or encourage package developers to release debug ABI versions of their prebuilt packages. But at the same time I don't want to remove the benefits that debug builds currently include.

Basically, I'm happy with the status quo, and the users I talk to are happy with it. So I'd rather not worry about optimising debug builds for speed or memory usage. (It's a question of direction more than anything else, and until we get some official statement of direction then I'll keep advocating a direction based on my experiences ;) )

Cheers,
Steve

On Thu, 11 Apr 2019 08:26:47 -0700 Steve Dower <steve.dower@python.org> wrote:
On 10Apr2019 1917, Nathaniel Smith wrote:
It sounds like --with-pydebug has accumulated a big grab bag of unrelated features, mostly stuff that was useful at some point for some CPython dev trying to debug CPython itself? It's clearly not designed with end users as the primary audience, given that no-one knows what it actually does and that it makes third-party extensions really awkward to run. If that's right then I think Victor's plan to sort through what it's actually doing makes a lot of sense, especially if we can remove the ABI breaking stuff, since that causes a disproportionate amount of trouble.
Does it really cause a "disproportionate" amount of trouble? It's definitely not meant for anyone who isn't working on C code, whether in CPython, an extension or a host application. If you want to use third-party extensions and are not able to rebuild them, that's a very good sign that you probably shouldn't be on the debug build at all.
I can't really agree with that. There are third-party extensions that have non-trivial build requirements. The fact that you have to rebuild third-party dependencies is a strong deterrent against using pydebug builds even when they may be actually useful (for example when debugging an extension module of your own). If you could just install mainstream binary packages (e.g. from Anaconda or PyPI) on a debug build interpreter, the pain would go away. Regards Antoine.

On Mon, 15 Apr 2019 12:50:00 +0200 Antoine Pitrou <solipsis@pitrou.net> wrote:
On Thu, 11 Apr 2019 08:26:47 -0700 Steve Dower <steve.dower@python.org> wrote:
On 10Apr2019 1917, Nathaniel Smith wrote:
It sounds like --with-pydebug has accumulated a big grab bag of unrelated features, mostly stuff that was useful at some point for some CPython dev trying to debug CPython itself? It's clearly not designed with end users as the primary audience, given that no-one knows what it actually does and that it makes third-party extensions really awkward to run. If that's right then I think Victor's plan to sort through what it's actually doing makes a lot of sense, especially if we can remove the ABI breaking stuff, since that causes a disproportionate amount of trouble.
Does it really cause a "disproportionate" amount of trouble? It's definitely not meant for anyone who isn't working on C code, whether in CPython, an extension or a host application. If you want to use third-party extensions and are not able to rebuild them, that's a very good sign that you probably shouldn't be on the debug build at all.
I can't really agree with that. There are third-party extensions that have non-trivial build requirements. The fact that you have to rebuild third-party dependencies is a strong deterrent against using pydebug builds even when they may be actually useful (for example when debugging an extension module of your own).
Oh, and as a datapoint, there are user requests for pydebug builds in Anaconda and conda-forge: https://github.com/ContinuumIO/anaconda-issues/issues/80 https://github.com/conda-forge/staged-recipes/issues/1593

The problem is, while it's technically relatively easy to build and distribute a special build of Python, making it useful implies also building a whole separate distribution of Python libraries as well. I suspect the latter is why those issues were never acted upon. So there's actual demand from people who would (probably) benefit from it, but who are blocked by the burden of recompiling all dependencies.

Regards
Antoine.

On 10.04.19 14:01, Victor Stinner wrote:
Disabling Py_TRACE_REFS by default in debug mode reduces the Python memory footprint. Py_TRACE_REFS costs 2 pointers per PyObject: 16 bytes on 64-bit platforms.
Does not the memory allocator in debug mode have even larger cost per allocated block?

On Thu, Apr 11, 2019 at 07:49, Serhiy Storchaka <storchaka@gmail.com> wrote:
On 10.04.19 14:01, Victor Stinner wrote:
Disabling Py_TRACE_REFS by default in debug mode reduces the Python memory footprint. Py_TRACE_REFS costs 2 pointers per PyObject: 16 bytes on 64-bit platforms.
Does not the memory allocator in debug mode have even larger cost per allocated block?
What do you mean? That a debug build already wastes too much memory and so doesn't deserve to have a smaller memory footprint? I'm not sure that I understand your point. A smaller footprint can mean that more people may be able to use the debug build. Disabling Py_TRACE_REFS should also make Python a little bit faster.

My question stands: is it worth keeping a feature which "wastes" resources (memory footprint and CPU) when nobody uses it?

The debug hooks add 4 x sizeof(size_t) bytes to every memory allocation to detect buffer underflow and buffer overflow. That's 32 bytes per memory allocation on 64-bit platforms. By the way, IMHO the "serial number" is not really useful and could be removed to only add 3 x sizeof(size_t) (24 bytes). But the debug hooks are very useful: they commonly help me find real bugs in code. Whereas I don't recall that Py_TRACE_REFS helped me even once.

Victor
-- Night gathers, and now my watch begins. It shall not end until my death.
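The per-allocation arithmetic Victor gives can be checked with a quick sketch (assuming, as is usual, that ctypes.c_size_t matches the platform's size_t):

```python
import ctypes

# Size of size_t on this platform: 8 bytes on typical 64-bit systems.
SST = ctypes.sizeof(ctypes.c_size_t)

# Overhead per allocation added by the debug memory hooks (4 fields),
# and the reduced overhead if the serial-number field were dropped (3 fields).
with_serialno = 4 * SST
without_serialno = 3 * SST
print(with_serialno, without_serialno)
```

On a 64-bit platform this prints the 32- and 24-byte figures from the message above.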

On 11Apr2019 0228, Victor Stinner wrote:
On Thu, Apr 11, 2019 at 07:49, Serhiy Storchaka <storchaka@gmail.com> wrote:
On 10.04.19 14:01, Victor Stinner wrote:
Disabling Py_TRACE_REFS by default in debug mode reduces the Python memory footprint. Py_TRACE_REFS costs 2 pointers per PyObject: 16 bytes on 64-bit platforms.
Does not the memory allocator in debug mode have even larger cost per allocated block?
What do you mean? That a debug build already wastes too much memory and so doesn't deserve to have a smaller memory footprint? I'm not sure that I understand your point.
He means you're micro-optimising something that doesn't matter. If you really wanted to reduce memory usage in debug builds, you'd go after one of the bigger "problems".
A smaller footprint can mean that more people may be able to use the debug build. Disabling Py_TRACE_REFS should make Python a little bit faster.
This isn't one of the goals of a debug build though, and you haven't pointed at any examples of people not being able to use the debug build because of memory pressure. (Which is because most people who are not working on CPython itself should not be using the debug build.)
My question stands: is it worth keeping a feature which "wastes" resources (memory footprint and CPU) when nobody uses it?
You haven't even tried to show that nobody uses it, other than pointing out that it exposes a crash due to a refcounting bug (which is kind of the point ;) ). Cheers, Steve

On 11.04.19 12:28, Victor Stinner wrote:
On Thu, Apr 11, 2019 at 07:49, Serhiy Storchaka <storchaka@gmail.com> wrote:
On 10.04.19 14:01, Victor Stinner wrote:
Disabling Py_TRACE_REFS by default in debug mode reduces the Python memory footprint. Py_TRACE_REFS costs 2 pointers per PyObject: 16 bytes on 64-bit platforms.
Does not the memory allocator in debug mode have even larger cost per allocated block?
What do you mean? That a debug build already wastes too much memory and so doesn't deserve to have a smaller memory footprint? I'm not sure that I understand your point.
If reducing the Python memory footprint is an argument for disabling Py_TRACE_REFS, it is a weak argument because there is larger overhead in the debug build. On the other hand, since using the debug allocator doesn't cause compatibility problems, it may be possible to use a similar technique for the objects' doubly linked list. Although this is not easy because of objects placed in static memory.

On Thu, Apr 11, 2019 at 8:32 AM Serhiy Storchaka <storchaka@gmail.com> wrote:
On the other hand, since using the debug allocator doesn't cause compatibility problems, it may be possible to use a similar technique for the objects' doubly linked list. Although this is not easy because of objects placed in static memory.
I guess one could track static objects separately, e.g. keep a simple global PyList containing all statically allocated objects. (This is easy since we know they're all immortal.) And then sys.getobjects() could walk the heap objects and statically allocated objects separately. -n -- Nathaniel J. Smith -- https://vorpus.org

On Thu, Apr 11, 2019 at 17:33, Serhiy Storchaka <storchaka@gmail.com> wrote:
If reducing the Python memory footprint is an argument for disabling Py_TRACE_REFS, it is a weak argument because there is larger overhead in the debug build.
The "serialno" field of the debug memory allocators is documented as: "an excellent way to set a breakpoint on the next run, to capture the instant at which this block was passed out." I have been debugging crashes and memory leaks in CPython for 10 years, and I simply never had to use "serialno".

I wrote https://bugs.python.org/issue36611 to remove the serialno field of the debug hooks on Python memory allocators: it reduces the memory footprint by 5% (ex: 1.2 MiB on 33.0 MiB when running test_asyncio).

Python is used on devices with low memory (ex: 256 MiB for the whole system). Allowing developers to use a debug build on such devices seems to be a legitimate rationale for such a change. The debug build is very useful to identify bugs in C extensions.
On other hand, since using the debug allocator doesn't cause problems with compatibility, it may be possible to use similar technique for the objects double list. Although this is not easy because of objects placed at static memory.
I'm not sure what you mean by "objects placed at static memory": the doubly linked list of all Python objects is created at runtime. _ob_next and _ob_prev are initialized statically to NULL.

I would be interested if Py_TRACE_REFS could be reimplemented in a more dynamic fashion. Even if it would still require a debug build, it would be nice to be able to "opt in" to this feature (have it disabled by default, again, to reduce the overhead and the memory footprint), similar to tracemalloc, which plugs itself into the memory allocators to attach traces to memory blocks.

Except Guido, who wrote "I recall finding memory leaks using this. (E.g. I remember a leak in Zope due to a cache that was never pruned.) But presumably gc.get_objects() would have been sufficient. (IIRC it didn't exist at the time.)", at this point nobody said that they use Py_TRACE_REFS. So I'm not sure that it's worth investing time in a feature if nobody uses it?

Victor
-- Night gathers, and now my watch begins. It shall not end until my death.
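For reference, the opt-in, runtime-pluggable approach Victor mentions is exactly how tracemalloc works today. A minimal sketch of using it to hunt a leak (the leaking list here is a made-up example):

```python
import tracemalloc

tracemalloc.start()

# Simulate a leak: keep allocating blocks that are never released.
leaked = [bytearray(10_000) for _ in range(100)]

snapshot = tracemalloc.take_snapshot()

# Group traced allocations by source line; the largest consumer should
# be the leaking line above (~1 MB in total).
top = snapshot.statistics("lineno")[0]
print(top.size >= 100 * 10_000)
```

No special build is required, which is the appeal of this style of instrumentation over a compile-time Py_TRACE_REFS list.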

On Fri, Apr 12, 2019 at 12:57, Victor Stinner <vstinner@redhat.com> wrote:
I wrote https://bugs.python.org/issue36611 to remove the serialno field of debug hooks on Python memory allocators: it reduces the memory footprint by 5% (ex: 1.2 MiB on 33.0 MiB when running test_asyncio).
I measured the memory footprint when I combine my two changes: * disable Py_TRACE_REFS: https://bugs.python.org/issue36465 * disable/remove serialno field: https://bugs.python.org/issue36611 python3 -m test test_asyncio, without => with the change: 34,038.0 kB => 30,612.2 kB (-3,425.8 kiB, -10%) A reduction of 3.4 MiB on 34.0 MiB is quite significant, no? Victor -- Night gathers, and now my watch begins. It shall not end until my death.

Victor Stinner wrote:
Python is used on devices with low memory (ex: 256 MiB for the whole system). Allowing developers to use a debug build on such devices seem to be a legit rationale for such change.
Rather than removing features altogether, maybe the debug build could be split into a number of separate features that can be enabled individually? -- Greg

On Sat, Apr 13, 2019 at 00:38, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Rather than removing features altogether, maybe the debug build could be split into a number of separate features that can be enabled individually?
I don't propose to *remove* the feature, but just to *disable* it *by default* (when Python is compiled in debug mode): "[WIP] bpo-36465: Py_DEBUG no longer implies Py_TRACE_REFS #12615" https://github.com/python/cpython/pull/12615/files

In short, my change just removes:

/* Py_DEBUG implies Py_TRACE_REFS. */
#if defined(Py_DEBUG) && !defined(Py_TRACE_REFS)
#define Py_TRACE_REFS
#endif

The feature will still be accessible if you compile Python with Py_TRACE_REFS defined.

In practice, I understood that the debug build of Python is not known by all core developers, and it seems like it's mostly used by core developers. Maybe it's even only used by core developers? It's hard to say. If it's only used by core developers, I hope that all core devs know how to compile Python :-)

Victor
-- Night gathers, and now my watch begins. It shall not end until my death.
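A quick way to tell whether a given interpreter was built with Py_TRACE_REFS is to probe for sys.getobjects(), which is only compiled in under that macro:

```python
import sys

# sys.getobjects() only exists when CPython was compiled with
# Py_TRACE_REFS; release builds (and, under this proposal, default
# debug builds) do not have it.
built_with_trace_refs = hasattr(sys, "getobjects")
print(built_with_trace_refs)
```

On an ordinary release build this prints False.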

Victor Stinner wrote:
I'm not sure of what you means by "objects placed at static memory": the double linked list of all Python objects is created at runtime. _ob_next and _ob_prev are initialized statically to NULL.
The trick of allocating extra memory in front of the object would be harder to pull off for statically allocated objects, although probably not impossible. -- Greg

Serhiy Storchaka wrote on 11.04.19 at 17:30:
If reducing the Python memory footprint is an argument for disabling Py_TRACE_REFS, it is a weak argument because there is larger overhead in the debug build.
I think what Victor is arguing is rather that we have better ways to debug memory problems these days, so we might be able to get rid of a relic that no-one is using (or should be using) anymore and that has its drawbacks (such as a very different ABI and higher memory load). I don't really have an opinion here, but I can at least say that I never found a use case for Py_TRACE_REFS myself and therefore certainly wouldn't miss it.

Stefan

On Fri, Apr 12, 2019 at 5:51 AM Stefan Behnel <stefan_ml@behnel.de> wrote:
Serhiy Storchaka schrieb am 11.04.19 um 17:30:
If reducing the Python memory footprint is an argument for disabling Py_TRACE_REFS, it is a weak argument because there is larger overhead in the debug build.
I think what Victor is arguing is rather that we have better ways to debug memory problems these days, so we might be able to get rid of a relic that no-one is using (or should be using) anymore and that has its drawbacks (such as a very different ABI and higher memory load).
I don't really have an opinion here, but I can at least say that I never found a use case for Py_TRACE_REFS myself and therefore certainly wouldn't miss it.
I have a feeling that at some point someone might want to use this to debug some leak (presumably caused by C code) beyond what gc.get_objects() can report. But I agree that it isn't useful to the vast majority of users of a regular debug build. So let's leave it off by default even in debug builds. But let's not delete the macros.

-- --Guido van Rossum (python.org/~guido) *Pronouns: he/him/his* (why is my pronoun here?)
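Guido's gc.get_objects() alternative covers most leak hunting already. A minimal sketch that diffs object counts by type to spot what is accumulating (the Leaky class and the never-pruned cache are invented for illustration, echoing the Zope anecdote):

```python
import gc
from collections import Counter

class Leaky:
    pass

def count_by_type():
    # Count all GC-tracked objects, grouped by type name.
    return Counter(type(o).__name__ for o in gc.get_objects())

before = count_by_type()

# Simulate a cache that is never pruned.
cache = [Leaky() for _ in range(500)]

after = count_by_type()
growth = after - before  # keeps only types whose count increased
print(growth["Leaky"])   # roughly 500 new Leaky instances
```

Unlike sys.getobjects(), this works on any build, though it only sees GC-tracked heap objects.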
participants (10)
- Antoine Pitrou
- Brett Cannon
- Greg Ewing
- Guido van Rossum
- Nathaniel Smith
- Serhiy Storchaka
- Stefan Behnel
- Steve Dower
- Terry Reedy
- Victor Stinner