On Mon, Jul 30, 2018 at 10:46 AM, Victor Stinner firstname.lastname@example.org wrote:
2018-07-29 23:41 GMT+02:00 Jeroen Demeyer J.Demeyer@ugent.be:
For example, you mention that you want to make Py_INCREF() a function call instead of a macro. But since Py_INCREF is very common, I would guess that this would make performance worse (perhaps not by much, but surely measurably).
For the very specific case of Py_INCREF(), yes, I agree that performance is an issue. But I don't see how I could completely hide the PyObject structure without converting the Py_INCREF() macro into a function call. (I have reasons for wanting to hide everything, explained in the project.)
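To make the constraint concrete, here is a minimal sketch (hypothetical names, not CPython's actual code) of why hiding the struct layout forces the increment behind a function call: a macro expands in the extension's own translation unit, so the extension must see the refcount field, while a function keeps the layout private to the core.

```c
/* Hypothetical sketch, not CPython's real code. */
typedef struct {
    long ob_refcnt;
} MyObject;

/* Macro form: expands inside the *caller's* translation unit, so every
 * caller must see the struct layout to touch ob_refcnt directly. If the
 * struct became opaque, this macro would no longer compile. */
#define MyObject_INCREF(op) ((op)->ob_refcnt++)

/* Function form: only this one definition needs the layout, so the core
 * is free to change the struct later without recompiling extensions. */
void MyObject_IncRef(MyObject *op)
{
    op->ob_refcnt++;
}
```

The trade-off is exactly the one discussed here: the function call adds overhead on a very hot path, but it is the only form that survives changes to the object layout.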
The open question is whether the cost of using function calls for Py_INCREF/DECREF outweighs the benefit of being able to modify deep CPython internals.
I'm not sure that it's worth betting on this point yet; it's too early, and we can decide that later. Moreover, it's also possible to keep Py_INCREF() as a macro in the "backward compatible" mode, but require a function call in the mode that hides all implementation details (the one where you can experiment with deep changes to CPython internals).
If the macro and function are absolutely 100% compatible, it would be possible to make compilation use the function by default, and have a directive that switches to using the macro. That would improve performance at the price of locking you to the exact CPython build. So within CPython itself there'd be no performance cost (though if you mess with the internals, you have to recompile all of CPython), most extension libraries would pay a small (probably immeasurable) price for compatibility, and a small handful could opt to improve performance at the price of breaking if anything changes.
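The dual-mode idea could be sketched roughly like this (MY_FAST_REFCOUNT and all other names are assumptions for illustration, not real CPython flags), assuming the macro and function stay 100% compatible: the default expands to a stable function call, while defining the directive restores the direct-field macro and ties the binary to that exact build.

```c
/* Hypothetical sketch of the opt-in directive; MY_FAST_REFCOUNT is an
 * assumed name, not a real CPython build flag. */
typedef struct {
    long ob_refcnt;
} MyObject;

void MyObject_IncRefFunc(MyObject *op);  /* stable, exported entry point */

#ifdef MY_FAST_REFCOUNT
/* Opt-in: direct field access, faster but locked to this exact layout;
 * any change to the struct requires recompiling the extension. */
#  define MyObject_INCREF(op) ((op)->ob_refcnt++)
#else
/* Default: a function call whose body lives in the core, so extensions
 * stay binary-compatible across internal layout changes. */
#  define MyObject_INCREF(op) MyObject_IncRefFunc(op)
#endif

void MyObject_IncRefFunc(MyObject *op)
{
    op->ob_refcnt++;
}
```

Because both branches have identical observable behavior, callers write MyObject_INCREF(op) either way; only the compatibility/performance trade-off moves with the flag.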