
On Fri, 13 Sep 2019 at 08:22, Petr Viktorin <encukou@gmail.com> wrote:
On 9/12/19 10:56 PM, Neil Schemenauer wrote:
On 2019-09-12, Petr Viktorin wrote:
On 9/12/19 9:05 PM, Neil Schemenauer wrote:
Slots like nb_add could look up the PyFunction analog and call it.
Where would they look it up?
I think either in tp_dict, or you could have a table of object pointers indexed by slot ID.
tp_dict of what? Py_TYPE(self) doesn't work for subclasses.
That object holds a reference to the module namespace, just like a PyFunction object does. When the module is first created, you allocate heap objects for each of these PyFunction analogs and bind them to the module.
Why to the module, and not to the type object (which is bound to the module)?
To match the way PyFunction works. It has a reference to the module global namespace (i.e. __globals__). Likewise, functions defined in an extension module should have a reference to their global namespace.
The binding of a method to the type object should be done by the descriptor. That will make methods implemented in native Python code work similarly to methods implemented in C.
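
For concreteness, a rough sketch of this idea (not code from the thread; the names my_module_state, MY_SLOT_COUNT, MY_SLOT_NB_ADD and get_defining_module are made up, not existing API): the module keeps a table of module-bound callables indexed by slot ID, and a slot such as nb_add delegates to its entry.

    typedef struct {
        /* hypothetical: the "PyFunction analogs", one per wrapped slot,
         * created and bound to the module at module init time */
        PyObject *slot_funcs[MY_SLOT_COUNT];
    } my_module_state;

    static PyObject *
    myobj_nb_add(PyObject *self, PyObject *other)
    {
        /* How the slot finds its defining module is exactly the open
         * question in this thread; assume a borrowed reference here. */
        PyObject *module = get_defining_module(self);   /* hypothetical */
        if (module == NULL)
            return NULL;
        my_module_state *st = PyModule_GetState(module);
        /* delegate to the module-bound callable implementing __add__ */
        return PyObject_CallFunctionObjArgs(
            st->slot_funcs[MY_SLOT_NB_ADD], self, other, NULL);
    }
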
For normal methods, it's almost what PEP 573 does: I don't see much difference between this and storing the *type* with the PyFunction, and passing the type to the C code, which can then access the module state from the type.
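
For reference, a minimal sketch of that PEP 573 mechanism for regular methods, using the METH_METHOD calling convention and PyType_GetModuleState as the PEP specifies them (the method and state names are invented for the example):

    typedef struct {
        PyObject *cached_thing;   /* example module state member */
    } my_module_state;

    /* A METH_METHOD method also receives the *defining class*, from which
     * the module (and its state) is reachable even for subclasses. */
    static PyObject *
    myobj_method(PyObject *self, PyTypeObject *defining_class,
                 PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)
    {
        my_module_state *st = PyType_GetModuleState(defining_class);
        if (st == NULL)
            return NULL;   /* defining_class not attached to this module */
        /* ... use self, args and st->cached_thing ... */
        Py_RETURN_NONE;
    }

    static PyMethodDef myobj_methods[] = {
        {"method", (PyCFunction)(void(*)(void))myobj_method,
         METH_METHOD | METH_FASTCALL | METH_KEYWORDS, NULL},
        {NULL, NULL, 0, NULL}
    };
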
For slots, this sounds like a way forward. There are quite a lot of practical complications, though, so PEP 573 does not really address slots -- aside from the PyType_DefiningTypeFromSlotFunc crutch, which doesn't pretend to be a very good solution, but is a very simple one that's available now. (If *that* is where PEP 573 strikes you as inelegant but expedient, then we agree. My educated guess is that an elegant alternative to this will take years to design and implement, and will be largely orthogonal to the rest of PEP 573.)
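
To spell out the slot complication: a slot like nb_add only receives its operands, with no defining class, and Py_TYPE(self) may be a subclass defined in Python. A slot therefore ends up doing something like the MRO walk sketched below (the helper name is invented; PyType_GetModule and PyModule_GetDef are the real lookups such a walk would rely on):

    /* Invented helper: find the module that supplied the slot by walking
     * the MRO of Py_TYPE(self) until we hit a class whose associated
     * module was created from our PyModuleDef. */
    static PyObject *
    find_module_from_slot(PyObject *self, PyModuleDef *def)
    {
        PyObject *mro = Py_TYPE(self)->tp_mro;   /* tuple of classes */
        Py_ssize_t n = PyTuple_GET_SIZE(mro);
        for (Py_ssize_t i = 0; i < n; i++) {
            PyTypeObject *base = (PyTypeObject *)PyTuple_GET_ITEM(mro, i);
            PyObject *module = PyType_GetModule(base);   /* borrowed */
            if (module != NULL && PyModule_GetDef(module) == def)
                return module;
            PyErr_Clear();   /* static types raise TypeError; keep walking */
        }
        PyErr_SetString(PyExc_TypeError, "defining module not found");
        return NULL;
    }
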
I also think we may end up deciding that this is something better left to context variables. They are explicitly designed to be fast enough for the decimal module to use them instead of a thread-local variable, and they would allow C extension methods to cache the reference they actually care about, with whatever form of cache invalidation they want to support (the separation of the cache by coroutine/thread/interpreter is handled by the context variable machinery).
That way, even if the slow paths continue to differ between binary extensions (MRO walking and METH_METHOD) and Python source modules (__class__ closure cell and function __globals__), the fast paths will converge (context variable lookup).
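
A sketch of what that could look like from C, for concreteness; the PyContextVar_* calls are the existing contextvars C API, while cached_ref_var and expensive_lookup are invented for the example:

    /* Created once at module init, e.g.:
     *   cached_ref_var = PyContextVar_New("mymod.cached_ref", NULL); */
    static PyObject *cached_ref_var = NULL;

    static PyObject *
    get_cached_ref(PyObject *self)
    {
        PyObject *value;
        if (PyContextVar_Get(cached_ref_var, NULL, &value) < 0)
            return NULL;              /* lookup error */
        if (value != NULL)
            return value;             /* fast path: per-context cache hit */

        /* Slow path: whatever lookup the extension actually needs
         * (MRO walk, module dict, ...); invented helper. */
        value = expensive_lookup(self);
        if (value == NULL)
            return NULL;
        PyObject *token = PyContextVar_Set(cached_ref_var, value);
        if (token == NULL) {
            Py_DECREF(value);
            return NULL;
        }
        Py_DECREF(token);
        return value;                 /* new reference */
    }
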
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia