On 2019-03-05, Steve Dower wrote:
> I don't agree.
>
> To be at all useful, I think your last sentence needs to be "force all PyObject structures to be allocated by *the single CPython memory allocator for the current runtime*".
I don't think you need a single allocator. My vision is that allocating and deallocating PyObject memory is the responsibility of the Python VM. It might use specialized allocators for different purposes, for example.
> That means we don't need to store the deallocator function for each object, and can simply pass the memory blocks to a known allocator (even if that's been switched out at runtime startup, it won't have changed in the meantime).
It is up to the Python VM to decide how that's done. The VM might still store a deallocator function per type, as is currently done.
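To make the idea concrete, here is a minimal sketch of that arrangement: the VM is the sole owner of object-header memory, but it still dispatches through a per-type deallocator for type-specific cleanup, much like CPython's tp_dealloc slot today. All the names here (VMType, VMObject, vm_alloc, vm_release) are invented for illustration, not part of any real API.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch: the VM owns allocation and deallocation of
 * object headers, but still stores a per-type deallocator hook,
 * analogous to CPython's tp_dealloc.  All names are invented. */

typedef struct VMType VMType;

typedef struct VMObject {
    const VMType *type;
} VMObject;

struct VMType {
    const char *name;
    void (*dealloc)(VMObject *self);  /* per-type cleanup hook */
};

static int live_headers = 0;  /* headers currently owned by the VM */

/* Only the VM allocates object-header memory. */
static VMObject *vm_alloc(const VMType *type, size_t size) {
    VMObject *obj = (VMObject *)malloc(size);
    if (obj) {
        obj->type = type;
        live_headers++;
    }
    return obj;
}

/* Only the VM frees object-header memory, after type cleanup runs. */
static void vm_release(VMObject *obj) {
    if (obj->type->dealloc)
        obj->type->dealloc(obj);   /* type-specific cleanup first */
    free(obj);                     /* then the VM frees the header */
    live_headers--;
}

/* Example type: owns an out-of-line buffer that its dealloc frees. */
typedef struct {
    VMObject base;
    char *buffer;
} BufObject;

static void buf_dealloc(VMObject *self) {
    free(((BufObject *)self)->buffer);
}

static const VMType BufType = { "Buf", buf_dealloc };
```

The point of the sketch is that no deallocator pointer needs to live in each object instance; one per type is enough, and the VM keeps full control over where headers come from.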
> However, in the context of features like NVRAM, GPU/CPU contexts, and even subinterpreters and subprocesses, I think there's a huge advantage in having objects know how to deallocate themselves. Without this, there's no way to support these more advanced concepts transparently. IMHO, that would be missing a huge opportunity.
Does it help if the PyObject can have a pointer to memory allocated in these different ways? It seems to me that allows most of the benefits while still letting the Python VM GC PyObject memory efficiently. A Python extension type can still allocate extra memory associated with its instances, and there is a dealloc method called by the VM to clean it up again. Only the memory for the PyObject itself must be allocated and deallocated by the VM.
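A rough sketch of that "extra pointer" layout, with all names invented for illustration: the header is plain VM-managed memory, and only the payload comes from a specialized allocator (imagine NVRAM, GPU memory, or a per-subinterpreter arena). The VM can then manage headers uniformly while the type's dealloc hook returns the payload to whichever allocator produced it.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a specialized allocator an extension or
 * the VM might use for instance data (NVRAM, GPU, arena, ...). */
static int special_blocks = 0;  /* payloads currently outstanding */

static void *special_alloc(size_t n) { special_blocks++; return malloc(n); }
static void  special_free(void *p)   { special_blocks--; free(p); }

typedef struct {
    int refcount;   /* stand-in for the VM-owned PyObject header */
    void *extra;    /* indirection: pointer to specially allocated memory */
} ExtObject;

static ExtObject *ext_new(size_t payload) {
    ExtObject *o = (ExtObject *)malloc(sizeof(ExtObject)); /* VM-side header */
    o->refcount = 1;
    o->extra = special_alloc(payload);  /* payload from the special allocator */
    return o;
}

/* The dealloc method the VM would call before freeing the header. */
static void ext_dealloc(ExtObject *o) {
    special_free(o->extra);  /* payload returns to its own allocator */
    free(o);                 /* header returns to the VM allocator */
}
```

The cost is the extra layer of indirection on every access to the payload; the benefit is that the VM never has to know how the payload memory was obtained.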
Maybe that is not flexible enough to do what you want, since it adds another layer of indirection. I'm glad you bring up those cases, because the new API should support those kinds of things.