[Python-Dev] Windows and PyObject_NEW
Tue, 28 Mar 2000 09:02:21 +1000
Sorry for the delay, but Gordon's reply was accurate so it should have kept you going.
> I've been reading Jeffrey Richter's "Advanced Windows" last night in order
> to try understanding better why PyObject_NEW is implemented
> differently for Windows
So that is where the heaps discussion came from :-) The problem is simply
"too many heaps are available".
> Again, I feel uncomfortable with this, especially now, when
> I'm dealing with the memory aspect of Python's object
That is exactly why it was added in the first place.
I believe this code predates the "_d" convention on Windows. AFAIK, this
code could be removed today and everything should work (but see below for
why it probably won't).
MSVC allows you to choose from a number of CRT versions. Only in one of
these versions is the CRTL completely shared between the .EXE and all the
various .DLLs in the application.
What was happening is that this macro ended up causing the "malloc" for a
new object to occur in Python15.dll, but the Python type system meant that
tp_dealloc() (to cleanup the object) was called in the DLL implementing the
new type. Unless Python15.dll and our extension DLL shared the same CRTL
(and hence the same malloc heap, fileno table etc) things would die. The
DLL version of "free()" would complain, as it had never seen the pointer
before. This change meant the malloc() and the free() were both implemented
in the same DLL/EXE.
This was particularly true with Debug builds. MSVC's debug CRTL
implementations have some very nice debugging features (guard-blocks, block
validity checks with debugger breakpoints when things go wrong, leak
tracking, etc). However, this means they use yet another heap. Mixing
debug builds with release builds in Python is a recipe for disaster.
Theoretically, the problem has largely gone away now that a) we have
separate "_d" versions and b) the "official" position is to use the same
CRTL as Python15.dll. However, it is still a minor FAQ on comp.lang.python
why PyRun_ExecFile (or whatever) fails with mysterious errors - the reason
is exactly the same - they are using a different CRTL, so the CRTL can't map
the file pointers correctly, and we get unexplained IO errors. But now that
this macro hides the malloc problem, there may be plenty of "home grown"
extensions out there that do use a different CRTL and don't see any
problems - mainly because they aren't throwing file handles around!
Finally getting to the point of all this:
We now also have the PyMem_* functions. This problem also doesn't exist if
extension modules use these functions instead of malloc()/free(). We only
ask them to change the PyObject allocations and deallocations, not the rest
of their code, so it is no real burden. IMO, we should adopt these
functions for most internal object allocations and the extension modules we ship.
Also, we should consider adding relevant PyFile_fopen(), PyFile_fclose()
type functions that are simply a thin layer over the fopen/fclose
functions. If extension writers used these instead of fopen/fclose we
would gain a few fairly intangible things - we'd lose the minor FAQ, platforms
that don't have fopen at all (e.g., CE) would love you, etc.