mwh at python.net
Wed Feb 25 13:15:23 CET 2004
guy.flowers at Machineworks.com (Guy) writes:
> It might take me a little time to explain this but here goes.
> Firstly, I'm not using the latest up-to-date Python releases, so my first
> plan is to try more up-to-date releases of Python and the win32 libs, as this
> has fixed problems for me in the past. (I don't like doing this as it
> usually breaks stuff and the sysadmins are slow to update software,
> although they do a cracking job, just covering my back.)
> I've created a build script for my company's product. The product is
> in C and can be built using makefiles and gmake, or DSPs if using
> win32. The script seems to work OK: it builds by setting processes off
> and then it returns the built libs and exes to a network area.
> The problems occur when I build more than one flavour of the product.
> (There are 16 possible flavours and about 50 modules which get turned
> into libs and exes; this means there are 800 processes to be set off one
> after the other, or 1600 processes for a clean build, and that's just
> for win32. Building one flavour takes about 40 mins on my computer.)
> When using the build script it keeps logs of what it has built and
> records all output from the processes. All these Python objects and lists
> are destroyed using del to try to free up memory, but this never seems
> to have much effect: the memory just keeps getting used up until,
> usually, the script or Python crashes due to lack of memory.
Um, I don't think you understand what 'del' does.
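To make that concrete, here is a minimal sketch (the names are made up for illustration) showing that 'del' only removes a *name binding*; in CPython the object itself is freed only when its last reference disappears:

```python
data = [0] * 1_000_000   # a big list
alias = data             # a second reference to the same object

del data                 # removes the *name* 'data' only; the list
                         # is still fully alive, reachable via 'alias'

print(len(alias))        # all million elements are still in memory

del alias                # now the last reference is gone, and
                         # CPython can actually free the list
```

So 'del x' on its own reclaims nothing if any other name, container, or log structure still refers to the object.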
For an oblique perspective on these issues, you might find reading the
messages mentioned in
enlightening, particularly my last one. Quoting:
>> I guess I have a lispers approach to this: view memory as
>> infinite and assume the implementation does enough behind its
>> back to maintain this illusion so long as not too many objects
>> are actually live.
> If you separate the memory and object concepts, an object can
> cease to exist even though memory is still allocated for it.
Well, this raises another question that I guess underlies all of
this: what does it mean for an object to be "alive"? Again, my
background is functional/lisp-like languages, and I'm stealing
this description more-or-less from Appel's "Modern Compiler
Implementation in ML":
An object is *live* if it will be used again by the executing program.
At any given point of program execution, determining the set of
live objects can obviously be at least as hard as the halting
problem, so we settle for a weaker notion:
An object is *reachable* if it can be reached by traversing the
object graph from a given set of roots (in Python, things like
locals and the globals of all loaded modules).
The runtime pessimistically assumes that all reachable objects are
live (and reachable but not live objects can be a cause of memory
leaks in Python and other GCed languages).
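A hypothetical sketch of a "reachable but not live" situation, of the kind a build script can easily create: a module-level log list that the program will never read again still roots every entry, so the collector must keep it all:

```python
_history = []                # module-level root: reachable forever

def run_build(step):
    output = "output of step %d" % step
    _history.append(output)  # never read again, but still reachable
                             # from a root, so the runtime must
                             # pessimistically treat it as live
    return step

for i in range(1000):
    run_build(i)

print(len(_history))         # 1000 strings pinned in memory
```

Everything here is collectable in principle the moment the program stops caring about it, but the runtime cannot know that; only dropping (or emptying) `_history` lets the memory go.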
From this point of view, you never actually /destroy/ objects, you
just forget about them, and a __del__ method is a side-effect of
dropping the last reference to an object -- a slightly unnatural
action to have a side effect!
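A small sketch of that "unnatural side effect" (class name invented for illustration; the timing shown is CPython's reference counting, which runs __del__ as soon as the last reference goes):

```python
class Tracker:
    def __del__(self):
        # Runs as a side effect of the last reference disappearing,
        # not because anyone explicitly "destroyed" the object.
        print("collected")

t = Tracker()
u = t          # two references to the same object
del t          # nothing printed: 'u' still keeps the object reachable
del u          # last reference dropped -> __del__ runs now
```

Note that forgetting the object (`del t`) and destroying it are different events; __del__ fires only when the two happen to coincide.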
This is obviously in contrast to C++, where you /do/ destroy
objects (or in the case of local variables, it is clear when the
object will be destroyed as a function of program flow).
I guess you are probably in the "reachable but not live" case here.
Instead of doing 'del alist' you might want to try 'del alist[:]' and
see if behaviour improves.
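The difference, sketched with made-up names: 'del alist' unbinds one name, while 'del alist[:]' empties the list in place, releasing the entries even if other code still holds the list object:

```python
log = ["a log line"] * 5
also_log = log            # e.g. another module still holds the list

del log                   # unbinds one name; 'also_log' still holds
                          # all five entries, so nothing is reclaimed

log = ["a log line"] * 5
also_log = log
del log[:]                # empties the list in place: every holder
                          # now sees [], and the entries themselves
                          # can be freed even though the list lives on

print(also_log)
```

If your build records are accumulated in long-lived lists that other objects also reference, clearing them in place is the only way 'del' will actually release their contents.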
112. Computer Science is embarrassed by the computer.
-- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html