Queue cleanup

Lawrence D'Oliveiro ldo at geek-central.gen.new_zealand
Wed Sep 1 02:55:57 CEST 2010

In message <7x39tz42fd.fsf at ruckus.brouhaha.com>, Paul Rubin wrote:

> Dennis Lee Bieber <wlfraed at ix.netcom.com> writes:
>> Heap marking, OTOH, tends to run at indeterminate times, which could
>> have an impact if one needs predictable response timings
> Reference counting has the same problem.  If you drop the last reference
> to a complex structure, it could take quite a long time to free all the
> components.

One difference is the interaction with caching behaviour. When a 
reference-counted object is freed, the odds are that this happens fairly 
soon after the last access, so the object will still be in the CPU cache, 
and access will be fast.
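(To make the "freed soon after the last access" point concrete, here is a small sketch of my own, relying on CPython's reference-counting behaviour. A weak reference to the head of a chain of objects goes dead the moment the last strong reference is dropped, with no collector pass in between.)

```python
import weakref

class Node:
    """A trivial linked node, used to build a chain of objects."""
    def __init__(self, next=None):
        self.next = next

# Build a chain of 1000 nodes; only the head is strongly referenced.
head = Node()
for _ in range(1000):
    head = Node(head)

probe = weakref.ref(head)

# Under CPython's reference counting, dropping the last reference
# deallocates the entire chain immediately -- the memory is reclaimed
# while those objects were just touched, not at some later GC pass.
del head
print(probe() is None)  # True under CPython: freed deterministically
```

On a tracing collector (or on Python implementations without refcounting, such as PyPy) the probe would typically stay alive until the next collection, which is exactly the timing difference being discussed.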

Whereas garbage collection will happen at some indeterminate time long after 
the last access to the object, when it very likely will no longer be in the 
cache, and have to be brought back in just to be freed, quite likely bumping 
out something else that the running program needs to access.

This is one reason why garbage collection is still considered an expensive 
technique. Computing power has improved by something like five orders of 
magnitude over the last half-century, allowing all kinds of 
productivity-enhancing techniques that were once considered expensive to 
become commonplace: high-level languages, dynamic memory allocation, stacks, 
hardware floating-point, memory protection and so on.

But alone among all of these, garbage collection remains just as costly 
to implement as ever. That should tell you something about how poorly it 
matches the characteristics of modern computing hardware.

More information about the Python-list mailing list