Python's biggest compromises
mwh at python.net
Fri Aug 1 15:47:26 CEST 2003
"Daniel Dittmar" <daniel.dittmar at sap.com> writes:
> John Roth wrote:
> > There's a flaw in your reasoning. The various techniques that
> > descend from mark and sweep (which is what you're
> > calling garbage collection) depend on being able to
> > identify all of the objects pointed to. For objects that are
> > owned by Python, that's a lengthy (that is, inefficient) [...]
> That's what generation scavenging was developed for.
But most generational collectors are copying, aren't they? That makes
it challenging to meet the "play nice with C" goal I talked about (not
impossible, but significantly difficult).
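The "play nice with C" point rests on the fact that CPython's reference counting gives objects fixed addresses and deterministic, observable lifetimes, which is exactly what a C extension holding a raw `PyObject *` depends on. A minimal sketch of that determinism from the Python side, using the real `sys.getrefcount` API (note that the call itself temporarily adds one reference):

```python
import sys

x = []
base = sys.getrefcount(x)   # includes the temporary reference made by the call

y = x                       # aliasing bumps the count immediately
assert sys.getrefcount(x) == base + 1

del y                       # and dropping the alias lowers it immediately
assert sys.getrefcount(x) == base
```

A copying collector, by contrast, may relocate the list at any collection, so any raw pointer a C extension had stashed would silently go stale unless the whole extension API were reworked around handles or pinning.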
> One shouldn't argue by tradition alone, but the fact that the major
> implementations of dynamic languages like LISP and Smalltalk don't
> use reference counting should carry some weight.
True. But the major implementations of these languages are also
usually less portable, and something more of a fiddle to write C
extensions for (at least, for the implementations I know about, which
are mostly CL impls).
If you're writing a native code compiler, it doesn't seem that much of
a drag to have your code know about e.g. the processor's registers.
> > It's easy to say that various languages would be improved
> > by adding "real" garbage collection, but those techniques
> > impose significant design constraints on the implementation
> > model.
> True. But one could review these constraints from time to time.
Indeed. People have tried using the Boehm GC with Python, but
generally found that performance got worse, or at least not strikingly
better. I don't think anyone has tried implementing a really good
generational collector for Python, though.
(And unfortunately, I think one constraint that's not going away is
"not breaking every C extension that's ever been written".)
For those of you fearing that the rest of the world might be
making fun of the US because of this: Rest assured, we are.