Circular references and python

James Logajan JamesL at Lugoj.Com
Fri Feb 4 06:47:57 CET 2000

Neel Krishnaswami wrote:
> James Logajan <JamesL at Lugoj.Com> wrote:
> > Neel Krishnaswami wrote:
> > > Apply Neil Schemenauer's patches that convince Python to use the
> > > Boehm garbage collector, or just live with the garbage if your
> > > program is relatively short-lived. You can find his patch at:
> > >
> > >
> > >
> > > Manually freeing memory is ugly and error-prone. Don't do it. :)
> >
> > The vast majority of code (and programming languages) in the world
> > require explicit memory de-allocation. I don't believe that a GC
> > scheme has yet been invented that will work well for all problem
> > domains. I'm sure if it had, we'd all be using it by now. ;)
> Shrug. Then you can allocate a hunk of memory and manually collect the
> pieces of it, and just let the gc handle the whole hunk. You still win
> on the whole.

Sorry, not sure I understand that.
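The cycle problem in the thread's subject line can be sketched in a few lines. Note this assumes a modern CPython with the `gc` and `weakref` modules (neither shipped in the 1.5.2-era interpreter under discussion; the cycle collector is essentially what Schemenauer's patches later became):

```python
import gc
import weakref

gc.disable()  # isolate the demo from any automatic collection

class Node:
    def __init__(self):
        self.ref = None

a, b = Node(), Node()
a.ref, b.ref = b, a         # a -> b -> a: a reference cycle
probe = weakref.ref(a)      # lets us observe whether 'a' is still alive

del a, b                    # last names dropped, but refcounts stay > 0
assert probe() is not None  # refcounting alone cannot reclaim the cycle

gc.collect()                # the cycle detector can
assert probe() is None
gc.enable()
```

Without a cycle detector, the only way out is the "manual freeing" being argued about: explicitly breaking the cycle (e.g. `a.ref = None`) before dropping the last name.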

> I think a lot of people are working with severely outdated notions of
> the performance of garbage collectors. Even 16 or 17 years ago
> garbage collection was expensive enough that Lisp Machines made it
> something you ran manually every few days.

Have the algorithms improved, or have clock speeds gotten high enough that
the problem has been submerged? Aren't there important pathological problem
domains (data structures) which limit almost all GC schemes? I know that in
sorting I have a choice: quick-sort is generally faster than most other
sorting schemes, but there exist orderings which reduce it to N^2
complexity (pathological cases). On the other hand, if I'm willing to run
just a tad slower than quick-sort, I can use heap-sort and be assured of
N*log(N) complexity for all orderings. I'll admit I'm not sure what the
situation is with GC these days; any good (and relatively current) books on
the subject? Web searches come up with too damn many schemes, which suggests
to an old fart like me that no single good GC scheme has been worked out yet.
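The sorting analogy is easy to make concrete. The sketch below (hypothetical helper names, a deliberately naive first-element-pivot quicksort) counts comparisons on already-sorted input, quicksort's classic pathological ordering, against `heapq`-based heapsort:

```python
import heapq

def quicksort_count(xs):
    """Naive quicksort (first element as pivot); returns comparison count.
    Sorted input makes every partition maximally lopsided: N*(N-1)/2."""
    count = 0
    def qs(lst):
        nonlocal count
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        count += len(rest)
        return (qs([x for x in rest if x < pivot]) + [pivot] +
                qs([x for x in rest if x >= pivot]))
    qs(xs)
    return count

class Counted:
    """Wraps a value and counts < comparisons, so heapq's work is visible."""
    total = 0
    def __init__(self, v):
        self.v = v
    def __lt__(self, other):
        Counted.total += 1
        return self.v < other.v

def heapsort_count(xs):
    Counted.total = 0
    heap = [Counted(x) for x in xs]
    heapq.heapify(heap)                              # O(N) comparisons
    [heapq.heappop(heap) for _ in range(len(heap))]  # N pops, O(log N) each
    return Counted.total

n = 300
data = list(range(n))            # sorted: quicksort's worst case
q = quicksort_count(data)
h = heapsort_count(data)
assert q == n * (n - 1) // 2     # quadratic blow-up
assert h < q // 4                # heapsort stays N*log(N)
```

The same trade-off is what the GC question is about: a scheme that is fast on average versus one with a bounded worst case.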

> But the state of the art has improved hugely: in _Compiling with
> Continuations_, Andrew Appel claims that a well-tuned generational
> garbage collector is competitive with stack allocation in terms of
> speed, for functional and OO languages. (IIRC, SML/NJ allocates even
> activation records on the heap, and wins doubly because this
> simplifies the runtime.)

I'll look into Appel's book, but suggestions from others are appreciated.
