[Python-Dev] CPython optimization: storing reference counters outside of objects

Guido van Rossum guido at python.org
Mon May 23 23:08:48 CEST 2011

On Mon, May 23, 2011 at 1:55 PM, Artur Siekielski
<artur.siekielski at gmail.com> wrote:
> Ok, I managed to make a quick but working patch (sufficient to get a
> working interpreter; it segfaults with extension modules). It uses the
> "ememoa" allocator (http://code.google.com/p/ememoa/), which seems a
> reasonable pool allocator. The patch: http://dpaste.org/K8en/. The
> main obstacle was that there isn't a single function/macro that can be
> used to initialize all PyObjects, so I had to initialize static
> PyObjects (mainly PyTypeObjects) by hand.
> I used a naive quicksort algorithm as a benchmark:
> http://dpaste.org/qquh/ . The result is that after patching it runs
> 50% SLOWER. I profiled it, and the allocator methods accounted for 35%
> of the time. So even discounting the 35% spent in the poor allocator,
> there is still a 15% performance loss.
> Anyway, I'd like to have working copy-on-write in CPython - in the
> presence of the GIL I find it important to have multiprocess programs
> optimized (and I think it's a common idiom that a parent process
> prepares some big data structure, and child "worker" processes do
> read-only querying on it).
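[A minimal sketch of the idiom described above, for readers unfamiliar with it. The names (`big_table`, `worker`) are illustrative, not from the patch. On POSIX, `fork()` shares the parent's pages copy-on-write, but because CPython stores the reference count inside each object, merely *reading* objects writes to their pages and gradually un-shares them - which is the motivation for moving refcounts out of the objects.]

```python
import multiprocessing as mp

# Large read-only structure, built in the parent before forking.
big_table = {i: i * i for i in range(100_000)}

def worker(keys):
    # Read-only queries against the structure inherited via fork().
    # Each dict/int access still touches ob_refcnt, dirtying the
    # otherwise-shared copy-on-write pages.
    return sum(big_table[k] for k in keys)

if __name__ == "__main__":
    # Explicitly request fork semantics (POSIX only).
    ctx = mp.get_context("fork")
    with ctx.Pool(2) as pool:
        chunks = [range(0, 50_000), range(50_000, 100_000)]
        results = pool.map(worker, chunks)
    print(sum(results))
```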

That is the question though -- *is* the idiom commonly used? It
doesn't seem to me that it would scale all that far, since it only
works as long as all forked copies live on the same machine and run on
the same symmetrical multi-core processor.

--Guido van Rossum (python.org/~guido)
