Getting references to objects without incrementing reference counters

Dan Stromberg drsalists at gmail.com
Mon Nov 15 00:41:07 CET 2010


On Sun, Nov 14, 2010 at 11:08 AM, Artur Siekielski <artur.siekielski at gmail.com> wrote:

> Hi.
> I'm using CPython 2.7 and Linux. In order to make parallel
> computations on a large list of objects I want to use multiple
> processes (by using multiprocessing module). In the first step I fill
> the list with objects and then I fork() my worker processes that do
> the job.
>

You could try
http://docs.python.org/library/multiprocessing.html#multiprocessing.Array to
put the data in shared memory.

Copy-on-write is great, but it's not a panacea, and not optimal in any sense of
the word I'm accustomed to.  As you said, it works at page granularity, and it
can also fall apart if a shared library wasn't built as position-independent
code, because of all the address fixups.  And it's mostly code that gets
shared - data tends to diverge.
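One reason the data diverges in CPython specifically (and what your subject line is getting at): merely taking a new reference to an object writes to that object's header, since the refcount lives inside the object itself, so the page it sits on gets dirtied and copied even if you never "modify" anything. A quick demonstration:

```python
import sys

x = object()
before = sys.getrefcount(x)   # includes the temporary reference held by getrefcount itself
y = x                         # taking a reference writes to x's ob_refcnt field
after = sys.getrefcount(x)
print(before, after)          # after is before + 1
```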

Supposedly at one time Sun was able to drastically reduce the memory
requirements of Solaris by grouping related variables (things that were
likely to be needed at the same time) into the same pages of memory.

Is it possible you're just seeing a heap grow that isn't getting garbage
collected as often as you expect?  Normal computation in Python creates
objects without freeing them all that often.
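To check for that, you can force a collection yourself. The cyclic collector only runs when allocation thresholds are crossed, so unreachable cycles can sit in the heap for a while; this sketch makes one and reclaims it explicitly:

```python
import gc

class Node(object):
    pass

# Build a reference cycle that plain refcounting can't reclaim.
a = Node()
b = Node()
a.other = b
b.other = a
del a, b                      # the cycle is now unreachable but still allocated

unreachable = gc.collect()    # force a full collection right now
print(unreachable)            # number of unreachable objects the collector found
```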

HTH


More information about the Python-list mailing list