[New-bugs-announce] [issue17025] reduce multiprocessing.Queue contention

Charles-François Natali report at bugs.python.org
Thu Jan 24 16:49:06 CET 2013

New submission from Charles-François Natali:

Here's an implementation of the idea posted on python-ideas (http://mail.python.org/pipermail/python-ideas/2013-January/018846.html).

The principle is really simple: we serialize/deserialize the objects before/after holding the locks, so the locks are only held for the raw byte transfer. This reduces contention.
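To illustrate the idea (this is only a sketch of the principle, not the actual queues_contention.diff, and the class names are my own), compare pickling inside versus outside the critical section:

```python
import pickle
import threading


class ContendedQueue:
    """Pickles under the lock: concurrent writers serialize each other
    for the full duration of pickle.dumps()."""

    def __init__(self):
        self._lock = threading.Lock()
        self._buffer = []

    def put(self, obj):
        with self._lock:
            # Pickling happens while the lock is held.
            self._buffer.append(pickle.dumps(obj))

    def get(self):
        with self._lock:
            # Unpickling also happens while the lock is held.
            return pickle.loads(self._buffer.pop(0))


class LowContentionQueue:
    """Pickles outside the lock: only the cheap list operations are
    protected, so the lock is held for a much shorter time."""

    def __init__(self):
        self._lock = threading.Lock()
        self._buffer = []

    def put(self, obj):
        data = pickle.dumps(obj)      # serialize before acquiring the lock
        with self._lock:
            self._buffer.append(data)

    def get(self):
        with self._lock:
            data = self._buffer.pop(0)
        return pickle.loads(data)     # deserialize after releasing the lock
```

Both queues behave identically from the caller's point of view; the difference is purely how long the lock is held per operation.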

Here are the results of a benchmark running from 1 reader/1 writer up to 4 readers/4 writers, on an 8-core box:

without patch:
$ ./python /tmp/multi_queue.py
took 0.8340198993682861 seconds with 1 workers
took 1.956531047821045 seconds with 2 workers
took 3.175778865814209 seconds with 3 workers
took 4.277260780334473 seconds with 4 workers

with patch:
$ ./python /tmp/multi_queue.py
took 0.7945001125335693 seconds with 1 workers
took 0.7428359985351562 seconds with 2 workers
took 0.7897098064422607 seconds with 3 workers
took 1.1860828399658203 seconds with 4 workers

I tried Richard's suggestion of serializing the data inside put(), but that reduces performance noticeably:
$ ./python /tmp/multi_queue.py
took 1.412883996963501 seconds with 1 workers
took 1.3212130069732666 seconds with 2 workers
took 1.2271699905395508 seconds with 3 workers
took 1.4817359447479248 seconds with 4 workers
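The original /tmp/multi_queue.py is not attached here, but a harness along these lines would produce numbers of the same shape (the function names and item counts are my own guesses, not the original script):

```python
import multiprocessing as mp
import time


def writer(q, n):
    # Each writer puts n small objects on the shared queue.
    for i in range(n):
        q.put(i)


def reader(q, n):
    # Each reader drains exactly n objects, so totals balance out.
    for _ in range(n):
        q.get()


def bench(workers, items=1000):
    """Time `workers` concurrent reader/writer pairs sharing one Queue."""
    q = mp.Queue()
    procs = ([mp.Process(target=writer, args=(q, items)) for _ in range(workers)]
             + [mp.Process(target=reader, args=(q, items)) for _ in range(workers)])
    start = time.monotonic()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.monotonic() - start


if __name__ == '__main__':
    for workers in range(1, 3):
        print("took %s seconds with %d workers" % (bench(workers), workers))
```

Without the patch, the readers and writers contend on the queue's shared locks while (un)pickling, so the times grow with the worker count; with the patch they stay roughly flat.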

Although I didn't analyse it further, I guess one reason could be that when the serialization is done in put(), the feeder thread has nothing to do but wait for data to become available in the buffer, send it, and block until there's more: it barely uses its time slice, and spends most of its time blocking and doing context switches.

files: queues_contention.diff
keywords: patch
messages: 180532
nosy: neologix, pitrou, sbt
priority: normal
severity: normal
status: open
title: reduce multiprocessing.Queue contention
type: performance
Added file: http://bugs.python.org/file28812/queues_contention.diff

Python tracker <report at bugs.python.org>
