[concurrency] Common Concurrent Problems
paul at boddie.org.uk
Mon Jun 8 23:10:11 CEST 2009
I noticed recently that there's an effort to show solutions to common
concurrency problems on the python.org Wiki, and I've been using this effort
as an excuse to look over my own work and to improve it in various ways. I
even made a remark on the "99 Concurrent Bottles of Beer" page which appears
to have been wide of the mark - that one would surely try to use operating
system features in the given example to provide a more optimal
implementation - and I note that Glyph appears to regard the stated problem
as not really being concurrent.
In the next few days, I intend to release a new version of my pprocess
library to support problems like the one on the Wiki more conveniently, but
previous experience has shown that more compelling problems are required.
Some time ago, the "Wide Finder" project attempted to document concurrency
solutions for a log-parsing problem. Some regarded the problem as
I/O-dominated, while for others it led to optimised serial solutions that
could outperform parallel ones, since most people's naive code left plenty
of scope for optimisation. With pprocess, I bundle the PyGmy raytracer to
demonstrate that multiple processes really do get used and can benefit
programs on multiple cores, but I imagine that many people don't regard this
as being "real world" enough.
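To make the Wide Finder shape of the problem concrete, here is a minimal
sketch of a map-style parallel log scan. It uses the standard library's
multiprocessing module rather than pprocess itself, and the log lines and
regular expression are made-up stand-ins, not the actual Wide Finder data:

```python
# Illustrative sketch only: tally requested paths across chunks of an
# access log in parallel. Uses multiprocessing.Pool (not pprocess); the
# sample lines and pattern are assumptions standing in for a real log.
import re
from collections import Counter
from multiprocessing import Pool

PATTERN = re.compile(r'"GET (\S+) HTTP')

def count_chunk(lines):
    """Map step: count requested paths in one chunk of log lines."""
    counts = Counter()
    for line in lines:
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

def parallel_count(lines, workers=4):
    """Split the lines into chunks, farm them out, merge the tallies."""
    size = max(1, len(lines) // workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    total = Counter()
    with Pool(workers) as pool:
        for partial in pool.map(count_chunk, chunks):
            total.update(partial)
    return total

if __name__ == "__main__":
    sample = ['1.2.3.4 - - "GET /index.html HTTP/1.0" 200',
              '1.2.3.5 - - "GET /about.html HTTP/1.0" 200'] * 1000
    print(parallel_count(sample).most_common(1))
```

Of course, this is exactly the kind of embarrassingly parallel map-and-merge
shape where a well-tuned serial loop may still win, which is the crux of the
Wide Finder debate.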
How should we extend the problem on the Wiki to be something that doesn't have
a workable serial solution? Does anyone have any suggestions of more
realistic problems, or are we back at the level of Wide Finder?
More information about the concurrency-sig mailing list