[Python-ideas] Fwd: Concurrent safety?
Mike Meyer
mwm at mired.org
Tue Nov 1 16:54:58 CET 2011
On Tue, Nov 1, 2011 at 1:31 AM, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On Tue, Nov 1, 2011 at 6:01 PM, Stephen J. Turnbull <stephen at xemacs.org>
> wrote:
> > > I've identified the problem I want to solve: I want to make
> > > concurrent use of python objects "safe by default",
> >
> > But that's not what you've proposed, AIUI. You've proposed making
> > concurrent use *safer*, but not yet *safe*. That's quite different
> > from the analogy with automatic memory management, where the
> > programmer can't do anything dangerous with pointers (because they
> > can't do anything at all). The analogous model for concurrency is
> > processes, it seems to me. (I don't have a desperate need for high-
> > performance concurrency, so I take no position on processes + message
> > passing vs. threads + shared resources.)
>
> Guido and python-dev in general *have* effectively taken a position on
> that, though (mainly due to Global Interpreter Lock discussions).
>
> 1. Even for threads, the recommended approach is to use queue.Queue to
> avoid the common concurrency issues (such as race conditions and
> deadlock) associated with explicit locking
> 2. In Python 3, concurrent.futures offers an even *safer* and higher
> level interface for many concurrent workloads
> 3. If you use multiple processes and serialised messages, or higher
> level APIs like concurrent.futures, you can not only scale to multiple
> cores, but also to multiple *machines*.
>
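The two thread-based approaches Nick mentions can be sketched as follows. This is a minimal illustration, not code from the thread; the worker function, sentinel convention, and toy squaring workload are all invented for the example. queue.Queue handles its own locking, and concurrent.futures hides the queue and thread management entirely:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def worker(tasks, results):
    # Pull items off the shared queue; queue.Queue does its own
    # locking, so no explicit locks or condition variables appear here.
    while True:
        item = tasks.get()
        if item is None:  # sentinel value: no more work
            break
        results.put(item * item)

tasks = queue.Queue()
results = queue.Queue()
t = threading.Thread(target=worker, args=(tasks, results))
t.start()
for n in range(5):
    tasks.put(n)
tasks.put(None)  # tell the worker to shut down
t.join()
squares = sorted(results.get() for _ in range(5))
print(squares)  # [0, 1, 4, 9, 16]

# The same workload via concurrent.futures: submit/map replace the
# hand-rolled queue and thread plumbing above.
with ThreadPoolExecutor(max_workers=2) as pool:
    mapped = sorted(pool.map(lambda n: n * n, range(5)))
print(mapped)  # [0, 1, 4, 9, 16]
```

Swapping ThreadPoolExecutor for ProcessPoolExecutor is what gives point 3 its force: the same futures interface then scales across cores (and, with serialized messages, across machines).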
I am aware of all this. I've written large systems using queue.Queue and
the multiple-process/serialized-messages model. I've dealt with code that
tried to mix the two (*not* a good idea). The process model works really
well - if you can use it. The problem is, if you can't, you lose all the
protection it provides. That's the area I'm trying to address.
Also, the process model doesn't prevent these concurrency issues; it just
moves them to external objects. I figure that's an even harder problem,
since it can involve multiple machines. An improvement in the shared-storage
case might shed some light on it.
> This has led to a quite deserved reputation for being intolerant of
> changes that claim to make multithreaded development "better", but
> only at the expense of making single-threaded development worse.
>
I think I've found a way to implement the proposal without a serious
impact on single-threaded code - at least in terms of performance and
the need to change existing code.
<mike