[Python-Dev] [ANN]: "newthreading" - an approach to simplified thread usage, and a path to getting rid of the GIL

John Nagle nagle at animats.com
Sat Jun 26 18:39:19 CEST 2010


On 6/26/2010 7:44 AM, Jesse Noller wrote:
> On Sat, Jun 26, 2010 at 9:29 AM, Michael Foord
> <fuzzyman at voidspace.org.uk>  wrote:
>> On 26/06/2010 07:11, John Nagle wrote:
>>>
>>> We have just released a proof-of-concept implementation of a new
>>> approach to thread management - "newthreading".
....

>> The import * form is considered bad practise in *general* and
>> should not be recommended unless there is a good reason.

    I agree.  I just did that to make the examples cleaner.

>> however the introduction of free-threading in Python has not been
>> hampered by lack of synchronization primitives but by the
>> difficulty of changing the interpreter without unduly impacting
>> single threaded code.

     That's what I'm trying to address here.

>> Providing an alternative garbage collection mechanism other than
>> reference counting would be a more interesting first-step as far as
>> I can see, as that removes the locking required around every access
>> to an object (which currently touches the reference count).
>> Introducing free-threading by *changing* the threading semantics
>> (so you can't share non-frozen objects between threads) would not
>> be acceptable. That comment is likely to be based on a
>> misunderstanding of your future intentions though. :-)

     This work comes out of a discussion a few of us had at a restaurant
in Palo Alto after a Stanford talk by the group at Facebook which
is building a JIT compiler for PHP.  We were discussing how to
make threading both safe for the average programmer and efficient.
JavaScript and PHP don't have threads at all; Python has safe
threading, but it's slow.  C/C++/Java all have race condition
problems, of course.  The Facebook guy pointed out that you
can't redefine a function dynamically in PHP, and they get
a performance win in their JIT by exploiting this.

     I haven't gone into the memory model in enough detail in the
technical paper.  The memory model I envision for this has three
memory zones:

     1.  Shared fully-immutable objects: primarily strings, numbers,
and tuples, all of whose elements are fully immutable.  These can
be shared without locking, and reclaimed by a concurrent garbage
collector like Boehm's.  They have no destructors, so finalization
is not an issue.

     2.  Local objects.  These are managed as at present, and
require no locking.  These can either be thread-local, or local
to a synchronized object.  There are no links between local
objects under different "ownership".  Whether each thread and
object has its own private heap, or whether there is a common heap with
locking in the allocator, is an implementation decision.

     3.  Shared mutable objects: mostly synchronized objects, but
also immutable objects like tuples which contain references
to objects that aren't fully immutable.  These are the high-overhead
objects, and require locking during reference count updates, or
atomic reference count operations if supported by the hardware.
The general idea is to minimize the number of objects in this
zone.

     The zone of an object is determined when the object is created,
and never changes.   This is relatively simple to implement.
Tuples (and frozensets, frozendicts, etc.) are normally zone 2
objects.  Only "freeze" creates collections in zones 1 and 3.
Synchronized objects are always created in zone 3.
There are no difficult handoffs, where an object that was previously
thread-local now has to be shared and has to acquire locks during
the transition.
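
     Roughly, the "fully immutable" test that decides whether a newly
created collection can live in zone 1 looks something like the sketch
below.  This is purely illustrative plain Python; the name
"is_fully_immutable" and the exact set of types are my own shorthand
here, not the newthreading code.

    import numbers

    def is_fully_immutable(obj):
        """True if obj and everything it references are immutable."""
        if obj is None or isinstance(obj, (str, bytes, numbers.Number)):
            return True
        if isinstance(obj, (tuple, frozenset)):
            return all(is_fully_immutable(e) for e in obj)
        # Lists, dicts, and ordinary instances are mutable: zone 2,
        # or zone 3 if placed inside a synchronized object.
        return False

    print(is_fully_immutable((1, "a", (2.0, 3))))   # True  -> zone 1 candidate
    print(is_fully_immutable((1, ["a"])))           # False -> stays in zone 2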

     Existing interlinked data structures, like parse trees and GUIs,
are by default zone 2 objects, with the same semantics as at
present.  They can be placed inside a SynchronizedObject if
desired, which makes them usable from multiple threads.
That's optional; they're thread-local otherwise.
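
     As a very rough stand-in for that idea, here is what a minimal
lock-per-object wrapper looks like using only the standard threading
module.  The real SynchronizedObject in newthreading is not implemented
this way; this only shows the intent, which is one lock guarding access
to the wrapped object graph.

    import threading

    class Synchronized(object):
        """Route attribute access on the wrapped object through one lock."""
        def __init__(self, obj):
            object.__setattr__(self, "_obj", obj)
            object.__setattr__(self, "_lock", threading.RLock())
        def __getattr__(self, name):
            with self._lock:
                return getattr(self._obj, name)
        def __setattr__(self, name, value):
            with self._lock:
                setattr(self._obj, name, value)

    class ParseNode(object):
        def __init__(self, label):
            self.label = label
            self.children = []

    shared_tree = Synchronized(ParseNode("root"))
    shared_tree.label = "updated"      # attribute set happens under the lock
    print(shared_tree.label)           # attribute get happens under the lock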

     The rationale behind "freezing" some of the language semantics
when the program goes multi-thread comes from two sources -
Adam Olsen's Safethread work, and the acceptance of the
multiprocessing module.  Olsen tried to retain all the dynamism of
the language in a multithreaded environment, but locking all the
underlying dictionaries was a boat-anchor on the whole system,
and slowed things down so much that he abandoned the project.
The Unladen Swallow documentation indicates that early thinking
on the project was that Olsen's approach would allow getting
rid of the GIL, but later notes indicate that no path to a
GIL-free JIT system is currently in development.

     The multiprocessing module provides semantics similar to
threading with "freezing".  Data passed between processes is "frozen"
by pickling.  Processes can't modify each other's code.  Restrictive
though the multiprocessing module is, it appears to be useful.
It is sometimes recommended as the Pythonic approach to multi-core CPUs.
This is an indication that "freezing" is not unacceptable to the
user community.
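
     The boundary effect is easy to see with ordinary multiprocessing
code (nothing newthreading-specific here): the dict crossing the process
boundary is pickled, so the worker gets a copy and cannot affect the
parent's object.

    import multiprocessing

    def worker(config):
        config["mode"] = "changed"    # mutates only this process's unpickled copy
        return config["mode"]

    if __name__ == "__main__":
        config = {"mode": "original"}
        pool = multiprocessing.Pool(1)
        print(pool.apply(worker, (config,)))   # "changed"
        pool.close()
        pool.join()
        print(config["mode"])                  # still "original" in the parent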

     Most of the real-world use cases for extreme dynamism
involve events that happen during startup.  Configuration files are
read, modules are selectively included, functions are overridden, tables
of references to functions are set up, regular expressions are compiled,
and the code is brought into the appropriately configured state.  Then
the worker threads are started and the real work begins.  The
"newthreading" approach allows all of that.

     After two decades of failed attempts to remove the Global
Interpreter Lock without making performance worse, it is perhaps
time to take a harder look at scalable threading semantics.

					John Nagle
					Animats

