dual processor

Nick Craig-Wood nick at craig-wood.com
Mon Sep 5 18:29:48 CEST 2005

Scott David Daniels <Scott.Daniels at Acm.Org> wrote:
>  Nick Craig-Wood wrote:
> > Splitting the GIL introduces performance and memory penalties....
> > However its crystal clear now the future is SMP.  Modern chips seem to
> > have hit the GHz barrier, and now the easy meat for the processor
> > designers is to multiply silicon and make multiple thread / core
> > processors all in a single chip.
> > So, I believe Python has got to address the GIL, and soon.
>  However, there is no reason to assume that those multiple cores must
>  work in the same process.

No, of course not.  However, if they aren't, then you've got the
horrors of IPC to deal with, which is difficult to do fast and
portably.  It's much easier to communicate with another thread,
especially with the lovely Python threading primitives.
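To make the contrast concrete, here's a minimal sketch of the in-process communication being described, using the stdlib queue and threading modules.  Threads share one address space, so handing work over is just passing an object reference; nothing needs to be serialised or pushed through a pipe or socket as it would between processes.

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()
        if item is None:      # sentinel: no more work
            break
        # The worker sees the very same 'results' list the main
        # thread created -- shared memory for free.
        results.append(item * item)

t = threading.Thread(target=worker)
t.start()
for n in range(5):
    q.put(n)                  # just a reference, no copying or IPC
q.put(None)
t.join()
print(sorted(results))        # [0, 1, 4, 9, 16]
```

Doing the same between processes means pickling every item and shipping it over an OS-level channel, which is exactly the portability and speed headache mentioned above.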

>  One of the biggest issues in running python in multiple
>  simultaneously active threads is that the Python opcodes themselves
>  are no longer indivisible.  Making a higher level language that
>  allows updates work with multiple threads involves lots of
>  coordination between threads simply to know when data structures
>  are correct and when they are in transition.

Sure!  No one said it was easy.  However, I think it can be done for all
of Python's native data types, and in a way that is completely
transparent to the user.

>  Even processes sharing some memory (in a "raw binary memory" style) are
>  easier to write and test.  You'd lose too much processor to coordination
>  effort which was likely unnecessary.  The simplest example I can think
>  of is decrementing a reference count.  Only one thread can be allowed to
>  DECREF at any given time for fear of leaking memory, even though it will
>  most often turn out the objects being DECREF'ed by distinct threads are
>  themselves distinct.

Yes, locking is expensive.  If we placed a lock in every Python object,
that would bloat memory usage and burn CPU time grabbing and releasing
all those locks.  However, if it meant your threaded program could use
90% of all 16 CPUs rather than 100% of one, I think it's obvious where
the payoff lies.
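The per-operation cost being traded away is easy to ballpark.  This is only a rough timing sketch, not a measurement of any real fine-grained-locking design: it compares a bare increment against the same increment wrapped in an uncontended lock, which is the extra work every object operation would pay.

```python
import threading
import time

N = 100_000
lock = threading.Lock()

# Bare increments.
start = time.perf_counter()
x = 0
for _ in range(N):
    x += 1
plain = time.perf_counter() - start

# The same increments, each paying one lock acquire/release --
# the overhead a hypothetical per-object lock would add.
start = time.perf_counter()
x = 0
for _ in range(N):
    with lock:
        x += 1
locked = time.perf_counter() - start

print(f"plain:  {plain:.4f}s")
print(f"locked: {locked:.4f}s")   # typically several times slower
```

The absolute numbers vary by machine; the point is that the locked loop is noticeably slower per operation, yet that fixed overhead is what buys the ability to scale across many CPUs.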

Memory is cheap.  Multiple cores (SMP/SMT) are everywhere!

>  In short, two Python threads running simultaneously cannot trust
>  that any basic Python data structures they access are in a
>  consistent state without some form of coordination.

Aye, lots of locking is needed.

Nick Craig-Wood <nick at craig-wood.com> -- http://www.craig-wood.com/nick
