2.6, 3.0, and truly independent interpreters
rhamph at gmail.com
Sat Oct 25 03:07:04 CEST 2008
On Fri, Oct 24, 2008 at 5:38 PM, Glenn Linderman <v+python at g.nevcal.com> wrote:
> On approximately 10/24/2008 2:16 PM, came the following characters from the
> keyboard of Rhamphoryncus:
>> On Oct 24, 3:02 pm, Glenn Linderman <v+pyt... at g.nevcal.com> wrote:
>>> On approximately 10/23/2008 2:24 PM, came the following characters from
>>> the keyboard of Rhamphoryncus:
>>>> On Oct 23, 11:30 am, Glenn Linderman <v+pyt... at g.nevcal.com> wrote:
>>>>> On approximately 10/23/2008 12:24 AM, came the following characters
>>>>> from the keyboard of Christian Heimes:
>>>>>> Andy wrote:
>>>>>> I'm very - not absolute, but very - sure that Guido and the initial
>>>>>> designers of Python would have added the GIL anyway. The GIL makes
>>>>>> Python faster on single core machines and more stable on multi core
>>>>>> machines.
>>> Actually, the GIL doesn't make Python faster; it is a design decision
>>> that reduces the overhead of lock acquisition, while still allowing use
>>> of global variables. Using finer-grained locks has a higher run-time
>>> cost; eliminating the use of global variables has a higher
>>> programmer-time cost, but would actually run faster and more
>>> concurrently than using a GIL. Especially on a multi-core/multi-CPU
>>> machine.
>> Those "globals" include classes, modules, and functions. You can't
>> have *any* objects shared. Your interpreters are entirely isolated,
>> much like processes (and we all start wondering why you don't use
>> processes in the first place.)
> Indeed; isolated, independent interpreters are one of the goals. It is,
> indeed, much like processes, but in a single address space. It allows the
> master process (Python or C for the embedded case) to be coded using memory
> references and copies and pointer swaps instead of using semaphores, and
> potentially multi-megabyte message transfers.
> It is not clear to me that, with the use of shared memory between
> processes, the application couldn't achieve many of the same goals. On
> the other hand, the code to create and manipulate processes and
> shared memory blocks is harder to write and has more overhead than the code
> to create and manipulate threads, which can, when told, access any memory
> block in the process. This allows the shared memory to be resized more
> easily, or more blocks of shared memory created more easily. On the other
> hand, the creation of shared memory blocks shouldn't be a high-use operation
> in a program that has sufficient number crunching to do to be able to
> consume multiple cores/CPUs.
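As a rough illustration of the trade-off described above (this example is mine, not from the thread): the multiprocessing module, new in Python 2.6, already lets processes share a memory block directly, so workers can write results in place rather than passing multi-megabyte messages. The `crunch` worker below is a made-up example, not any API under discussion.

```python
# Hypothetical sketch: sharing a memory block between processes with
# multiprocessing (new in Python 2.6), avoiding large message transfers.
from multiprocessing import Process, Value, Array

def crunch(total, data):
    # The worker writes into the shared block directly.
    for i in range(len(data)):
        data[i] = data[i] * 2
    # The shared Value carries its own lock for safe accumulation.
    with total.get_lock():
        total.value += sum(data)

if __name__ == "__main__":
    total = Value("d", 0.0)        # shared double
    data = Array("i", range(10))   # shared block of ints
    p = Process(target=crunch, args=(total, data))
    p.start()
    p.join()
    print(total.value)
```

Note the point Glenn raises still stands: resizing such a block, or creating new ones on the fly, is clumsier than having threads simply touch any memory in a common address space.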
>> Or use safethread. It imposes safe semantics on shared objects, so
>> you can keep your global classes, modules, and functions. Still need
>> garbage collection though, and on CPython that means refcounting and
>> the GIL.
> Sounds like safethread has 35-40% overhead. Sounds like too much, to me.
The specific implementation of safethread, which attempts to remove
the GIL from CPython, has significant overhead and has had very limited
success at scaling.
The monitor design proposed by safethread has no inherent overhead and
is completely scalable.
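To make the monitor idea concrete (a rough sketch of the general concept using an ordinary lock, not safethread's actual API): a monitor owns some shared state and serializes every operation on it, so callers never touch the state directly and the state itself needs no fine-grained locks.

```python
# Rough sketch of the monitor concept, not safethread's actual API:
# all access to the shared state funnels through the monitor, which
# admits one thread at a time.
import threading

class Monitor:
    def __init__(self, state):
        self._state = state
        self._lock = threading.Lock()

    def enter(self, func, *args):
        # Only one thread at a time may operate on the shared state.
        with self._lock:
            return func(self._state, *args)

counter = Monitor({"n": 0})

def increment(state, amount):
    state["n"] += amount
    return state["n"]
```

A thread would call `counter.enter(increment, 1)`; since contention is per-monitor rather than global, independent monitors can run on separate cores without a GIL.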
Adam Olsen, aka Rhamphoryncus