[pypy-dev] pre-emptive micro-threads utilizing shared memory message passing?
fijall at gmail.com
Tue Jul 27 17:11:43 CEST 2010
>> concurrent threads (depends on implicit vs explicit shared memory)
>> might require a truly concurrent GC to achieve performance. This is
>> work (although not as big as removing refcounting from CPython, for example).
>>> Are there detailed docs on why the Python GIL exists?
>>> I don't mean trivial statements like "because of C extensions" or "because the interpreter can't handle it".
>>> It may be possible that my particular usage would not require the GIL. However, I won't know this until I can understand what threading problems the Python interpreter has that the GIL was meant to protect against. Is there detailed documentation about this anywhere that covers all the threading issues that the GIL was meant to solve?
>> The short answer is "yes". The long answer is that it's much easier to
>> write an interpreter assuming the GIL is around. For fine-grained locking to
>> work and be efficient, you would need:
>> * The aforementioned locking, to ensure that it's not too easy to
>> screw things up.
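To make the "fine-grained locking" point concrete, here is a minimal sketch (my own illustration, not PyPy's actual design) of what replacing the GIL with per-object locks would mean for ordinary attribute access: every object carries its own lock guarding its attribute dictionary, and every read and write pays for an acquire/release.

```python
import threading

class FineGrainedObject:
    """Toy sketch: each object guards its attribute dictionary with its
    own lock, instead of relying on one global lock (the GIL)."""

    def __init__(self):
        # Bypass our own __setattr__ while setting up the internals.
        object.__setattr__(self, '_lock', threading.Lock())
        object.__setattr__(self, '_attrs', {})

    def __setattr__(self, name, value):
        with self._lock:              # every write takes the per-object lock
            self._attrs[name] = value

    def __getattr__(self, name):
        # Only called when normal lookup fails, i.e. for entries in _attrs.
        with self._lock:              # every read takes it too
            try:
                return self._attrs[name]
            except KeyError:
                raise AttributeError(name)
```

The per-access cost on this fast path, multiplied over every attribute operation an interpreter performs, is exactly why the points below (lock-removing JIT optimizations, a suitable GC) come up.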
> I've wondered about the guarantees we need to offer to the
> programmer, and my guess was that Jython's memory model is similar.
> I've been concentrating on the dictionary of objects, on the
> assumption that lists and most other built-in structures should be
> locked by the programmer in case of concurrent modifications.
> However, we don't want to require locking to support something like:
> Thread 1:
>   obj.newmember = 1;
> Thread 2:
>   a = obj.oldmember;
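The racy pattern quoted above can be run directly in today's CPython: one thread keeps writing a new attribute while another keeps reading an existing one, with no locking anywhere. A minimal sketch (attribute names taken from the example; the loop counts are arbitrary):

```python
import threading

class Obj:
    pass

obj = Obj()
obj.oldmember = 42

def writer():
    # "Thread 1": keeps adding/updating a new attribute
    for i in range(10000):
        obj.newmember = i

def reader(results):
    # "Thread 2": keeps reading an existing attribute
    for _ in range(10000):
        results.append(obj.oldmember)

results = []
t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()

# No user-level locks, yet no read is ever torn or corrupted,
# because the object's attribute dictionary is itself thread-safe.
assert all(r == 42 for r in results)
```

This is the guarantee being discussed: the programmer should not need a lock just to make this snippet behave sanely.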
> Looking for the Jython memory model on Google mostly turns up garbage,
> and then this document from Unladen Swallow:
> It implicitly agrees on what's above (since Jython and IronPython both
> use thread-safe dictionaries), and then delves into issues about
> allowed reorderings.
> However, it requires that even racy code does not make the interpreter crash.
I guess the main constraint is indeed "the interpreter should not crash".
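A small sketch of what that constraint does and does not promise: a classic read-modify-write race may lose application-level updates, but no individual interpreter operation is ever left in a corrupted state, so the process never crashes. (The loop and thread counts below are arbitrary.)

```python
import threading

counter = {'n': 0}

def bump():
    for _ in range(100000):
        counter['n'] += 1   # read-modify-write race: updates may be lost

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each individual dict read and write stays intact, so the interpreter
# never segfaults; only the *application-level* total may come out
# below 400000, and that is the programmer's problem, not the VM's.
```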
>> * Possibly a JIT optimization that would remove some locking.
> Any more specific ideas on this?
Well, yes. Determining when an object is local, so that you don't need
to do any locking, even though it escapes (this is also "just work",
since it has been done before).
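A toy sketch of that lock-removal idea (my own illustration, not PyPy's design): while an object is provably local to the thread that created it, skip locking entirely, and fall back to a real lock once it is shared. A JIT would prove locality automatically; here a hypothetical `mark_escaped()` call stands in for its escape analysis.

```python
import contextlib
import threading

class LocalThenShared:
    """Skip locking while only the creating thread can see the object;
    use a real lock once it has been published to other threads."""

    def __init__(self):
        self._owner = threading.get_ident()
        self._escaped = False
        self._lock = threading.Lock()

    def mark_escaped(self):
        # Must be called by the owner *before* handing the object to
        # another thread; the publication step (thread start, queue put)
        # then provides the happens-before edge for the flag itself.
        self._escaped = True

    def locked(self):
        if not self._escaped and threading.get_ident() == self._owner:
            return contextlib.nullcontext()   # fast path: no lock at all
        return self._lock                     # shared: take the real lock
```

Real VMs doing biased or eliding locking additionally need a safe way to revoke the fast path from another thread (usually via safepoints), which this sketch sidesteps by requiring the explicit `mark_escaped()` call.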
>> * Some sort of concurrent GC (not specifically running in a separate
>> thread, but having different pools of memory to allocate from)
> Among all points, this seems the easiest design-wise. Having
> per-thread pools is nowadays standard, so it's _just_ work (as opposed
> to 'complicated design'). Parallel GCs become important just when lots
> of garbage must be reclaimed.
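The per-thread pool idea above can be sketched in a few lines: each thread bump-allocates out of its own arena, so the fast path takes no lock at all, and only fetching a fresh arena touches global state. (Class and method names here are illustrative, and a real GC would do this in the VM, not in Python.)

```python
import threading

class ThreadLocalPool:
    """Sketch of per-thread allocation pools: each thread bump-allocates
    from a private arena; only arena refills are globally synchronized."""

    ARENA_SIZE = 4096

    def __init__(self):
        self._global_lock = threading.Lock()
        self._arenas_handed_out = 0
        self._tls = threading.local()

    def _new_arena(self):
        with self._global_lock:           # slow path, taken rarely
            self._arenas_handed_out += 1
        return bytearray(self.ARENA_SIZE)

    def alloc(self, size):
        tls = self._tls
        if getattr(tls, 'arena', None) is None or tls.top + size > self.ARENA_SIZE:
            tls.arena = self._new_arena() # refill this thread's arena
            tls.top = 0
        offset = tls.top
        tls.top += size                   # bump the pointer: no locking
        return memoryview(tls.arena)[offset:offset + size]
```

This is why it is "just work": the design is standard, and the only contended operation left is the occasional arena refill.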
> A GC is called concurrent, rather than parallel, when it runs
> concurrently with the mutator, and this usually reduces both pause
> times and throughput, so you probably don't want this as default (it
> is useful for particular programs, such as heavily interactive
> programs or videogames, I guess), do you?
I guess I meant parallel then.
> More details are here:
> The trick used in the (mostly) concurrent collector of Hotspot seems
> interesting: it uses two short stop-the-world phases and lets the
> program run in between. I think I'll look for a paper on it.
Would be interested in that.
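The Hotspot scheme discussed above can be illustrated with a sequential toy model: a short stop-the-world "initial mark" snapshots the roots, marking then runs concurrently with the mutator whose new edges are logged by a write barrier, and a second short "remark" pause rescans only the logged objects. (The heap graph and names below are invented for the illustration.)

```python
# Toy heap: object name -> set of child names.
heap = {
    'root': {'a'},
    'a': {'b'},
    'b': set(),
    'c': set(),              # unreachable from the start: real garbage
}
marked = set()
dirty = set()                # write-barrier log: objects mutated during marking

def write_barrier(src, dst):
    heap[src].add(dst)
    dirty.add(src)           # remember mutated objects for the remark pause

def mark(roots, graph):
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(graph[obj])

# STW pause 1 ("initial mark"): snapshot the graph, resume the mutator.
snapshot = {k: set(v) for k, v in heap.items()}

# Concurrent phase: the mutator allocates 'd' and links it in *after*
# the snapshot, so marking over the snapshot alone would miss it.
heap['d'] = set()
write_barrier('a', 'd')
mark({'root'}, snapshot)

# STW pause 2 ("remark"): rescan only the dirty objects' current children.
mark({child for obj in dirty if obj in marked for child in heap[obj]}, heap)

# Sweep: everything still unmarked is garbage.
garbage = set(heap) - marked
```

The point of the second pause is visible here: without the remark step over the write-barrier log, the concurrently created edge `a -> d` would cause a live object to be swept.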
> Paolo Giarrusso - Ph.D. Student
More information about the Pypy-dev mailing list