[pypy-dev] Would the following shared memory model be possible?

William Leslie william.leslie.ttg at gmail.com
Thu Jul 29 15:15:32 CEST 2010


On 29 July 2010 18:55, Maciej Fijalkowski <fijall at gmail.com> wrote:
> On Thu, Jul 29, 2010 at 10:50 AM, William Leslie
> <william.leslie.ttg at gmail.com> wrote:
>> If task X expects that task Y will mutate some object X holds, X needs
>> to go back to the source for every read. This means that if you use
>> mutation of a shared object for communication, the object needs to be
>> synchronised before every access. For us, that means every read from a
>> possibly mutable object requires an acquire, and every write requires a
>> release. It's as if every reference in the program were implemented
>> with a volatile pointer. Even if the object is never mutated, there can
>> be a lot of unnecessary bus chatter waiting for MESI to tell us so.
>>
>
> I do agree there is an overhead. Can you provide some data on how
> large this overhead is? Python is not a simple language and a lot of
> things are already complex and time-consuming, so I wonder how it
> compares to locking per object.

It *is* locking per object, but you also spend time re-fetching the
data if another core has invalidated your cache line.
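To make that concrete, here is a minimal sketch (plain Python, not PyPy
code; SharedProxy is a made-up name) of what paying for synchronisation
on every access looks like: a wrapper that takes a per-object lock
around each attribute read and write, standing in for the
acquire-on-read / release-on-write described above.

    import threading

    class SharedProxy(object):
        """Hypothetical wrapper: every attribute access goes through a
        per-object lock, standing in for the acquire on each read and
        the release on each write that a shared mutable object needs."""

        def __init__(self, obj):
            object.__setattr__(self, '_obj', obj)
            object.__setattr__(self, '_lock', threading.Lock())

        def __getattr__(self, name):
            # acquire before every read: another task may have mutated it
            with self._lock:
                return getattr(self._obj, name)

        def __setattr__(self, name, value):
            # synchronise every write so other tasks observe it
            with self._lock:
                setattr(self._obj, name, value)

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

    shared = SharedProxy(Point(1, 2))
    # each of these reads takes and releases the lock, even though
    # nothing has changed -- that is the per-access overhead
    total = shared.x + shared.y

Swap the lock for an atomic acquire-load / release-store and you get the
"volatile pointer" picture above; the cost structure is the same.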

Come to think of it, that isn't as bad as it first seemed to me. If
the sender never mutates the object, it will Just Work on any machine
with a fairly flat cache architecture.
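For comparison, a sketch of that never-mutate case (again plain Python,
names made up): the sender builds an immutable value and publishes it
once through a queue, so the queue's lock is the only synchronisation
point; after that the receiver can read the value as often as it likes
with no per-read cost, because it can never change underneath it.

    import threading
    try:
        import queue              # Python 3
    except ImportError:
        import Queue as queue     # Python 2, current when this was written

    results = queue.Queue()

    def sender():
        point = (1, 2)            # immutable once constructed
        results.put(point)        # the queue synchronises the hand-off

    def receiver():
        point = results.get()     # one acquire at publication time...
        for _ in range(1000):
            x, y = point          # ...then reads need no synchronisation:
                                  # the object never changes underneath us

    threads = [threading.Thread(target=f) for f in (sender, receiver)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()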

Sorry. Carry on.

-- 
William Leslie


