[pypy-dev] Would the following shared memory model be possible?

William Leslie william.leslie.ttg at gmail.com
Fri Jul 30 09:35:29 CEST 2010


On 30 July 2010 06:53, Paolo Giarrusso <p.giarrusso at gmail.com> wrote:
>> Come to think of it, that isn't as bad as it first seemed to me. If
>> the sender never mutates the object, it will Just Work on any machine
>> with a fairly flat cache architecture.
>
> You first wrote: "The alternative, implicitly writing updates back to
> memory as soon as possible and reading them out of memory every time,
> can be hundreds or more times slower."
> This is not "locking per object", it is just semantically close to it,
> and becomes equivalent if only one thread has a reference at any time.

Yes, direct memory access was a misdirection (sorry): the cache
hardware already handles consistency, even in NUMA systems of the
size that sits on most desktops today. More significantly, you still
need to lock objects in many cases, for example when looking up an
entry in a dict, because the dict can change size while you are
probing it. So uncached accesses are not only needlessly slow in the
typical case, they are also insufficient to ensure the consistency of
some resizable RPython data structures.
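
To make the dict example concrete, here is a minimal sketch of that
race, assuming a simplified open-addressing table (the class name and
layout are mine for illustration, not RPython's actual dict code):

import threading

class OpenAddressingDict(object):
    def __init__(self):
        self.table = [None] * 8       # slots: None or (key, value)
        self.mask = 7                 # len(self.table) - 1
        self.used = 0
        self.lock = threading.Lock()  # writers take this; the broken
                                      # reader below does not

    def _probe(self, table, mask, key):
        # Linear probing: walk until we find the key or an empty slot.
        i = hash(key) & mask
        while table[i] is not None and table[i][0] != key:
            i = (i + 1) & mask
        return i

    def unsafe_lookup(self, key):
        # RACE: self.table and self.mask are read separately, and a
        # concurrent resize replaces both.  The probe may use a stale
        # mask against the new table (wrong slot), or walk a stale
        # table that no longer receives updates.
        i = self._probe(self.table, self.mask, key)
        slot = self.table[i]          # self.table may differ by now!
        return slot[1] if slot is not None else None

    def insert(self, key, value):
        with self.lock:               # only writers serialize here
            if 2 * (self.used + 1) > len(self.table):
                self._resize()
            i = self._probe(self.table, self.mask, key)
            if self.table[i] is None:
                self.used += 1
            self.table[i] = (key, value)

    def _resize(self):
        # Double the table and rehash every live entry.
        old = [s for s in self.table if s is not None]
        self.table = [None] * (len(self.table) * 2)
        self.mask = len(self.table) - 1
        for k, v in old:
            self.table[self._probe(self.table, self.mask, k)] = (k, v)

The point is that the probe performs several dependent reads (the
mask, one slot, the next slot), and a resize invalidates all of them
at once.  No amount of flushing or bypassing the cache makes that
sequence atomic, which is why the reader needs the lock (or some
other synchronization) too.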

-- 
William Leslie


