On 30 July 2010 06:53, Paolo Giarrusso <p.giarrusso@gmail.com> wrote:
Come to think of it, that isn't as bad as it first seemed to me. If the sender never mutates the object, it will Just Work on any machine with a fairly flat cache architecture.
You first wrote: "The alternative, implicitly writing updates back to memory as soon as possible and reading them out of memory every time, can be hundreds or more times slower." This is not "locking per object", it is just semantically close to it, and becomes equivalent if only one thread has a reference at any time.
Yes, direct memory access was misdirection on my part (sorry): the cache already handles consistency, even on NUMA systems of the size that sit on most desktops today. More significantly, you still need to lock objects in many cases, such as when looking up an entry in a dict, which can change size while you are probing it. So uncached accesses are not only needlessly slow in the typical case; they are also insufficient to ensure consistency of some resizable RPython data structures.

--
William Leslie
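To make the dict point concrete, here is a minimal toy sketch (not PyPy/RPython code, just an illustration with a hypothetical `ToyDict` class) of why probing an open-addressing table is unsafe against a concurrent resize even on a sequentially consistent machine: a slot index computed from the old table size is meaningless once the table has been rehashed. The "interleaving" is simulated with a callback rather than a real thread so the failure is deterministic.

```python
# Toy open-addressing dict. lookup_unsafe() computes its starting slot,
# then lets a simulated concurrent resize run before it probes -- the
# stale index then points at the wrong slot in the rehashed table.

class ToyDict:
    def __init__(self, size=8):
        self.table = [None] * size  # slots hold (key, value) or None

    def _slot(self, key, size):
        return hash(key) % size

    def insert(self, key, value):
        i = self._slot(key, len(self.table))
        while self.table[i] is not None and self.table[i][0] != key:
            i = (i + 1) % len(self.table)   # linear probing
        self.table[i] = (key, value)

    def resize(self, new_size):
        old = [e for e in self.table if e is not None]
        self.table = [None] * new_size
        for k, v in old:
            self.insert(k, v)               # rehash into new slots

    def lookup_unsafe(self, key, interleave=None):
        # Compute the starting slot from the *current* table size...
        i = self._slot(key, len(self.table))
        if interleave is not None:
            interleave()  # ...then an unsynchronised resize sneaks in.
        while True:
            entry = self.table[i % len(self.table)]
            if entry is None:
                return None                 # empty slot: "not present"
            if entry[0] == key:
                return entry[1]
            i += 1

d = ToyDict(8)
d.insert(42, "answer")
ok = d.lookup_unsafe(42)                                  # fine alone
missing = d.lookup_unsafe(42, interleave=lambda: d.resize(32))
print(ok, missing)  # the key "vanishes" under the racing resize
```

In CPython `hash(42) % 8` is 2 but `hash(42) % 32` is 10, so the probe starts at a slot that is empty after the rehash and wrongly reports the key as absent. This is the kind of inconsistency that no amount of write-through memory traffic fixes; only a lock (or an equivalent synchronisation scheme) around probe-plus-resize does.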