On 29 July 2010 18:55, Maciej Fijalkowski <fijall@gmail.com> wrote:
On Thu, Jul 29, 2010 at 10:50 AM, William Leslie <william.leslie.ttg@gmail.com> wrote:
If task X expects that task Y will mutate some object it holds, X needs to go back to the source for every read. This means that if you do use mutation of some shared object for communication, access to it needs to be synchronised every time. What this means for us is that every read from a possibly mutable object requires an acquire, and every write requires a release. It's as if every reference in the program were implemented with a volatile pointer. Even if the object is never mutated, there can be a lot of unnecessary bus chatter waiting for MESI to tell us so.
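For concreteness, here is a minimal sketch (not part of the original mail) of what "acquire on every read, release on every write" amounts to, using C++ std::atomic as a stand-in for the interpreter-level reference; the names are illustrative only:

```cpp
#include <atomic>
#include <string>

// Hypothetical stand-in for an interpreter-level object.
struct PyObjLike {
    std::string value;   // illustrative payload
};

// A reference that some other task might mutate behaves like this:
std::atomic<PyObjLike*> shared_ref{nullptr};

void writer(PyObjLike* fresh) {
    // Publish the new object with release ordering, so earlier writes
    // to *fresh become visible before the pointer itself does.
    shared_ref.store(fresh, std::memory_order_release);
}

PyObjLike* reader() {
    // Every read pays for an acquire, which forces the reader to
    // re-validate its cache line even if nothing was actually written.
    return shared_ref.load(std::memory_order_acquire);
}
```

The acquire load is the source of the bus chatter mentioned above: the cost is paid on every read of a possibly mutable object, whether or not it was mutated.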
I do agree there is an overhead. Can you provide some data on how large this overhead is? Python is not a very simple language and a lot of things are complex and time-consuming, so I wonder how it compares to locking per object.
It *is* locking per object, but you also spend time re-fetching the data if someone else has invalidated your cache line. Come to think of it, that isn't as bad as it first seemed to me: if the sender never mutates the object, it will Just Work on any machine with a fairly flat cache architecture. Sorry. Carry on.

--
William Leslie
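For comparison with the per-object locking raised above, a rough sketch assuming a hypothetical lock-per-object layout (again illustrative names, not PyPy's actual structures):

```cpp
#include <mutex>
#include <string>

// Hypothetical object that carries its own lock.
struct LockedPyObjLike {
    std::mutex lock;     // one lock per object
    std::string value;   // illustrative payload
};

std::string locked_read(LockedPyObjLike& obj) {
    // Each access takes and drops the object's own lock; the lock's
    // internal acquire/release gives the same visibility guarantees,
    // plus mutual exclusion, at the cost of extra atomic operations
    // and possible contention.
    std::lock_guard<std::mutex> guard(obj.lock);
    return obj.value;
}

void locked_write(LockedPyObjLike& obj, const std::string& v) {
    std::lock_guard<std::mutex> guard(obj.lock);
    obj.value = v;
}
```

An uncontended lock is itself roughly a pair of acquire/release atomics, which is why the two schemes land in a similar cost ballpark; the lock additionally serialises concurrent writers, while the plain acquire/release reference does not.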