
Josiah Carlson wrote:
These examples are all what are generally referred to in the literature as "embarrassingly parallel". One serious issue with programming parallel algorithms generally is that not all algorithms are parallelizable. Some are, certainly, but not all. The task is to discover those alternate algorithms that *are* parallelizable in such a way as to offer gains that are "worth it".
A couple of ideas I wanted to explore:

-- A base class, or perhaps a metaclass, that makes Python objects transactional. This wraps all attribute access so that what you see is your transaction's view of the current state of the object. Something like:

    with atomic():
        obj1.attribute = 1
        # Other threads can't 'see' the new value until the
        # transaction commits.
        value = obj1.attribute

'atomic()' starts a new transaction in your thread, which is stored in thread-local data. Any objects that you mutate become part of the transaction automatically (we will have to be careful about built-in mutable objects such as lists). At the end, the transaction either commits, or, if there was a conflict, it rolls back and you get to do it over again.

This would be useful for large networks of objects, where you want to make large numbers of local changes, each of which affects an object and perhaps its surrounding objects. What I am describing is very similar to many complex 3D game worlds. ZODB already has a transactional object mechanism, although it's oriented towards database-style transactions and object persistence.

-- A way to partition the space of Python objects, such that objects in one partition cannot have references outside of the partition without going through some sort of synchronization mechanism, perhaps via a proxy. The idea is to be able to guarantee that shared state is only accessible in certain ways that you can reason about.

-- Talin
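To make the first idea concrete, here is a rough sketch of how a toy 'atomic()' might buffer attribute writes per thread. Everything in it (the `Transactional` base class, the buffering scheme) is invented for illustration, and it punts entirely on conflict detection -- it only shows commit-on-success and discard-on-exception, not the validate-and-retry loop a real STM would need:

```python
import threading

_local = threading.local()  # per-thread transaction state (hypothetical)

class Transactional:
    """Base class whose attribute writes are buffered per-transaction."""

    def __setattr__(self, name, value):
        txn = getattr(_local, 'txn', None)
        if txn is not None:
            # Buffer the write in this thread's private view; other
            # threads keep seeing the old value until commit.
            txn.setdefault(id(self), {})[name] = (self, value)
        else:
            object.__setattr__(self, name, value)

    def __getattribute__(self, name):
        txn = getattr(_local, 'txn', None)
        if txn is not None:
            pending = txn.get(id(self))
            if pending is not None and name in pending:
                return pending[name][1]  # our own uncommitted write
        return object.__getattribute__(self, name)

class atomic:
    """Context manager: buffer writes, apply them all at commit.
    (No conflict detection -- a real STM would validate and retry.)"""

    def __enter__(self):
        _local.txn = {}
        return self

    def __exit__(self, exc_type, exc, tb):
        txn = _local.txn
        _local.txn = None
        if exc_type is None:
            # Commit: apply all buffered writes to the real objects.
            for writes in txn.values():
                for name, (obj, value) in writes.items():
                    object.__setattr__(obj, name, value)
        # On exception, buffered writes are simply discarded (rollback).
        return False
```

With this, a write inside `with atomic():` is visible to the writing thread immediately but only hits the shared object at commit, and raising an exception inside the block discards the pending writes.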
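And for the second idea, a minimal sketch of a partition whose objects are reachable from outside only through a lock-holding proxy. All names here (`Partition`, `share`, `_Proxy`) are invented for illustration, and a real design would also have to wrap mutable sub-objects so that raw references can't leak out of the partition:

```python
import threading

class Partition:
    """A region of objects guarded by a single lock (hypothetical)."""

    def __init__(self):
        self._lock = threading.Lock()

    def share(self, obj):
        """Return a synchronized proxy; the raw object stays inside."""
        return _Proxy(obj, self._lock)

class _Proxy:
    """Forwards attribute access to the wrapped object under the lock."""

    def __init__(self, obj, lock):
        # Bypass our own __setattr__ while setting up the proxy itself.
        object.__setattr__(self, '_obj', obj)
        object.__setattr__(self, '_lock', lock)

    def __getattr__(self, name):
        # Only called for names not found on the proxy itself.
        with self._lock:
            value = getattr(self._obj, name)
        # Caveat: returning a raw mutable sub-object here would leak a
        # reference out of the partition; a fuller version would wrap
        # such values in proxies too.
        return value

    def __setattr__(self, name, value):
        with self._lock:
            setattr(self._obj, name, value)
```

The point is just that every read and write of shared state goes through one choke point you can reason about, rather than letting arbitrary references cross the partition boundary.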