Maybe that's how it seems to you; to those of us who have been looking at this problem for a while, the real question is how to get a better multi-process control and IPC library in Python, preferably one that is cross-platform. You can investigate that right now, and you don't even need to discuss it with other people.
(Despite my oft-stated fondness for threading, I do recognize the problems with threading, and if there were a way to make processes as simple as threads from a programming standpoint, I'd be much more willing to push processes.)
The processing package at
is multi-platform and mostly follows the API of threading. It also allows the use of 'shared objects' that live in a manager process.
For example, the following code is almost identical to the equivalent written with threads:
from processing import Process, Manager

def f(q):
    for i in range(10):
        q.put(i*i)
    q.put('STOP')

if __name__ == '__main__':
    manager = Manager()
    queue = manager.Queue(maxsize=3)

    p = Process(target=f, args=[queue])
    p.start()

    result = None
    while result != 'STOP':
        result = queue.get()
        print result
Without work, #2 isn't "fast" using processes in Python. It is trivial using threads. But here's the thing: with work, #2 can be made fast. Using Unix domain sockets (on Linux, 3.4 GHz P4 Xeons, DDR2-PC4200 memory; you can get 50% faster memory nowadays), I've been able to push 400 MB/second between processes. Anonymous or named pipes, or perhaps a shared mmap with some sort of synchronization, might allow IPC that is cross-platform and just about as fast.
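For anyone who wants to reproduce that kind of measurement, here is a rough micro-benchmark of my own (not from the numbers above, and Unix-only because it uses os.fork()): a forked child pushes bytes through a Unix domain socket pair while the parent reads and times it.

import os, socket, time

CHUNK = b'x' * 65536        # 64 KB per send
N = 2000                    # ~128 MB total

a, b = socket.socketpair()  # AF_UNIX stream pair

pid = os.fork()
if pid == 0:
    # Child: write N chunks, then close to signal EOF.
    a.close()
    for _ in range(N):
        b.sendall(CHUNK)
    b.close()
    os._exit(0)

# Parent: read until EOF and time it.
b.close()
start = time.time()
received = 0
while True:
    data = a.recv(1 << 20)
    if not data:
        break
    received += len(data)
elapsed = time.time() - start
os.waitpid(pid, 0)
print('%.0f MB/second' % (received / elapsed / 1e6))

The actual MB/second figure will of course depend entirely on the machine; the point is only that raw socket throughput between local processes is high.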
The IPC uses sockets or (on Windows) named pipes. Linux and Windows are roughly equal in speed. On a 2.5 GHz P4 laptop one can retrieve an element from a shared dict about 20,000 times/sec. Not sure if that qualifies as fast enough.
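A rough sketch of that shared-dict timing (my own code, not from the package's docs), written against the multiprocessing module, which is the standard-library successor of processing and exposes the same Manager API:

import time
from multiprocessing import Manager

if __name__ == '__main__':
    manager = Manager()
    d = manager.dict()
    d['key'] = 42

    n = 5000
    start = time.time()
    for _ in range(n):
        value = d['key']    # every lookup is an IPC round-trip to the manager
    elapsed = time.time() - start
    print('%.0f lookups/second' % (n / elapsed))

Each access goes through the manager process, so the rate measures IPC round-trip cost rather than dict performance.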