[pypy-dev] pre-emptive micro-threads utilizing shared memory message passing?
evan at theunixman.com
Tue Jul 27 08:27:03 CEST 2010
On 07/26 22:09, Kevin Ar18 wrote:
> What I'm trying to accomplish:
> I am trying to write a particular threading scenario that follows these
> rules. It is partly an experiment and partly for actual
> production code.
This is actually interesting to me as well. I can't count the number of
times I've had to implement something like this for projects. It would be
nice to be able to use a public module instead of writing it all from
scratch each time.
> Now, I have spent some time trying to find a way to achieve this ... and
> I can implement a rather poor version using default Python. However, I
> don't see any way to implement my ideal version. Maybe someone here
> might have some pointers for me.
> Shared Memory between parallel processes
This is the way I usually implement it. I'm currently mulling over some
sort of byte-addressable abstraction that can use a buffer or any sequence
as a backing store, which would make it useful for mmap objects as well.
And I'm thinking about using the class definitions and inheritance to
handle nested structures in some way.
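To make the idea concrete, here is a minimal sketch of that byte-addressable abstraction: a fixed-layout record viewed through any writable buffer (a bytearray, an mmap, ...). The class name, field names, and layout are all made up for illustration; it isn't an existing API.

```python
import struct

class Record(object):
    """Illustrative fixed-layout record over any writable buffer
    (bytearray, mmap, ...). Fields and layout are hypothetical."""
    _fields = [("count", "i"), ("value", "d")]  # (name, struct code)

    def __init__(self, buf, offset=0):
        self._buf = buf
        self._offsets = {}
        pos = offset
        for name, code in self._fields:
            self._offsets[name] = (pos, code)
            pos += struct.calcsize(code)
        self.size = pos - offset

    def __getattr__(self, name):
        # Called only for the field names, which are not real attributes.
        pos, code = self._offsets[name]
        return struct.unpack_from(code, self._buf, pos)[0]

    def __setattr__(self, name, val):
        if name.startswith("_") or name == "size":
            object.__setattr__(self, name, val)
        else:
            pos, code = self._offsets[name]
            struct.pack_into(code, self._buf, pos, val)

buf = bytearray(64)          # could just as well be an mmap object
r = Record(buf)
r.count = 3
r.value = 2.5
```

Because the backing store is just a buffer, swapping the bytearray for an mmap region gives the same record semantics over shared memory.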
> Quick Question: Do queues from the multiprocessing module use shared
> memory? If the answer is YES, you can just skip this section, because
> that would solve this particular problem.
I haven't checked the source, but I wouldn't count on it: Queue is
built on a pipe plus a feeder thread that pickles each object, so it
copies data rather than sharing it.
> Question: How can I share Python Objects between processes USING SHARED
> MEMORY? I do not want to have to copy or "pass" data back and forth
> between processes or have to use a proxy "server" process. These are
> both too much of a performance hit for my needs; shared memory is what
> I need.
Anonymous memory-mapped regions would work, with a suitable data
abstraction. Or even memory-mapped files, which on modern systems
aren't really all that different from anonymous mappings anymore.
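A minimal sketch of the anonymous-mapping approach (POSIX only, since it relies on fork; the one-integer "layout" is just for illustration):

```python
import mmap
import os
import struct

# Anonymous mapping; on POSIX it is MAP_SHARED by default, so a
# forked child sees the same physical pages -- no copying, no pickling.
shm = mmap.mmap(-1, 4096)

pid = os.fork()
if pid == 0:
    # Child: write straight into the shared region.
    struct.pack_into("q", shm, 0, 42)
    os._exit(0)

os.waitpid(pid, 0)
value = struct.unpack_from("q", shm, 0)[0]
```

On top of a region like this you still need a data abstraction (and locking) to store anything richer than flat C values.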
> The multiprocessing module offers me 4 options: "queues", "pipes", "shared memory map", and a "server process".
> "Shared memory map" won't work as it only handles C values and arrays (not Python objects or variables).
cPickle could help. But then there's a serialization/deserialization
step, which isn't really fast: the cost of copying the data is far
outweighed by the cost of the dumps/loads calls themselves, and if you
need to share multiple copies you're really going to feel it.
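To see why that hurts, here is a sketch of pickling into a shared buffer: even though the bytes live in one place, every consumer pays a full loads() and gets its own private copy. (Python 2's cPickle is just `pickle` in Python 3; the buffer and payload are made up.)

```python
import pickle  # cPickle in Python 2

buf = bytearray(4096)                      # stand-in for a shared region
payload = {"ticks": list(range(100)), "name": "sensor-7"}

data = pickle.dumps(payload)
buf[:len(data)] = data                     # one dumps() on the writer side

copy1 = pickle.loads(bytes(buf[:len(data)]))   # every reader pays
copy2 = pickle.loads(bytes(buf[:len(data)]))   # loads() again...
# ...and each gets an independent copy, not the shared object.
```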
> "Server Process" sounds like a bad idea. Am I correct in that this
> option requires extra processing power and does not even use
> shared memory?
Not really. It depends on how you would implement it.
> The big question then... do "queues" and "pipes" use shared memory or
> do they pass data back and forth between processes? (if they use
> shared memory, then that would be perfect)
Neither does, strictly speaking: a Queue is just a pipe with a feeder
thread that pickles each object, so both copy data between processes
rather than sharing it.
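A quick check confirms what comes out of a Queue is a reconstructed copy, not a shared object. A minimal sketch (assumes a POSIX fork start method):

```python
from multiprocessing import Process, Queue

def worker(q):
    # The dict is pickled by a feeder thread and sent over a pipe.
    q.put({"n": 1})

q = Queue()
p = Process(target=worker, args=(q,))
p.start()
item = q.get()     # equal to the sent dict, but rebuilt from a pickle
p.join()
```

Mutating `item` afterwards would have no effect in the worker, which is exactly the copy-not-share behaviour in question.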
> Does PyPy have any other options for me?
I wonder if it could be done with an object space, or similarly done
"behind the scenes" in the PyPy interpreter, sort of the way ZODB works
semi-transparently. Only in this case completely transparently.
> True Pre-emptive scheduling?
This wouldn't really be difficult, although doing it efficiently might
very well be hard without some serious black magic. But PyPy may also
be the right tool for that, since the black magic can be written in
Python or RPython instead of C.
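The timer half of that black magic already exists in the stdlib: a POSIX interval timer delivers SIGALRM on a wall-clock deadline, independent of how long the current bytecode run takes. A sketch (Unix only; a real scheduler would switch tasklets in the handler rather than just record the event):

```python
import signal
import time

preempted = []

def handler(signum, frame):
    # In a real micro-thread scheduler this would mark the running
    # task for a context switch; here we just record the delivery.
    preempted.append(signum)

signal.signal(signal.SIGALRM, handler)
signal.setitimer(signal.ITIMER_REAL, 0.01)   # fire once after 10 ms

# Simulate a compute-bound task that never yields voluntarily.
deadline = time.time() + 0.1
while time.time() < deadline:
    pass

signal.setitimer(signal.ITIMER_REAL, 0)      # cancel the timer
```

Note the handler still only runs between interpreter instructions, so a single long-running C call can delay it; true hardware pre-emption needs the OS scheduler, as below.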
> Any way to get pre-emptive micro-threads? Stackless (the real
> Stackless, not the one in PyPy) has the ability to suspend them after a
> certain number of interpreter instructions; however, this is prone to
> problems because it can run much longer than expected. Ideally, I would
> like to have true pre-emptive scheduling using hardware interrupts based
> on timing or CPU cycles (like the OS does for real threads).
By using a process for each thread, and some shared memory arena for the
bulk of the application data structures, this is probably quite possible
without reimplementing the OS in Python.
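A minimal sketch of that process-per-micro-thread shape using the stdlib's shared ctypes arena, so the kernel does the pre-emption and the results land in shared memory rather than being passed back:

```python
from multiprocessing import Process, Array

def worker(idx, arr):
    # Each "micro-thread" is a real OS process; the write goes
    # directly into the shared ctypes array, no copying back.
    arr[idx] = idx * idx

arr = Array("i", 4)       # four C ints in an anonymous shared mapping
procs = [Process(target=worker, args=(i, arr)) for i in range(4)]
for p in procs:
    p.start()
for p in procs:
    p.join()
```

The limitation is the one mentioned earlier: the arena holds C values and arrays, so arbitrary Python objects still need a layer on top.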
> I am currently not aware of any way to achieve this in CPython, PyPy,
> Unladen Swallow, Stackless, etc....
I've done this a number of times, both with threads and with processes.
Processes ironically give you finer control over scheduling since you
aren't stuck behind the GIL, but as you are finding, you need some way
to share state between them.
> Are there detailed docs on why the Python GIL exists?
Here is the page from the Python Wiki:
And here is an interesting article on the GIL problem:
Evan Cofsky "The UNIX Man" <evan at tunixman.com>