Re: [pypy-dev] pypy-dev Digest, Vol 361, Issue 5
Hi Kevin:

Message: 4
Date: Thu, 12 Aug 2010 19:42:54 -0400
From: Kevin Ar18 <kevinar18@hotmail.com>
Subject: Re: [pypy-dev] pre-emptive micro-threads utilizing shared memory message passing?
To: <pypy-dev@codespeak.net>
Message-ID: <SNT110-W30CE421CB01A3D07C92F8DAA970@phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"
> I don't mind replying to the mailing list unless it annoys someone? Maybe some people could be interested by this discussion.
I am finding it a bit difficult to follow this thread. I am not sure who is saying what. Also I don't know if you are talking about an entirely new system or the stackless.py module.
> In my case, I look at message passing from the perspective of the tasklet. A tasklet can be assigned a certain number of "in ports" and a certain number of "out ports." In this case the "in ports" are the .read() end of a queue or stream and the "out ports" are the .send() part of a queue or stream.
Part of the Stackless model is that tasklets communicate over channels. Channels have send() and receive() operations.
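Stackless channels use rendezvous (synchronous) semantics: a send() blocks until a matching receive() takes the value, and vice versa. Here is a minimal pure-Python sketch of that behavior using OS threads — this is an illustration of the semantics, not the stackless.py implementation, and the `Channel` class and its internals are invented names:

```python
import threading

class Channel:
    """Rendezvous channel sketch: send() blocks until receive() takes the value."""
    def __init__(self):
        self._cond = threading.Cond = threading.Condition()
        self._value = None
        self._has_value = False   # a sender has placed a value
        self._taken = False       # the receiver has consumed it

    def send(self, value):
        with self._cond:
            while self._has_value:        # wait for the slot to free up
                self._cond.wait()
            self._value = value
            self._has_value = True
            self._taken = False
            self._cond.notify_all()
            while not self._taken:        # rendezvous: block until received
                self._cond.wait()

    def receive(self):
        with self._cond:
            while not self._has_value:
                self._cond.wait()
            value = self._value
            self._has_value = False
            self._taken = True
            self._cond.notify_all()
            return value
```

In real Stackless the blocked side is a suspended tasklet, not a blocked OS thread, which is what makes millions of them cheap; the handshake logic is the same idea.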
> For the scheduler, I would need to control when a tasklet runs. Currently, I am thinking that I would look at all the "in ports" that a tasklet has and make sure each one has some data. Only then would the tasklet be scheduled to run by the scheduler.
The current scheduler already does this. However there are no in or out ports, just operations that can proceed.
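The "run only when every in-port has data" rule described above is essentially a dataflow readiness condition. A minimal sketch of such a scheduler — `Component`, `ready`, and `run_scheduler` are illustrative names for this discussion, not part of stackless.py:

```python
from collections import deque

class Component:
    """A unit of work with named in-ports; runnable only when all ports have data."""
    def __init__(self, name, in_ports, action):
        self.name = name
        self.in_ports = {p: deque() for p in in_ports}
        self.action = action                      # called with one message per in-port

    def ready(self):
        return all(self.in_ports[p] for p in self.in_ports)

    def step(self):
        msgs = {p: q.popleft() for p, q in self.in_ports.items()}
        return self.action(msgs)

def run_scheduler(components):
    """Round-robin: run each component whose in-ports are all non-empty."""
    outputs = []
    progress = True
    while progress:
        progress = False
        for c in components:
            if c.ready():
                outputs.append(c.step())
                progress = True
    return outputs

# Usage: an adder that fires only once both ports hold a value.
adder = Component("add", ["a", "b"], lambda m: m["a"] + m["b"])
adder.in_ports["a"].extend([1, 10])
adder.in_ports["b"].append(2)          # a second "b" message never arrives
print(run_scheduler([adder]))          # [3] -- the second "a" message waits
```

This is the scheduling rule Kamaelia-style flow-based programming uses; Stackless instead blocks tasklets on individual channel operations, which amounts to the same readiness check one operation at a time.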
> Couldn't all those ports (channels) be read one at a time, then the processing could be done?
If you are using stackless.py - the tasklet will block if it encounters a channel with no target on the other side. I wrote a select() function that allows monitoring on multiple channels.
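Andrew's actual select() works over Stackless channels (see the talk linked below). As a rough illustration of the idea only — using thread-safe queues instead of channels, with a crude polling loop rather than true blocking — a select might look like this:

```python
import queue
import time

def select(queues, timeout=1.0):
    """Return (index, value) from the first queue that yields an item.

    Polling sketch for illustration; a real channel select would block
    on all channels at once and wake when any becomes ready."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for i, q in enumerate(queues):
            try:
                return i, q.get_nowait()
            except queue.Empty:
                pass
        time.sleep(0.001)          # avoid a busy spin
    raise TimeoutError("no channel became ready")

# Usage: only the second "channel" has data.
a, b = queue.Queue(), queue.Queue()
b.put("hello")
print(select([a, b]))              # (1, 'hello')
```

The index tells the caller which channel fired, which is exactly what a scheduler needs to know when to put a waiting tasklet back in its run queue.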
> Good idea. If there's no data to read, the tasklet can yield. ... but I need to know when the tasklet can be put back into the scheduler queue
I don't want to toot my own horn, but I gave a talk at EuroPython that covers how rendezvous semantics works: http://andrewfr.wordpress.com/2010/07/24/prototyping-gos-select-and-beyond/

Cheers,
Andrew
> I don't mind replying to the mailing list unless it annoys someone? Maybe some people could be interested by this discussion.
> I am finding it a bit difficult to follow this thread. I am not sure who is saying what. Also I don't know if you are talking about an entirely new system or the stackless.py module.

An entirely new system/way of doing things -- meaning I don't think the Stackless style would fit.
Originally, I was hoping for some way to achieve what I want in Python across multiple cores, but I'm finding there are no primitives to do that effectively. I know the basics of how I would do it in a lower-level language.

Yes, this brought up many different topics. Here's a summary:

* I wanted to work on a different way of doing things (different than Stackless)... but I needed lower-level primitives that would let me pass data back and forth between threads using shared-memory queues or pipes (instead of the current method that copies the data back and forth).
* I then asked about the difficulty of doing some form of limited shared memory (one that wouldn't involve a GIL overhaul).
* A branch of the discussion involved people discussing the various locking problems that this might cause...
* The author of Kamaelia posted a message and we had a brief discussion down that road. (His project is very similar to what I want to do.)
* Gabriel mentioned his project and we had a brief discussion. His project has some similarities... but is probably still too different for my needs, though it may be very interesting to other people here.
* In one of the emails, I brought up a possible solution for offering shared-memory "message passing" that would not require locks or locking issues... but it really is too much for me to get involved with now.

... and I guess by now the discussion has pretty much died off, as there was really nothing more....
Kevin,

You may want to broaden your candidates. Jython already supports multiple cores with no GIL, and shared memory with well-defined memory semantics derived directly from Java's memory model (and compatible with the informal memory model that we see in CPython). Because JRuby needs it for efficient support of Ruby 1.9 generators, which are more general than Python's (non-nested yields), there has been substantial attention paid to the MLVM coroutine support, which has demonstrated 1M+ microthread scalability in a single JVM process.

It would be amazing if someone spent some time looking at this in Jython.

- Jim

On Fri, Aug 13, 2010 at 9:29 PM, Kevin Ar18 <kevinar18@hotmail.com> wrote:
_______________________________________________
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev
> You may want to broaden your candidates. Jython already supports multiple cores with no GIL and shared memory with well-defined memory semantics derived directly from Java's memory model (and compatible with the informal memory model that we see in CPython). Because JRuby needs it for efficient support of Ruby 1.9 generators, which are more general than Python's (non-nested yields), there has been substantial attention paid to the MLVM coroutine support which has demonstrated 1M+ microthread scalability in a single JVM process. It would be amazing if someone spent some time looking at this in Jython.

For me, anything based on the Java VM or on copyleft code is out of the question. However, you are quite right that it is not necessary that I use PyPy. For example, if Unladen Swallow had the primitives I needed, that would be great too.

As a side note, PyPy does have two advantages: speed, and that it is coded in RPython -- which might even allow me to just hack PyPy itself at some point. :)

BTW, thanks for the suggestion. Now that you've brought up the topic of different implementations, I should probably check on what is going on with Unladen Swallow, etc....
To clarify, there are numerous implementations of the JVM that are not copyleft, such as Apache Harmony. Of course the MLVM work I cited (http://classparser.blogspot.com/2010/04/jruby-coroutines-really-fast.html) is not one of them. Jython itself is licensed under the Python Software License (http://www.jython.org/license.html).

On Sat, Aug 14, 2010 at 6:24 PM, Kevin Ar18 <kevinar18@hotmail.com> wrote:
participants (3)

- Andrew Francis
- Jim Baker
- Kevin Ar18