[IPython-dev] Update on IPython recent developments
ellisonbg.net at gmail.com
Wed Jan 24 22:45:51 EST 2007
> Thanks for the update, and for all the hard work you guys have put
> into the new ipython kernel so far! I'm especially excited about the
> new test framework (they all pass here, woohoo!).
Great, we are also very excited about passing tests.
> A couple of beginner's questions:
> - When a task is executed on many engines, and some of them fail, how
> do I find out on which node it failed, and with which error message?
Currently if any action (execute, push, pull) is run against multiple
engines, the error handling goes as follows:
If any single engine raises an exception, that exception is
immediately re-raised in your local Python session (when using the
Perspective Broker client in multienginepb.py; this behavior is not
yet implemented in the RemoteController class you get when importing
kernel.api, but that will change soon). Any other exceptions are
logged, though this behavior could be customized. Right now we are
not passing back the engine id with the exception, but we know how to
do this and plan to implement it.
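To make the fail-fast behavior concrete, here is an illustrative
sketch (not the ipython1 API; Engine and run_on_all are made-up names)
of running a command on several engines and re-raising the first
failure locally, tagged with the engine id as we plan to do:

```python
class Engine:
    """A stand-in for a remote engine: just a private namespace."""
    def __init__(self, engine_id):
        self.id = engine_id
        self.namespace = {}

    def execute(self, cmd):
        # Execute cmd in this engine's private namespace.
        exec(cmd, self.namespace)

def run_on_all(engines, cmd):
    results = []
    for engine in engines:
        try:
            engine.execute(cmd)
            results.append((engine.id, 'ok'))
        except Exception as err:
            # Re-raise immediately in the local session, tagging the
            # failing engine id (the planned behavior described above).
            raise RuntimeError('engine %d failed: %s' % (engine.id, err))
    return results

engines = [Engine(i) for i in range(4)]
print(run_on_all(engines, 'a = 1'))   # every engine succeeds
try:
    run_on_all(engines, '1/0')        # first engine fails, raised locally
except RuntimeError as err:
    print(err)
```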
> - Is there a limit on the size of data sent from the controller to an
> engine? When working on one machine (with more than one engine), is
> data being sent around exactly as on a network (i.e. are there no
> optimisations made for working on a local machine?) Would a UNIX
> socket be a viable alternative (to a socket) in this case?
Absolutely. There are many limits that come into play:
1. The RAM on each involved machine.
2. The number of engines a controller is controlling.
3. The network protocol being used. If you are using the
perspective broker protocols in the "saw" branch, the size limit for
this protocol is set in pbconfig.py. Just a warning, though: if you
exceed this limit on a pull/gather currently, things will hang. On a
push you will get a sensible exception raised. We know how to solve
the problem of too-large objects on pull/gather, but haven't done so yet.
On a local machine, if you are running things on localhost (127.0.0.1)
you won't be limited by the network pipe/latency. I am not sure if
UNIX sockets would be better than that.
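One way to stay clear of the protocol's message cap is to check the
serialized size of an object before pushing it. This is only a sketch:
safe_push_size is a hypothetical helper, and the 640 KB limit is a
placeholder, not the actual value from pbconfig.py:

```python
import pickle

# Placeholder cap; the real limit is configured in pbconfig.py.
SIZE_LIMIT = 640 * 1024

def safe_push_size(obj, limit=SIZE_LIMIT):
    """Return the pickled size of obj, refusing oversized payloads."""
    payload = pickle.dumps(obj)
    if len(payload) > limit:
        raise ValueError('payload of %d bytes exceeds limit of %d'
                         % (len(payload), limit))
    return len(payload)

print(safe_push_size(list(range(100))))  # small object: fine
```

Checking on the client side like this gives a clean error instead of a
hang when a pull/gather would exceed the cap.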
> - I'd like to code algorithms so they can be executed either with or
> without using the new kernel. How difficult would it be to
> implement a dummy controller interface that can easily be included
> with a package to avoid a dependency on ipython1? I.e. (drastically
> simplified):
>
> def getIDs():
>     return
> def push(nodes, **vars):
>     for var, value in vars.iteritems():
>         # set self._cache[var] = value
> def execute(node, cmd):
>     # In cached namespace, execute cmd
Absolutely, this is something that Fernando has talked about a number
of times. Fernando, let's add this to our list of things to work on.
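For what it's worth, here is a minimal sketch of that dummy-controller
idea: a drop-in stand-in that keeps one cached namespace per fake
"engine", so an algorithm can run unchanged without ipython1
installed. Method names follow the snippet in the question; the class
name and pull method are illustrative, not the ipython1 API:

```python
class DummyController:
    """Local stand-in for a RemoteController: one dict per 'engine'."""
    def __init__(self, n_engines=1):
        self._caches = {i: {} for i in range(n_engines)}

    def getIDs(self):
        return list(self._caches)

    def push(self, targets, **vars):
        # Store each variable in the cached namespace of each target.
        for node in targets:
            self._caches[node].update(vars)

    def execute(self, node, cmd):
        # Execute cmd in the cached namespace of one "engine".
        exec(cmd, self._caches[node])

    def pull(self, node, name):
        return self._caches[node][name]

rc = DummyController(n_engines=2)
rc.push(rc.getIDs(), a=10)
rc.execute(0, 'b = a + 1')
print(rc.pull(0, 'b'))  # 11
```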
> Hope these questions make sense on some level.
Absolutely. We will keep everyone posted as we work on these things.
> On Wed, Jan 24, 2007 at 01:34:19PM -0700, Brian Granger wrote:
> > Hello all,
> > The purpose of this email is to update IPython developers on the state
> > of IPython1, which is currently focused on bringing interactive
> > parallel and distributed capabilities to IPython. Over the past month
> > or so, we have done a good amount of refactoring of IPython1. As of
> > now, the current development branch for IPython1 is "saw" rather than
> > "chainsaw." Because people are still using chainsaw, we will maintain
> > it for a while (maybe another month), but the saw branch has a lot of
> > new things. I should say that we are not yet recommending that
> > average users start to use "saw". There are a few more things we
> > need to do before it is a full replacement for the "chainsaw" branch.
> > But anyone following the development of IPython1 closely, definitely
> > should look at "saw."
> IPython-dev mailing list
> IPython-dev at scipy.org