[IPython-dev] Kernel-client communication

Almar Klein almar.klein at gmail.com
Thu Sep 9 06:42:00 EDT 2010


> But thanks for your feedback and ideas: only if we can explain and
> clarify our thoughts sufficiently to justify them, can we be sure that
> we actually understand what we're doing.

Hehe, I can imagine you (or others reading this thread) are starting to think
I'm stubborn. Well, I'm a bit of a purist at times, and I stick to my opinion
unless I'm convinced by good arguments :)  But hey, it's your project, so
please let me know if you've had enough of my criticism.

So here's a little more ...

> > * I really think you can do with less sockets. I believe that the (black)
> > req/rep pair is not needed. You only seem to use it for when raw_input is
> > used. But why? When raw_input is used, you can just block and wait for some
> > stdin (I think that'll be the execute_request message). This should not be
> > too hard by replacing sys.stdin with an object that has a readline method
> > that does this. If two users are present, and one calls raw_input, they can
> > both provide input (whoever's first). To indicate this to the *other* user,
> > however, his prompt should be replaced with an empty string, so his cursor
> > is positioned right after the <text> in raw_input('<text>').
> Keep in mind that the direction of those sockets (the normal xreq/xrep
> pair for client input and the req/rep for kernel stdin) is opposite,
> and that's because they represent fundamentally different operations.

I get that, but I'm not sure this distinction is correct/necessary for
raw_input. In the original Python interpreter, raw_input just reads from
stdin, the same stream that's used for executing commands. The interpreter
simply waits for the next "command", which is then interpreted as text rather
than being executed. In a shell, this idea works quite well.
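To make the stdin-replacement idea concrete, here is a minimal sketch. The class name and the queue-based feed() method are my own illustration; in a real kernel the incoming text would arrive from whichever client socket you choose, not from a local queue:

```python
import queue

class QueueStdin:
    """Illustrative stand-in for sys.stdin: readline() blocks until a
    client message arrives.  Here the transport is a plain queue.Queue;
    in the kernel it would be fed by the message loop."""

    def __init__(self):
        self._incoming = queue.Queue()

    def feed(self, text):
        # Called when any connected client sends input (whoever's first).
        self._incoming.put(text)

    def readline(self):
        # Block until some client provides a line of input.
        return self._incoming.get()

# raw_input() ultimately calls sys.stdin.readline(), so swapping in an
# object like this is enough for the sketch:
stdin = QueueStdin()
stdin.feed("hello\n")            # simulate a client answering
assert stdin.readline() == "hello\n"
```

The point of the sketch is only that blocking inside readline() lets raw_input reuse the ordinary command stream, rather than needing a dedicated req/rep pair.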

> I'm not worried about 'too many sockets', I would worry about having
> more sockets, or *less* sockets, than we have separate, independent
> concepts and streams of information.  It seems that now, we do have
> one socket pair for each type of information flow, and this way we
> only multiplex when the data may vary but the communication semantics
> are constant (such as having multiple streams in the pub/sub).  I
> think this actually buys us a lot of simplicity, because each
> connection has precisely the socket and semantics of the type of
> information flow it needs to transfer.
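The multiplexing described above (one channel, several streams, constant semantics) can be sketched without any sockets at all. The stream names and the filtering helper below are illustrative; they mirror the prefix-style subscription filtering that a SUB client would do:

```python
# One published channel carries several streams; each message is tagged
# with its stream name, and a subscriber keeps only what it asked for.
messages = [
    ("stdout", "step 1 done\n"),
    ("pyerr",  "ValueError: bad input\n"),
    ("stdout", "step 2 done\n"),
]

def subscribe(msgs, topic):
    # Filter by stream name, the way a SUB socket filters by topic prefix.
    return [body for stream, body in msgs if stream == topic]

assert subscribe(messages, "stdout") == ["step 1 done\n", "step 2 done\n"]
```

Because every message carries its stream tag, the communication semantics stay constant even though the data varies, which is exactly the case where multiplexing is harmless.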

So, to put it in your words: I believe raw_input should actually use the same
data stream as the commands. Sure, because you now have a GUI at the other
end, you could create a pop-up dialog for the user to input the requested
text. But is that really better/nicer? It can also be a bit intrusive.

However, my main argument against this approach is what I already mentioned
in the previous e-mail: whichever client executes raw_input, the input must
be entered at the "main client" (the client holding that particular socket
connection). What if that happens to be the PC at work, while you're in the

Concluding: I see no reason why commands can be executed from all clients,
while a response to raw_input can be given from only one.

<snip stupid idea to get stdout stderr etc via a req/rep pattern>

> Rather than forcing the kernel to store all that info and play back
> multiple data streams, I'd separate this (valid) idea into its own
> entity.  Someone could easily write a client whose job is simply to
> listen to a kernel and store all streams of information, nicely
> organized for later replay as needed.  Because this client would only
> be storing the string data of the messages, there's no danger of
> making the kernel leak memory by asking it to hold on to every object
> it has ever produced (a very dangerous proposition).  And such a
> logger could then save that info, replay it for you, make it available
> over the web, selectively SMS you if a message on a certain stream
> matches something, etc.
> For example, with this design it becomes *trivial* to write a little
> program that subscribes only to the pyerr channel, monitors
> exceptions, and sends you an SMS if an exception of a particular type
> you care about arrives.  Something like that could probably be written
> in 2 hours.  And that's thanks to the clear separation of information-flow
> semantics across the various channels, which makes it very easy to
> grab/focus only on the data you need.
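Such a monitor really is small. Below is a sketch of the shape it could take; the message fields ("etype", "evalue") and the notify callback are assumptions for illustration, and a real client would of course read the messages from a SUB socket rather than being called directly:

```python
def make_monitor(wanted_type, notify):
    """Build a handler that watches only the pyerr stream and fires
    notify() (e.g. an SMS hook) for one exception type of interest."""
    def on_pyerr(message):
        # message is assumed to be a dict carrying the exception's
        # type name and value, as string data.
        if message.get("etype") == wanted_type:
            notify(message["evalue"])
    return on_pyerr

alerts = []
monitor = make_monitor("ZeroDivisionError", alerts.append)
monitor({"etype": "ValueError", "evalue": "nope"})        # ignored
monitor({"etype": "ZeroDivisionError", "evalue": "1/0"})  # alert fires
assert alerts == ["1/0"]
```

Everything else (connecting, subscribing to just the pyerr channel) is transport plumbing around this handler.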

You're right. That seems very natural. However, I'm still a little concerned
about the case where a user connects to a kernel that's running extension
code and is therefore unresponsive. This would become immediately clear to
the user if he or she could see the preceding stdout/stderr.

What about storing these messages (to be sent from the PUB socket) at the
kernel? I mean storing the string messages, not the objects used to create
them (i.e. no holding onto large objects). You could then send this history
when a client connects. Something like "Hi there, this is what we've been
doing so far.".
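A bounded buffer of the already-serialized strings would avoid the memory-leak danger mentioned earlier. The class below is just a sketch of that idea (names are mine, and the bound of 1000 messages is an arbitrary example):

```python
from collections import deque

class ReplayBuffer:
    """Keep only the *string* messages the kernel has published, with a
    hard cap so nothing can grow without bound, and replay them to a
    newly connected client."""

    def __init__(self, maxlen=1000):
        # deque with maxlen silently drops the oldest entries.
        self._history = deque(maxlen=maxlen)

    def publish(self, msg):
        # Record the serialized message alongside the normal PUB send.
        self._history.append(msg)

    def replay(self):
        # "Hi there, this is what we've been doing so far."
        return list(self._history)

buf = ReplayBuffer(maxlen=2)
for line in ["a\n", "b\n", "c\n"]:
    buf.publish(line)
assert buf.replay() == ["b\n", "c\n"]  # oldest entry was dropped
```

Since only strings are retained, the kernel never holds references to the objects that produced the output, which addresses the leak concern while still letting a late-joining client catch up.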
