[IPython-dev] Kernel-client communication
Almar Klein
almar.klein at gmail.com
Fri Sep 3 04:30:29 EDT 2010
Hi,
I've been thinking about your protocol last night. (Argh! This whole thing is
getting into my head too much!)
I see some advantages of loose messages such as you are using, compared with
my channels approach. Specifically, in my approach the order of stdout and
stderr messages is not preserved, as they are sent over different channels
(the same socket, though). On a one-to-one system this is not much of a
problem, but for multiple users it definitely will be, especially if you also
want to show the other users' input (pyin).
Anyway, I took another good look at your approach and I have a few
remarks/ideas. Might be useful, might be total BS.
* You say the prompt is something whose appearance the client decides. I
disagree. The kernel gives a prompt to indicate that the user can give
input, and maybe also what kind of input. A debugger might produce a
different prompt to indicate that the user is in debug mode. See also the
docs on sys.ps1 and sys.ps2 (http://docs.python.org/library/sys.html): the
user (or interpreter) can put an object on them that evaluates to something
nice when str()'ed.
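To illustrate what I mean (just a rough, untested sketch; the ModePrompt
class and its debugging flag are made up for the example), the kernel could
keep an object on sys.ps1 whose str() changes with the current mode:

    import sys

    class ModePrompt(object):
        """Prompt object whose str() reflects the interpreter's mode."""
        def __init__(self):
            self.debugging = False  # made-up flag a debugger could flip

        def __str__(self):
            # The interactive interpreter calls str(sys.ps1) each time it
            # shows the prompt, so the text can change between inputs.
            return '(debug) ' if self.debugging else '>>> '

    sys.ps1 = ModePrompt()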
* You plan on keeping history in the kernel. I think this is a task for the
client; otherwise you'd get your own history mixed with that of someone else
using the same kernel. History is, I think, a feature of the client to help
the programmer. I see no use for storing it at the kernel.
* I really think you can do with fewer sockets. I believe that the (black)
req/rep pair is not needed. You only seem to use it for when raw_input is
used. But why? When raw_input is called, the kernel can just block and wait
for some stdin (I think that'll be the execute_request message). This should
not be too hard: replace sys.stdin with an object whose readline method does
exactly that (see the sketch below). If two users are present and one calls
raw_input, they can both provide input (whoever's first). To indicate this to
the *other* user, however, his prompt should be replaced with an empty
string, so that his cursor is positioned right after the <text> in
raw_input('<text>').
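Roughly like this (untested; send_input_request and wait_for_input_reply are
placeholder names for whatever messages the protocol would use, not an
existing API):

    import sys

    class KernelStdin(object):
        """File-like replacement for sys.stdin inside the kernel."""
        def __init__(self, session):
            self.session = session

        def readline(self):
            # Tell the client(s) that a line of input is wanted...
            self.session.send_input_request()
            # ...then block until one of them sends a line back.
            return self.session.wait_for_input_reply() + '\n'

    # raw_input() reads from sys.stdin, so swapping it out is enough:
    # sys.stdin = KernelStdin(session)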
* I think you can do with even fewer sockets :) But this is more of a wild
idea. Say that John set up an experiment at work and wants to check the
results in the bar on his Android (sorry, I stole your example here,
Fernando). Now his experiment crashed, producing a traceback in the client
at his work PC. But he cannot see the traceback, as he only just logged in!
So what about storing all stdout, stderr and pyin (basically all
"terminal output") at the kernel? And rather than pub/sub, use the existing
req/rep to obtain this stuff. Maybe you can even pass new terminal output
along with other replies. The client should include a sequence number in the
request to tell the kernel which messages it has already received. This does
mean that the client would have to query the kernel periodically, but maybe
that can be done automatically by writing a thin layer on top of the zmq
interface. Oh, and you'd need to encapsulate multiple terminal messages in a
single reply. A rough sketch of the client side follows.
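Something along these lines (again untested, and the message layout with its
'type', 'since' and 'seq' fields is made up for illustration, not the real
protocol):

    import sys
    import zmq

    ctx = zmq.Context()
    req = ctx.socket(zmq.REQ)
    req.connect('tcp://localhost:5555')  # made-up address/port

    last_seen = 0  # sequence number of the last terminal message received

    def poll_terminal_output():
        """Ask the kernel for all terminal output newer than last_seen."""
        global last_seen
        req.send_json({'type': 'output_request', 'since': last_seen})
        reply = req.recv_json()
        # The kernel bundles all newer stdout/stderr/pyin messages into
        # one reply, each carrying its own sequence number.
        for msg in reply['output']:
            last_seen = max(last_seen, msg['seq'])
            sys.stdout.write(msg['text'])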
Just my two cents,
Almar
On 31 August 2010 07:28, Fernando Perez <fperez.net at gmail.com> wrote:
> On Mon, Aug 30, 2010 at 1:51 AM, Almar Klein <almar.klein at gmail.com>
> wrote:
> > Ah right. Although I'm not sure how often one would use this in
> > practice, it's certainly a nice feature, and seems to open up a range of
> > possibilities. I can imagine this requirement makes things considerably
> > harder to implement, but since you're designing a whole new protocol from
> > scratch, it's probably a good choice to include it now.
>
> And the whole thing fits naturally in our design for tools that enable
> both interactive/collaborative computing and distributed/parallel work
> within one single framework. After all, it's just manipulating
> namespaces :)
>
> >> In our case obviously the kernel itself remains unresponsive, but the
> >> important part is that the networking doesn't suffer. So we have
> >> enough information to take action even in the face of an unresponsive
> >> kernel.
> >
> > I'm quite new to networking, so sorry if this sounds stupid: other than
> > the heartbeat stuff not working, would it also have other effects? I
> > mean, data cannot be sent or received, so might network buffers overflow
> > or something?
>
> Depending on how you implemented your networking layer, you're likely
> to lose data. And you'll need to ensure that your api recovers
> gracefully from half-sent messages, unreplied messages, etc.
>
> Getting a robust and efficient message transport layer written is not
> easy work. It takes expertise and detailed knowledge, coupled with
> extensive real-world experience, to do it right. We simply decided to
> piggy back on some of the best that was out there, rather than trying
> to rewrite our own. The features we gain from zmq (it's not just the
> low-level performance, it's also the simple but powerful semantics of
> their various socket types, which we've baked into the very core of
> our design) are well worth the price of a C dependency in this case.
>
> > Further, am I right that the heartbeat is not necessary when
> > communicating between processes on the same box using 'localhost'
> > (since some network layers are bypassed)? That would give a short term
> > solution for IEP.
>
> Yes, on localhost you can detect the process via other mechanisms.
> The question is whether the system recovers gracefully from dropped
> messages or incomplete connections. You do need to engineer that into
> the code itself, so that you don't lock up your client when the kernel
> becomes unresponsive, for example.
>
> I'm sure we still have corner cases in our code where we can lock up;
> it's not easy to prevent all such occurrences.
>
> > No, that's the great thing! All channels are multiplexed over the same
> > socket pair. When writing a message to a channel, it is put in a queue,
> > adding a small header to indicate the channel id. There is a single
> > thread that sends and receives messages over the socket. It just pops
> > the messages from the queue and sends them to the other side. At the
> > receiver side, the messages are distributed to the queue corresponding
> > to the right channel. So there's one 'global' queue on the sending side
> > and one queue per channel on the receiver side.
>
> Ah, excellent! It seems your channels are similar to our message
> types; we simply dispatch on the message type (a string) with the
> appropriate handler. The twist in ipython is that we have made the
> various types of zmq sockets an integral part of the design: req/rep
> for stdin control, xrep/xreq for execution requests multiplexed across
> clients, and pub/sub for side effects (things that don't fit in a
> functional paradigm).
>
> We thus have a very strong marriage between the abstractions that zmq
> exposes and our design. Honestly, I sometimes feel as if zmq had been
> designed for us, because it makes certain things we'd wanted for a
> very long time almost embarrassingly easy.
>
> Thanks a lot for sharing your ideas, it's always super useful to look
> at these questions from multiple perspectives.
>
> Regards,
>
> f
>