From tfetherston at aol.com  Tue Mar  2 12:18:22 2010
From: tfetherston at aol.com (tfetherston at aol.com)
Date: Tue, 02 Mar 2010 12:18:22 -0500
Subject: [IPython-dev] Demo.py on trunk
Message-ID: <8CC88437E319138-5DFC-537@webmail-d060.sysops.aol.com>



 Looking at trunk to test out demo.py, I've changed the import statements to reflect the moves from the reorganization, but I'm running into problems in the Demo class's __init__ method. It peeks at some of IPython's internals to store some information.

 
Specifically:

self.ip_colorize = __IPYTHON__.pycolorize
self.ip_runlines = __IPYTHON__.runlines
self.ip_showtb   = __IPYTHON__.showtraceback
self.shell       = __IPYTHON__

This info is not stored on __IPYTHON__ any more; does anyone know where it lives now and how to get/set it?
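For reference, here is roughly what I expect the replacement wiring to look like, assuming the same hooks still exist on whatever object now plays the role of __IPYTHON__ (the attribute names below are just carried over from the snippet above, I don't know the new API):

```python
# Sketch only: assumes the reorganized shell object still exposes
# pycolorize, runlines and showtraceback like __IPYTHON__ did.
class Demo:
    def __init__(self, shell):
        # 'shell' would be the active shell instance instead of the
        # old __IPYTHON__ builtin.
        self.shell       = shell
        self.ip_colorize = shell.pycolorize
        self.ip_runlines = shell.runlines
        self.ip_showtb   = shell.showtraceback
```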

Tom

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100302/04b72ed6/attachment.html>

From wackywendell at gmail.com  Thu Mar  4 08:40:25 2010
From: wackywendell at gmail.com (Wendell Smith)
Date: Thu, 04 Mar 2010 14:40:25 +0100
Subject: [IPython-dev] Curses Frontend
Message-ID: <4B8FB849.7020101@gmail.com>

Hello,

I have decided to implement a curses frontend to ipython.
My main goals:
  - complete functionality of standard ipython - magics, history, etc.
  - automatic display of completion, not requiring the tab key - tab key 
fills it in
  - automatic display of docstrings, not requiring the '?' key
  - zero pollution of output by docstrings/help (easier to read through 
history)
  - syntax highlighting of input lines as well as output (using 
ipython's color scheme)

The first three will require serious work with curses, as the standard 
library curses module is quite limited, but I believe it can be managed 
in the end with no further dependencies. However, I am wondering if 
perhaps it makes more sense to depend on something like 'urwid'.
As for the fourth, I believe the tokenize module cannot handle 
incomplete input, so I am thinking that it will use 'pygments' if 
it's available, and otherwise not highlight. Eventually, perhaps, it 
would have a configuration option for input syntax highlighting; if the 
option were enabled and pygments was not installed, then ipython would 
issue a warning.
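The fallback I have in mind would look roughly like this ('highlight_input' is just an illustrative name, not an existing function anywhere):

```python
# Sketch of the optional-dependency pattern: try pygments, and fall
# back to plain, uncolored text if it is not installed.
try:
    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import TerminalFormatter
    HAVE_PYGMENTS = True
except ImportError:
    HAVE_PYGMENTS = False

def highlight_input(source):
    """Return 'source' with terminal color codes, or unchanged if
    pygments is missing."""
    if not HAVE_PYGMENTS:
        return source
    return highlight(source, PythonLexer(), TerminalFormatter())
```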
I have, of course, seen and used bpython, and this is partially inspired 
by it. However, I cannot see any reasonable way to integrate the two 
projects; while it will be easy to work with ipython, the code for 
bpython does not look... well... easy to apply to ipython, shall we say.

I have a number of questions:
- First and foremost, should it be implemented as an extension, or as a 
frontend + subclass of Shell? I believe implementation as an extension 
is possible, and not terribly complicated or confusing, but it does seem 
a little heavy for the extension system.
- Secondly, how should I handle dependencies? I have not looked into 
anything beyond the main curses library; a third-party library could 
quite possibly simplify things substantially, but it would add a 
dependency. Also, I would like input syntax highlighting as noted above, 
and if anyone has comments on that I would welcome suggestions.
- Thirdly, I would like code hosting advice. Should this go in the 
ipython bazaar database? should I get my own something or other? This is 
something I would very much appreciate - let me know how I should handle 
this.
- Lastly, I would like comments on the overall design. My current plan 
is as follows:

------------------------
| completion/help box  |
|    6 lines or so     |
|----------------------|
| ipython output       |
| fills the main space |
| includes completed   |
| input                |
| In [1]: 1            |
| Out[1]: 1            |
| In [2]: 2            |
| Out[2]: 2            |
|----------------------|
|In[3]: | Input box    |
|       | 4 lines or so|
------------------------

My vision is that as one typed, completions or help would pop up above. 
If the user pressed tab, completion would fill in as much as possible, 
and also expand to show all the completions available. If one pressed 
tab again, it would page through completions; if one continued typing, 
the completions would narrow; if one finished the word, completions 
would go back to their old 6-10 lines.
If one started a function, or hit ',' within it, the function docstring 
would instead show in the upper box, with the same sort of tab-paging 
capability. If one instead used the old '?' and '??' syntax, everything 
would be replaced by a pager, giving one more space to read it all.
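A toy sketch of the narrowing idea, just to make it concrete (illustrative only, not real code from the project): each keystroke re-filters the candidate list against the current word.

```python
# The visible completion list shrinks as the typed prefix grows.
def narrow(completions, typed):
    """Keep only the candidates that still match what the user typed."""
    return [c for c in completions if c.startswith(typed)]

candidates = ["history", "histogram", "hist"]
```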

I have many more ideas for smaller things - options to output 
python-interpreter-like code on exit, for example - and there will be 
many obstacles to overcome (such as integrating the built-in editor 
capability). I have spent the past week or so going through ipython 
code, considering possibilities, and experimenting, and I think I have a 
fairly good idea of the work ahead of me.

So that is my idea. I have the time and energy to carry it through - I 
know it will take a long time. It will, of course, require at least 
ipython 0.11 - it will almost certainly not be done before that is 
released in any case, and it makes the most sense to use the wonderfully 
reorganized code of 0.11.

Please reply with any comments or suggestions! Also, if anyone is 
already working on something like this, or would like to help me, I 
would love to hear that!

-Wendell


From ellisonbg at gmail.com  Thu Mar  4 12:21:12 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 4 Mar 2010 09:21:12 -0800
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B8FB849.7020101@gmail.com>
References: <4B8FB849.7020101@gmail.com>
Message-ID: <fa8579a41003040921p1e4f8b8ia484e79ca5a1b19@mail.gmail.com>

Wendell,

> I have decided to implement a curses frontend to ipython.

Fantastic, this would be a great addition to IPython!  This comes at a
great time.
As you know we are finishing up 0.11, which has cleaned up the codebase
quite a bit.
Also, Fernando has been working with some folks in Colombia on a Qt-based
IPython notebook.  I am not sure of the status (Fernando is in the Amazon
jungle this week), but once he returns we can coordinate your work with
this effort.

> My main goals:
>   - complete functionality of standard ipython - magics, history, etc.
>   - automatic display of completion, not requiring the tab key - tab key
> fills it in
>   - automatic display of docstrings, not requiring the '?' key
>   - zero pollution of output by docstrings/help (easier to read through
> history)
>   - syntax highlighting of input lines as well as output (using
> ipython's color scheme)

That would be absolutely great to have all of these things.

> The first three will require serious work with curses, as the standard
> library curses module is quite limited, but I believe it can be managed
> in the end with no further dependencies. However, I am wondering if
> perhaps it makes more sense to depend on something like 'urwid'.

I am not familiar with urwid, and I don't know much about curses.  We
are trying to keep dependencies to a minimum, but having curses as a
dependency for your frontend is perfectly reasonable.

> As for the fourth, I believe the tokenize module can not handle
> incomplete input, and so I am thinking that it will use 'pygments' if
> its available, and otherwise not highlight. Eventually, perhaps, it
> would have a configuration option for input syntax highlighting; if the
> option were enabled and pygments was not installed, then ipython would
> issue a warning.

We would very much like to replace our current tokenizer with Pygments.
Pygments is so easy to install and so common these days that I think *all*
of IPython should use it.  Obviously, it should fall back to no coloring
if pygments is not installed.

> I have, of course, seen and used bpython, and this is partially inspired
> from that. However, I cannot see any reasonable way to integrate the two
> projects; while it will be easy to work with ipython, the code for
> bpython does not look... well... easy to apply to ipython, shall we say.

Yes, IPython's codebase, while cleaner now, is still a bit of a beast.
We hope to continue to clean it up, but that will take time.  Eventually,
we would like it if a project like bpython could use IPython as its
underlying interpreter, but that will also take time.  For now, I think
that working with IPython is a good choice.

> I have a number of questions:
> - First and foremost, should it be implemented as an extension, or as a
> frontend + subclass of Shell? I believe implementation as an extension
> is possible, and not terribly complicated or confusing, but it does seem
> a little heavy for the extension system.

Currently, IPython trunk has done away with the old Shell class.  Our core
class is now iplib.InteractiveShell.  Thus, I would start by looking there.

> - Secondly, how should I handle dependencies? I have not looked into
> anything beyond the main curses library, but using a third-party library
> could quite possibly substantially simplify things, but it would add a
> dependency. Also, I would like input syntax highlighting as noted above,
> and if anyone has comments on that I would welcome suggestions.

I would keep them minimal, but if you want to use something additional,
we can definitely talk about it.  If it is small, we can ship it in
IPython.externals.  We do recognize that various IPython GUIs/frontends
will have additional dependencies.

> - Thirdly, I would like code hosting advice. Should this go in the
> ipython bazaar database? should I get my own something or other? This is
> something I would very much appreciate - let me know how I should handle
> this.

I would handle it this way for now:

* Some of your work will likely be on the IPython core itself.  While
we have done a lot of work on it, you will quickly find areas that need
to be updated in order for you to do what you want.  For this part of
it, I would simply create a branch on launchpad of the IPython trunk.
This will allow us to merge your work on the IPython core quickly into
trunk.

* The part of your work that is focused on the curses frontend could
be handled in two ways: it could be a separate project on lp for now
and eventually moved into IPython when it matures, or you could just
branch your IPython core branch and do the work there.  My intuition
says that having a separate project for now will be the better and more
flexible option.  But this is also up to you.

> - Lastly, I would like comments on the overall design. My current plan
> is as follows:
>
> ------------------------
> | completion/help box  |
> |    6 lines or so     |
> |----------------------|
> | ipython output       |
> | fills the main space |
> | includes completed   |
> | input                |
> | In [1]: 1            |
> | Out[1]: 1            |
> | In [2]: 2            |
> | Out[2]: 2            |
> |----------------------|
> |In[3]: | Input box    |
> |       | 4 lines or so|
> ------------------------

I think having as clean of a design as possible is best, so I would
try to minimize the stuff at the top.  But, the challenge is that some
completions and help strings are quite long.  I am not sure how to
handle that in a clean way.

> My vision is that as one typed, completions or help would pop-up above.
> If the user pressed tab, completion would fill in as much as possible,
> and also expand to show all the completions available. If one pressed
> tab again, it would page through completions; if one continued typing,
> the completions would narrow; if one finished the word, completions
> would go back to their old 6-10 lines.
> If one started a function, or hit ',' within, the function docstring
> would instead show in the upper box, with the same sort of tab-paging
> capability. If one instead used the old '?' and '??' syntax, everything
> would be replaced by a pager, giving one more space to read it all.
>
> I have many more ideas for smaller things - options to output
> python-interpreter-like code on exit, for example - and there will be
> many obstacles to overcome (such as integrating the built-in editor
> capability), I have spent the past week or so going through ipython
> code, considering possibilities, and experimenting, and I think I have a
> fairly good idea of the work ahead of me.
>
> So that is my idea. I have the time and energy to carry it through - I
> know it will take a long time. It will, of course, require at least
> ipython 0.11 - it will almost certainly not be done before that is
> released in any case, and it makes the most sense to use the wonderfully
> reorganized code of 0.11.

This is fantastic news.  And yes, definitely start from trunk.

> Please reply with any comments or suggestions! Also, if anyone is
> already working on something like this, or would like to help me, I
> would love to hear that!

I am going to send this now as I have to run.  I will reply later with
more thoughts
on the design as it relates to the IPython core.

Cheers,

Brian

> -Wendell
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Thu Mar  4 12:53:57 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 4 Mar 2010 09:53:57 -0800
Subject: [IPython-dev] Fwd:  Demo.py on trunk
In-Reply-To: <fa8579a41003040951h4fb5fc50m57c74a81219e1bbe@mail.gmail.com>
References: <8CC88437E319138-5DFC-537@webmail-d060.sysops.aol.com>
	<fa8579a41003020922w61c86797j68713d9d28f8f710@mail.gmail.com>
	<8CC88F2E73F22B5-40F0-7D39@webmail-d060.sysops.aol.com>
	<fa8579a41003031019q58e519dtdb37cb7b9d7cbcbf@mail.gmail.com>
	<8CC8955EF774972-A1E8-8470@webmail-d025.sysops.aol.com>
	<fa8579a41003040951h4fb5fc50m57c74a81219e1bbe@mail.gmail.com>
Message-ID: <fa8579a41003040953r1283ebbakf9462f6c957737a4@mail.gmail.com>

---------- Forwarded message ----------
From: Brian Granger <ellisonbg at gmail.com>
Date: Thu, Mar 4, 2010 at 9:51 AM
Subject: Re: [IPython-dev] Demo.py on trunk
To: tfetherston at aol.com


Hi,

First I would do a basic test.  That will at least make sure you can
check out something from launchpad.

bzr branch lp:ipython

If this doesn't work, then your bzr install is somehow messed up.  I
would uninstall and reinstall bzr.

For pushing here are some tips:

* Make sure you have putty set up correctly with your certificate

https://help.launchpad.net/YourAccount/CreatingAnSSHKeyPair

* Then I would do the push using the lp: syntax.

If you run into problems with the push, let me know and I can try to
help debug it.

> @SCUZZLEBUTT[C:0.11]|6> bzr push
> https://code.launchpad.net/~tfetherston/ipython/demoFixes
> bzr: ERROR: At https://code.launchpad.net/~tfetherston/ipython/demoFixes you
> have a valid .bzr control directory, but not a branch or repository. This is
> an unsupported configuration. Please move the target directory out of the
> way and try
> again.

Either your bzr install is messed up or the repo you are trying to
push is corrupted somehow.

Cheers,

Brian


> Googling this error has not led me to any fixes.
>
> Any ideas?
>



--
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From walter at livinglogic.de  Fri Mar  5 06:27:01 2010
From: walter at livinglogic.de (Walter Dörwald)
Date: Fri, 05 Mar 2010 12:27:01 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B8FB849.7020101@gmail.com>
References: <4B8FB849.7020101@gmail.com>
Message-ID: <4B90EA85.2040102@livinglogic.de>

On 04.03.10 14:40, Wendell Smith wrote:
> Hello,
> 
> I have decided to implement a curses frontend to ipython.
> My main goals:
>   - complete functionality of standard ipython - magics, history, etc.
>   - automatic display of completion, not requiring the tab key - tab key 
> fills it in
>   - automatic display of docstrings, not requiring the '?' key
>   - zero pollution of output by docstrings/help (easier to read through 
> history)
>   - syntax highlighting of input lines as well as output (using 
> ipython's color scheme)
> 
> [...]
> Please reply with any comments or suggestions! Also, if anyone is 
> already working on something like this, or would like to help me, I 
> would love to hear that!

IPython contains some curses functionality (in the ipipe module). To
check it out, do

   >>> from ipipe import *
   >>> ils

Documentation can be found here: http://ipython.scipy.org/moin/UsingIPipe

It would be great if your curses frontend still supported ipipe.

Servus,
   Walter


From wackywendell at gmail.com  Thu Mar  4 13:19:45 2010
From: wackywendell at gmail.com (Wendell Smith)
Date: Thu, 04 Mar 2010 19:19:45 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <fa8579a41003040921p1e4f8b8ia484e79ca5a1b19@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com>
	<fa8579a41003040921p1e4f8b8ia484e79ca5a1b19@mail.gmail.com>
Message-ID: <4B8FF9C1.9010900@gmail.com>

On 03/04/2010 06:21 PM, Brian Granger wrote:
>  Fantastic, this would be a great addition to IPython!  This comes at
>  a great time.

Thank you for your enthusiasm!

>  Currently, IPython trunk has done away with the old Shell class. Our
>  core class is now iplib.InteractiveShell.  Thus, I would start by
>  looking there.

I have only in the past few weeks started even looking through the 
ipython code, but it was my understanding that 
IPython.core.iplib.InteractiveShell was now the main class - a 
significant difference from before, but still the main class. I've 
noticed as well the Magic class and Application class, but I'm not sure 
I would need to do much with those, except instantiate them, and 
eventually play nicely with the pager/editor/etc. In fact, even with the 
InteractiveShell class, I believe I need to replace the raw_input 
function and Term.cout and Term.cerr, but other than that, I believe 
that I will mainly just have a separate input object that interacts with 
'InteractiveShell' and asks for completions, docstrings, source, etc. as 
it goes.
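Replacing Term.cout/Term.cerr essentially means installing file-like objects that route writes into curses windows instead of the real terminal; a minimal sketch of that idea (CapturingStream is an illustrative name, not anything in IPython):

```python
import io

# A file-like object a curses frontend could install in place of
# Term.cout/Term.cerr; the callback would write into a curses window.
class CapturingStream(io.TextIOBase):
    def __init__(self, callback):
        self.callback = callback

    def write(self, text):
        self.callback(text)
        return len(text)

# Usage: collect writes instead of printing them to the terminal.
lines = []
stream = CapturingStream(lines.append)
stream.write("Out[1]: 1\n")
```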

Anyways, I am starting right now with building objects on top of curses 
- a window that looks like a terminal and accepts terminal escapes and 
scrolls, a scrolling text input box, a pop-up window. I want these to be 
dependent on only curses - while I would make them with curses in mind, 
it makes sense to make these separate; maybe someone will reuse them.

>  * Some of your work will likely be on the IPython core itself.
>  While we have done a lot of work on it you will quickly find areas
>  that need to be updated in order for you to do what you want.  For
>  this part of it, I would simply create a branch on launchpad of the
>  IPython trunk.  This will allow us to merge your work on the IPython
>  core quickly into trunk.

You are probably right, although I do not yet see a need to change 
anything in the core - but then, of course, I haven't looked at it that 
deeply nor started in.
And I think I will go on launchpad and branch ipython trunk soon.

>  I think having as clean of a design as possible is best, so I would
>  try to minimize the stuff at the top.  But, the challenge is that
>  some completions and help strings are quite long.  I am not sure how
>  to handle that in a clean way.

As for the design, I liked the idea of a separate text box below for 
input - that seems to make sense to me. There is then the challenge that 
when help strings/etc. pop up, they must, in order of importance, 1) 
avoid the input box, 2) avoid last output, 3) be in an otherwise logical 
position. I thought that perhaps having last output always directly 
above the input box (with blank space on top) would mean that the top 
would always be available for the help/completions/etc. However, I think 
you are right - the completions/help do not need to be there when they 
are blank, so it can just be a pop-up that starts from the top and 
expands downwards as necessary/directed. However... there is much code 
to be written before that is relevant; in the end, the code should make 
it easy to redesign (perhaps config-based) where the pop-ups appear.

Thanks for your help,
Wendell

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100304/492a8aa6/attachment.html>

From wackywendell at gmail.com  Fri Mar  5 13:17:49 2010
From: wackywendell at gmail.com (Wendell Smith)
Date: Fri, 05 Mar 2010 19:17:49 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B90EA85.2040102@livinglogic.de>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>
Message-ID: <4B914ACD.2030308@gmail.com>

It looks to me like ipipe is on deathrow for IPython 0.11, and in my 
version of 0.11 it crashes occasionally. It has some interesting 
functionality, but if it's not going to be part of the main distribution, 
support for it will have to wait. I'm also a very long way from getting 
any sort of curses frontend working, and that's definitely the highest 
priority: basic functionality.

However, I'm definitely going to keep in mind that others may wish to 
use curses from within the curses frontend; once the main curses 
frontend is working, I'll think about how that may work.

-Wendell

On 03/05/2010 12:27 PM, Walter Dörwald wrote:
> IPython contains some curses functionality (in the ipipe module). To
> check it out, do
>
>     >>>  from ipipe import *
>     >>>  ils
>
> Documentation can be found here http://ipython.scipy.org/moin/UsingIPipe
>
> It would be great if your curses frontend still supported ipipe.
>
> Servus,
>     Walter
>    



From ellisonbg at gmail.com  Mon Mar  8 15:38:16 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 8 Mar 2010 12:38:16 -0800
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B914ACD.2030308@gmail.com>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>
	<4B914ACD.2030308@gmail.com>
Message-ID: <fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>

Wendell,

> It looks to me like ipipe is on deathrow for IPython 0.11, and in my
> version of 0.11 it crashes occasionally. It has some interesting
> functionality, but if it's not going to be part of the main distribution,
> support for it will have to wait. I'm also a very long way from getting
> any sort of curses frontend working, and that's definitely highest
> priority: basic functionality.

Yes, we need to decide what to do with ipipe for 0.11.  My feeling is that
it should be hosted as a separate project (that is why it is in deathrow),
but this has not been discussed.  We really want to keep the core of
IPython as small as possible, as the code base has grown in size far
beyond our development team's ability to keep up.

Minimally, ipipe needs to be updated to the new APIs, but that
shouldn't be too difficult.

> However, I'm definitely going to keep in mind that others may wish to
> use curses from within the curses frontend; once the main curses
> frontend is working, I'll think about how that may work.

Great!

Brian

> -Wendell
>
> On 03/05/2010 12:27 PM, Walter Dörwald wrote:
>> IPython contains some curses functionality (in the ipipe module). To
>> check it out, do
>>
>>     >>>  from ipipe import *
>>     >>>  ils
>>
>> Documentation can be found here http://ipython.scipy.org/moin/UsingIPipe
>>
>> It would be great if your curses frontend still supported ipipe.
>>
>> Servus,
>>     Walter
>>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From walter at livinglogic.de  Tue Mar  9 05:56:09 2010
From: walter at livinglogic.de (Walter Dörwald)
Date: Tue, 09 Mar 2010 11:56:09 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com>
	<4B90EA85.2040102@livinglogic.de>	<4B914ACD.2030308@gmail.com>
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>
Message-ID: <4B962949.6010006@livinglogic.de>

On 08.03.10 21:38, Brian Granger wrote:
> Wendell,
> 
>> It looks to me like ipipe is on deathrow for IPython 0.11, and in my
>> version of 0.11 it crashes occasionally. It has some interesting
>> functionality, but if it's not going to be part of the main distribution,
>> support for it will have to wait. I'm also a very long way from getting
>> any sort of curses frontend working, and that's definitely highest
>> priority: basic functionality.
> 
> Yes, we need to decide what to do with ipipe for 0.11.  My feeling is that
> it should be hosted as a separate project (that is why it is in deathrow)
> but this has not been discussed.

I have no problem with taking ipipe out of the IPython distribution and
releasing it as a separate project.

We could have a page in the IPython wiki that lists all external IPython
extensions.

> We are really wanting to keep the core of
> IPython as small as possible, as the code base has grown in size far
> beyond our development team's ability to keep up.
> 
> Minimally, ipipe needs to be updated to the new APIs, but that
> shouldn't be too difficult.

Do you have any hints on how that could be done? What ipipe currently
uses is the following:

    from IPython.utils import generics
    generics.result_display.when_type(Display)(display_display)
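For anyone unfamiliar with that call: when_type registers a per-type implementation on a generic function, so result_display dispatches on the type of the displayed object, and the line above makes display_display handle Display instances. A toy reimplementation of the pattern (just the idea, not IPython's actual generics module) would be:

```python
# Toy sketch of the when_type dispatch pattern used above.
def generic(default):
    registry = {}

    def dispatch(obj, *args, **kwargs):
        # Walk the MRO so subclasses of a registered type also match.
        for cls in type(obj).__mro__:
            if cls in registry:
                return registry[cls](obj, *args, **kwargs)
        return default(obj, *args, **kwargs)

    def when_type(cls):
        def register(func):
            registry[cls] = func
            return func
        return register

    dispatch.when_type = when_type
    return dispatch

@generic
def result_display(obj):
    return repr(obj)  # fallback for unregistered types

class Display:
    pass

@result_display.when_type(Display)
def display_display(obj):
    return "custom display"
```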

>> However, I'm definitely going to keep in mind that others may wish to
>> use curses from within the curses frontend; once the main curses
>> frontend is working, I'll think about how that may work.
> 
> Great!
> 
> Brian

Servus,
   Walter

>> On 03/05/2010 12:27 PM, Walter D?rwald wrote:
>>> IPython contains some curses functionality (in the ipipe module). To
>>> check it out, do
>>>
>>>     >>>  from ipipe import *
>>>     >>>  ils
>>>
>>> Documentation can be found here http://ipython.scipy.org/moin/UsingIPipe
>>>
>>> It would be great if your curses frontend still supported ipipe.
>>>
>>> Servus,
>>>     Walter
>>>
>>
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
> 
> 
> 



From cohen at lpta.in2p3.fr  Tue Mar  9 06:30:29 2010
From: cohen at lpta.in2p3.fr (Johann Cohen-Tanugi)
Date: Tue, 09 Mar 2010 12:30:29 +0100
Subject: [IPython-dev] testing ipython install on current trunk : report
Message-ID: <4B963155.3010506@lpta.in2p3.fr>

Hi there, I had to upgrade my laptop to Fedora 12, lost a few nights' 
sleep, and am slowly recovering my previous working environment... I 
just took the current head of ipython and installed it in 
/home/cohen/.local (an aside: while 'setup.py install' accepts --user 
as an option, 'python setupegg.py develop' does not, which is a bit 
unfortunate).

I ran iptest and got:

Ran 374 tests in 44.044s

PASSED (successes=374)

**********************************************************************
Test suite completed for system with the following information:
IPython version: 0.11.alpha1.bzr.r1223
BZR revision   : 1223
Platform info  : os.name -> posix, sys.platform -> linux2
                : 
Linux-2.6.32.9-67.fc12.i686-i686-with-fedora-12-Omega_12.1_Fedora_Remix
Python info    : 2.6.2 (r262:71600, Jan 25 2010, 18:46:45)
[GCC 4.4.2 20091222 (Red Hat 4.4.2-20)]

Tools and libraries available at test time:
    curses foolscap gobject gtk pexpect twisted wx wx.aui zope.interface

Tools and libraries NOT available at test time:
    objc

Ran 10 test groups in 56.775s

Status:
ERROR - 2 out of 10 test groups failed.
----------------------------------------
Runner failed: IPython.core
You may wish to rerun this one individually, with:
/usr/bin/python 
/home/cohen/sources/python/ipython/IPython/testing/iptest.py IPython.core

----------------------------------------
Runner failed: IPython.extensions
You may wish to rerun this one individually, with:
/usr/bin/python 
/home/cohen/sources/python/ipython/IPython/testing/iptest.py 
IPython.extensions

[cohen at jarrett python]$ /usr/bin/python 
/home/cohen/sources/python/ipython/IPython/testing/iptest.py IPython.core
................................................................................S..............F.F..
======================================================================
FAIL: Test that object's __del__ methods are called on exit.
----------------------------------------------------------------------
Traceback (most recent call last):
   File "/usr/lib/python2.6/site-packages/nose/case.py", line 182, in 
runTest
     self.test(*self.arg)
   File 
"/home/cohen/sources/python/ipython/IPython/testing/decorators.py", line 
225, in skipper_func
     return f(*args, **kwargs)
   File 
"/home/cohen/sources/python/ipython/IPython/core/tests/test_run.py", 
line 155, in test_obj_del
     tt.ipexec_validate(self.fname, 'object A deleted')
   File "/home/cohen/sources/python/ipython/IPython/testing/tools.py", 
line 252, in ipexec_validate
     nt.assert_equals(out.strip(), expected_out.strip())
AssertionError: '\x1b[?1034hobject A deleted' != 'object A deleted'
 >>  raise self.failureException, \
           (None or '%r != %r' % ('\x1b[?1034hobject A deleted', 'object 
A deleted'))


======================================================================
FAIL: IPython.core.tests.test_run.TestMagicRunSimple.test_tclass
----------------------------------------------------------------------
Traceback (most recent call last):
   File "/usr/lib/python2.6/site-packages/nose/case.py", line 182, in 
runTest
     self.test(*self.arg)
   File 
"/home/cohen/sources/python/ipython/IPython/testing/decorators.py", line 
225, in skipper_func
     return f(*args, **kwargs)
   File 
"/home/cohen/sources/python/ipython/IPython/core/tests/test_run.py", 
line 169, in test_tclass
     tt.ipexec_validate(self.fname, out)
   File "/home/cohen/sources/python/ipython/IPython/testing/tools.py", 
line 252, in ipexec_validate
     nt.assert_equals(out.strip(), expected_out.strip())
AssertionError: "\x1b[?1034hARGV 1-: ['C-first']\nARGV 1-: 
['C-second']\ntclass.py: deleting object: C-first" != "ARGV 1-: 
['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first"
 >>  raise self.failureException, \
           (None or '%r != %r' % ("\x1b[?1034hARGV 1-: ['C-first']\nARGV 
1-: ['C-second']\ntclass.py: deleting object: C-first", "ARGV 1-: 
['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first"))


----------------------------------------------------------------------
Ran 102 tests in 1.086s

FAILED (SKIP=1, failures=2)
[cohen at jarrett python]$ /usr/bin/python 
/home/cohen/sources/python/ipython/IPython/testing/iptest.py 
IPython.extensions
..F.
======================================================================
FAIL: 
IPython.extensions.tests.test_pretty.TestPrettyInteractively.test_printers
----------------------------------------------------------------------
Traceback (most recent call last):
   File "/usr/lib/python2.6/site-packages/nose/case.py", line 182, in 
runTest
     self.test(*self.arg)
   File 
"/home/cohen/sources/python/ipython/IPython/testing/decorators.py", line 
225, in skipper_func
     return f(*args, **kwargs)
   File 
"/home/cohen/sources/python/ipython/IPython/extensions/tests/test_pretty.py", 
line 101, in test_printers
     tt.ipexec_validate(self.fname, ipy_out)
   File "/home/cohen/sources/python/ipython/IPython/testing/tools.py", 
line 252, in ipexec_validate
     nt.assert_equals(out.strip(), expected_out.strip())
AssertionError: '\x1b[?1034hA()\nB()\n<A>\n<B>' != 'A()\nB()\n<A>\n<B>'
 >>  raise self.failureException, \
           (None or '%r != %r' % ('\x1b[?1034hA()\nB()\n<A>\n<B>', 
'A()\nB()\n<A>\n<B>'))


----------------------------------------------------------------------
Ran 4 tests in 0.423s

FAILED (failures=1)


All this does not look horrendously catastrophic, but it may be useful 
to report it, so here it is! :)
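For the record, both assertion failures above differ only by a leading `\x1b[?1034h` - a terminal initialization sequence that some readline builds emit when the terminal reports meta-key support, not output from IPython itself. A hedged sketch of a comparison helper that ignores it (the helper name is illustrative, not part of IPython's testing tools):

```python
import re

# \x1b[?1034h is emitted by some readline builds at startup; it is
# terminal setup noise, not part of the program's actual output.
TERM_INIT = re.compile(r"\x1b\[\?1034h")

def strip_term_init(text):
    """Remove terminal-initialization sequences before comparing output."""
    return TERM_INIT.sub("", text)

out = "\x1b[?1034hA()\nB()\n<A>\n<B>"
expected = "A()\nB()\n<A>\n<B>"
assert strip_term_init(out.strip()) == expected.strip()
```

With such a filter applied to both sides, both failures above would pass.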

cheers,
Johann


From fperez.net at gmail.com  Tue Mar  9 10:32:28 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 9 Mar 2010 10:32:28 -0500
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B8FB849.7020101@gmail.com>
References: <4B8FB849.7020101@gmail.com>
Message-ID: <db6b5ecc1003090732g50f28345q696fa0a10bc8d710@mail.gmail.com>

Hi Wendell,

On Thu, Mar 4, 2010 at 8:40 AM, Wendell Smith <wackywendell at gmail.com> wrote:
> I have decided to implement a curses frontend to ipython.

I just got back from the jungle but I have a mad day today finishing
up here before traveling tonight back to the States, so it will be
another couple of days before I can dive back into this.  I just
wanted to give you a *very enthusiastic* note: your timing is perfect,
and we are right at the point of pushing the code to implement
precisely this.

Brian's feedback is spot on, and hopefully by week's end we can start
hammering design specifics, so that we can get both a curses and a qt
frontend that share as much of the core machinery as possible.

Cheers,

f


From fperez.net at gmail.com  Tue Mar  9 10:35:32 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 9 Mar 2010 10:35:32 -0500
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B962949.6010006@livinglogic.de>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>
	<4B914ACD.2030308@gmail.com>
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>
	<4B962949.6010006@livinglogic.de>
Message-ID: <db6b5ecc1003090735k6c0bcdcfqd157e166a85184f1@mail.gmail.com>

On Tue, Mar 9, 2010 at 5:56 AM, Walter Dörwald <walter at livinglogic.de> wrote:
>> Yes, we need to decide what to do with ipipe for 0.11.  My feeling is that
>> it should be hosted as a separate project (that is why it is in deathrow)
>> but this has not been discussed.
>
> I have no problem with taking ipipe out of the IPython distribution and
> releasing it as a separate project.
>
> We could have a page in the IPython wiki that lists all external IPython
> extensions.

I'm not convinced we need to pull ipipe out, *if it can be
maintained/tested along with the rest of  the code*.  So if you think
you can update the code/tests, I don't see a need to pull it out, as
long as it can be brought up to the current api/standards, and you can
foresee maintaining it in the future.

Cheers,

f


From antont at an.org  Tue Mar  9 12:11:11 2010
From: antont at an.org (Toni Alatalo)
Date: Tue, 09 Mar 2010 19:11:11 +0200
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <db6b5ecc1003090732g50f28345q696fa0a10bc8d710@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com>
	<db6b5ecc1003090732g50f28345q696fa0a10bc8d710@mail.gmail.com>
Message-ID: <4B96812F.3090503@an.org>

Fernando Perez wrote:
> hammering design specifics, so that we can get both a curses and a qt
> frontend that share as much of the core machinery as possible.

Sounds great. I'm working on Python support in a Qt-based app myself (it 
embeds Python) and don't have a console there yet (I'm using a file as a 
replacement for now), so I will definitely test that at some point.

For the text mode version, I don't have first-hand experience with either 
curses or Urwid development, but Urwid seems nice and I have heard 
good things about it from others who have developed with it. It seems 
to have no dependencies aside from Python itself - I'd think it's 
something to consider.

> f

~Toni



From cohen at lpta.in2p3.fr  Thu Mar 11 21:34:13 2010
From: cohen at lpta.in2p3.fr (Johann Cohen-Tanugi)
Date: Fri, 12 Mar 2010 03:34:13 +0100
Subject: [IPython-dev] ImportError: No module named ipapp
In-Reply-To: <6ce0ac131002092034s7e6c27e6kdf34be3082af137b@mail.gmail.com>
References: <web-130131249@uni-stuttgart.de>
	<6ce0ac131002092034s7e6c27e6kdf34be3082af137b@mail.gmail.com>
Message-ID: <4B99A825.6030705@lpta.in2p3.fr>

hello,
I have an identical problem: I have a trunk build of ipython that I 
installed with --user, and then today I wanted to have a look at the 
bundled EPD distribution. I thought that the whole point was that I could 
get a completely "waterproof" distribution with EPD, irrespective of any 
other development version of, say, ipython on my system.

But I get:
[cohen at jarrett ~]$ /home/cohen/sources/python/epd-6.1-1-rh5-x86/bin/ipython
Traceback (most recent call last):
   File "/home/cohen/sources/python/epd-6.1-1-rh5-x86/bin/ipython", line 
7, in <module>
     from IPython.ipapi import launch_new_instance
ImportError: No module named ipapi

If I rename .local to local, then the EPD version loads OK... Isn't 
there a way to keep both functioning?

thanks,
Johann

On 02/10/2010 05:34 AM, Brian Granger wrote:
> Can you try removing all traces of ipython (both from bin dirs and 
> site-packages) and reinstall.  It looks like you have multiple 
> versions on your PATH/PYTHONPATH that are conflicting.
>
> Cheers,
>
> Brian
>
>
>
> On Tue, Feb 9, 2010 at 11:22 AM, Nils Wagner 
> <nwagner at iam.uni-stuttgart.de <mailto:nwagner at iam.uni-stuttgart.de>> 
> wrote:
>
>     Hi all,
>
>     I have installed ipython from scratch
>
>     bzr branch lp:ipython
>     cd ipython
>     python setup.py install --prefix=$HOME/local
>
>     If I start ipython
>
>     ipython
>     Traceback (most recent call last):
>       File "/home/nwagner/local/bin/ipython", line 4, in
>     <module>
>         from IPython.core.ipapp import launch_new_instance
>     ImportError: No module named ipapp
>
>     Any idea ?
>
>                Nils
>     _______________________________________________
>     IPython-dev mailing list
>     IPython-dev at scipy.org <mailto:IPython-dev at scipy.org>
>     http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
>
> -- 
> This message has been scanned for viruses and
> dangerous content by *MailScanner* <http://www.mailscanner.info/>, and is
> believed to be clean.
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>    
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100312/2b77ab76/attachment.html>

From cohen at lpta.in2p3.fr  Thu Mar 11 21:41:15 2010
From: cohen at lpta.in2p3.fr (Johann Cohen-Tanugi)
Date: Fri, 12 Mar 2010 03:41:15 +0100
Subject: [IPython-dev] ImportError: No module named ipapp
In-Reply-To: <4B99A825.6030705@lpta.in2p3.fr>
References: <web-130131249@uni-stuttgart.de>	<6ce0ac131002092034s7e6c27e6kdf34be3082af137b@mail.gmail.com>
	<4B99A825.6030705@lpta.in2p3.fr>
Message-ID: <4B99A9CB.4050101@lpta.in2p3.fr>

I just tried removing the EPD install, renaming .local, reinstalling EPD, 
and renaming local back to .local, and that does not help...
JCT

On 03/12/2010 03:34 AM, Johann Cohen-Tanugi wrote:
> hello,
> I have an identical problem : I have a trunk build of ipython, that I 
> installed with --user, and then today I wanted to have a look at the 
> bundle EPD distribution. I thought that the whole point was that I 
> could get a completely "waterproof" distribution with EPD, 
> irrespective of any other development version of say ipython on my 
> system.
>
> But I get:
> [cohen at jarrett ~]$ 
> /home/cohen/sources/python/epd-6.1-1-rh5-x86/bin/ipython
> Traceback (most recent call last):
>   File "/home/cohen/sources/python/epd-6.1-1-rh5-x86/bin/ipython", 
> line 7, in <module>
>     from IPython.ipapi import launch_new_instance
> ImportError: No module named ipapi
>
> if I rename .local as local, then the EPD version loads ok.... Isn't 
> there a way to keep both functioning?
>
> thanks,
> Johann
>
> On 02/10/2010 05:34 AM, Brian Granger wrote:
>> Can you try removing all traces of ipython (both from bin dirs and 
>> site-packages) and reinstall.  It looks like you have multiple 
>> versions on your PATH/PYTHONPATH that are conflicting.
>>
>> Cheers,
>>
>> Brian
>>
>>
>>
>> On Tue, Feb 9, 2010 at 11:22 AM, Nils Wagner 
>> <nwagner at iam.uni-stuttgart.de <mailto:nwagner at iam.uni-stuttgart.de>> 
>> wrote:
>>
>>     Hi all,
>>
>>     I have installed ipython from scratch
>>
>>     bzr branch lp:ipython
>>     cd ipython
>>     python setup.py install --prefix=$HOME/local
>>
>>     If I start ipython
>>
>>     ipython
>>     Traceback (most recent call last):
>>       File "/home/nwagner/local/bin/ipython", line 4, in
>>     <module>
>>         from IPython.core.ipapp import launch_new_instance
>>     ImportError: No module named ipapp
>>
>>     Any idea ?
>>
>>                Nils
>>     _______________________________________________
>>     IPython-dev mailing list
>>     IPython-dev at scipy.org <mailto:IPython-dev at scipy.org>
>>     http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>>
>>
>> -- 
>> This message has been scanned for viruses and
>> dangerous content by *MailScanner* <http://www.mailscanner.info/>, 
>> and is
>> believed to be clean.
>>
>>
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>    
>
> -- 
> This message has been scanned for viruses and
> dangerous content by *MailScanner* <http://www.mailscanner.info/>, and is
> believed to be clean.
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>    
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100312/48b7cad5/attachment.html>

From ellisonbg at gmail.com  Fri Mar 12 11:59:41 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 12 Mar 2010 08:59:41 -0800
Subject: [IPython-dev] ImportError: No module named ipapp
In-Reply-To: <4B99A825.6030705@lpta.in2p3.fr>
References: <web-130131249@uni-stuttgart.de>
	<6ce0ac131002092034s7e6c27e6kdf34be3082af137b@mail.gmail.com>
	<4B99A825.6030705@lpta.in2p3.fr>
Message-ID: <fa8579a41003120859x1688390dhc9c9fd8998557961@mail.gmail.com>

Hi,

> I have an identical problem : I have a trunk build of ipython, that I
> installed with --user, and then today I wanted to have a look at the bundle
> EPD distribution. I thought that the whole point was that I could get a
> completely "waterproof" distribution with EPD, irrespective of any other
> development version of say ipython on my system.

EPD is like any other Python installation in terms of how it resolves and finds
Python packages.  When you install EPD, it sets your PATH variables to point
to its python, ipython, etc. scripts.  BUT, Python 2.6 still uses the
--user location to find packages.  Thus, you have the following problem:

* /home/cohen/sources/python/epd-6.1-1-rh5-x86/bin/ipython is first on your
path, so when you type ipython at the command line,  that script starts.

* But when that script tries to "from IPython.ipapi import launch_new_instance"
Python goes and finds the IPython package in the --user location, which is the
wrong version of IPython (trunk doesn't have launch_new_instance).

So you must do one of the following:

* Install trunk IPython into EPD so that its "ipython" script points
to the trunk version.
* Set your PATH differently so that the trunk "ipython" script is
found before the EPD
version.
* Uninstall the trunk version of IPython.
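A quick way to check which copy of IPython a given interpreter will actually import, and why the `--user` location wins, is to inspect `sys.path` (a sketch; the ordering claim assumes Python 2.6+ user site-packages behavior):

```python
import site
import sys

# "setup.py install --user" puts packages in the per-user site-packages
# directory, which Python places on sys.path ahead of the interpreter's
# own site-packages. So a trunk IPython installed with --user shadows
# the EPD copy no matter which "ipython" script was launched.
print(site.getusersitepackages())

# Earlier sys.path entries win the import:
for path in sys.path:
    print(path)
```

Running this with EPD's python should show the user directory appearing before EPD's own site-packages, which explains the ImportError.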

Hope this helps,

Cheers,

Brian

> But I get:
> [cohen at jarrett ~]$ /home/cohen/sources/python/epd-6.1-1-rh5-x86/bin/ipython
> Traceback (most recent call last):
>   File "/home/cohen/sources/python/epd-6.1-1-rh5-x86/bin/ipython", line 7,
> in <module>
>     from IPython.ipapi import launch_new_instance
> ImportError: No module named ipapi
>
> if I rename .local as local, then the EPD version loads ok.... Isn't there a
> way to keep both functioning?
>
> thanks,
> Johann
>
> On 02/10/2010 05:34 AM, Brian Granger wrote:
>
> Can you try removing all traces of ipython (both from bin dirs and
> site-packages) and reinstall.  It looks like you have multiple versions on
> your PATH/PYTHONPATH that are conflicting.
>
> Cheers,
>
> Brian
>
>
>
> On Tue, Feb 9, 2010 at 11:22 AM, Nils Wagner <nwagner at iam.uni-stuttgart.de>
> wrote:
>>
>> Hi all,
>>
>> I have installed ipython from scratch
>>
>> bzr branch lp:ipython
>> cd ipython
>> python setup.py install --prefix=$HOME/local
>>
>> If I start ipython
>>
>> ipython
>> Traceback (most recent call last):
>>   File "/home/nwagner/local/bin/ipython", line 4, in
>> <module>
>>     from IPython.core.ipapp import launch_new_instance
>> ImportError: No module named ipapp
>>
>> Any idea ?
>>
>>            Nils
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
> --
> This message has been scanned for viruses and
> dangerous content by MailScanner, and is
> believed to be clean.
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Fri Mar 12 14:22:56 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 12 Mar 2010 11:22:56 -0800
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B962949.6010006@livinglogic.de>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>
	<4B914ACD.2030308@gmail.com>
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>
	<4B962949.6010006@livinglogic.de>
Message-ID: <fa8579a41003121122l178d13aaiccad51ef53e3ae07@mail.gmail.com>

On Tue, Mar 9, 2010 at 2:56 AM, Walter Dörwald <walter at livinglogic.de> wrote:
> On 08.03.10 21:38, Brian Granger wrote:
>> Wendell,
>>
>>> It looks to me like ipipe is on deathrow for IPython 0.11, and in my
>>> version of 0.11 it crashes occasionally. It has some interesting
>>> functionality, but if its not going to be part of the main distribution,
>>> support for it will have to wait. I'm also a very long way from getting
>>> any sort of curses frontend working, and that's definitely highest
>>> priority: basic functionality.
>>
>> Yes, we need to decide what to do with ipipe for 0.11.  My feeling is that
>> it should be hosted as a separate project (that is why it is in deathrow)
>> but this has not been discussed.
>
> I have no problem with taking ipipe out of the IPython distribution and
> releasing it as a separate project.
>
> We could have a page in the IPython wiki that lists all external IPython
> extensions.

Yes, I think this is a good idea.

>> We are really wanting to keep the core of
>> IPython as small as possible, as the code base has grown in size far
>> beyond our development team's ability to keep up.
>>
>> Minimally, ipipe needs to be updated to the new APIs, but that
>> shouldn't be too difficult.
>
> Do you have any hints on how that could be done? What ipipe currently
> uses is the following:
>
>    from IPython.utils import generics
>    generics.result_display.when_type(Display)(display_display)

Is there a problem with generics?  If so it might be related to this:

https://bugs.launchpad.net/ipython/+bug/527968

If this is a different issue, could you explain further?
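For context on the hook being discussed: `result_display.when_type(SomeType)` registers a display function that is invoked whenever a result of that type is shown. A toy sketch of that kind of type-based dispatch (names simplified; the real generics implementation may differ in its dispatch details):

```python
# Registry mapping a type to its display handler.
_handlers = {}

def when_type(cls):
    """Decorator factory: register a display function for a type."""
    def register(func):
        _handlers[cls] = func
        return func
    return register

def result_display(obj):
    # Walk the MRO so subclasses fall back to a base-class handler.
    for cls in type(obj).__mro__:
        if cls in _handlers:
            return _handlers[cls](obj)
    return repr(obj)  # default display

class Display(object):
    pass

def display_display(obj):
    return "custom display for %s" % type(obj).__name__

# Same registration call shape as ipipe uses:
when_type(Display)(display_display)

assert result_display(Display()) == "custom display for Display"
assert result_display(5) == "5"
```

The point is that ipipe only needs this one registration to keep working, so porting it mostly means finding where the equivalent hook lives after the refactoring.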

Cheers,

Brian

>>> However, I'm definitely going to keep in mind that others may wish to
>>> use curses from within the curses frontend; once the main curses
>>> frontend is working, I'll think about how that may work.
>>
>> Great!
>>
>> Brian
>
> Servus,
> ? Walter
>
>>> On 03/05/2010 12:27 PM, Walter Dörwald wrote:
>>>> IPython contains some curses functionality (in the ipipe module). To
>>>> check it out, do
>>>>
>>>>     >>> from ipipe import *
>>>>     >>> ils
>>>>
>>>> Documentation can be found here http://ipython.scipy.org/moin/UsingIPipe
>>>>
>>>> It would be great if your curses frontend still supported ipipe.
>>>>
>>>> Servus,
>>>>     Walter
>>>>
>>>
>>> _______________________________________________
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>
>>
>>
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Fri Mar 12 14:23:57 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 12 Mar 2010 11:23:57 -0800
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B96812F.3090503@an.org>
References: <4B8FB849.7020101@gmail.com>
	<db6b5ecc1003090732g50f28345q696fa0a10bc8d710@mail.gmail.com>
	<4B96812F.3090503@an.org>
Message-ID: <fa8579a41003121123k4885acd8l411a05f2349e9076@mail.gmail.com>

Toni,

> For the text mode version, I don't have first hand experience of neither
>  curses nor Urwid development, but Urwid seems nice and I have heard
> good things about it from others who have developed using it. It seems
> to have no deps aside from py itself - I'd think something to consider.

I agree.  I hadn't looked at Urwid before, but it looks quite nice.
It may be a better
option than curses.

Cheers,

Brian


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Fri Mar 12 14:49:30 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 12 Mar 2010 11:49:30 -0800
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B8FB849.7020101@gmail.com>
References: <4B8FB849.7020101@gmail.com>
Message-ID: <fa8579a41003121149j23016e72k773b6a9e2cb6548d@mail.gmail.com>

Wendell,

I have been busy with other things lately, so sorry I haven't gotten
back to you sooner.  I wanted to say a few more things about the
overall design of your curses based frontend.  This also applies to
other frontends...

* For a long time we have wanted to move IPython to a 2-process model
that is similar to how Mathematica works.  The idea is to have 1) a
simple, lightweight frontend (GUI/terminal/curses based) process that
handles user input, prints output, etc., and 2) a kernel/engine process
that actually executes the code.

* The GIL has made this effort very difficult to achieve.  The problem
is that if the kernel executes extension code that doesn't release the
GIL, the kernel will appear dead to the network.  Threads don't
help this, and Twisted (which we use) doesn't either.

* Our solution so far has been to use Twisted and introduce an
additional process called the "controller" that manages traffic
between the frontend and kernel/engine.  Twisted is amazing in many
ways, but it has a number of downsides:

- Twisted tends to be an all-or-nothing thing.  Thus, it has been
quite difficult to integrate with other approaches, and it is also
difficult to *force* everyone to use Twisted.  Also, using Twisted
forces us to have a very constrained threading model that is super
inflexible.  I can give more details on this if needed.
- It looks like Twisted will make the transition to Python 3k *very*
slowly (after zope, after everyone drops Python 2.x support, etc.).
IPython, on the other hand, needs to move to Python 3k quickly, as so
many people use it.
- As we have spent more time looking at network protocols, we are more
and more convinced that a pure RPC style network approach is really
insufficient.  What we really need is more of an asynchronous
messaging architecture that supports different messaging topologies.
- Twisted is slow for how we are using it.  For some of the things we
would like to do, we need C/C++-speed networking.

So.....for the last while we have been looking at alternatives.
Initially we were hopeful about AMQP:

http://www.amqp.org/confluence/display/AMQP/Advanced+Message+Queuing+Protocol

AMQP looks really nice, but 1) it is very complex and heavyweight, and 2)
it adopts a very specific model of messaging that looks too limited
for what we want.

Recently, however, we learned of 0MQ:

http://www.zeromq.org/

ZeroMQ has been developed by some of the same folks as AMQP, but with
a different emphasis:

* As fast as can be (written in C++).  For some things it is faster
than raw TCP sockets!
* Super simple API, wire protocol, etc.
* Lightweight and easy to install.
* All the network IO and message queuing are done in a C++ thread, so
it can happen without the GIL being involved.

This last point is the most important thing about 0MQ.  It means that
a process can run non-GIL releasing extension code and still do all
the network IO and message queuing.

I have tested out 0MQ quite a bit and have created new Cython based
bindings to 0MQ:

http://www.zeromq.org/bindings:python

Here is a PyZMQ example of what an IPython kernel would look like:

http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/

So....my thoughts right now are that this is the direction we are
headed.  Thus, I think the model you should use in designing the
frontend is this:

* Curses/urwid based frontend that ....
* Talks to IPython kernel over...
* 0MQ

Obviously, you are free to use a 1-process model where IPython runs in
the same process as the curses stuff, but you will run into all the
same problems that we have had for years.  I can elaborate further if
you want.  What do you think about this plan?
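To make the 2-process split concrete, here is a minimal sketch of the request/reply message flow a frontend and kernel could exchange over such a transport (the field names are illustrative, not the actual kernel wire protocol):

```python
import uuid

def execute_request(code):
    # Frontend side: package user input as a message for the kernel.
    return {"msg_id": str(uuid.uuid4()),
            "msg_type": "execute_request",
            "content": {"code": code}}

def handle_request(msg, namespace):
    # Kernel side: execute the code and return a reply that references
    # the request it answers - messaging, not a blocking RPC call.
    try:
        exec(msg["content"]["code"], namespace)
        status = "ok"
    except Exception as err:
        status = "error: %s" % err
    return {"parent_id": msg["msg_id"],
            "msg_type": "execute_reply",
            "content": {"status": status}}

ns = {}
request = execute_request("x = 2 + 2")
reply = handle_request(request, ns)
assert ns["x"] == 4
assert reply["parent_id"] == request["msg_id"]
assert reply["content"]["status"] == "ok"
```

In the real design these dicts would travel over 0MQ sockets between the two processes; here they stay in-process just to show the message shapes.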

Cheers,

Brian


From walter at livinglogic.de  Fri Mar 12 15:29:15 2010
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Fri, 12 Mar 2010 21:29:15 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <fa8579a41003121122l178d13aaiccad51ef53e3ae07@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>	
	<4B914ACD.2030308@gmail.com>	
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>	
	<4B962949.6010006@livinglogic.de>
	<fa8579a41003121122l178d13aaiccad51ef53e3ae07@mail.gmail.com>
Message-ID: <4B9AA41B.6030600@livinglogic.de>

On 12.03.10 20:22, Brian Granger wrote:

> On Tue, Mar 9, 2010 at 2:56 AM, Walter Dörwald <walter at livinglogic.de> wrote:
>> On 08.03.10 21:38, Brian Granger wrote:
>>> Wendell,
>>>
>>>> It looks to me like ipipe is on deathrow for IPython 0.11, and in my
>>>> version of 0.11 it crashes occasionally. It has some interesting
>>>> functionality, but if its not going to be part of the main distribution,
>>>> support for it will have to wait. I'm also a very long way from getting
>>>> any sort of curses frontend working, and that's definitely highest
>>>> priority: basic functionality.
>>>
>>> Yes, we need to decide what to do with ipipe for 0.11.  My feeling is that
>>> it should be hosted as a separate project (that is why it is in deathrow)
>>> but this has not been discussed.
>>
>> I have no problem with taking ipipe out of the IPython distribution and
>> releasing it as a separate project.
>>
>> We could have a page in the IPython wiki that lists all external IPython
>> extensions.
> 
> Yes, I think this is a good idea.
> 
>>> We are really wanting to keep the core of
>>> IPython as small as possible, as the code base has grown in size far
>>> beyond our development team's ability to keep up.
>>>
>>> Minimally, ipipe needs to be updated to the new APIs, but that
>>> shouldn't be too difficult.
>>
>> Do you have any hints on how that could be done? What ipipe currently
>> uses is the following:
>>
>>    from IPython.utils import generics
>>    generics.result_display.when_type(Display)(display_display)
> 
> Is there a problem with generics?

No, they work without a problem.

> If so it might be related to this:
> 
> https://bugs.launchpad.net/ipython/+bug/527968

I'm not using generics.complete_object.

> It this is a different issue, could you explain further?

You wrote: "Minimally, ipipe needs to be updated to the new APIs", but
generics.result_display() is the only IPython API that ipipe uses, so I
thought I would have to change something.

> [...]

Servus,
   Walter


From walter at livinglogic.de  Fri Mar 12 15:45:16 2010
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Fri, 12 Mar 2010 21:45:16 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <db6b5ecc1003090735k6c0bcdcfqd157e166a85184f1@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>	
	<4B914ACD.2030308@gmail.com>	
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>	
	<4B962949.6010006@livinglogic.de>
	<db6b5ecc1003090735k6c0bcdcfqd157e166a85184f1@mail.gmail.com>
Message-ID: <4B9AA7DC.5090400@livinglogic.de>

On 09.03.10 16:35, Fernando Perez wrote:
> On Tue, Mar 9, 2010 at 5:56 AM, Walter Dörwald <walter at livinglogic.de> wrote:
>>> Yes, we need to decide what to do with ipipe for 0.11.  My feeling is that
>>> it should be hosted as a separate project (that is why it is in deathrow)
>>> but this has not been discussed.
>>
>> I have no problem with taking ipipe out of the IPython distribution and
>> releasing it as a separate project.
>>
>> We could have a page in the IPython wiki that lists all external IPython
>> extensions.
> 
> I'm not convinced we need to pull ipipe out,

Releasing ipipe as a separate package would have several advantages:

  * I wouldn't be forced to use bzr >;->
  * ipipe could have its own set of requirements
  * and be released on its own schedule

Being part of the IPython core has its advantages, but as long as ipipe's
existence is documented in the core, a standalone release should be no problem.

> *if it can be
> maintained/tested along with the rest of  the code*.  So if you think
> you can update the code/tests,

As ipipe is mostly interactive, there currently are no tests. However,
some parts of ipipe could be tested.

> I don't see a need to pull it out, as
> long as it can be brought up to the current api/standards, and you can
> foresee maintaining it in the future.

Servus,
   Walter


From ellisonbg at gmail.com  Fri Mar 12 17:08:19 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 12 Mar 2010 14:08:19 -0800
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B9AA7DC.5090400@livinglogic.de>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>
	<4B914ACD.2030308@gmail.com>
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>
	<4B962949.6010006@livinglogic.de>
	<db6b5ecc1003090735k6c0bcdcfqd157e166a85184f1@mail.gmail.com>
	<4B9AA7DC.5090400@livinglogic.de>
Message-ID: <fa8579a41003121408u4be1190nb9a91c75be87f74d@mail.gmail.com>

Walter,

> Releasing ipipe as a separate package would have several advantages:
>
>  * I wouldn't be forced to use bzr >;->
>  * ipipe could have its own set of requirements
>  * and be released on its own schedule

Yes, these are good reasons.  Also, with the IPython code base being
refactored so heavily, it gives ipipe a bit more isolation from the
messes we are
making.

> Being part of the IPython core has its advantages, but as long as ipipes
> existence is documented in the core, a standalone should be no problem.

Yes, I think we should add a section to the documentation that lists third
party extensions and IPython-using projects.

>> *if it can be
>> maintained/tested along with the rest of the code*.  So if you think
>> you can update the code/tests,
>
> As ipipe is mostly interactive, there currently are no tests. However,
> some parts of ipipe could be tested.

Yes, ipipe is probably hard to test.

Cheers,

Brian



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Fri Mar 12 17:09:54 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 12 Mar 2010 14:09:54 -0800
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4B9AA41B.6030600@livinglogic.de>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>
	<4B914ACD.2030308@gmail.com>
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>
	<4B962949.6010006@livinglogic.de>
	<fa8579a41003121122l178d13aaiccad51ef53e3ae07@mail.gmail.com>
	<4B9AA41B.6030600@livinglogic.de>
Message-ID: <fa8579a41003121409i1e65de9bof70272b0a5aa22a5@mail.gmail.com>

Walter,

>> Is there a problem with generics?
>
> No, they work without a problem.

Ok, I misunderstood.

>> If so it might be related to this:
>>
>> https://bugs.launchpad.net/ipython/+bug/527968
>
> I'm not using generics.complete_object.
>
>> It this is a different issue, could you explain further?
>
> You wrote: "Minimally, ipipe needs to be updated to the new APIs", but
> generics.result_display() is the only IPython API that ipipe uses, so I
> thought I would have to change something.

OK, but that shouldn't be too difficult, right?  If you do want to
continue to use this, we can look to see what the new API looks like
for this.

Cheers,

Brian




-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From tfetherston at aol.com  Wed Mar 17 01:25:27 2010
From: tfetherston at aol.com (tfetherston at aol.com)
Date: Wed, 17 Mar 2010 01:25:27 -0400
Subject: [IPython-dev] 0.11 Release
Message-ID: <8CC93A96338B8C5-351C-3DC5@webmail-d055.sysops.aol.com>


 Saw Brian mention "We're about to release 0.11" - what's the plan on this?

My branch DemoFixer puts in the fixes needed for Demos to run in 0.11, but I haven't proposed it for merging yet, as I was planning to spruce up the example file. If there is a rush, it could be put in as is.

Tom

 


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100317/b7b503e6/attachment.html>

From tfetherston at aol.com  Wed Mar 17 01:52:48 2010
From: tfetherston at aol.com (tfetherston at aol.com)
Date: Wed, 17 Mar 2010 01:52:48 -0400
Subject: [IPython-dev] grin and ansi escapes
Message-ID: <8CC93AD355A59CE-351C-4139@webmail-d055.sysops.aol.com>


 
grin is a grep-like Python utility that searches files/directories with regexes and colorizes its output using ANSI escapes.  If you have it installed, you can use it in IPython with the system command escape:

!grin <regex> <file/directory>

On Linux and Mac this is probably enough to get you colored output, as those terminals recognize the ANSI escapes that grin generates.  However, Windows doesn't have that capability built in; the pyreadline utility adds it, but it must be told to interpret the ANSI escapes.

Here is how I got colored output on windows:

from IPython.genutils import Term

oc = !grin hi "C:\PyDevArea\IPy-repo\0.11DemoFix\docs\examples\lib" --force-color
print >> Term.cout, oc.n

I'd like to make this simpler - perhaps via an alias, a custom magic, or a utility that handles output expected to be ANSI-colorized?
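Whatever form such a utility takes, one building block it would likely need is a fallback for terminals that cannot interpret the escapes: stripping them. A sketch (the regex covers only SGR color sequences like `\x1b[31m`):

```python
import re

# SGR ("Select Graphic Rendition") sequences carry the colors grin
# emits; a terminal that doesn't interpret them just displays junk.
SGR = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text):
    """Remove ANSI color codes so output is readable anywhere."""
    return SGR.sub("", text)

colored = "\x1b[1;31mhi\x1b[0m there"
assert strip_ansi(colored) == "hi there"
```

The utility could strip escapes when the terminal can't render them and pass them through (or hand them to pyreadline) when it can.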

Suggestions?

Tom


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100317/4dd409bc/attachment.html>

From ellisonbg at gmail.com  Thu Mar 18 02:27:49 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 17 Mar 2010 23:27:49 -0700
Subject: [IPython-dev] 0.11 Release
In-Reply-To: <8CC93A96338B8C5-351C-3DC5@webmail-d055.sysops.aol.com>
References: <8CC93A96338B8C5-351C-3DC5@webmail-d055.sysops.aol.com>
Message-ID: <fa8579a41003172327o794046d4ob4927c0dcf554bcc@mail.gmail.com>

Tom,

We are still a ways off on the release, so don't hesitate to submit
the branch for review.

Cheers,

Brian

On Tue, Mar 16, 2010 at 10:25 PM,  <tfetherston at aol.com> wrote:
> Saw Brian mention, "We're about to release 0.11", what's the plan on this?
>
> My branch, DemoFixer, puts in the fixes needed for Demos to run in 0.11, but
> I haven't proposed it for merging yet, as I was planning to spruce up the
> example file. If there is a rush, it could be put in as is.
>
> Tom
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From wackywendell at gmail.com  Sun Mar 21 09:41:43 2010
From: wackywendell at gmail.com (Wendell Smith)
Date: Sun, 21 Mar 2010 14:41:43 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <fa8579a41003121123k4885acd8l411a05f2349e9076@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com>	<db6b5ecc1003090732g50f28345q696fa0a10bc8d710@mail.gmail.com>	<4B96812F.3090503@an.org>
	<fa8579a41003121123k4885acd8l411a05f2349e9076@mail.gmail.com>
Message-ID: <4BA62217.9090804@gmail.com>

Sorry I'm late in responding - I was traveling for some time, and then 
came back to a lack of internet which was only restored today...

As for urwid, I'm not sure... I haven't looked at it in detail, but at 
first glance, it seems to want to run its own mainloop, the 
documentation isn't great, and there seems to be nothing written 
anywhere about when / if it will be ported to python 3. I'll take a 
closer look, but I'm not sure. At least in my plan, there would be 
nothing very complicated in the use of curses - no menus, for example - 
and while it will be somewhat complicated to implement a nice text 
editing box on top of basic curses, I'm not sure if adapting the one 
from urwid will fit.

But with two positive recommendations to go on now, I'll give it a 
closer look than I did earlier; maybe it will make things easier.

-Wendell


On 03/12/2010 08:23 PM, Brian Granger wrote:
> Toni,
>
>    
>> For the text mode version, I don't have first hand experience of neither
>>   curses nor Urwid development, but it Urwid seems nice and I have heard
>> good things about it from others who have developed using it. It seems
>> to have no deps aside from py itself - I'd think something to consider.
>>      
> I agree.  I hadn't looked at Urwid before, but it looks quite nice.
> It may be a better
> option than curses.
>
> Cheers,
>
> Brian
>
>
>    



From wackywendell at gmail.com  Sun Mar 21 10:04:31 2010
From: wackywendell at gmail.com (Wendell Smith)
Date: Sun, 21 Mar 2010 15:04:31 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <fa8579a41003121149j23016e72k773b6a9e2cb6548d@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com>
	<fa8579a41003121149j23016e72k773b6a9e2cb6548d@mail.gmail.com>
Message-ID: <4BA6276F.1080609@gmail.com>

Thanks again for your response!

OK, I have very little experience with twisted, and none with 0MQ, and 
would very much appreciate if someone could give me some help there.

My plan so far is:
1. Figure out whether to use curses or urwid
2. Make the necessary widgets/etc. for curses/urwid, completely 
independent of ipython (mainly, a nice text editing box that can handle 
coloring, a main scrollable text box for the output, and a pop-up window 
for completions)
3. Figure out how to handle coloring - pygments, pycolorize, etc. 
IPython's built-in colorizer is not good enough: firstly, curses doesn't
take terminal escapes, and secondly, I want to colorize the input as one 
types, which requires handling unfinished input - which the built in 
lexer can't handle. It looks like pygments is the way to go, and I'll 
need to write a formatter for curses (pygments doesn't have one, and 
curses is a special case, anyway)
4. Start working on integration with ipython. While of course ipython 
will be on my mind for the above, I would like the widgets to be sort of 
their own thing, that could be used independently of ipython, for 2 
reasons: firstly, someone might find them useful anyway, and secondly, 
it should make it easier to do bug fixes and isolate bugs.
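
As a concrete illustration of why the stdlib tokenizer falls short here (my own sketch, not IPython code): Python's `tokenize` module simply raises on unfinished input, whereas a colorizer for live typing has to degrade gracefully:

```python
import io
import tokenize

def try_tokenize(source):
    """Return the token strings for `source`, or None if the stdlib
    tokenizer rejects it (e.g. an unclosed bracket while typing)."""
    try:
        return [tok.string for tok in
                tokenize.generate_tokens(io.StringIO(source).readline)]
    except tokenize.TokenError:
        return None

print(try_tokenize("x = [1, 2]\n") is not None)  # complete input tokenizes
print(try_tokenize("x = [1, 2") is None)         # unfinished input fails
```

A pygments-style lexer, by contrast, emits error tokens and keeps going, which is what incremental input coloring needs.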

I hope to get #1 done today or tomorrow, and get back to #2. I've 
already gotten a good start on a curses based text-editing, 
color-capable widget, but if urwid looks good, I may drop it and go for 
theirs.

A couple of questions/problems:
1. Curses likes to block while waiting for input - is this ok? Should I 
try and get around this?
2. Colorizing. Someone already mentioned that they were thinking of 
switching ipython over to pygments. I could do this - it would make 
things easier for me in the long run. If I don't go that route, then it 
gets really complicated. As coloring the input pretty much requires 
pygments (built-in lexer can't handle unfinished input), there then 
become several options:
     a. Require pygments for the curses frontend.
     b. Don't require pygments, but be completely colorless without it.
     c. Write some fancy way of using the built in colorizer to handle 
the output, as well as one for pygments for the input. That way, without 
pygments installed, there is still color in the output portion.
I would, of course, prefer to just convert all ipython to pygments - 
which I think I can do - but of course that's not my decision to make, 
I'm new here. But if that is a good way to go, I'll happily go there.
Otherwise, I'm leaning towards (b). It's barely more complicated than 
(a) while having advantages over (a), and (c) is just too complicated.
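
Option (b) can be sketched in a few lines (a hypothetical shape: pygments' `highlight`/`PythonLexer`/`TerminalFormatter` are its real API, but the `colorize` wrapper and the fallback are my own illustration):

```python
# Colorize Python source with pygments when it is installed; degrade to
# plain text otherwise (option (b) above).
try:
    from pygments import highlight
    from pygments.formatters import TerminalFormatter
    from pygments.lexers import PythonLexer

    def colorize(source):
        return highlight(source, PythonLexer(), TerminalFormatter())
except ImportError:
    def colorize(source):  # no pygments: colorless, but still functional
        return source
```

The frontend then calls `colorize` unconditionally and never has to know whether pygments is present.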

OK, that was a lot. I'm going to get back to looking at urwid, but if 
anyone else has any answers/recommendations/comments/etc., I would love 
to hear them...

-Wendell


On 03/12/2010 08:49 PM, Brian Granger wrote:
> Wendell,
>
> I have been busy with other things lately, so sorry I haven't gotten
> back to you sooner.  I wanted to say a few more things about the
> overall design of your curses based frontend.  This also applies to
> other frontends...
>
> * For a long time we have wanted to move IPython to a 2 process model
> that is similar to how Mathematica works.  The idea is to have 1) a
> simple lightweight frontend GUI/terminal/curses based process that
> handles user input, prints output, etc and 2) a kernel/engine process
> that actually executes the code.
>
> * The GIL has made this effort very difficult to achieve.  The problem
> is that if the kernel executes extension code that doesn't release the
> GIL, the kernel will appear dead to the network.  Threads don't
> help this and Twisted (which we use) doesn't either.
>
> * Our solution so far has been to use Twisted and introduce an
> additional process called the "controller" that manages traffic
> between the frontend and kernel/engine.  Twisted is amazing in many
> ways, but it has a number of downsides:
>
> - Twisted tends to be an all or nothing thing.  Thus, it has been
> quite difficult to integrate with other approaches and it is also
> difficult to *force* everyone to use Twisted.  Also, using Twisted
> forces us to have a very constrained threading model that is super
> inflexible.  I can give more details on this if needed.
> - It looks like Twisted will make the transition to Python 3k *very*
> slowly (after zope, after everyone drops python 2.x support, etc.).
> IPython, on the other hand, needs to move to Python 3k quickly as so
> many people use it.
> - As we have spent more time looking at network protocols, we are more
> and more convinced that a pure RPC style network approach is really
> insufficient.  What we really need is more of an asynchronous
> messaging architecture that supports different messaging topologies.
> - Twisted is slow for how we are using it.  For some of the things we
> would like to do, we need C/C++ speed networking.
>
> So.....for the last while we have been looking at alternatives.
> Initially we were hopeful about AMQP:
>
> http://www.amqp.org/confluence/display/AMQP/Advanced+Message+Queuing+Protocol
>
> AMQP looks really nice, but 1) is very complex and heavyweight, 2)
> adopts a very specific model of messaging that looks to be too limited
> for what we want.
>
> Recently, however, we learned of 0MQ:
>
> http://www.zeromq.org/
>
> Zeromq has been developed by some of the same folks as AMQP, but with
> a different emphasis:
>
> * As fast as can be (written in C++).  For some things it is faster
> than raw TCP sockets!
> * Super simple API, wire protocol, etc.
> * Lightweight and easy to install.
> * All the network IO and message queuing are done in a C++ thread, so
> it can happen without the GIL being involved.
>
> This last point is the most important thing about 0MQ.  It means that
> a process can run non-GIL releasing extension code and still do all
> the network IO and message queuing.
>
> I have tested out 0MQ quite a bit and have created new Cython based
> bindings to 0MQ:
>
> http://www.zeromq.org/bindings:python
>
> Here is a Py0MQ example of what an IPython kernel would look like:
>
> http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/
>
> So....my thoughts right now are that this is the direction we are
> headed.  Thus, I think the model you should use in designing the
> frontend is this:
>
> * Curses/urwid based frontend that ....
> * Talks to IPython kernel over...
> * 0MQ
>
> Obviously, you are free to use a 1 process model where IPython runs in
> the same process as the curses stuff, but you will run into all the
> same problems that we have had for years.  I can elaborate further if
> you want.  What do you think about this plan?
>
> Cheers,
>
> Brian
>    
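
The two-process split described above can be caricatured in a few lines of stdlib Python (my own sketch: queues stand in for 0MQ sockets, and a thread stands in for the separate kernel process purely to keep the example self-contained; the real design uses separate processes precisely to escape the GIL):

```python
import queue
import threading

def kernel(requests, replies):
    """The 'kernel' side: execute incoming source in a persistent
    namespace and ship back a (status, result) pair."""
    namespace = {}
    for code in iter(requests.get, None):  # None is the shutdown signal
        try:
            exec(code, namespace)
            replies.put(('ok', namespace.get('result')))
        except Exception as exc:
            replies.put(('error', repr(exc)))

# The 'frontend' side: send code, block on the reply.
requests, replies = queue.Queue(), queue.Queue()
threading.Thread(target=kernel, args=(requests, replies), daemon=True).start()

requests.put('x = 2\nresult = x + 3')
status, value = replies.get(timeout=5)
requests.put(None)  # shut the kernel loop down
print(status, value)  # ok 5
```

In the 0MQ version, the queues become REQ/REP sockets, the kernel loop lives in its own process, and several frontends can talk to one kernel.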



From ellisonbg at gmail.com  Sun Mar 21 13:28:24 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 21 Mar 2010 10:28:24 -0700
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4BA6276F.1080609@gmail.com>
References: <4B8FB849.7020101@gmail.com>
	<fa8579a41003121149j23016e72k773b6a9e2cb6548d@mail.gmail.com>
	<4BA6276F.1080609@gmail.com>
Message-ID: <fa8579a41003211028r1041129cp58e1380453737842@mail.gmail.com>

Wendell,

Fernando and I will be sprinting today and tomorrow on IPython.  We
will get back to you shortly on this stuff.  In the meantime, can you
join the #ipython channel on irc.freenode.net?

Cheers,

Brian

On Sun, Mar 21, 2010 at 7:04 AM, Wendell Smith <wackywendell at gmail.com> wrote:
> Thanks again for your response!
>
> OK, I have very little experience with twisted, and none with 0MQ, and would
> very much appreciate if someone could give me some help there.
>
> My plan so far is:
> 1. Figure out whether to use curses or urwid
> 2. Make the necessary widgets/etc. for curses/urwid, completely independent
> of ipython (mainly, a nice text editing box that can handle coloring, a main
> scrollable text box for the output, and a pop-up window for completions)
> 3. Figure out how to handle coloring - pygments, pycolorize, etc. Ipython's
> built in colorizer is not good enough: firstly, curses doesn't take terminal
> escapes, and secondly, I want to colorize the input as one types, which
> requires handling unfinished input - which the built in lexer can't handle.
> It looks like pygments is the way to go, and I'll need to write a formatter
> for curses (pygments doesn't have one, and curses is a special case, anyway)
> 4. Start working on integration with ipython. While of course ipython will
> be on my mind for the above, I would like the widgets to be sort of their
> own thing, that could be used independently of ipython, for 2 reasons:
> firstly, someone might find them useful anyway, and secondly, it should make
> it easier to do bug fixes and isolate bugs.
>
> I hope to get #1 done today or tomorrow, and get back to #2. I've already
> gotten a good start on a curses based text-editing, color-capable widget,
> but if urwid looks good, I may drop it and go for theirs.
>
> A couple of questions/problems:
> 1. Curses likes to block while waiting for input - is this ok? Should I try
> and get around this?
> 2. Colorizing. Someone already mentioned that they were thinking of
> switching ipython over to pygments. I could do this - it would make things
> easier for me in the long run. If I don't go that route, then it gets really
> complicated. As coloring the input pretty much requires pygments (built-in
> lexer can't handle unfinished input), there then become several options:
>    a. Require pygments for the curses frontend.
>    b. Don't require pygments, but be completely colorless without it.
>    c. Write some fancy way of using the built in colorizer to handle the
> output, as well as one for pygments for the input. That way, without
> pygments installed, there is still color in the output portion.
> I would, of course, prefer to just convert all ipython to pygments - which I
> think I can do - but of course that's not my decision to make, I'm new here.
> But if that is a good way to go, I'll happily go there.
> Otherwise, I'm leaning towards (b). It's barely more complicated than (a)
> while having advantages over (a), and (c) is just too complicated.
>
> OK, that was a lot. I'm going to get back to looking at urwid, but if anyone
> else has any answers/recommendations/comments/etc., I would love to hear
> them...
>
> -Wendell
>
>
> On 03/12/2010 08:49 PM, Brian Granger wrote:
>>
>> Wendell,
>>
>> I have been busy with other things lately, so sorry I haven't gotten
>> back to you sooner.  I wanted to say a few more things about the
>> overall design of your curses based frontend.  This also applies to
>> other frontends...
>>
>> * For a long time we have wanted to move IPython to a 2 process model
>> that is similar to how Mathematica works.  The idea is to have 1) a
>> simple lightweight frontend GUI/terminal/curses based process that
>> handles user input, prints output, etc and 2) a kernel/engine process
>> that actually executes the code.
>>
>> * The GIL has made this effort very difficult to achieve.  The problem
>> is that if the kernel executes extension code that doesn't release the
>> GIL, the kernel will appear dead to the network.  Threads don't
>> help this and Twisted (which we use) doesn't either.
>>
>> * Our solution so far has been to use Twisted and introduce an
>> additional process called the "controller" that manages traffic
>> between the frontend and kernel/engine.  Twisted is amazing in many
>> ways, but it has a number of downsides:
>>
>> - Twisted tends to be an all or nothing thing.  Thus, it has been
>> quite difficult to integrate with other approaches and it is also
>> difficult to *force* everyone to use Twisted.  Also, using Twisted
>> forces us to have a very constrained threading model that is super
>> inflexible.  I can give more details on this if needed.
>> - It looks like Twisted will make the transition to Python 3k *very*
>> slowly (after zope, after everyone drops python 2.x support, etc.).
>> IPython, on the other hand, needs to move to Python 3k quickly as so
>> many people use it.
>> - As we have spent more time looking at network protocols, we are more
>> and more convinced that a pure RPC style network approach is really
>> insufficient.  What we really need is more of an asynchronous
>> messaging architecture that supports different messaging topologies.
>> - Twisted is slow for how we are using it.  For some of the things we
>> would like to do, we need C/C++ speed networking.
>>
>> So.....for the last while we have been looking at alternatives.
>> Initially we were hopeful about AMQP:
>>
>>
>> http://www.amqp.org/confluence/display/AMQP/Advanced+Message+Queuing+Protocol
>>
>> AMQP looks really nice, but 1) is very complex and heavyweight, 2)
>> adopts a very specific model of messaging that looks to be too limited
>> for what we want.
>>
>> Recently, however, we learned of 0MQ:
>>
>> http://www.zeromq.org/
>>
>> Zeromq has been developed by some of the same folks as AMQP, but with
>> a different emphasis:
>>
>> * As fast as can be (written in C++).  For some things it is faster
>> than raw TCP sockets!
>> * Super simple API, wire protocol, etc.
>> * Lightweight and easy to install.
>> * All the network IO and message queuing are done in a C++ thread, so
>> it can happen without the GIL being involved.
>>
>> This last point is the most important thing about 0MQ.  It means that
>> a process can run non-GIL releasing extension code and still do all
>> the network IO and message queuing.
>>
>> I have tested out 0MQ quite a bit and have created new Cython based
>> bindings to 0MQ:
>>
>> http://www.zeromq.org/bindings:python
>>
>> Here is a Py0MQ example of what an IPython kernel would look like:
>>
>> http://github.com/ellisonbg/pyzmq/tree/master/examples/kernel/
>>
>> So....my thoughts right now are that this is the direction we are
>> headed.  Thus, I think the model you should use in designing the
>> frontend is this:
>>
>> * Curses/urwid based frontend that ....
>> * Talks to IPython kernel over...
>> * 0MQ
>>
>> Obviously, you are free to use a 1 process model where IPython runs in
>> the same process as the curses stuff, but you will run into all the
>> same problems that we have had for years.  I can elaborate further if
>> you want.  What do you think about this plan?
>>
>> Cheers,
>>
>> Brian
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From walter at livinglogic.de  Sun Mar 21 16:09:04 2010
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Sun, 21 Mar 2010 21:09:04 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <fa8579a41003121409i1e65de9bof70272b0a5aa22a5@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>	
	<4B914ACD.2030308@gmail.com>	
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>	
	<4B962949.6010006@livinglogic.de>	
	<fa8579a41003121122l178d13aaiccad51ef53e3ae07@mail.gmail.com>	
	<4B9AA41B.6030600@livinglogic.de>
	<fa8579a41003121409i1e65de9bof70272b0a5aa22a5@mail.gmail.com>
Message-ID: <4BA67CE0.9070203@livinglogic.de>

On 12.03.10 23:09, Brian Granger wrote:

> Walter,
> 
>>> Is there a problem with generics?
>>
>> No, they work without a problem.
> 
> Ok, I misunderstood.
> 
>>> If so it might be related to this:
>>>
>>> https://bugs.launchpad.net/ipython/+bug/527968
>>
>> I'm not using generics.complete_object.
>>
>>> It this is a different issue, could you explain further?
>>
>> You wrote: "Minimally, ipipe needs to be updated to the new APIs", but
>> generics.result_display() is the only IPython API that ipipe uses, so I
>> thought I would have to change something.
> 
> OK, but that shouldn't be too difficult right?  If you do want to
> continue to use this,
> we can look to see what the new API looks like for this.

So does this mean that generics.result_display() *will* go away in 0.11?
If yes, what *is* the new API that I can hook into?

What I need is a hook where I can register a callback which gets called
when objects of a certain type have to be output to the screen; the
return value of the hook is the object that gets assigned to the _ variable.
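
The shape of the hook being asked for might look like this (the registry and all names here are hypothetical, my own illustration rather than an actual IPython 0.11 API):

```python
# A per-type display-hook registry: the matching callback handles the
# screen output, and its return value is what would be bound to `_`.
_display_hooks = []

def register_display_hook(typ, func):
    _display_hooks.append((typ, func))

def display(obj):
    for typ, func in _display_hooks:
        if isinstance(obj, typ):
            return func(obj)
    print(repr(obj))  # default: plain repr
    return obj

# e.g. a package like ipipe could register a browser for its result type:
def show_list(seq):
    print('list of %d items' % len(seq))
    return seq

register_display_hook(list, show_list)
```

Whatever the real API ends up being, the key pieces are the type dispatch and the callback's return value feeding `_`.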

Servus,
   Walter



From walter at livinglogic.de  Sun Mar 21 16:10:34 2010
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Sun, 21 Mar 2010 21:10:34 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <fa8579a41003121408u4be1190nb9a91c75be87f74d@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>	
	<4B914ACD.2030308@gmail.com>	
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>	
	<4B962949.6010006@livinglogic.de>	
	<db6b5ecc1003090735k6c0bcdcfqd157e166a85184f1@mail.gmail.com>	
	<4B9AA7DC.5090400@livinglogic.de>
	<fa8579a41003121408u4be1190nb9a91c75be87f74d@mail.gmail.com>
Message-ID: <4BA67D3A.3020508@livinglogic.de>

On 12.03.10 23:08, Brian Granger wrote:

> Walter,
> 
>> Releasing ipipe as a separate package would have several advantages:
>>
>>  * I wouldn't be forced to use bzr >;->
>>  * ipipe could have its own set of requirements
>>  * and be released on its own schedule
> 
> Yes, these are good reasons.  Also, with the IPython code base being
> refactored so heavily, it gives ipipe a bit more isolation from the
> messes we are making.

So, if all are OK with this approach, I'm going to move ipipe to a
separate project.

Being part of the IPython core has its advantages, but as long as ipipe's
existence is documented in the core, a standalone package should be no problem.
> 
> Yes, I think we should add a section to the documentation that lists third
> party extensions and IPython-using projects.

OK.

>>> *if it can be
>>> maintained/tested along with the rest of  the code*.  So if you think
>>> you can update the code/tests,
>>
>> As ipipe is mostly interactive, there currently are no tests. However
>> some parts of ipipe could be tested.
> 
> Yes, ipipe is probably hard to test.

However the pipeline objects themselves *are* testable.

Servus,
   Walter


From gael.varoquaux at normalesup.org  Sun Mar 21 17:13:57 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 21 Mar 2010 22:13:57 +0100
Subject: [IPython-dev] IPython threading bug (was: [Enthought-Dev] Bug
	concering X-Server)
In-Reply-To: <e419d6521003211407ldf436c2x38135b48dcbfd0e7@mail.gmail.com>
References: <e419d6521003211407ldf436c2x38135b48dcbfd0e7@mail.gmail.com>
Message-ID: <20100321211357.GB23232@phare.normalesup.org>

On Sun, Mar 21, 2010 at 10:07:02PM +0100, Martin Bothe wrote:
>    Hello enthought-list-users,
>    I tried a bit around and found a bug, so I report here.
>    After creating a mayavi plot in ipython and attaching axes to it like so:
>    ax = mlab.axes()
>    I wrote in ipython: ax.axes. and hit the tab key which led to an error,
>    making the terminal useless:
>    In [32]: ax.axes.The program 'python' received an X Window System error.
>    This probably reflects a bug in the program.
>    The error was 'BadAccess (attempt to access private resource denied)'.
>    (Details: serial 162967 error_code 10 request_code 153 minor_code 26)
>    (Note to programmers: normally, X errors are reported asynchronously;
>     that is, you will receive the error a while after causing it.
>     To debug your program, run it with the --sync command line
>     option to change this behavior. You can then get a meaningful
>     backtrace from your debugger if you break on the gdk_x_error()
>    function.)

Hi Martin,

Indeed, this is a bug from IPython: they are inspecting the object by
calling some of its methods outside the GUI mainloop, in a separate
thread. GUI toolkits cannot deal with such calls outside the main loop
(they are not thread safe). As a result, you sometimes get crashes...

The problem, I believe, is that the IPython codebase does not control
when this call is made, but readline does, so it's a bit hard to inject
it in the mainloop. That said, I don't see why the readline callback
couldn't inject the inspection code in the mainloop and busy-wait for it
to be called in the readline thread. Of course this is code to be
written, and it's probably tricky.
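
The inject-and-wait idea can be prototyped toolkit-agnostically (my own illustration: a plain queue stands in for the GUI toolkit's "schedule on the main loop" facility):

```python
import queue
import threading

# Stand-in for the toolkit's "run this on the main loop" queue.
main_calls = queue.Queue()

def run_in_main_and_wait(func, timeout=5.0):
    """Called from the readline (worker) thread: hand `func` to the main
    thread and block until the main loop has executed it."""
    done = threading.Event()
    box = {}
    def wrapper():
        box['result'] = func()
        done.set()
    main_calls.put(wrapper)
    done.wait(timeout)
    return box.get('result')

results = []
worker = threading.Thread(
    target=lambda: results.append(run_in_main_and_wait(lambda: 42)))
worker.start()

# Main thread "event loop": pop and run one injected call, as a real
# toolkit would between GUI events.
main_calls.get(timeout=5.0)()
worker.join()
print(results)  # [42]
```

In the real bug, `func` would be the attribute inspection on the wx/GTK object, so it only ever runs on the GUI thread.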

Anyhow, I am Ccing the IPython mailing list. I suspect that they are
aware of the problem, and simply lack man-power to address it properly.

Cheers,

Gaël


From wackywendell at gmail.com  Mon Mar 22 05:13:47 2010
From: wackywendell at gmail.com (Wendell Smith)
Date: Mon, 22 Mar 2010 10:13:47 +0100
Subject: [IPython-dev] Frontend: curses or urwid?
Message-ID: <4BA734CB.6010904@gmail.com>

Hello,

I have been looking at urwid and curses, and I think my first 
impressions of urwid were quite wrong. It looks like it could actually 
be an excellent basis for a curses frontend - its widgets are extremely 
useful, and its text editing capabilities are well beyond that of the 
normal curses. Its documentation is not the best, and is more
detail-oriented than big-picture-oriented, but I think I've now seen that big 
picture and can work with it.

I sent an email to one of the urwid developers asking about python 3 
support, and have attached his reply - basically, someone is working on 
'cleanup related to python 3 support', but there is no clear picture as 
to when py3k will be supported.

Urwid does generally use its own main-loop, but after digging around, it 
seems fairly trivial to instead incorporate urwid into a different main 
loop; examples are given in the source code for incorporation into 
twisted, for example.

The other problem with it (also addressed in the attached email) is that 
it gives error messages that are extremely unhelpful - giving little 
more information than that the error is urwid related. However, if we 
did decide to go with urwid, I think this would turn out ok - we only 
need a few widgets: a box for interpreter output, a box for text input, 
and a pop-up window for completions/help. Urwid would make this much 
easier than plain curses.

So those are the pros and cons: urwid would be very easy to use now, and 
would also greatly simplify the code related to the frontend - but with 
unknown python 3 support, unhelpful error messages when it does break, 
and all the other downsides of another dependency.

This is a big decision: urwid is very different from plain curses, and 
it would be very difficult to switch later. What do you think?

-Wendell
-------------- next part --------------
An embedded message was scrubbed...
From: Ian Ward <ian at excess.org>
Subject: Re: Using urwid for an ipython frontend
Date: Sun, 21 Mar 2010 16:05:45 -0400
Size: 3150
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100322/0789e414/attachment.mht>

From ellisonbg at gmail.com  Mon Mar 22 11:57:13 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 22 Mar 2010 08:57:13 -0700
Subject: [IPython-dev] IPython threading bug (was: [Enthought-Dev] Bug
	concering X-Server)
In-Reply-To: <20100321211357.GB23232@phare.normalesup.org>
References: <e419d6521003211407ldf436c2x38135b48dcbfd0e7@mail.gmail.com>
	<20100321211357.GB23232@phare.normalesup.org>
Message-ID: <fa8579a41003220857v44ce3d16odff5cf6cdbac89cb@mail.gmail.com>

What version of IPython are you running Martin?

Brian

On Sun, Mar 21, 2010 at 2:13 PM, Gael Varoquaux
<gael.varoquaux at normalesup.org> wrote:
> On Sun, Mar 21, 2010 at 10:07:02PM +0100, Martin Bothe wrote:
>>    Hello enthought-list-users,
>>    I tried a bit around and found a bug, so I report here.
>>    After creating a mayavi plot in ipython and attaching axes to it like so:
>>    ax = mlab.axes()
>>    I wrote in ipython: ax.axes. and hit the tab key which led to an error,
>>    making the terminal useless:
>>    In [32]: ax.axes.The program 'python' received an X Window System error.
>>    This probably reflects a bug in the program.
>>    The error was 'BadAccess (attempt to access private resource denied)'.
>>      (Details: serial 162967 error_code 10 request_code 153 minor_code 26)
>>      (Note to programmers: normally, X errors are reported asynchronously;
>>       that is, you will receive the error a while after causing it.
>>       To debug your program, run it with the --sync command line
>>       option to change this behavior. You can then get a meaningful
>>       backtrace from your debugger if you break on the gdk_x_error()
>>    function.)
>
> Hi Martin,
>
> Indeed, this is a bug from IPython: they are inspecting the object by
> calling some of its methods outside the GUI mainloop, in a separate
> thread. GUI toolkits cannot deal with such calls outside the main loop
> (they are not thread safe). As a result, you sometimes get crashes...
>
> The problem, I believe, is that the IPython codebase does not control
> when this call is made, but readline does, so it's a bit hard to inject
> it in the mainloop. That said, I don't see why the readline callback
> couldn't inject the inspection code in the mainloop and busy-wait for it
> to be called in the readline thread. Of course this is code to be
> written, and it's probably tricky.
>
> Anyhow, I am Ccing the IPython mailing list. I suspect that they are
> aware of the problem, and simply lack man-power to address it properly.
>
> Cheers,
>
> Gaël
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Tue Mar 23 05:00:01 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 23 Mar 2010 02:00:01 -0700
Subject: [IPython-dev] Frontend: curses or urwid?
In-Reply-To: <4BA734CB.6010904@gmail.com>
References: <4BA734CB.6010904@gmail.com>
Message-ID: <o2kdb6b5ecc1003230200r10f390b7yba76b36daf6caff2@mail.gmail.com>

Hi Wendell,

On Mon, Mar 22, 2010 at 2:13 AM, Wendell Smith <wackywendell at gmail.com> wrote:
>
> This is a big decision: urwid is very different from plain curses, and it
> would be very difficult to switch later. What do you think?

It does sound like an important decision; some things to keep in mind:

- does it look like urwid is well maintained/alive?  You don't want to
have a dependency that has stalled (we have a bit of that with nose,
for example, though only for testing).

- Can it be easy_installed, and does that work well on all platforms?


You may want to play with a very simple example first before diving
into all the ipython complexities, and prototype it both with curses
and urwid.  That will be a very informative exercise: even if it
takes a little time now, it will likely save you a lot in the long run.

I'll post more details tomorrow, but I'd encourage you to play with this code:

http://github.com/ellisonbg/pyzmq/tree/kernel/examples/kernel/

Brian and I just finished a 2-day sprint on this, and we're *super*
excited.  It looks like with zmq we're going to have all the pieces
for really clean asynchronous clients.

To run it, open 3 terminals, and run kernel.py in one and frontend.py
in the other two.  Then, type away in both frontends :)

Cheers,

f


From Chris.Barker at noaa.gov  Tue Mar 23 12:47:53 2010
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 23 Mar 2010 09:47:53 -0700
Subject: [IPython-dev] What is the status of iPython+wx?
Message-ID: <4BA8F0B9.8080600@noaa.gov>

Hi folks,

There was a thread here fairly recently about the re-structuring of how 
iPython works with GUI toolkit event loops.

IIUC, it's going to require a bit of special code in the GUI programs, 
so that they don't try to start multiple app instances, etc.

What is the status of this?

I'd like to use iPython for my wxPython development, and the current 
multi-threading mode isn't all that robust (though it mostly works). Is 
this a good time to upgrade to a devel version and start plugging away? 
or is it not really ready for daily use yet?

One of my thoughts is that I could work on the boilerplate code required 
to run a wx app with the new iPython, and if the timing is right, get it 
into the next wxPython release ( which is coming "soon" ).

-Chris




-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov


From fperez.net at gmail.com  Tue Mar 23 16:49:57 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 23 Mar 2010 13:49:57 -0700
Subject: [IPython-dev] Frontend: curses or urwid?
In-Reply-To: <4BA8B6CE.4000204@gmail.com>
References: <4BA734CB.6010904@gmail.com>
	<o2kdb6b5ecc1003230200r10f390b7yba76b36daf6caff2@mail.gmail.com>
	<4BA8B6CE.4000204@gmail.com>
Message-ID: <s2idb6b5ecc1003231349neac73f25p6af687b26bed4e6@mail.gmail.com>

Hi Wendell,

It's best to keep replies on-list so we get feedback from everyone, so
I've put the  list back in the reply, keeping your context in full.

On Tue, Mar 23, 2010 at 5:40 AM, Wendell Smith <wackywendell at gmail.com> wrote:
> To respond to your points - urwid looks quite actively maintained, although
> possibly short on manpower. In the last month, there have been commits from
> 3 or 4 different people every couple days, and the last two versions came
> out on Jan 25 (0.9.9.1) and Nov 15 (0.9.9). They are, however, rather late
> in hitting their milestones (4-5 months or so), and python 3 compatibility
> is not listed as a goal.

OK, thanks for that info. It's not like in ipython we've been stellar
about milestones :)  Python3 is an issue, because eventually we will
want to move there, and the more dependencies we have stuck on 2.x,
the harder that makes life.  One reason we're so excited about zmq is
because Twisted is looking very slow to move to 3.x, while the zmq
bindings can be generated for py3 *today* (since Cython is 3.x-savvy).
 So if we can completely move off twisted, we'd have one more thing on
3.x.

And that is an argument for going with curses: being part of the
standard library has several benefits, the most obvious being that it's
already installed with any python (on posix), and another being that
the 3.x maintenance is done for us.


> As for easy_install - that's how I got it. It worked fine for me (Ubuntu
> 9.10). As for other platforms, curses is incompatible with windows, and the
> pypi page claims it runs on anything posix, including macs.
>
> I've already created extremely limited prototypes in both curses and urwid -
> more as a learning exercise than anything else, and the code for both is
> hideous. I think I'll take your suggestion and start working on more serious
> prototypes.

Ok, sounds good.  Choosing supporting libraries is always a tricky
game: the stdlib is my first choice when possible, for the reasons
stated above.  But sometimes the advantages of a third-party tool are
significant enough to warrant its use.  It's a judgment call that must
be made on a case by case basis. The way I think of it, the non-stdlib
tool must prove it's 'better enough' than whatever is in the stdlib to
use it, *unless* it's tiny enough (single-file, more or less) that we
can ship a copy in ipython.externals.  But that's a practice we should
reserve to  special cases that are really very small.

If you get prototypes working, do put them in a branch that everyone
can play with, it will be easier to give you feedback that way.

>
> As for the zmq frontend kernel - great! I'm having trouble getting it to
> work, but that could be my fault - getting zmq installed, pyzmq installed,
> and then the example was a bit much. But I don't need it that much right
> now, anyway. Hopefully, zmq will be easier to install soon, right?

Well, the install didn't have any problems for me, but I did have
Brian next to me telling me what to do :)  I'll  post a separate
message about the zmq stuff, and I'll include instructions that should
help.

> Anyways, I'll get back to coding!

Sounds good :)

Cheers,

f


From fperez.net at gmail.com  Tue Mar 23 17:01:37 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 23 Mar 2010 14:01:37 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the weekend
	mini-sprint (or having fun with 0mq)
Message-ID: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>

Hi all,

I realize that we have a significant task ahead in sorting some thorny
issues out to get 0.11 ready for release, especially with respect to
GUI handling.  But the big refactor of 0.11 starts to give us a clean
codebase on which new/better interfaces using ipython can be built,
and for that several people want to  contribute right away, so it's
best that we build a path forward for those contributions, even if
some of us still have to finish the 0.11 struggle.

Wendell is looking into curses and Gerardo and Omar in Colombia are
looking at Qt, and it would be fantastic to be able to get these
developments moving forward soon.  So Brian and I got together over
the weekend and  did a design/coding sprint that turned out to be
surprisingly fun and productive.  This is the summary of those
results.  I hope this thread can serve as a design brief we'll later
formalize in the docs and which can be used to plan for a possible
GSOC submission, for example (which the Qt guys have in mind).

The basic issue we need to solve is the ability to have out-of-process
interfaces that are efficient, simple to develop,  and that support
fully asynchronous operation.  In today's ipython, you type code into
a program that is the same tasked with executing the code,  so that if
your code crashes, it takes the interface down with it.  So we need to
have a two-process system where the user-facing client and the kernel
that executes code live in separate processes (we'll retain a minimal
in-process interface for embedding,  no worries, but the bulk of the
real-world use should be in two processes).

We want the user-facing client (be it readline-, curses- or qt-based)
to remain responsive when the kernel is executing code, and to survive
a full kernel crash.  So client/kernel need to communicate, and the
communication should hopefully be possible *even when the kernel is
busy*, at least to the extent that low-level messaging should continue
to function even if the kernel is busy with Python  code.
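The two-process split described above can be sketched in a few lines with pyzmq's REQ/REP sockets. This is only an illustrative toy, not the actual prototype: it runs "kernel" and "client" as threads over the in-process transport so it fits in one script, and all names (`kernel`, `KernelClient`-style messages) are made up for the example. In the real design the two sides would live in separate processes over TCP.

```python
# Toy sketch of the client/kernel split (illustrative names only).
# The "kernel" receives code, executes it, and replies with captured
# stdout; the "client" only ships strings back and forth.
import contextlib
import io
import threading

import zmq

ctx = zmq.Context()
ready = threading.Event()

def kernel():
    sock = ctx.socket(zmq.REP)
    sock.bind("inproc://kernel")   # inproc: bind must happen before connect
    ready.set()
    msg = sock.recv_json()
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(msg["code"], {})      # user code runs only on this side
    sock.send_json({"stdout": buf.getvalue()})
    sock.close()

t = threading.Thread(target=kernel)
t.start()
ready.wait()

# The "client" blocks here for simplicity; a real frontend would poll
# so it stays responsive while the kernel is busy.
client = zmq.Context.instance  # placeholder comment removed below
client = ctx.socket(zmq.REQ)
client.connect("inproc://kernel")
client.send_json({"code": "print(2 + 2)"})
reply = client.recv_json()
print(reply["stdout"], end="")  # -> 4
t.join()
client.close()
ctx.term()
```

If the kernel thread died mid-execution, the client socket would simply stop receiving replies rather than crashing the client, which is the property the paragraph above is after.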

Up until now our engines use Twisted, and the above requirements can
simply not be met with Twisted (not to mention Twisted's complexity
and the concern we have with it not being ported soon to py3).  We
recently stumbled on the 0mq messaging library:

http://www.zeromq.org/

and Brian was able to quickly build a set of Python bindings for it
(see  link at the 0mq site, I'm writing this offline) using Cython.
They are fast, we have  full  control over them, and since Cython is
python-3 compliant, it means we can get a py3 version anytime we need.

0mq is a really amazing library: I'd only read about it recently and
only used it for the first time this weekend (I started installing it
at Brian's two days ago), and I was blown away by it.  It does all the
messaging in C++ system threads that are 100% Python-threads safe, so
the library is capable of queuing messages until the Python layer is
available to handle them.  The api is dead-simple, it's blazingly
fast, and we were able to get in two intense days a very real
prototype that solves a number of problems that we were never able to
make a dent in with Twisted.  Furthermore, with Twisted it was only
really Brian and Min who ever wrote major amounts of code for
IPython: twisted is really hard to grasp and has layers upon layers of
abstraction, making it a very difficult library to pick up without a
major commitment.  0mq is exactly the opposite: Brian explained the
basic concepts to me in a few minutes (I haven't read a single doc
yet!), we did some queuing tests interactively (by just making objects
at an ipython prompt) and we then started writing a real prototype
that now works.  We are very much considering abandoning twisted as we
move forward and using 0mq for everything, including the distributed
computing support (while keeping the user-facing apis unchanged).

So what's in this example?  For now, you'll need to install 0mq and
pyzmq from git; for 0mq, clone the repo at:

git://github.com/sustrik/zeromq2.git

then run

./autogen.sh
./configure --prefix=your_favorite_installation_prefix
make
make install

This should give you a fully working 0mq.  Then for the python
bindings, clone Brian's repo and get the kernel branch:

git clone git://github.com/ellisonbg/pyzmq.git
cd pyzmq
git co -b kernel origin/kernel

then

cp setup.cfg.template setup.cfg

and edit setup.cfg to indicate where you put your libraries.  This is
basically the prefix above with /lib and /include appended.

Then you can do the usual

python setup.py install --prefix=your_favorite_installation_prefix


The prototype we wrote lives in examples/kernel.  To play with it,
open 3 terminals:

- T1: run kernel.py, just leave it there.
- T2 and T3: run frontend.py

Both T2 and T3 are simple interactive prompts that run python code.
You can quit them and restart them, type in both of them and they both
manage the same namespace from the kernel in T1.  Each time you hit
return, they synchronize with each other's input/output, so you can
see what each client is sending to the kernel.  In a terminal this is
done in-line and only when you hit return, but a curses/qt client with
a real event loop can actually fire events when data arrives and
display output from other clients as it is produced.

You can 'background' inputs by putting ';' as the last character, and
you can keep typing interactively while the kernel continues to
process.  If you type something that's taking a long time, Ctrl-C will
break out of  the wait but will leave the code running in the
background (like Ctrl-Z in unix).

This is NOT meant to be production code, it has no ipython
dependencies at all,  no tab-completion yet,  etc.  It's meant to:

- let us understand the basic design questions,
- settle on the messaging api between clients and kernel,
- establish what common api all clients can use: a base to be shared
by readline/curses/qt clients, on top of which the frontend-specific
code will go.

So I'd encourage those of you who are interested in this problem to
have a look and let us know how it goes.  For now the code lives in
pyzmq because it makes for a great zmq example, but we're almost ready
to start from it putting real ipython machinery.  For thinking about
this design though, it's a lot easier to work with a tiny prototype
that fits in 3 files than to deal with all of ipython's complexity.

Cheers,

f


From Chris.Barker at noaa.gov  Tue Mar 23 17:46:34 2010
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 23 Mar 2010 14:46:34 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
 weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
Message-ID: <4BA936BA.90009@noaa.gov>

Fernando Perez wrote:
> The basic issue we need to solve is the ability to have out-of-process
> interfaces that are efficient, simple to develop,  and that support
> fully asynchronous operation.

This is absolutely fabulous!

> We want the user-facing client (be it readline-, curses- or qt-based)

or wx-based?

Do you have any idea when this might be ready to mess around with for us 
more casual users? i.e. integrated with iPython

Also, if I read this right, you're building a tool that should be used by 
every python IDE out there -- most of which greatly suffer from the 
in-process model.

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov


From gael.varoquaux at normalesup.org  Tue Mar 23 17:48:31 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Tue, 23 Mar 2010 22:48:31 +0100
Subject: [IPython-dev] Qt/Curses interfaces future: results of
	the	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <4BA936BA.90009@noaa.gov>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<4BA936BA.90009@noaa.gov>
Message-ID: <20100323214831.GA24398@phare.normalesup.org>

On Tue, Mar 23, 2010 at 02:46:34PM -0700, Christopher Barker wrote:
> Also, if I read this right, you're building a tool that should be used by 
> every python IDE out there -- most of which greatly suffer from the 
> in-process model.

I would say 'Oooo yeah!'.

Congratulations to Fernando and Brian for their hard work. I am very
optimistic about this work: things seem to be done just right from what I
can see.

Gaël


From ellisonbg at gmail.com  Tue Mar 23 17:54:00 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 23 Mar 2010 14:54:00 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <4BA936BA.90009@noaa.gov>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<4BA936BA.90009@noaa.gov>
Message-ID: <fa8579a41003231454y4340cbfcq76f66e8138ef374e@mail.gmail.com>

Chris,

>> The basic issue we need to solve is the ability to have out-of-process
>> interfaces that are efficient, simple to develop, and that support
>> fully asynchronous operation.
>
> This is absolutely fabulous!

We agree!

>> We want the user-facing client (be it readline-, curses- or qt-based)
>
> or wx-based?

Yes, of course.  But, we should be clear on an important point.
Because this new approach uses the 2-process model, the GUI used by
the frontend could be different from the GUI used by the kernel (where
user code is run).  Thus, a Qt-based frontend GUI could drive a kernel
that uses wx for matplotlib/traits/etc.
There will be complete flexibility in this.  Obviously, someone would
have to step up and implement a wx based frontend though.

> Do you have any idea when this might be ready to mess around with for us
> more casual users? i.e. integrated with iPython

We realize this is a *huge* thing that many IPython users want ASAP
and we want it too.  In the past, it looked like a *massive* amount of
work.  With the new 0MQ based approach, the work should go much
faster.  With that said, there is still a fair amount of work to do:

* People have to write the various frontends they want.  Fernando and
I are not highly skilled at GUI work, so we are hoping others can help
with this.
* Fernando, myself and other IPython core people still have to do more
work on IPython's core to get this new mode fully working.

We are planning to continue to pick away at this, but if you or others
have any funding ideas (that would help us to prioritize this work
over the many other things we have going on), please let us know.

Bottom line: it will be a while before regular users are using this
stuff, but more manpower and $ will speed things along.

> Also, if I read this right, you're building a tool that should be used by
> every python IDE out there -- most of which greatly suffer from the
> in-process model.

Yep!  That is definitely our vision.

Cheers,

Brian




-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Tue Mar 23 18:05:23 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 23 Mar 2010 15:05:23 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
Message-ID: <fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>

All,

Fernando has summarized our work very well, and I too am very excited
about this development.  One thing that I don't think Fernando
mentioned is that stdout/stderr and Out (displayhook) are all handled
asynchronously AND broadcast to all users.

Thus, if you run the following

Py>>> for i in range(10):
  ...   print i
  ...   i**2
  ...   time.sleep(1)
  ...

You get the result asynchronously:

0
Out : 0
Out : None
[then wait 1 second]
1
Out : 1
Out : None
[then wait 1 second]
2
Out : 4
Out : None
[then wait 1 second]
3
Out : 9
Out : None
[then wait 1 second]
4
Out : 16
Out : None
[then wait 1 second]

etc.  If another user is connected to the kernel, they will also
receive these (along with the corresponding input) asynchronously.  In
a terminal-based frontend these things are a little bit difficult to
demonstrate, but in a nice GUI frontend, we could imagine a nice
interface to represent these things.

Cheers,

Brian

PS: Fernando, do you notice that time.sleep(1) (which returns None)
also triggers displayhook?  That is a bit odd.  Do we want to filter
out None from the displayhook?
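For what it's worth, CPython's built-in sys.displayhook already applies exactly that filter, so a kernel-side hook that skips None would match interactive Python's behavior. A minimal sketch, where `broadcast` is a hypothetical stand-in for however the kernel publishes output to clients:

```python
# Sketch of a displayhook that suppresses None, mirroring what
# Python's built-in sys.displayhook does.  `broadcast` is a
# hypothetical stand-in for the kernel's output-publishing call.
import time

outputs = []

def broadcast(text):
    outputs.append(text)

def displayhook(value):
    if value is None:
        return                      # filter out None, as CPython does
    broadcast("Out : %r" % value)

# time.sleep(0) returns None, so it produces no Out line:
for value in (0, None, 1, None, 4, time.sleep(0)):
    displayhook(value)

print(outputs)  # -> ['Out : 0', 'Out : 1', 'Out : 4']
```

With this in place, the `time.sleep(1)` lines in the loop above would stop emitting `Out : None`.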





-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From Chris.Barker at noaa.gov  Tue Mar 23 18:44:08 2010
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Tue, 23 Mar 2010 15:44:08 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
 weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <fa8579a41003231454y4340cbfcq76f66e8138ef374e@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<4BA936BA.90009@noaa.gov>
	<fa8579a41003231454y4340cbfcq76f66e8138ef374e@mail.gmail.com>
Message-ID: <4BA94438.2050704@noaa.gov>

Brian Granger wrote:

>>> We want the user-facing client (be it readline-, curses- or qt-based)
>> or wx-based?
> 
> Yes, of course.  But, we should be clear of an important point.
> Because this new approach uses the 2 process model, the GUI used by
> the frontend could be different from the GUI used by the kernel (where
> use code is run).  Thus, a Qt-based frontend GUI could drive a kernel
> that uses wx for matplotlib/traits/etc.

got it -- nice.

However, I'd like to see it embedded in my editor of choice (Peppy), 
which is written in wx, so it would be nice to have a wx version ready 
to embed.

> Obviously, someone would have to step up and implement a wx based frontend though.

yes, there is that. I wish I had more time for all the stuff I'd like to do.

> We are planning to continue to pick away at this, but if you or others
> have any funding ideas (that would help us to prioritize this work
> over the many other things we have going on), please let us know.

I'll be patient -- I wish I had ideas for $$$

I suppose you've thought of Google Summer of Code already  -- if you can 
give it a py3k slant, anyway.

Thanks for the great work!

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov


From ellisonbg at gmail.com  Tue Mar 23 19:21:53 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 23 Mar 2010 16:21:53 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <4BA94438.2050704@noaa.gov>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<4BA936BA.90009@noaa.gov>
	<fa8579a41003231454y4340cbfcq76f66e8138ef374e@mail.gmail.com>
	<4BA94438.2050704@noaa.gov>
Message-ID: <fa8579a41003231621t3b15434dgc0a182614e15e50e@mail.gmail.com>

Chris,

> However, I"d like to see it embedded in my editor of choice (Peppy),
> which is written in wx, so it would be nice to have a wx version ready
> to embed.

"Embedding" in your favorite editor will be much easier.  But
"embedding" is probably a bad word for it because the actual kernel
(where user code is running) won't run in the same process.  The only
part of it that will be "embedded" in the editor process is a thin
communication layer that allows the editor to send code to the kernel,
get back stdout/stderr, output, etc.
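The "thin communication layer" could be as small as a class like this. Everything here is hypothetical naming for illustration, and the transport is faked with a callable so the sketch is self-contained; in reality it would wrap the zmq sockets, and no user code would ever run in the editor's process:

```python
# Hypothetical sketch of the thin client layer an editor could embed.
# The editor only ships code strings out and gets results back.
import contextlib
import io

class KernelClient:
    def __init__(self, transport):
        # `transport` stands in for a zmq request/reply socket pair
        self._transport = transport

    def execute(self, code):
        """Send code to the kernel; return the captured stdout."""
        reply = self._transport({"code": code})
        return reply["stdout"]

# Fake in-process "kernel" so the example runs anywhere; a real
# kernel would live in another process entirely.
def fake_kernel(msg):
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(msg["code"], {})
    return {"stdout": buf.getvalue()}

client = KernelClient(fake_kernel)
print(client.execute("print('hello from the kernel')"), end="")
# -> hello from the kernel
```

The point of the design is that only `KernelClient` (and the socket it wraps) lives inside the editor; a kernel crash leaves the editor untouched.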

But wx will definitely be high on everyones list for GUI support.

Cheers,

Brian


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From dsdale24 at gmail.com  Tue Mar 23 19:33:33 2010
From: dsdale24 at gmail.com (Darren Dale)
Date: Tue, 23 Mar 2010 19:33:33 -0400
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
Message-ID: <a08e5f81003231633p69a24d79t4d271861c1d2eb47@mail.gmail.com>

Hi Fernando, Brian,

This sounds really exciting. I am having some trouble installing pyzmq:

On Tue, Mar 23, 2010 at 5:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> git://github.com/sustrik/zeromq2.git
>
> then run
>
> ./autogen.sh
> ./configure --prefix=your_favorite_installation_prefix
> make
> make install
>
> This should give you a fully working 0mq.

I used --prefix=/usr/local

> Then for the python
> bindings, clone Brian's repo and get the kernel branch:
>
> git clone git://github.com/ellisonbg/pyzmq.git
> cd pyzmq
> git co -b kernel origin/kernel
>
> then
>
> cp setup.cfg.template setup.cfg
>
> and edit setup.cfg to indicate where you put your libraries.  This is
> basically the prefix above with /lib and /include appended.

[build_ext]
# Edit these to point to your installed zeromq library and header dirs.
library_dirs = /usr/local/lib
include_dirs = /usr/local/include

I checked that libzmq.so* exist in /usr/local/lib, same for zmq.* in
/usr/local/include

> Then you can do the usual
>
> python setup.py install --prefix=your_favorite_installation_prefix

First I did "python setup.py build":

running build
running build_py
creating build
creating build/lib.linux-x86_64-2.6
creating build/lib.linux-x86_64-2.6/zmq
copying zmq/__init__.py -> build/lib.linux-x86_64-2.6/zmq
running build_ext
building 'zmq._zmq' extension
creating build/temp.linux-x86_64-2.6
creating build/temp.linux-x86_64-2.6/zmq
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
-Wstrict-prototypes -fPIC -I/usr/local/include
-I/usr/include/python2.6 -c zmq/_zmq.c -o
build/temp.linux-x86_64-2.6/zmq/_zmq.o
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions
build/temp.linux-x86_64-2.6/zmq/_zmq.o -L/usr/local/lib -lzmq -o
build/lib.linux-x86_64-2.6/zmq/_zmq.so

Next I used "python setup.py install --user" (which is equivalent to
"--prefix=~/.local"):

running install
running build
running build_py
running build_ext
running install_lib
copying build/lib.linux-x86_64-2.6/zmq/_zmq.so ->
/home/darren/.local/lib/python2.6/site-packages/zmq
running install_egg_info
Removing /home/darren/.local/lib/python2.6/site-packages/pyzmq-0.1.egg-info
Writing /home/darren/.local/lib/python2.6/site-packages/pyzmq-0.1.egg-info

> The prototype we wrote lives in examples/kernel.  To play with it,
> open 3 terminals:
>
> - T1: run kernel.py, just leave it there.

Here is where I run into trouble:

Traceback (most recent call last):
  File "kernel.py", line 21, in <module>
    import zmq
  File "/home/darren/.local/lib/python2.6/site-packages/zmq/__init__.py",
line 26, in <module>
    from zmq import _zmq
ImportError: libzmq.so.0: cannot open shared object file: No such file
or directory

Any ideas? I am using kubuntu 10.04 beta.

Darren


From barrywark at gmail.com  Tue Mar 23 19:32:27 2010
From: barrywark at gmail.com (Barry Wark)
Date: Tue, 23 Mar 2010 16:32:27 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com> 
	<fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>
Message-ID: <cd7634ce1003231632m15bcb63bufa5a9988cba240bf@mail.gmail.com>

Congratulations Brian and Fernando! This is a huge advance for UI
integration (and possibly for parallel ipython as well).

Having written several two-process UI/kernel systems, I suspect that
the direction things are heading will make it quite easy to implement
a UI frontend for the kernel using modern UI toolkits (e.g. Qt, Cocoa,
WPF, etc.)

I suppose this new paradigm brings to the fore the ongoing discussion
of protocol for communication between frontend and the kernel. As this
could affect format for a persistent "notebook" format (it would be
nice to be able to send a notebook or notebook section to the kernel),
it might be worth considering the two issues together. The previous
discussion settled, I think, leaning towards an XML notebook. Assuming
an entity that describes a block of python code, the UI->kernel
message could be an XML-serialized block and the response could be the
corresponding XML-serialized output. The other possibility is to
separate notebook format (XML) from UI<->kernel protocol. In this
case, something like Google's protocol buffers make sense. These
protocol buffer messages are somewhat easier (and much faster) to work
with from many languages than XML, but are not as easily human
readable (if at all) and would add yet another non-stdlib dependency.
Just starting the discussion...

Looking forward to hacking some UIs,
Barry



On Tue, Mar 23, 2010 at 3:05 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> All,
>
> Fernando has summarized our work very well. I too am very excited
> about this development.  One thing that I don't think Fernando
> mentioned is that stdout/stderr and Out (displayhook) are all handled
> asynchronously AND broadcast to all users.
>
> Thus, if you run the following
>
> Py>>> for i in range(10):
>  ...    print i
>  ...    i**2
>  ...    time.sleep(1)
>  ...
>
> You get the result asynchronously:
>
> 0
> Out : 0
> Out : None
> [then wait 1 second]
> 1
> Out : 1
> Out : None
> [then wait 1 second]
> 2
> Out : 4
> Out : None
> [then wait 1 second]
> 3
> Out : 9
> Out : None
> [then wait 1 second]
> 4
> Out : 16
> Out : None
> [then wait 1 second]
>
> etc.  If another user is connected to the kernel, they will also
> receive these (along with the corresponding input) asynchronously.  In
> a terminal-based frontend these things are a little bit difficult to
> demonstrate, but in a nice GUI frontend, we could imagine a nice
> interface to represent these things.
>
> Cheers,
>
> Brian
>
> PS: Fernando, do you notice that time.sleep(1) (which returns None)
> also triggers displayhook?  That is a bit odd.  Do we want to filter
> out None from the displayhook?
>
>
> On Tue, Mar 23, 2010 at 2:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
>> Hi all,
>>
>> I realize that we have a significant task ahead in sorting some thorny
>> issues out to get 0.11 ready for release, especially with respect to
>> GUI handling.  But the big refactor of 0.11 starts to give us a clean
>> codebase on which new/better interfaces using ipython can be built,
>> and for that several people want to contribute right away, so it's
>> best that we build a path forward for those contributions, even if
>> some of us still have to finish the 0.11 struggle.
>>
>> Wendell is looking into curses and Gerardo and Omar in Colombia are
>> looking at Qt, and it would be fantastic to be able to get these
>> developments moving forward soon.  So Brian and I got together over
>> the weekend and did a design/coding sprint that turned out to be
>> surprisingly fun and productive.  This is the summary of those
>> results.  I hope this thread can serve as a design brief we'll later
>> formalize in the docs and which can be used to plan for a possible
>> GSOC submission, for example (which the Qt guys have in mind).
>>
>> The basic issue we need to solve is the ability to have out-of-process
>> interfaces that are efficient, simple to develop, and that support
>> fully asynchronous operation.  In today's ipython, you type code into
>> a program that is the same one tasked with executing the code, so that
>> if your code crashes, it takes the interface down with it.  So we need
>> to have a two-process system where the user-facing client and the kernel
>> that executes code live in separate processes (we'll retain a minimal
>> in-process interface for embedding, no worries, but the bulk of the
>> real-world use should be in two processes).
>>
>> We want the user-facing client (be it readline-, curses- or qt-based)
>> to remain responsive when the kernel is executing code, and to survive
>> a full kernel crash.  So client/kernel need to communicate, and the
>> communication should hopefully be possible *even when the kernel is
>> busy*, at least to the extent that low-level messaging should continue
>> to function even if the kernel is busy with Python code.
>>
>> Up until now our engines use Twisted, and the above requirements can
>> simply not be met with Twisted (not to mention Twisted's complexity
>> and the concern we have with it not being ported soon to py3).  We
>> recently stumbled on the 0mq messaging library:
>>
>> http://www.zeromq.org/
>>
>> and Brian was able to quickly build a set of Python bindings for it
>> (see link at the 0mq site, I'm writing this offline) using Cython.
>> They are fast, we have full control over them, and since Cython is
>> python-3 compliant, it means we can get a py3 version anytime we need.
>>
>> 0mq is a really amazing library: I'd only read about it recently and
>> only used it for the first time this weekend (I started installing it
>> at Brian's two days ago), and I was blown away by it.  It does all the
>> messaging in C++ system threads that are 100% Python-threads safe, so
>> the library is capable of queuing messages until the Python layer is
>> available to handle them.  The api is dead-simple, it's blazingly
>> fast, and we were able to get in two intense days a very real
>> prototype that solves a number of problems that we were never able to
>> make a dent into with Twisted.  Furthermore, with Twisted it was only
>> really Brian and Min who ever wrote major amounts of code for
>> Ipython: twisted is really hard to grasp and has layers upon layers of
>> abstraction, making it a very difficult library to pick up without a
>> major commitment.  0mq is exactly the opposite: Brian explained the
>> basic concepts to me in a few minutes (I haven't read a single doc
>> yet!), we did some queuing tests interactively (by just making objects
>> at an ipython prompt) and we then started writing a real prototype
>> that now works. ?We are very much considering abandoning twisted as we
>> move forward and using 0mq for everything, including the distributed
>> computing support (while keeping the user-facing apis unchanged).
>>
>> So what's in this example?  For now, you'll need to install 0mq and
>> pyzmq from git; for 0mq, clone the repo at:
>>
>> git://github.com/sustrik/zeromq2.git
>>
>> then run
>>
>> ./autogen.sh
>> ./configure --prefix=your_favorite_installation_prefix
>> make
>> make install
>>
>> This should give you a fully working 0mq.  Then for the python
>> bindings, clone Brian's repo and get the kernel branch:
>>
>> git clone git://github.com/ellisonbg/pyzmq.git
>> cd pyzmq
>> git co -b kernel origin/kernel
>>
>> then
>>
>> cp setup.cfg.template setup.cfg
>>
>> and edit setup.cfg to indicate where you put your libraries.  This is
>> basically the prefix above with /lib and /include appended.
>>
>> Then you can do the usual
>>
>> python setup.py install --prefix=your_favorite_installation_prefix
>>
>>
>> The prototype we wrote lives in examples/kernel.  To play with it,
>> open 3 terminals:
>>
>> - T1: run kernel.py, just leave it there.
>> - T2 and T3: run frontend.py
>>
>> Both T2 and T3 are simple interactive prompts that run python code.
>> You can quit them and restart them, type in both of them and they both
>> manage the same namespace from the kernel in T1.  Each time you hit
>> return, they synchronize with each other's input/output, so you can
>> see what each client is sending to the kernel.  In a terminal this is
>> done in-line and only when you hit return, but a curses/qt client with
>> a real event loop can actually fire events when data arrives and
>> display output from other clients as it is produced.
>>
>> You can 'background' inputs by putting ';' as the last character, and
>> you can keep typing interactively while the kernel continues to
>> process.  If you type something that's taking a long time, Ctrl-C will
>> break out of the wait but will leave the code running in the
>> background (like Ctrl-Z in unix).
>>
>> This is NOT meant to be production code, it has no ipython
>> dependencies at all, no tab-completion yet, etc.  It's meant to:
>>
>> - let us understand the basic design questions,
>> - settle on the messaging api between clients and kernel,
>> - establish what common api all clients can use: a base to be shared
>> by readline/curses/qt clients, on top of which the frontend-specific
>> code will go.
>>
>> So I'd encourage those of you who are interested in this problem to
>> have a look and let us know how it goes.  For now the code lives in
>> pyzmq because it makes for a great zmq example, but we're almost ready
>> to start from it putting real ipython machinery.  For thinking about
>> this design though, it's a lot easier to work with a tiny prototype
>> that fits in 3 files than to deal with all of ipython's complexity.
>>
>> Cheers,
>>
>> f
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>


From ellisonbg at gmail.com  Tue Mar 23 19:43:33 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 23 Mar 2010 16:43:33 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <a08e5f81003231633p69a24d79t4d271861c1d2eb47@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<a08e5f81003231633p69a24d79t4d271861c1d2eb47@mail.gmail.com>
Message-ID: <fa8579a41003231643s2fbdb34blb405ccd60a45635a@mail.gmail.com>

Darren,

> This sounds really exciting. I am having some trouble installing pyzmq:

OK, let's get this figured out.  Should be simple.

> On Tue, Mar 23, 2010 at 5:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
>> git://github.com/sustrik/zeromq2.git
>>
>> then run
>>
>> ./autogen.sh
>> ./configure --prefix=your_favorite_installation_prefix
>> make
>> make install
>>
>> This should give you a fully working 0mq.
>
> I used --prefix=/usr/local
>
>> Then for the python
>> bindings, clone Brian's repo and get the kernel branch:
>>
>> git clone git://github.com/ellisonbg/pyzmq.git
>> cd pyzmq
>> git co -b kernel origin/kernel
>>
>> then
>>
>> cp setup.cfg.template setup.cfg
>>
>> and edit setup.cfg to indicate where you put your libraries.  This is
>> basically the prefix above with /lib and /include appended.
>
> [build_ext]
> # Edit these to point to your installed zeromq library and header dirs.
> library_dirs = /usr/local/lib
> include_dirs = /usr/local/include
>
> I checked that libzmq.so* exist in /usr/local/lib, same for zmq.* in
> /usr/local/include
>
>> Then you can do the usual
>>
>> python setup.py install --prefix=your_favorite_installation_prefix
>
> First I did "python setup.py build":
>
> running build
> running build_py
> creating build
> creating build/lib.linux-x86_64-2.6
> creating build/lib.linux-x86_64-2.6/zmq
> copying zmq/__init__.py -> build/lib.linux-x86_64-2.6/zmq
> running build_ext
> building 'zmq._zmq' extension
> creating build/temp.linux-x86_64-2.6
> creating build/temp.linux-x86_64-2.6/zmq
> gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
> -Wstrict-prototypes -fPIC -I/usr/local/include
> -I/usr/include/python2.6 -c zmq/_zmq.c -o
> build/temp.linux-x86_64-2.6/zmq/_zmq.o
> gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions
> build/temp.linux-x86_64-2.6/zmq/_zmq.o -L/usr/local/lib -lzmq -o
> build/lib.linux-x86_64-2.6/zmq/_zmq.so
>
> Next I used "python setup.py install --user" (which is equivalent to
> "--prefix=~/.local"):
>
> running install
> running build
> running build_py
> running build_ext
> running install_lib
> copying build/lib.linux-x86_64-2.6/zmq/_zmq.so ->
> /home/darren/.local/lib/python2.6/site-packages/zmq
> running install_egg_info
> Removing /home/darren/.local/lib/python2.6/site-packages/pyzmq-0.1.egg-info
> Writing /home/darren/.local/lib/python2.6/site-packages/pyzmq-0.1.egg-info
>
>> The prototype we wrote lives in examples/kernel.  To play with it,
>> open 3 terminals:
>>
>> - T1: run kernel.py, just leave it there.
>
> Here is where I run into trouble:
>
> Traceback (most recent call last):
>   File "kernel.py", line 21, in <module>
>     import zmq
>   File "/home/darren/.local/lib/python2.6/site-packages/zmq/__init__.py",
> line 26, in <module>
>     from zmq import _zmq
> ImportError: libzmq.so.0: cannot open shared object file: No such file
> or directory

We link to libzmq.so dynamically, so on Linux you have to set
LD_LIBRARY_PATH to point to /usr/local/lib.  Let me know if that
doesn't help.  I am also on #ipython right now.
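For reference, a minimal sketch of that fix, assuming zeromq was installed
with --prefix=/usr/local as in the instructions above (adjust the path to
your own prefix):

```shell
# Assumption: libzmq.so lives in /usr/local/lib (i.e. --prefix=/usr/local).
# Tell the runtime linker where to find it before starting Python:
export LD_LIBRARY_PATH=/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
# Then importing zmq should succeed:
#   python -c "import zmq"
# A more permanent alternative (requires root) would be to add the path to
# the linker config and rerun ldconfig:
#   echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/zeromq.conf && sudo ldconfig
```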

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Tue Mar 23 19:55:56 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 23 Mar 2010 16:55:56 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <4BA936BA.90009@noaa.gov>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com> 
	<4BA936BA.90009@noaa.gov>
Message-ID: <p2odb6b5ecc1003231655nd5646185o55223768eb94b015@mail.gmail.com>

On Tue, Mar 23, 2010 at 2:46 PM, Christopher Barker
<Chris.Barker at noaa.gov> wrote:
> Fernando Perez wrote:
>> The basic issue we need to solve is the ability to have out-of-process
>> interfaces that are efficient, simple to develop, ?and that support
>> fully asynchronous operation.
>
> This is absolutely fabulous!

Thanks, we're really excited too :)

>> We want the user-facing client (be it readline-, curses- or qt-based)
>
> or wx-based?

Absolutely: I didn't mean to slight wx.  I get the sense that
development momentum is moving towards qt now that it's lgpl (esp.
with the upcoming pyside bindings by nokia), but there's no technical
reason whatsoever why a Wx frontend couldn't be written.  And as Brian
pointed out, since they are in different processes, a Wx frontend
could control a kernel running Qt code or vice-versa (so you can use
your favorite Wx IDE, for example, while executing code that uses a Qt
GUI such as VisIt).

Cheers,

f


From ellisonbg at gmail.com  Tue Mar 23 20:02:54 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 23 Mar 2010 17:02:54 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <cd7634ce1003231632m15bcb63bufa5a9988cba240bf@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>
	<cd7634ce1003231632m15bcb63bufa5a9988cba240bf@mail.gmail.com>
Message-ID: <fa8579a41003231702l35c0e73ax8d1e04c7e990581d@mail.gmail.com>

Barry,

Good to hear from you.

> Congratulations Brian and Fernando! This is a huge advance for UI
> integration (and possibly for parallel ipython as well).

Thanks!

> Having written several two-process UI/kernel systems, I suspect that
> the direction things are heading will make it quite easy to implement
> a UI frontend for the kernel using modern UI toolkits (e.g. Qt, Cocoa,
> WPF, etc.)

It should be.  I personally can't wait to see a Cocoa version of a UI :)

> I suppose this new paradigm brings to the fore the ongoing discussion
> of protocol for communication between frontend and the kernel. As this
> could affect format for a persistent "notebook" format (it would be
> nice to be able to send a notebook or notebook section to the kernel),
> it might be worth considering the two issues together.

The Python bindings to 0MQ (pyzmq) have the ability to
serialize/unserialize Python objects using JSON or pickle.  In our
prototype, we have chosen JSON because:

* JSON can be handled by virtually any language, including in-browser JS.
* It leads to a very pythonic interface for messages in python.
* It is reasonably fast.
* It is very flexible.
* It takes almost zero code to support.

Thus, my current leaning for UI<->kernel messages is JSON.  XML is a
possibility, but it just doesn't seem to fit.
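To illustrate the "almost zero code" point (the field names here are
hypothetical, not the prototype's actual message schema), a JSON round-trip
of a frontend->kernel message is just a couple of calls:

```python
import json

# Hypothetical message shape; the real pyzmq prototype's fields may differ.
request = {"msg_type": "execute_request", "code": "a = 2 + 2"}

wire = json.dumps(request)    # the string that would travel over the 0MQ socket
received = json.loads(wire)   # the kernel side recovers a plain Python dict

assert received == request    # round-trips losslessly, no schema code needed
```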

But, I think that XML is worth thinking about for the notebook
persistence format.  Using JSON for that doesn't seem like a good fit.
Maybe it is, though; I had never thought about that.  The other option
is pure python code, which I like, but I am not sure there is broad
support for it.

> The previous
> discussion settled, I think, leaning towards an XML notebook. Assuming
> an entity that describes a block of python code, the UI->kernel
> message could be an XML-serialized block and the response could be the
> corresponding XML-serialized output. The other possibility is to
> separate notebook format (XML) from UI<->kernel protocol.

I do think the UI<->kernel protocol should be isolated from the
notebook format.  The reason is that the kernel will know zip
about the notebook.  It is only responsible for code execution.
Another reason I want this separation is that different UIs might need
different things for the notebook format.

Here is my current thinking about the notebook format.  We have had
multiple attempts at creating a notebook format.  Fernando and Robert
Kern mentored a GSOC project long ago that created an XML-based
notebook format.  Then Min created one that used XML and sqlalchemy.
Neither ever took off.  Why?  My take is that a notebook format is
useless until there is:

1. A kernel that the notebook frontend can actually talk to.
2. A nice notebook frontend that people actually want to use.

Thus, I think we should start with (1), then do (2) and *lastly* worry
about how to save the frontend sessions (notebook format).  I think
that once we have (1) and (2) working, the notebook format will fall
into place much more naturally.

> In this
> case, something like Google's protocol buffers make sense. These
> protocol buffer messages are somewhat easier (and much faster) to work
> with from many languages than XML, but are not as easily human
> readable (if at all) and would add yet another non-stdlib dependency.
> Just starting the discussion...

Yes, I like protocol buffers.  The main reasons I see to go with them are:

* Performance.
* Strictness - they can enforce that a message has the right structure, etc.

In the case of the UI<->kernel protocol, our messages are pretty small
so performance isn't much of an issue and I think for now we want them
to be super flexible.  This brings me back to JSON.

> Looking forward to hacking some UIs,

Cool.

Brian




-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Tue Mar 23 21:00:08 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 23 Mar 2010 18:00:08 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com> 
	<fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>
Message-ID: <h2jdb6b5ecc1003231800zd64a4378q67fcf8a9c8319e20@mail.gmail.com>

On Tue, Mar 23, 2010 at 3:05 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> One thing that I don't think Fernando
> mentioned is that stdout/stderr and Out (displayhook) are all handled
> asynchronously AND broadcast to all users.

Yes, I forgot to mention this, good point!  It will be great to demo
this in a really async frontend so it's visible in real time instead
of using the finger on the return key as an idle timer :)

> PS: Fernando, do you notice that time.sleep(1) (which returns None)
> also triggers displayhook? ?That is a bit odd. ?Do we want to filter
> out None from the displayhook?

Absolutely, we forgot that.  I just pushed the commit that fixes it.
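The fix amounts to the same None check the built-in hook performs; a minimal
sketch (the "Out :" formatting mirrors the prototype's output, but this is
illustrative, not the actual committed code):

```python
def displayhook(value):
    """Show expression results, suppressing None like sys.displayhook does."""
    if value is None:        # statements like time.sleep(1) return None
        return               # -> no spurious "Out : None" lines
    print("Out :", value)

# In the kernel this would be installed with: sys.displayhook = displayhook
```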

Cheers,

f


From dsdale24 at gmail.com  Tue Mar 23 21:11:05 2010
From: dsdale24 at gmail.com (Darren Dale)
Date: Tue, 23 Mar 2010 21:11:05 -0400
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <fa8579a41003231643s2fbdb34blb405ccd60a45635a@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<a08e5f81003231633p69a24d79t4d271861c1d2eb47@mail.gmail.com>
	<fa8579a41003231643s2fbdb34blb405ccd60a45635a@mail.gmail.com>
Message-ID: <a08e5f81003231811i2a72564o90337176f827593@mail.gmail.com>

On Tue, Mar 23, 2010 at 7:43 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> Darren,
>
>> This sounds really exciting. I am having some trouble installing pyzmq:
>
> OK, let's get this figured out.  Should be simple.
>
>> On Tue, Mar 23, 2010 at 5:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
>>> git://github.com/sustrik/zeromq2.git
>>>
>>> then run
>>>
>>> ./autogen.sh
>>> ./configure --prefix=your_favorite_installation_prefix
>>> make
>>> make install
>>>
>>> This should give you a fully working 0mq.
>>
>> I used --prefix=/usr/local
>>
>>> Then for the python
>>> bindings, clone Brian's repo and get the kernel branch:
>>>
>>> git clone git://github.com/ellisonbg/pyzmq.git
>>> cd pyzmq
>>> git co -b kernel origin/kernel
>>>
>>> then
>>>
>>> cp setup.cfg.template setup.cfg
>>>
>> and edit setup.cfg to indicate where you put your libraries.  This is
>>> basically the prefix above with /lib and /include appended.
>>
>> [build_ext]
>> # Edit these to point to your installed zeromq library and header dirs.
>> library_dirs = /usr/local/lib
>> include_dirs = /usr/local/include
>>
>> I checked that libzmq.so* exist in /usr/local/lib, same for zmq.* in
>> /usr/local/include
>>
>>> Then you can do the usual
>>>
>>> python setup.py install --prefix=your_favorite_installation_prefix
>>
>> First I did "python setup.py build":
>>
>> running build
>> running build_py
>> creating build
>> creating build/lib.linux-x86_64-2.6
>> creating build/lib.linux-x86_64-2.6/zmq
>> copying zmq/__init__.py -> build/lib.linux-x86_64-2.6/zmq
>> running build_ext
>> building 'zmq._zmq' extension
>> creating build/temp.linux-x86_64-2.6
>> creating build/temp.linux-x86_64-2.6/zmq
>> gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
>> -Wstrict-prototypes -fPIC -I/usr/local/include
>> -I/usr/include/python2.6 -c zmq/_zmq.c -o
>> build/temp.linux-x86_64-2.6/zmq/_zmq.o
>> gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions
>> build/temp.linux-x86_64-2.6/zmq/_zmq.o -L/usr/local/lib -lzmq -o
>> build/lib.linux-x86_64-2.6/zmq/_zmq.so
>>
>> Next I used "python setup.py install --user" (which is equivalent to
>> "--prefix=~/.local"):
>>
>> running install
>> running build
>> running build_py
>> running build_ext
>> running install_lib
>> copying build/lib.linux-x86_64-2.6/zmq/_zmq.so ->
>> /home/darren/.local/lib/python2.6/site-packages/zmq
>> running install_egg_info
>> Removing /home/darren/.local/lib/python2.6/site-packages/pyzmq-0.1.egg-info
>> Writing /home/darren/.local/lib/python2.6/site-packages/pyzmq-0.1.egg-info
>>
>>> The prototype we wrote lives in examples/kernel.  To play with it,
>>> open 3 terminals:
>>>
>>> - T1: run kernel.py, just leave it there.
>>
>> Here is where I run into trouble:
>>
>> Traceback (most recent call last):
>>   File "kernel.py", line 21, in <module>
>>     import zmq
>>   File "/home/darren/.local/lib/python2.6/site-packages/zmq/__init__.py",
>> line 26, in <module>
>>     from zmq import _zmq
>> ImportError: libzmq.so.0: cannot open shared object file: No such file
>> or directory
>
> We link to libzmq.so dynamically, so on Linux you have to set
> LD_LIBRARY_PATH to point to /usr/local/lib.  Let me know if that
> doesn't help.  I am also on #ipython right now.

I had a look at h5py's setup.py, and discovered that Andrew was
passing a runtime_library_dirs kwarg to Extension(). This diff seems
to fix the problem in my particular case:

diff --git a/setup.py b/setup.py
index 86283c6..dc38454 100644
--- a/setup.py
+++ b/setup.py
@@ -49,7 +49,8 @@ else:
 zmq = Extension(
     'zmq._zmq',
     sources = [zmq_source],
-    libraries = [libzmq]
+    libraries = [libzmq],
+    runtime_library_dirs = ['/usr/local/lib'],
 )
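For context, `runtime_library_dirs` embeds an rpath in the compiled extension, so the dynamic loader can find `libzmq.so` at import time without `LD_LIBRARY_PATH`. A sketch of the resulting `Extension` call (the paths and source file name are illustrative, not pyzmq's actual setup.py):

```python
from setuptools import Extension

# runtime_library_dirs bakes an rpath into _zmq.so, so the loader can
# locate libzmq.so at import time without LD_LIBRARY_PATH being set.
zmq_ext = Extension(
    'zmq._zmq',
    sources=['zmq/_zmq.c'],
    libraries=['zmq'],
    library_dirs=['/usr/local/lib'],
    runtime_library_dirs=['/usr/local/lib'],
)
```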

Darren


From ellisonbg at gmail.com  Tue Mar 23 21:12:02 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 23 Mar 2010 18:12:02 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <a08e5f81003231811i2a72564o90337176f827593@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<a08e5f81003231633p69a24d79t4d271861c1d2eb47@mail.gmail.com>
	<fa8579a41003231643s2fbdb34blb405ccd60a45635a@mail.gmail.com>
	<a08e5f81003231811i2a72564o90337176f827593@mail.gmail.com>
Message-ID: <fa8579a41003231812q381a597ag8a83c35f354dd9fe@mail.gmail.com>

Darren,

Thanks for chasing this down.  I will commit this and test on various platforms.

Cheers,

Brian

On Tue, Mar 23, 2010 at 6:11 PM, Darren Dale <dsdale24 at gmail.com> wrote:
> On Tue, Mar 23, 2010 at 7:43 PM, Brian Granger <ellisonbg at gmail.com> wrote:
>> Darren,
>>
>>> This sounds really exciting. I am having some trouble installing pyzmq:
>>
>> OK, let's get this figured out.  Should be simple.
>>
>>> On Tue, Mar 23, 2010 at 5:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
>>>> git://github.com/sustrik/zeromq2.git
>>>>
>>>> then run
>>>>
>>>> ./autogen.sh
>>>> ./configure --prefix=your_favorite_installation_prefix
>>>> make
>>>> make install
>>>>
>>>> This should give you a fully working 0mq.
>>>
>>> I used --prefix=/usr/local
>>>
>>>> Then for the python
>>>> bindings, clone Brian's repo and get the kernel branch:
>>>>
>>>> git clone git://github.com/ellisonbg/pyzmq.git
>>>> cd pyzmq
>>>> git co -b kernel origin/kernel
>>>>
>>>> then
>>>>
>>>> cp setup.cfg.template setup.cfg
>>>>
>>> and edit setup.cfg to indicate where you put your libraries.  This is
>>>> basically the prefix above with /lib and /include appended.
>>>
>>> [build_ext]
>>> # Edit these to point to your installed zeromq library and header dirs.
>>> library_dirs = /usr/local/lib
>>> include_dirs = /usr/local/include
>>>
>>> I checked that libzmq.so* exist in /usr/local/lib, same for zmq.* in
>>> /usr/local/include
>>>
>>>> Then you can do the usual
>>>>
>>>> python setup.py install --prefix=your_favorite_installation_prefix
>>>
>>> First I did "python setup.py build":
>>>
>>> running build
>>> running build_py
>>> creating build
>>> creating build/lib.linux-x86_64-2.6
>>> creating build/lib.linux-x86_64-2.6/zmq
>>> copying zmq/__init__.py -> build/lib.linux-x86_64-2.6/zmq
>>> running build_ext
>>> building 'zmq._zmq' extension
>>> creating build/temp.linux-x86_64-2.6
>>> creating build/temp.linux-x86_64-2.6/zmq
>>> gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall
>>> -Wstrict-prototypes -fPIC -I/usr/local/include
>>> -I/usr/include/python2.6 -c zmq/_zmq.c -o
>>> build/temp.linux-x86_64-2.6/zmq/_zmq.o
>>> gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions
>>> build/temp.linux-x86_64-2.6/zmq/_zmq.o -L/usr/local/lib -lzmq -o
>>> build/lib.linux-x86_64-2.6/zmq/_zmq.so
>>>
>>> Next I used "python setup.py install --user" (which is equivalent to
>>> "--prefix=~/.local"):
>>>
>>> running install
>>> running build
>>> running build_py
>>> running build_ext
>>> running install_lib
>>> copying build/lib.linux-x86_64-2.6/zmq/_zmq.so ->
>>> /home/darren/.local/lib/python2.6/site-packages/zmq
>>> running install_egg_info
>>> Removing /home/darren/.local/lib/python2.6/site-packages/pyzmq-0.1.egg-info
>>> Writing /home/darren/.local/lib/python2.6/site-packages/pyzmq-0.1.egg-info
>>>
>>>> The prototype we wrote lives in examples/kernel.  To play with it,
>>>> open 3 terminals:
>>>>
>>>> - T1: run kernel.py, just leave it there.
>>>
>>> Here is where I run into trouble:
>>>
>>> Traceback (most recent call last):
>>>   File "kernel.py", line 21, in <module>
>>>     import zmq
>>>   File "/home/darren/.local/lib/python2.6/site-packages/zmq/__init__.py",
>>> line 26, in <module>
>>>     from zmq import _zmq
>>> ImportError: libzmq.so.0: cannot open shared object file: No such file
>>> or directory
>>
>> We link to libzmq.so dynamically, so on Linux you have to set
>> LD_LIBRARY_PATH to point to /usr/local/lib.  Let me know if that
>> doesn't help.  I am also on #ipython right now.
>
> I had a look at h5py's setup.py, and discovered that Andrew was
> passing a runtime_library_dirs kwarg to Extension(). This diff seems
> to fix the problem in my particular case:
>
> diff --git a/setup.py b/setup.py
> index 86283c6..dc38454 100644
> --- a/setup.py
> +++ b/setup.py
> @@ -49,7 +49,8 @@ else:
>  zmq = Extension(
>      'zmq._zmq',
>      sources = [zmq_source],
> -    libraries = [libzmq]
> +    libraries = [libzmq],
> +    runtime_library_dirs = ['/usr/local/lib'],
>  )
>
> Darren
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From barrywark at gmail.com  Tue Mar 23 23:35:37 2010
From: barrywark at gmail.com (Barry Wark)
Date: Tue, 23 Mar 2010 20:35:37 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <fa8579a41003231702l35c0e73ax8d1e04c7e990581d@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>
	<cd7634ce1003231632m15bcb63bufa5a9988cba240bf@mail.gmail.com>
	<fa8579a41003231702l35c0e73ax8d1e04c7e990581d@mail.gmail.com>
Message-ID: <cd7634ce1003232035k6398aec1xd754c20cf26e6eb0@mail.gmail.com>

On Tuesday, March 23, 2010, Brian Granger <ellisonbg at gmail.com> wrote:
> Barry,
>
> Good to hear from you.

It feels good to try to get involved again. A PhD and starting a
family and a company have intervened. The last two will doubtless do
so again, but this announcement is too exciting not to draw me in a
bit.

>
>> Congratulations Brian and Fernando! This is a huge advance for UI
>> integration (and possible for parallel ipython as well).
>
> Thanks!
>
>> Having written several two-process UI/kernel systems, I suspect that
>> the direction things are heading will make it quite easy to implement
>> a UI frontend for the kernel using modern UI toolkits (e.g. Qt, Cocoa,
>> WPF, etc.)
>
> It should be.  I personally can't wait to see a Cocoa version of a UI :)

Yes, I'm looking forward to doing some
hacking.

>
>> I suppose this new paradigm brings to the fore the ongoing discussion
>> of protocol for communication between frontend and the kernel. As this
>> could affect format for a persistent "notebook" format (it would be
>> nice to be able to send a notebook or notebook section to the kernel),
>> it might be worth considering the two issues together.
>
> The Python bindings to 0MQ (pyzmq) have the ability to
> serialize/unserialize Python objects using JSON or pickle.  In our
> prototype, we have chosen JSON because:
>
> * JSON can be handled by virtually any language, including in browser JS.
> * It leads to a very pythonic interface for messages in python.
> * It is reasonably fast.
> * Very flexible.
> * Take almost 0 code to support.
>
> Thus, my current leaning for UI<->kernel messages is JSON.  XML is a
> possibility, but it just doesn't seem to fit.
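As a concrete illustration of that choice, a JSON-serialized request might look like the following sketch (the field names are invented for illustration, not the actual message schema, which was still being designed):

```python
import json
import uuid

# A hypothetical execute request; field names are illustrative only.
request = {
    'msg_id': str(uuid.uuid4()),
    'msg_type': 'execute_request',
    'content': {'code': 'a = 10'},
}
wire = json.dumps(request).encode('utf-8')   # bytes sent over the 0MQ socket
reply = json.loads(wire.decode('utf-8'))     # back to a plain dict on receipt
```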

I see the logic in JSON over XML for sure and see your point about its
flexibility over, e.g., Protocol Buffers (see below). My take is that
JSON serialization seems to work best in dynamically typed languages,
whereas a more structured approach fits better into statically typed
languages. Given the first implementations will almost certainly be in
python, I'm fine throwing my chips in with JSON for now as long as
we're careful with string encodings etc.

>
> But I think that XML is worth thinking about for the notebook
> persistence format.  Using JSON for that doesn't seem like a good fit.
> Maybe it is though; I had never thought about that.  The other option
> is pure python code, which I like, but I am not sure there is broad
> support for it.
>
>> The previous
>> discussion settled, I think, leaning towards an XML notebook. Assuming
>> an entity that describes a block of python code, the UI->kernel
>> message could be an XML-serialized block and the response could be the
>> corresponding XML-serialized output. The other possibility is to
>> separate notebook format (XML) from UI<->kernel protocol.
>
> I do think the UI<->kernel protocol should be isolated from the
> notebook format.  The reason is that the UI<->kernel will know zip
> about the notebook.  It is only responsible for code execution.
> Another reason I want this separation is that different UIs might need
> different things for the notebook format.

True. On the flip side, notebooks hold
the inputs that get sent to the kernel plus the outputs that come
back. Why repeat code to serialize these items into different formats,
etc.? I'm not trying to take a strong stand on this one, just thinking
out loud.

And wouldn't it be nice for the project to have a consistent notebook
format across frontends? Seems like a usability disaster to
be unable to take notebooks between platforms and/or frontends.

>
> Here is my current thinking about the notebook format.  We have had
> multiple attempts at creating a notebook format.  Fernando and Robert
> Kern mentored a GSOC project long ago that created an XML-based
> notebook format.  Then Min created one that used XML and sqlalchemy.
> Both never took off.  Why?  My take is that a notebook format is
> useless until there is:
>
> 1. A kernel that the notebook frontend can actually talk to.
> 2. A nice notebook frontend that people actually want to use.
>
> Thus, I think we should start with (1), then do (2) and *lastly* worry
> about how to save the frontend sessions (notebook format).  I think
> that once we have (1) and (2) working, the notebook format will fall
> into place much more naturally.

Very true.

>
>> In this
>> case, something like Google's protocol buffers make sense. These
>> protocol buffer messages are somewhat easier (and much faster) to work
>> with from many languages than XML, but are not as easily human
>> readable (if at all) and would add yet an other non-stdlib dependency.
>> Just starting the discussion...
>
> Yes, I like protocol buffers.  The main reasons I see to go with them are:
>
> * Performance.
> * Less flexible - can make sure that a message has the right structure, etc.
>
> In the case of the UI<->kernel protocol, our messages are pretty small
> so performance isn't much of an issue and I think for now we want them
> to be super flexible.  This brings me back to JSON.
>
>> Looking forward to hacking some UIs,
>
> Cool.
>
> Brian
>
>> Barry
>>
>>
>>
>> On Tue, Mar 23, 2010 at 3:05 PM, Brian Granger <ellisonbg at gmail.com> wrote:
>>> All,
>>>
>>> As Fernando has summarized our work very well, I too am very excited
>>> about this development.  One thing that I don't think Fernando
>>> mentioned is that stdout/stderr and Out (displayhook) are all handled
>>> asynchronously AND broadcast to all users.
>>>
>>> Thus, if you run the following
>>>
>>> Py>>> for i in range(10):
>>>  ...     print i
>>>  ...     i**2
>>>  ...     time.sleep(1)
>>>  ...
>>>
>>> You get the result asynchronously:
>>>
>>> 0
>>> Out : 0
>>> Out : None
>>> [then wait 1 second]
>>> 1
>>> Out : 1
>>> Out : None
>>> [then wait 1 second]
>>> 2
>>> Out : 4
>>> Out : None
>>> [then wait 1 second]
>>> 3
>>> Out : 9
>>> Out : None
>>> [then wait 1 second]
>>> 4
>>> Out : 16
>>> Out : None
>>> [then wait 1 second]
>>>
>>> etc.  If another user is connected to the kernel, they will also
>>> receive these (along with the corresponding input) asynchronously.  In
>>> a terminal based frontend these things are a little bit difficult to
>>> demonstrate, but in a nice GUI frontend, we could imagine a nice
>>> interface to represent these things.
>>>
>>> Cheers,
>>>
>>> Brian
>>>
>>> PS: Fernando, do you notice that time.sleep(1) (which returns None)
>>> also triggers displayhook?  That is a bit odd.  Do we want to filter
>>> out None from the displayhook?
>>>
>>>
>>> On Tue, Mar 23, 2010 at 2:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
>>>> Hi all,
>>>>
>>>> I realize that we have a significant task ahead in sorting some thorny
>>>> issues out to get 0.11 ready for release, especially with respect to
>>>> GUI handling.  But the big refactor of 0.11 starts to give us a clean
>>>> codebase on which new/better interfaces using ipython can be built,
>>>> and for that several people want to contribute right away, so it's
>>>> best that we build a path forward for those contributions, even if
>>>> some of us still have to finish the 0.11 struggle.
>>>>
>>>> Wendell is looking into curses and Gerardo and Omar in Colombia are
>>>> looking at Qt, and it would be fantastic to be able to get these
>>>> developments moving forward soon.  So Brian and I got together over
>>>> the weekend and did a design/coding sprint that turned out to be
>>>> surprisingly fun and productive.  This is the summary of those
>>>> results.  I hope this thread can serve as a design brief we'll later
>>>> formalize in the docs and which can be used to plan for a possible
>>>> GSOC submission, for example (which the Qt guys have in mind).
>>>>
>>>> The basic issue we need to solve is the ability to have out-of-process
>>>> interfaces that are efficient, simple to develop, and that support
>>>> fully asynchronous operation.  In today's ipython, you type code into
>>>> a program that is the same one tasked with executing the code, so that if
>>>> your code crashes, it takes the interface down with it.  So we need to
>>>> have a two-process system where the user-facing client and the kernel
>>>> that executes code live in separate processes (we'll retain a minimal
>>>> in-process interface for embedding, no worries, but the bulk of the
>>>> real-world use should be in two processes).
>>>>
>>>> We want the user-facing client (be it readline-, curses- or qt-based)
>>>> to remain responsive when the kernel is executing code, and to survive
>>>> a full kernel crash.  So client/kernel need to communicate, and the
>>>> communication should hopefully be possible *even when the kernel is
>>>> busy*, at least to the extent that low-level messaging should continue
>>>> to function even if the kernel is busy with Python code.
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>


From ellisonbg at gmail.com  Wed Mar 24 02:55:34 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 23 Mar 2010 23:55:34 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <cd7634ce1003232035k6398aec1xd754c20cf26e6eb0@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>
	<cd7634ce1003231632m15bcb63bufa5a9988cba240bf@mail.gmail.com>
	<fa8579a41003231702l35c0e73ax8d1e04c7e990581d@mail.gmail.com>
	<cd7634ce1003232035k6398aec1xd754c20cf26e6eb0@mail.gmail.com>
Message-ID: <fa8579a41003232355j22dbe41r17232c3f86f0a326@mail.gmail.com>

Barry,

> It feels good to try to get involved again. A PhD and starting a
> family and a company have intervened. The last two will doubtless do
> so again, but this announcement is too exciting not to draw me in a
> bit.

It would be great to have your help again, but we totally understand.

>> I do think the UI<->kernel protocol should be isolated from the
>> notebook format.  The reason is that the UI<->kernel will know zip
>> about the notebook.  It is only responsible for code execution.
>> Another reason I want this separation is that different UIs might need
>> different things for the notebook format.
>
> True. On the flip side, notebooks hold
> the inputs that get sent to the kernel plus the outputs that come
> back. Why repeat code to serialize these items into different formats
> etc. I'm not trying to take a strong stand on this one, just thinking
> out loud.

This is a good point that we need to think about further.  In a web browser
I think JSON is a better option.  The only place I think XML seems better is
on disk.  Hmmm.

> And wouldn't it be nice for the project to have a consistent notebook
> format across frontends? Seems like a usability disaster to
> be unable to take notebooks between platforms and/or frontends.

Yes, I do think in the long run we absolutely want to have a single
notebook format that ships with IPython that any frontend can use. I can
even imagine hiding the notebook format implementation from the frontend
with some sort of API.
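Such an API could be as small as an abstract store that frontends talk to, keeping the serialization format swappable. A rough sketch (all names here are invented, not an actual IPython API):

```python
import abc
import json

class NotebookStore(abc.ABC):
    """Hypothetical interface hiding the on-disk format from frontends."""

    @abc.abstractmethod
    def save(self, cells):
        """Persist a list of input/output cells."""

    @abc.abstractmethod
    def load(self):
        """Return the list of cells previously saved."""

class JSONNotebookStore(NotebookStore):
    # One possible backend; an XML store could implement the same interface.
    def __init__(self):
        self._blob = None

    def save(self, cells):
        self._blob = json.dumps(cells)

    def load(self):
        return json.loads(self._blob)
```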

Cheers,

Brian





-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From gokhansever at gmail.com  Wed Mar 24 12:04:42 2010
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Wed, 24 Mar 2010 11:04:42 -0500
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <4BA94438.2050704@noaa.gov>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<4BA936BA.90009@noaa.gov>
	<fa8579a41003231454y4340cbfcq76f66e8138ef374e@mail.gmail.com>
	<4BA94438.2050704@noaa.gov>
Message-ID: <49d6b3501003240904n7365b24eg6455449fd84cc758@mail.gmail.com>

On Tue, Mar 23, 2010 at 5:44 PM, Christopher Barker
<Chris.Barker at noaa.gov>wrote:

>
> However, I'd like to see it embedded in my editor of choice (Peppy),
> which is written in wx, so it would be nice to have a wx version ready
> to embed.
>

Hi Chris, hi everybody...

I see Peppy has a lot of features similar to spyder (
http://code.google.com/p/spyderlib) and pida (http://pida.co.uk/) IDEs, not
to mention many other open-source and free IDEs.

I wonder whether this exciting implementation by Fernando and Brian could create
enough momentum to start a combined effort to develop a unified IDE for
scientific Python users.


-- 
Gökhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100324/7e37086f/attachment.html>

From fperez.net at gmail.com  Wed Mar 24 12:11:06 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 24 Mar 2010 09:11:06 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
Message-ID: <s2qdb6b5ecc1003240911hdb19d812v78eea8f69ceda022@mail.gmail.com>

On Tue, Mar 23, 2010 at 2:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> This is the summary of those
> results. ?I hope this thread can serve as a design brief we'll later
> formalize in the docs and which can be used to plan for a possible
> GSOC submission, for example (which the Qt guys have in mind).

Sorry for the brevity, I have a crazy day today and can't work on this
but want to jot down some design thoughts from last nights
tab-completion exercise before I forget so they're here for further
discussion.

- We probably want a second REQ/XREP socket used strictly for control
messages.  This will make it easier to handle them separately from code
execution.

- The kernel should also have a second PUB socket where it simply
posts busy/ready status updates.  This can then be used by clients to
check before making certain control requests, like tab completion, that
should be avoided when busy (I know, there's a race condition if it's
implemented naively, but I think it can be avoided simply by assuming
that control requests are made only when the status socket is in
'ready' status, but that clients can't assume they will get them
honored; they have to check the result and be ready to time out if
needed).

- We're starting to see the architecture needed for qt/wx/curses
applications now: we should break what we now call the 'frontend' into
2 objects:

1. 'Client': object that talks to kernel with zmq messages, does NOT
talk directly to user and doesn't know if it's in qt, wx, curses or
terminal.

2. 'Frontend': object that talks to user, has UI dependencies (qt,
readline, etc) but does NOT have zmq dependencies.  It *only* talks to the
client object via python calls; it does not do messaging.

Even the code in frontend.py today is starting to have a bit of this,
now we just have to make the split, and that will quickly indicate
where the design divisions need to go.
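The split described above can be sketched as two plain classes, where only the client would touch messaging (everything here is schematic; the real client would send 0MQ messages instead of evaluating locally):

```python
class Client:
    """Talks to the kernel; knows messaging, knows nothing about the UI."""

    def execute(self, code):
        # In the real design this would serialize a message, send it over
        # a 0MQ socket, and wait for the reply; here we fake the round
        # trip to show the interface boundary.
        return {'status': 'ok', 'result': repr(eval(code))}

class Frontend:
    """Talks to the user; knows the UI toolkit, never touches sockets."""

    def __init__(self, client):
        self.client = client

    def run_line(self, line):
        reply = self.client.execute(line)   # plain Python call, no messaging
        return 'Out : ' + reply['result']
```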


Gotta go,

f


From ellisonbg at gmail.com  Wed Mar 24 12:56:19 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 24 Mar 2010 09:56:19 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <s2qdb6b5ecc1003240911hdb19d812v78eea8f69ceda022@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<s2qdb6b5ecc1003240911hdb19d812v78eea8f69ceda022@mail.gmail.com>
Message-ID: <fa8579a41003240956l14336d5hb568a7655a8bc69f@mail.gmail.com>

Fernando,


> Sorry for the brevity, I have a crazy day today and can't work on this
> but want to jot down some design thoughts from last nights
> tab-completion exercise before I forget so they're here for further
> discussion.

Thanks for writing these things down...

> - We probably want a second REQ/XREP socket used strictly for control
> messages.  This will make it easier to handle them separately from code
> execution.

I think we actually need to have the control messages over the same
socket as execute.  The reason is that if we had a 2nd channel, the
control messages could overtake the execute ones:

>>> time.sleep(10);
>>> a = 10
>>> a.[TAB]  # if we have a 2nd channel, this will get to the kernel before a = 10!!!

I have some ideas though on how we can better use the single XREQ/XREP
pair for both control and execution.
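The ordering argument can be illustrated with a single FIFO queue standing in for the one XREQ/XREP socket (purely schematic; the message kinds are invented):

```python
from collections import deque

# One socket == one FIFO: a completion request queued after `a = 10`
# cannot overtake it, because the kernel pops messages in order.
socket = deque()
socket.append(('execute', 'import time; time.sleep(0)'))
socket.append(('execute', 'a = 10'))
socket.append(('complete', 'a.'))   # queued third, handled third

ns = {}
order = []
completions = []
while socket:
    kind, payload = socket.popleft()
    order.append(kind)
    if kind == 'execute':
        exec(payload, ns)
    else:
        # `a` is guaranteed to exist by now, so completion is safe.
        completions = [n for n in dir(ns['a']) if not n.startswith('_')]
```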

> - The kernel should also have a second PUB socket where it simply
> posts busy/ready status updates.  This can then be used by clients to
> check before making certain control requests, like tab completion, that
> should be avoided when busy (I know, there's a race condition if it's
> implemented naively, but I think it can be avoided simply by assuming
> that control requests are made only when the status socket is in
> 'ready' status, but that clients can't assume they will get them
> honored; they have to check the result and be ready to time out if
> needed).

Nice idea to have the status updates published.  We should definitely
do that.  I think we can easily do this using a single PUB/SUB pair
though.  I just need to write down these ideas I have about how to
handle multiple types of actions on a single socket.  Shouldn't be a
problem though.  I am a little wary of having too many open sockets,
and there is really no reason we can't handle all the actions on a
single socket.
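Client-side, the busy/ready gating plus the timeout fallback might look like this sketch (the status values and method names are assumptions; `_poll_reply` stands in for a non-blocking socket read):

```python
import time

class StatusGatedClient:
    """Only issue control requests when the kernel last reported 'ready',
    and still be prepared to time out, since 'ready' may already be stale
    by the time the request arrives (the race discussed above)."""

    def __init__(self):
        self.last_status = 'ready'   # updated from the status PUB socket

    def request_completion(self, text, timeout=0.1):
        if self.last_status != 'ready':
            return None              # don't even try while the kernel is busy
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            reply = self._poll_reply()
            if reply is not None:
                return reply
        return None                  # kernel went busy under us: give up

    def _poll_reply(self):
        # Stand-in for a non-blocking recv on the control socket.
        return ['alpha', 'beta']
```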

> - We're starting to see the architecture needed for qt/wx/curses
> applications now: we should break what we now call the 'frontend' into
> 2 objects:
>
> 1. 'Client': object that talks to kernel with zmq messages, does NOT
> talk directly to user and doesn't know if it's in qt, wx, curses or
> terminal.
>
> 2. 'Frontend': object that talks to user, has UI dependencies (qt,
> readline, etc) but does NOT have zmq dependencies.  It *only* talks to the
> client object via python calls; it does not do messaging.
>
> Even the code in frontend.py today is starting to have a bit of this,
> now we just have to make the split, and that will quickly indicate
> where the design divisions need to go.

I know this is something you really want to have.  But I don't think
it is possible, even for the synchronous line based frontend.  This is
because all frontends will need to have an event loop, and the event
loop itself needs to handle the 0MQ messaging stuff.  But I am willing to
explore this idea further to see if it is possible.  I think the next
step is to implement a real event loop in pyzmq and then use it for
our current frontend/kernel prototype.  That will better show us what
the abstractions and interfaces are.

Cheers,

Brian


>
> Gotta go,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Wed Mar 24 13:10:33 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 24 Mar 2010 10:10:33 -0700
Subject: [IPython-dev] What is the status of iPython+wx?
In-Reply-To: <4BA8F0B9.8080600@noaa.gov>
References: <4BA8F0B9.8080600@noaa.gov>
Message-ID: <fa8579a41003241010q52bb2241o93019fba6718cc6e@mail.gmail.com>

Chris,

> There was a thread here fairly recently about the re-structuring of how
> iPython works with GUI toolkit event loops.

Yes, you saw that.

> IIUC, it's going to require a bit of special code in the GUI programs,
> so that they don't try to start multiple app instances, etc.
>
> What is the status of this?

We have not fixed the issue.  But it is only an issue with
matplotlib/traits.  If you are developing your own wxPython code you
definitely should use dev trunk and look at the %gui magic.  We also
have some details about how it works in the nightly docs.  The dev
version is *much* more stable than 0.10 for this type of thing.

Let us know how it goes.

Cheers,

Brian

> I'd like to use iPython for my wxPython development, and the current
> multi-threading mode isn't all that robust (though it mostly works). Is
> this a good time to upgrade to a devel version and start plugging away?
> or is it not really ready for daily use yet.
>
> One of my thoughts is that I could work on the boilerplate code required
> to run a wx app with the new iPython, and if the timing is right, get it
> into the next wxPython release ( which is coming "soon" ).
>
> -Chris
>
>
>
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Wed Mar 24 13:14:13 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 24 Mar 2010 10:14:13 -0700
Subject: [IPython-dev] IPython threading bug (was: [Enthought-Dev] Bug
	concering X-Server)
In-Reply-To: <20100321211357.GB23232@phare.normalesup.org>
References: <e419d6521003211407ldf436c2x38135b48dcbfd0e7@mail.gmail.com>
	<20100321211357.GB23232@phare.normalesup.org>
Message-ID: <fa8579a41003241014g50dc6b30q8eb1d2d71b5236af@mail.gmail.com>

Gael,

[this email is not on enthought-dev so can you forward]

On Sun, Mar 21, 2010 at 2:13 PM, Gael Varoquaux
<gael.varoquaux at normalesup.org> wrote:
> On Sun, Mar 21, 2010 at 10:07:02PM +0100, Martin Bothe wrote:
>>    Hello enthought-list-users,
>>    I tried a bit around and found a bug, so I report here.
>>    After creating a mayavi plot in ipython and attaching axes to it like so:
>>    ax = mlab.axes()
>>    I wrote in ipython: ax.axes. and hit the tab key which led to an error,
>>    making the terminal useless:
>>    In [32]: ax.axes.  The program 'python' received an X Window System error.
>>    This probably reflects a bug in the program.
>>    The error was 'BadAccess (attempt to access private resource denied)'.
>>      (Details: serial 162967 error_code 10 request_code 153 minor_code 26)
>>      (Note to programmers: normally, X errors are reported asynchronously;
>>       that is, you will receive the error a while after causing it.
>>       To debug your program, run it with the --sync command line
>>       option to change this behavior. You can then get a meaningful
>>       backtrace from your debugger if you break on the gdk_x_error()
>>    function.)
>
> Hi Martin,
>
> Indeed, this is a bug from IPython: they are inspecting the object by
> calling some of its methods outside the GUI mainloop, in a separate
> thread. GUI toolkits cannot deal with such calls outside the main loop
> (they are not thread safe). As a result, you sometimes get crashes...

Yes, if this is the underlying problem, there is no way to solve the
problem other than moving to ipython dev trunk, which no longer uses
threads for GUI integration.

> The problem, I believe, is that the IPython codebase does not control
> when this call is made, but readline does, so it's a bit hard to inject
> it in the mainloop. That said, I don't see why the readline callback
> couldn't inject the inspection code in the mainloop and busy wait for it
> to be called in the readline thread. Of course this is code to be
> written, and it's probably tricky.

It is worse than tricky.  It is ugly and super thread-unsafe.  Like
playing tennis with a stick of dynamite.

> Anyhow, I am Ccing the IPython mailing list. I suspect that they are
> aware of the problem, and simply lack man-power to address it properly.

If I understand the issue correctly, we *have* solved it in dev trunk.
But dev trunk currently has other issues with Mayavi/traits that may
bite you.  We know about these and have a plan to solve them.

Cheers,

Brian



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Wed Mar 24 13:15:40 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 24 Mar 2010 10:15:40 -0700
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <4BA67CE0.9070203@livinglogic.de>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>
	<4B914ACD.2030308@gmail.com>
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>
	<4B962949.6010006@livinglogic.de>
	<fa8579a41003121122l178d13aaiccad51ef53e3ae07@mail.gmail.com>
	<4B9AA41B.6030600@livinglogic.de>
	<fa8579a41003121409i1e65de9bof70272b0a5aa22a5@mail.gmail.com>
	<4BA67CE0.9070203@livinglogic.de>
Message-ID: <fa8579a41003241015j1f4db03asa36ce7d6b87ef17d@mail.gmail.com>

Walter,

On Sun, Mar 21, 2010 at 1:09 PM, Walter Dörwald <walter at livinglogic.de> wrote:
> On 12.03.10 23:09, Brian Granger wrote:
>
>> Walter,
>>
>>>> Is there a problem with generics?
>>>
>>> No, they work without a problem.
>>
>> Ok, I misunderstood.
>>
>>>> If so it might be related to this:
>>>>
>>>> https://bugs.launchpad.net/ipython/+bug/527968
>>>
>>> I'm not using generics.complete_object.
>>>
>>>> If this is a different issue, could you explain further?
>>>
>>> You wrote: "Minimally, ipipe needs to be updated to the new APIs", but
>>> generics.result_display() is the only IPython API that ipipe uses, so I
>>> thought I would have to change something.
>>
>> OK, but that shouldn't be too difficult right?  If you do want to
>> continue to use this,
>> we can look to see what the new API looks like for this.
>
> So does this mean that generics.result_display() *will* go away in 0.11?
> If yes, what *is* the new API that I can hook into?

I think it is still there, and I doubt it would be removed for 0.11.
But that part of the code base has not been refactored, so in the long
run it may go away.

> What I need is a hook where I can register a callback which gets called
> when objects of a certain type have to be output to the screen; the
> return value of the hook is the object that gets assigned to the _ variable.

For now I think this is what you want.
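To make the shape of that hook concrete, here is a minimal pure-Python sketch of a type-based display registry in the spirit of generics.result_display() (all names below are invented for illustration; this is not IPython's actual API):

```python
# Illustrative sketch only: a type-based display-hook registry in the
# spirit of generics.result_display().  Every name here is invented.
display_hooks = {}

def register_display_hook(typ, hook):
    """Register a callback to run when an object of type `typ` is displayed."""
    display_hooks[typ] = hook

def display_result(obj):
    """Dispatch on type; the hook's return value is what gets bound to _."""
    for typ, hook in display_hooks.items():
        if isinstance(obj, typ):
            return hook(obj)
    print(repr(obj))      # default display for unregistered types
    return obj

class Table:              # a hypothetical ipipe-like result object
    def __init__(self, rows):
        self.rows = rows

def show_table(table):
    for row in table.rows:
        print(row)        # custom rendering to the screen
    return table          # this becomes the value of _

register_display_hook(Table, show_table)
underscore = display_result(Table([1, 2]))
```

If memory serves, the 0.10-era hook is registered via simplegeneric's when_type decorator rather than a dict, but the control flow is the same idea.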

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From termim at gmail.com  Wed Mar 24 23:15:44 2010
From: termim at gmail.com (Mikhail Terekhov)
Date: Wed, 24 Mar 2010 23:15:44 -0400
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
Message-ID: <12aaa0811003242015l7d758507h7d04d9df4587c28d@mail.gmail.com>

On Tue, Mar 23, 2010 at 5:01 PM, Fernando Perez <fperez.net at gmail.com>wrote:

> The basic issue we need to solve is the ability to have out-of-process
> interfaces that are efficient, simple to develop,  and that support
> fully asynchronous operation.  In today's ipython, you type code into
> a program that is the same one tasked with executing the code,  so that if
> your code crashes, it takes the interface down with it.  So we need to
> have a two-process system where the user-facing client and the kernel
> that executes code live in separate processes (we'll retain a minimal
> in-process interface for embedding,  no worries, but the bulk of the
> real-world use should be in two processes).
>
> We want the user-facing client (be it readline-, curses- or qt-based)
> to remain responsive when the kernel is executing code, and to survive
> a full kernel crash.  So client/kernel need to communicate, and the
> communication should hopefully be possible *even when the kernel is
> busy*, at least to the extent that low-level messaging should continue
> to function even if the kernel is busy with Python  code.
>
> Up until now our engines use Twisted, and the above requirements can
> simply not be met with Twisted (not to mention Twisted's complexity
> and the concern we have with it not being ported soon to py3).  We
> recently stumbled on the 0mq messaging library:
>
> http://www.zeromq.org/
>
> and Brian was able to quickly build a set of Python bindings for it
> (see  link at the 0mq site, I'm writing this offline) using Cython.
> They are fast, we have  full  control over them, and since Cython is
> python-3 compliant, it means we can get a py3 version anytime we need.
>
> 0mq is a really amazing library: I'd only read about it recently and
> only used it for the first time this weekend (I started installing it
> at Brian's two days ago), and I was blown away by it.  It does all the
> messaging in C++ system threads that are 100% Python-threads safe, so
> the library is capable of queuing messages until the Python layer is
> available to handle them.  The api is dead-simple, it's blazingly
> fast, and we were able to get in two intense days a very real
> prototype that solves a number of problems that we were never able to
> make a dent into with Twisted.  Furthermore, with Twisted it was only
> really Brian and Min who ever wrote major amounts of code  for
> Ipython: twisted is really hard to grasp and has layers upon layers of
> abstraction,  making it a very difficult library to  pick up without a
> major commitment.  0mq is exactly the opposite: Brian explained the
> basic concepts to me in a few minutes (I haven't read a single doc
> yet!), we did some queuing tests interactively (by just making objects
> at an ipython prompt) and we then started writing a real prototype
> that now works.  We are very much considering abandoning twisted as we
> move forward and using 0mq for everything, including the distributed
> computing support (while keeping the user-facing apis unchanged).
>
IMHO it is a great idea to separate the main IPython engine from the
frontend.
But while implementing an RPC framework over 0mq from the ground up should
not be a very difficult task and will definitely bring you a lot of fun,
have you considered something preexisting like RPyC
(http://rpyc.wikidot.com/), for example? The reason is that IPython
already has a lot of useful and exciting functionality, and yet another
RPC framework is somewhat too much. Plus, you don't have to think about
low-level details like communication protocols, serialization, etc.

Regards,
-- 
Mikhail Terekhov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100324/ae8ac683/attachment.html>

From ellisonbg at gmail.com  Wed Mar 24 23:41:35 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 24 Mar 2010 20:41:35 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <12aaa0811003242015l7d758507h7d04d9df4587c28d@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<12aaa0811003242015l7d758507h7d04d9df4587c28d@mail.gmail.com>
Message-ID: <fa8579a41003242041s6c996fe8ya322fa8ba59ad748@mail.gmail.com>

Mikhail,

> IMHO it is a great idea to separate the main IPython engine from the
> frontend.
> But while implementing an RPC framework over 0mq from ground up should
> not be a very difficult task and will definitely bring you a lot of fun,
> have you
> considered something preexisting like RPyC (http://rpyc.wikidot.com/) for
> example.

We have considered everything :).  The story of how we have arrived at
0MQ is pretty interesting and worth recording.  We have had
implementations based on XML-RPC, Twisted (numerous protocols, HTTP,
PB, Foolscap) and raw sockets. I have played with earlier versions of
RPyC as well.

There are a couple of issues we keep running into with *every* solution
we have tried (except for 0MQ):

* The GIL kills.  Because IPython is designed to execute arbitrary
user code, and our users often run wrapped C/C++ libraries, it is not
uncommon for non-GIL releasing code to be run in IPython.  When this
happens, any Python thread *completely stops*.  When you are building
a robust distributed system, you simply can't have this.  As far as I
know all Python based networking and RPC libraries suffer from this
same exact issue.  Note: it is not enough that the underlying socket
send/recv happen with the GIL released.

* Performance. We need network protocols that have near ping latencies
but can also easily handle many MB - GB sized messages at the same
time.  Prior to 0MQ I have not seen a network protocol that can do
both.  Our experiments with 0MQ have been shocking.  We see near ping
latencies for small messages and can send massive messages without
even thinking about it.  All of this is while CPU and memory usage is
minimal.  One of the difficulties that networking libraries in Python
face (at least currently) is that they all use strings for network
buffers.  The problem with this is that you end up copying them all
over the place.  With Twisted, we have to go to incredible lengths to
avoid this.  Is the situation different with RPyC?

* Messaging not RPC.  As we have developed a distributed architecture
that is more and more complex, we have realized something quite
significant: we are not really doing RPC, we are sending messages in
various patterns and 0MQ encodes these patterns extremely well.
Examples are request/reply and pub/sub, but other more complex
messaging patterns are possible as well - and we need those. In my
mind, the key difference between messaging and RPC is the presence of message queues
in an architecture.  Multiprocessing has some of this actually, but I
haven't looked at what they are doing underneath the hood.  I
encourage you to look at the example Fernando described.  It really
shows in significant ways that we are not doing RPC.
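For the record, the two patterns named above look roughly like this in pyzmq (the endpoint names and message fields are invented for illustration, not taken from our prototype):

```python
import threading
import time

import zmq

ctx = zmq.Context.instance()

# --- Request/reply: a toy "kernel" answers a single request. ---
rep = ctx.socket(zmq.REP)
rep.bind("inproc://kernel")        # inproc endpoints must bind before connect

def kernel():
    msg = rep.recv_json()          # blocks until a request arrives
    rep.send_json({"status": "ok", "echo": msg["code"]})

worker = threading.Thread(target=kernel)
worker.start()

req = ctx.socket(zmq.REQ)
req.connect("inproc://kernel")
req.send_json({"code": "a = 10"})
reply = req.recv_json()
worker.join()

# --- Publish/subscribe: broadcast busy/ready status updates. ---
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://status")
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://status")
sub.setsockopt_string(zmq.SUBSCRIBE, "")   # subscribe to all topics
time.sleep(0.1)                            # let the subscription propagate
pub.send_string("busy")
status = sub.recv_string()
```

The point is that each pattern is a first-class socket type; the queuing and fan-out live inside 0MQ's native threads, not in our Python code.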

> The reason is that IPython already has a lot of useful and exciting
> functionality and yet another RPC framework is somewhat too much. Plus,
> you don't have to think about these too low level details like communication
> protocols, serialization etc.

0MQ is definitely not another RPC framework.  If you know that RPyC
addresses some or all of these issues I have brought up above, I would
seriously love to know.  One of these days, I will probably try to do
some benchmarks that compare twisted, multiprocessing, RPyC and 0MQ
for things like latency and throughput.  That would be quite
interesting.

Another important part of 0MQ is that it runs over transports other
than tcp and interconnects like infiniband.  The performance on
infiniband is quite impressive.

Great question.

Cheers,

Brian

> Regards,
> --
> Mikhail Terekhov
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Thu Mar 25 02:56:14 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 24 Mar 2010 23:56:14 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <20100323214831.GA24398@phare.normalesup.org>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<4BA936BA.90009@noaa.gov>
	<20100323214831.GA24398@phare.normalesup.org>
Message-ID: <z2gdb6b5ecc1003242356y2b32a570ja9e59d430e65dbcb@mail.gmail.com>

On Tue, Mar 23, 2010 at 2:48 PM, Gael Varoquaux
<gael.varoquaux at normalesup.org> wrote:
>
> Congratulations to Fernando and Brian for their hard work. I am very
> optimistic about this work: things seem to be done just right from what I
> can see.
>

Thanks, much appreciated!  We're really happy too, we have the feeling
that there's finally a solution to a number of  problems that have
plagued every attempt at this for a long time (both in our codes and
other tools).

Cheers,

f


From fperez.net at gmail.com  Thu Mar 25 02:59:49 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 24 Mar 2010 23:59:49 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <cd7634ce1003231632m15bcb63bufa5a9988cba240bf@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<fa8579a41003231505p26b6d019web740ae0111497b5@mail.gmail.com>
	<cd7634ce1003231632m15bcb63bufa5a9988cba240bf@mail.gmail.com>
Message-ID: <p2odb6b5ecc1003242359s939f43d0udf963a1b8ce0d6e2@mail.gmail.com>

Hey Barry,

On Tue, Mar 23, 2010 at 4:32 PM, Barry Wark <barrywark at gmail.com> wrote:
> Congratulations Brian and Fernando! This is a huge advance for UI
> integration (and possible for parallel ipython as well).

Thanks, and I second Brian's note on native osx tools :)

The point is to get the right architecture and messaging first, so I
fully concur with Brian's view that as we get this working, we'll
build a better feel via actual prototypes of the session
persistence/notebook issues.  One step at a time :)

Cheers,

f


From fperez.net at gmail.com  Thu Mar 25 04:21:06 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 25 Mar 2010 01:21:06 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <fa8579a41003240956l14336d5hb568a7655a8bc69f@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<s2qdb6b5ecc1003240911hdb19d812v78eea8f69ceda022@mail.gmail.com>
	<fa8579a41003240956l14336d5hb568a7655a8bc69f@mail.gmail.com>
Message-ID: <v2wdb6b5ecc1003250121g5a2a1b79z270f9c9b27f13e0@mail.gmail.com>

On Wed, Mar 24, 2010 at 9:56 AM, Brian Granger <ellisonbg at gmail.com> wrote:

>> - We probably want a second REQ/XREP socket used strictly for control
>> messages.  This will make it easier to handle them separate from code
>> execution.
>
> I think we actually need to have the control messages over the same
> socket as execute.  The reason is that if we had a 2nd channel, the
> control messages could overtake the execute ones:
>
>>>> time.sleep(10);
>>>> a = 10
>>>> a.[TAB]  # if we have a 2nd channel, this will get to the kernel before a = 10!!!
>
> I have some ideas though on how we can better use the single XREQ/XREP
> pair for both control and execution.

With good use of the status socket the above shouldn't happen, as
control requests shouldn't be posted by clients if the kernel is busy,
I think.  But in general, I do agree that we'll probably be better off
with a single channel for execution and one for publication; the
proliferation of sockets isn't a good thing.  I think the status and
control 'channels' (not sockets) are needed though, we just need to
have a nice api to manage the messaging on these channels that makes
it not too confusing in practice.  I'm pretty sure it's quite doable,
from the experience so far.
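One way to sketch the 'channels, not sockets' idea: route every incoming message on a msg_type field, so several logical channels share a single socket (the field and handler names below are invented for illustration):

```python
# Sketch: several logical channels multiplexed over one message stream by
# dispatching on a msg_type field.  Field and handler names are invented.
def handle_execute(content):
    return {"msg_type": "execute_reply", "status": "ok"}

def handle_complete(content):
    return {"msg_type": "complete_reply", "matches": ["a.real", "a.imag"]}

handlers = {
    "execute_request": handle_execute,
    "complete_request": handle_complete,
}

def dispatch(msg):
    """Route one incoming message to the handler for its logical channel."""
    try:
        handler = handlers[msg["msg_type"]]
    except KeyError:
        return {"msg_type": "error", "reason": "unknown msg_type"}
    return handler(msg.get("content", {}))

reply = dispatch({"msg_type": "complete_request", "content": {"text": "a."}})
```

With this shape, 'control' and 'execute' stay distinct channels in the message schema while sharing one XREQ/XREP pair on the wire.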

>> - The kernel should also have a second PUB socket where it simply
>> posts busy/ready status updates.  This can then be used by clients to
>> check before making certain control requests like tab completion that
>> should be avoided when busy (I know, there's a race condition if it's
>> implemented naively, but I think it can be avoided simply by assuming
>> that control requests are made only when the status socket is in
>> 'ready' status, but that clients can't assume they will get them
>> honored, they have to check the result and be ready to time out if
>> needed).
>
> Nice idea to have the status updates published.  We should definitely
> do that.  I think we can easily do this using a single PUB/SUB pair
> though.  I just need to write down these ideas I have about how to
> handle multiple types of actions on a single socket.  Shouldn't be a
> problem though.  I am a little wary of having too many open sockets
> and there is really no reason we can't handle all the actions on a
> single socket.

Fully agreed.

>> - We're starting to see the architecture needed for qt/wx/curses
>> applications now: we should break what we now call the 'frontend' into
>> 2 objects:
>>
>> 1. 'Client': object that talks to kernel with zmq messages, does NOT
>> talk directly to user and doesn't know if it's in qt, wx, curses or
>> terminal.
>>
>> 2. 'Frontend': object that talks to user, has UI dependencies (qt,
>> readline, etc) but does NOT have zmq dependencies.  It *only* talks to
>> client object via python calls, it does not do messaging.
>>
>> Even the code in frontend.py today is starting to have a bit of this,
>> now we just have to make the split, and that will quickly indicate
>> where the design divisions need to go.
>
> I know this is something you really want to have.  But I don't think
> it is possible, even for the synchronous line based frontend.  This is
> because all frontends will need to have an event loop and the event
> loop itself needs to handle the 0MQ messaging stuff.  But I am willing to
> explore this idea further to see if it is possible.  I think the next
> step is to implement a real event loop in pyzmq and then use it for
> our current frontend/kernel prototype.  That will better show us what
> the abstractions and interfaces are.

Let's finish up the messaging until we're happy and we'll see if this
is doable or not in practice.  I trust your intuition and you may be
right, though we should still strive to centralize as much as possible
in one common api that all frontends can reuse, to minimize code
duplication.

I'll try to work a bit on this again tomorrow.

Cheers,

f


From walter at livinglogic.de  Thu Mar 25 06:20:30 2010
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Thu, 25 Mar 2010 11:20:30 +0100
Subject: [IPython-dev] Curses Frontend
In-Reply-To: <fa8579a41003241015j1f4db03asa36ce7d6b87ef17d@mail.gmail.com>
References: <4B8FB849.7020101@gmail.com> <4B90EA85.2040102@livinglogic.de>	
	<4B914ACD.2030308@gmail.com>	
	<fa8579a41003081238u635b883bla572f4c27af5513e@mail.gmail.com>	
	<4B962949.6010006@livinglogic.de>	
	<fa8579a41003121122l178d13aaiccad51ef53e3ae07@mail.gmail.com>	
	<4B9AA41B.6030600@livinglogic.de>	
	<fa8579a41003121409i1e65de9bof70272b0a5aa22a5@mail.gmail.com>	
	<4BA67CE0.9070203@livinglogic.de>
	<fa8579a41003241015j1f4db03asa36ce7d6b87ef17d@mail.gmail.com>
Message-ID: <4BAB38EE.80900@livinglogic.de>

On 24.03.10 18:15, Brian Granger wrote:

> Walter,
> 
> On Sun, Mar 21, 2010 at 1:09 PM, Walter Dörwald <walter at livinglogic.de> wrote:
>> On 12.03.10 23:09, Brian Granger wrote:
>>
>>> Walter,
>>>
>>>>> Is there a problem with generics?
>>>>
>>>> No, they work without a problem.
>>>
>>> Ok, I misunderstood.
>>>
>>>>> If so it might be related to this:
>>>>>
>>>>> https://bugs.launchpad.net/ipython/+bug/527968
>>>>
>>>> I'm not using generics.complete_object.
>>>>
>>>>> If this is a different issue, could you explain further?
>>>>
>>>> You wrote: "Minimally, ipipe needs to be updated to the new APIs", but
>>>> generics.result_display() is the only IPython API that ipipe uses, so I
>>>> thought I would have to change something.
>>>
>>> OK, but that shouldn't be too difficult right?  If you do want to
>>> continue to use this,
>>> we can look to see what the new API looks like for this.
>>
>> So does this mean that generics.result_display() *will* go away in 0.11?
>> If yes, what *is* the new API that I can hook into?
> 
> I think it is still there, and I doubt it would be removed for 0.11.
> But that part of the code base has not been refactored, so in the long
> run it may go away.

Understood. OK, so I'll stick to generics.result_display() for now.

>> What I need is a hook where I can register a callback which gets called
>> when objects of a certain type have to be output to the screen; the
>> return value of the hook is the object that gets assigned to the _ variable.
> 
> For now I think this is what you want.

Servus,
   Walter


From ellisonbg at gmail.com  Thu Mar 25 12:14:06 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 25 Mar 2010 09:14:06 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <v2wdb6b5ecc1003250121g5a2a1b79z270f9c9b27f13e0@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<s2qdb6b5ecc1003240911hdb19d812v78eea8f69ceda022@mail.gmail.com>
	<fa8579a41003240956l14336d5hb568a7655a8bc69f@mail.gmail.com>
	<v2wdb6b5ecc1003250121g5a2a1b79z270f9c9b27f13e0@mail.gmail.com>
Message-ID: <fa8579a41003250914o5786ad92l5f439da2b9a7f954@mail.gmail.com>

Fernando,

> With good use of the status socket the above shouldn't happen, as
> control requests shouldn't be posted by clients if the kernel is busy,
> I think.  But in general, I do agree that we'll probably be better off
> with a single channel for execution and one for publication; the
> proliferation of sockets isn't a good thing.  I think the status and
> control 'channels' (not sockets) are needed though, we just need to
> have a nice api to manage the messaging on these channels that makes
> it not too confusing in practice.  I'm pretty sure it's quite doable,
> from the experience so far.

Agreed.


>> I know this is something you really want to have.  But I don't think
>> it is possible, even for the synchronous line based frontend.  This is
>> because all frontends will need to have an event loop and the event
>> loop itself needs to handle the 0MQ messaging stuff.  But I am willing to
>> explore this idea further to see if it is possible.  I think the next
>> step is to implement a real event loop in pyzmq and then use it for
>> our current frontend/kernel prototype.  That will better show us what
>> the abstractions and interfaces are.
>
> Let's finish up the messaging until we're happy and we'll see if this
> is doable or not in practice.  I trust your intuition and you may be
> right, though we should still strive to centralize as much as possible
> in one common api that all frontends can reuse, to minimize code
> duplication.
>
> I'll try to work a bit on this again tomorrow.

Sounds great.

Cheers,

Brian


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From termim at gmail.com  Thu Mar 25 14:16:12 2010
From: termim at gmail.com (Mikhail Terekhov)
Date: Thu, 25 Mar 2010 14:16:12 -0400
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <fa8579a41003242041s6c996fe8ya322fa8ba59ad748@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<12aaa0811003242015l7d758507h7d04d9df4587c28d@mail.gmail.com>
	<fa8579a41003242041s6c996fe8ya322fa8ba59ad748@mail.gmail.com>
Message-ID: <12aaa0811003251116u5a729d87t19f8112769b73f58@mail.gmail.com>

Brian,

> We have considered everything :).  The story of how we have arrived at
> 0MQ is pretty interesting and worth recording.  We have had
> implementations based on XML-RPC, Twisted (numerous protocols, HTTP,
> PB, Foolscap) and raw sockets. I have played with earlier versions of
> RPyC as well.
>
> There are a couple of issue we keep running into with *every* solution
> we have tried (except for 0MQ):
>
> * The GIL kills.  Because IPython is designed to execute arbitrary
> user code, and our users often run wrapped C/C++ libraries, it is not
> uncommon for non-GIL releasing code to be run in IPython.  When this
> happens, any Python thread *completely stops*.  When you are building
> a robust distributed systems, you simply can't have this.  As far as I
> know all Python based networking and RPC libraries suffer from this
> same exact issue.  Note: it is not enough that the underlying socket
> send/recv happen with the GIL released.
>
That sounds intriguing! How is 0MQ different in this regard? Does it
maintain its own threads inside, independent of the GIL?


> * Performance. We need network protocols that have near ping latencies
> but can also easily handle many MB - GB sized messages at the same
> time.  Prior to 0MQ I have not seen a network protocol that can do
> both.  Our experiments with 0MQ have been shocking.  We see near ping
> latencies for small messages and can send massive messages without
> even thinking about it.  All of this is while CPU and memory usage is
> minimal.

It sounds like you've found a silver bullet :)
BTW I use twisted for client/server communication in my projects these days
and while I never had a need to transfer GB sized messages back and forth,
I've never had any issues with latencies either, except for the delays
inherent to some particular network.

> One of the difficulties that networking libraries in Python
> face (at least currently) is that they all use strings for network
> buffers.  The problem with this is that you end up copying them all
> over the place.  With Twisted, we have to go to incredible lengths to
> avoid this.  Is the situation different with RPyC?
>
Yes, the string type is an old workhorse in python. I don't know the
internals of RPyC, but I suspect it uses strings extensively as well.
What does pyzmq use instead of strings?


> * Messaging not RPC.  As we have developed a distributed architecture
> that is more and more complex, we have realized something quite
> significant: we are not really doing RPC, we are sending messages in
> various patterns and 0MQ encodes these patterns extremely well.
> Examples are request/reply and pub/sub, but other more complex
> messaging patterns are possible as well - and we need those. In my
> mind, the key difference between RPC is the presence of message queues
> in an architecture.  Multiprocessing has some of this actually, but I
> haven't looked at what they are doing underneath the hood.  I
> encourage you to look at the example Fernando described.  It really
> shows in significant ways that we are not doing RPC.
>
Frankly, I think the difference between messaging and RPC is mostly a
terminological one. A message queue's presence really just means that the
system provides asynchronous services, and many RPC frameworks
provide that. (For some digression: in the OO design world they even say
"send a message to the object" instead of "call an object's method"
sometimes. Weird geeks :))

>> The reason is that IPython already has a lot of useful and exciting
>> functionality and yet another RPC framework is somewhat too much. Plus,
>> you don't have to think about these too low level details like
>> communication protocols, serialization etc.
>
> 0MQ is definitely not another RPC framework.  If you know that RPyC
> addresses some or all of these issues I have brought up above, I would
> seriously love to know.  One of these days, I will probably try to do
> some benchmarks that compare twisted, multiprocessing, RPyC and 0MQ
> for things like latency and throughput.  That would be quite
> interesting.
>
Yes, 0MQ is not an RPC framework - it is just a low level protocol (albeit
probably a good one) that you will use to build your own RPC/RMI/messaging
system. Frankly, I do not see 0MQ to be immune to all the issues you've
brought up above unless you drop python and code everything in C/C++. In my
experience, latencies and performance bottlenecks usually came from the
code that serves messages (i.e. the server part), not the transport layer,
unless you develop some high load server with thousands of messages per
second, which is not the case for IPython I believe. Or the network itself
could be just slow, but in this case no library could help, unfortunately.
But of course I can easily miss something obvious.

Please do not think that I'm trying to bash the pyzmq idea, not at all! I
think it is a great idea for IPython and it will be real fun to implement.
I'm just trying to understand what is so different in IPython that any
other RPC/RMI/messaging framework can't fit? RPyC alongside Pyro was just
the first one that came to mind when I read Fernando's post, but there are
a lot of them; see for example python's wiki for a list:
http://wiki.python.org/moin/ParallelProcessing.
I personally have successfully used another toolkit not mentioned on the
above page - http://www.spread.org - it is a group communication toolkit
that provides guaranteed message delivery and so-called virtual synchrony.

I think that when the first excitement ends and you start to develop this
new interface, you will end up implementing all the functionality that
other RPC frameworks have, or most of it, so it would be useful to at
least check them before implementation.

> Another important part of 0MQ is that it runs over transports other
> than tcp and interconnects like infiniband.  The performance on
> infiniband is quite impressive.
>
Cool! Any idea how to utilize it in python/IPython?

Regards,
-- 
Mikhail Terekhov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100325/efb446e0/attachment.html>

From ellisonbg at gmail.com  Thu Mar 25 15:16:12 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 25 Mar 2010 12:16:12 -0700
Subject: [IPython-dev] Qt/Curses interfaces future: results of the
	weekend mini-sprint (or having fun with 0mq)
In-Reply-To: <12aaa0811003251116u5a729d87t19f8112769b73f58@mail.gmail.com>
References: <p2pdb6b5ecc1003231401sa46e5493k895a2d22cb2c2dee@mail.gmail.com>
	<12aaa0811003242015l7d758507h7d04d9df4587c28d@mail.gmail.com>
	<fa8579a41003242041s6c996fe8ya322fa8ba59ad748@mail.gmail.com>
	<12aaa0811003251116u5a729d87t19f8112769b73f58@mail.gmail.com>
Message-ID: <fa8579a41003251216i32a2d738k248cfeb8ba5c8905@mail.gmail.com>

Mikhail,

>> * The GIL kills.  Because IPython is designed to execute arbitrary
>> user code, and our users often run wrapped C/C++ libraries, it is not
>> uncommon for non-GIL releasing code to be run in IPython.  When this
>> happens, any Python thread *completely stops*.  When you are building
>> a robust distributed system, you simply can't have this.  As far as I
>> know all Python based networking and RPC libraries suffer from this
>> same exact issue.  Note: it is not enough that the underlying socket
>> send/recv happen with the GIL released.
>>
> That sounds intriguing! How is 0MQ different in this regard? Does it
> maintain its own threads inside, independent of the GIL?

0MQ is written in C++ and it maintains its own native threads for
network IO and message queueing.  The Python bindings are careful to
release the GIL when calling into 0MQ as well.  The result is that 0MQ
sockets can continue to do network IO and message queueing while
Python holds the GIL.

>>
>> * Performance. We need network protocols that have near ping latencies
>> but can also easily handle many MB - GB sized messages at the same
>> time.  Prior to 0MQ I have not seen a network protocol that can do
>> both.  Our experiments with 0MQ have been shocking.  We see near ping
>> latencies for small messages and can send massive messages without
>> even thinking about it.  All of this is while CPU and memory usage is
>> minimal.

> It sounds like you've found a silver bullet :)

At least the bullet that we needed.

> BTW I use twisted for client/server communication in my projects these days
> and while I never had a need to transfer GB sized messages back and forth,
> I've never had any issues with latencies either, except for the delays
> inherent in some particular network.

Yes, I still like Twisted very much, but the GIL is a constraint
that Twisted has to live with.  I think you can get Twisted to handle
large messages though - it is just more work.

>> One of the difficulties that networking libraries in Python
>> face (at least currently) is that they all use strings for network
>> buffers.  The problem with this is that you end up copying them all
>> over the place.  With Twisted, we have to go to incredible lengths to
>> avoid this.  Is the situation different with RPyC?
>>
> Yes, the string type is an old workhorse in python. I don't know the
> internals of RPyC but I suspect it uses strings extensively as well.
> What does pyzmq use instead of strings?

For the Python rep of messages we do use strings.  But once they are
passed down to the C++ 0MQ code they probably use some STL container
and are careful to not copy.  Also, it is possible to have 0MQ use the
buffer of the Python string without copying.  But there are some
issues with this that we are still sorting out.
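The zero-copy path Brian describes can be sketched with pyzmq's `copy=False` flag. This is a minimal in-process illustration, not IPython's code; the endpoint name is made up, and it assumes a pyzmq where `recv(copy=False)` returns a frame object exposing a `.buffer` memoryview:

```python
import zmq

ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://zero-copy-demo")   # endpoint name is arbitrary
req = ctx.socket(zmq.REQ)
req.connect("inproc://zero-copy-demo")

payload = b"x" * (1 << 20)       # a 1 MB message
req.send(payload, copy=False)    # hand 0MQ the string's buffer without copying
frame = rep.recv(copy=False)     # receive as a zero-copy frame object
data = bytes(frame.buffer)       # only copy when the bytes are actually needed
rep.send(b"ok")                  # complete the REQ/REP cycle
assert data == payload
```

With `copy=True` (the default) each send and recv would duplicate the megabyte; with `copy=False` the buffer is shared with the C++ layer, which is the behaviour discussed above.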

>>
>> * Messaging not RPC. ?As we have developed a distributed architecture
>> that is more and more complex, we have realized something quite
>> significant: we are not really doing RPC, we are sending messages in
>> various patterns and 0MQ encodes these patterns extremely well.
>> Examples are request/reply and pub/sub, but other more complex
>> messaging patterns are possible as well - and we need those. In my
>> mind, the key difference between RPC and messaging is the presence of
>> message queues in an architecture.  Multiprocessing has some of this
>> actually, but I haven't looked at what they are doing underneath the
>> hood.  I encourage you to look at the example Fernando described.  It
>> really shows in significant ways that we are not doing RPC.
>>
> Frankly I think the difference between messaging and RPC is mostly a
> terminological one. A message queue's presence really just means that the
> system provides asynchronous services, and many RPC frameworks
> provide that. (For some digression: In OO design world they even say
> "send a message to the object" instead of "call an object's method"
> sometimes. Weird geeks :))

Yes, the terminology is slippery.  I guess the other thing that I
think of with messaging is the various messaging patterns and routing
patterns:

* Publish/subscribe with topic based filtering.
* Request/reply, including load balancing/fair queueing amongst
multiple consumers and producers.
* Peer-to-peer messaging.
* Simple message forwarding.
* General message routing based on endpoint identity.

I think you can implement all of these things with a good two-way,
asynchronous RPC system (like Twisted's perspective broker), but it
can be pretty painful.
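As a concrete illustration of the first pattern, here is a minimal pyzmq sketch of publish/subscribe with topic-based filtering. The endpoint and topic names are invented for the example, and the retry loop works around 0MQ's "slow joiner" behaviour, where a PUB socket silently drops messages sent before the subscription has propagated:

```python
import zmq

ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://pubsub-demo")          # endpoint name is made up
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://pubsub-demo")
sub.setsockopt(zmq.SUBSCRIBE, b"stdout")  # topic filter: only "stdout..." messages

# Retry until the subscription is in place and a message gets through.
for _ in range(100):
    pub.send(b"stderr boom")    # dropped: does not match the topic filter
    pub.send(b"stdout hello")   # matches the "stdout" topic prefix
    if sub.poll(timeout=50):
        break

msg = sub.recv()
assert msg == b"stdout hello"   # the "stderr" message never arrives
```

Topic matching is a simple prefix test on the message body, which is what makes pattern one ("topic based filtering") nearly free at the transport level.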

>> > The reason is that IPython already has a lot of useful and exciting
>> > functionality and yet another RPC framework is somewhat too much. Plus,
>> > you don't have to think about these too low level details like
>> > communication
>> > protocols, serialization etc.
>>
>> 0MQ is definitely not another RPC framework.  If you know that RPyC
>> addresses some or all of these issues I have brought up above, I would
>> seriously love to know.  One of these days, I will probably try to do
>> some benchmarks that compare twisted, multiprocessing, RPyC and 0MQ
>> for things like latency and throughput.  That would be quite
>> interesting.
>>
> Yes, 0MQ is not an RPC framework - it is just a low level protocol (albeit
> probably a good one) that you will use to build your own RPC/RMI/messaging
> system. Frankly I do not see 0MQ being immune to all the issues you've
> brought up above unless you drop python and code everything in C/C++. In my
> experience latencies and performance bottlenecks usually came from the
> code that serves messages (i.e. the server part), not the transport layer,
> unless you develop some high load server with thousands of messages per
> second, which is not the case for IPython I believe.

Yes, you are right.  There are two places we have had performance problems:

* Network protocol and message queueing: low latency, large messages,
basic messaging patterns.  0MQ solves these issues.
* Application logic.  Our "servers" and "clients" will still need to
implement non-trivial logic and that may still be a bottleneck for us.

> Please do not think that I'm trying to bash the pyzmq idea, not at all! I
> think it is a great idea for IPython and it will be real fun to implement.
> I'm just trying to understand what is so different about IPython that no
> other RPC/RMI/messaging framework can fit. RPyC alongside Pyro was just
> the first one that came to mind when I read Fernando's post, but there are
> a lot of them, see for example python's wiki for a list:
> http://wiki.python.org/moin/ParallelProcessing.
> I personally have successfully used another toolkit not mentioned on the
> above page - http://www.spread.org - it is a group communication toolkit
> that provides guaranteed message delivery and so-called virtual synchrony.

Yes, I have looked at spread before, but probably should spend more
time with it.  It is similar to 0MQ, but has a different flavor.  But
still, quite impressive.  Do you know how the python bindings to
spread handle the GIL stuff?

> I think that when the first excitement ends and you will start to develop
> this new
> interface, you will end up implementing all this functionality that other
> RPC
> frameworks have or the most of it, so it would be useful to at least check
> them
> before implementation.

I am sure you are right at some level that we will end up implementing
aspects that other frameworks have.

>> Another important part of 0MQ is that it runs over protocols other
>> than tcp and interconnects like infiniband.  The performance on
>> infiniband is quite impressive.
>>
> Cool! Any Idea how to utilize it in python/IPython?

IPython has a parallel computing infrastructure that runs on
cluster/supercomputers.  We would *love* to be able to use infiniband
for messaging in that context - currently we use twisted over tcp.

Cheers and thanks!

Brian

> Regards,
> --
> Mikhail Terekhov
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From Chris.Barker at noaa.gov  Thu Mar 25 19:59:10 2010
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Thu, 25 Mar 2010 16:59:10 -0700
Subject: [IPython-dev] What is the status of iPython+wx?
In-Reply-To: <fa8579a41003241010q52bb2241o93019fba6718cc6e@mail.gmail.com>
References: <4BA8F0B9.8080600@noaa.gov>
	<fa8579a41003241010q52bb2241o93019fba6718cc6e@mail.gmail.com>
Message-ID: <4BABF8CE.6020203@noaa.gov>

Brian Granger wrote:
> We have not fixed the issue.  But, it is only an issue with
> matplotlib/traits.  If you are developing your own wxpython code you
> definitely should use dev trunk and look at the %gui magic.  We also
> have some details about how it works in the nightly docs.  The dev
> version is *much* more stable than 0.10 for this type of thing.
> 
> Let us know how it goes.

not so well -- I can start up a wx app and have a nice interactive 
command line, but there doesn't appear to be any way to re-run it.

If I close the frame, then call run gui-wx.py -- it is unstable, 
freezing up on me fairly quickly.

If I don't close the frame, it opens up a second frame (have you hooked
in to re-run wx.App.OnInit?), but then it's also unstable.

I haven't looked yet at the ipython code to see what you are doing in 
appstart_wx. I'll try to do that soon.

Also, I found wx.App.SetExitOnFrameDelete(False), which should keep the
app running even when all the windows have closed. That may end up
being helpful.

Maybe appstart_wx could close all the top level windows if there is an 
app already running.


NOTE:

Python 2.6.5

IPython 0.11.alpha1.bzr.r1223

OS-X 10.5 PPC

-Chris





-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov


From ondrej at certik.cz  Fri Mar 26 19:07:02 2010
From: ondrej at certik.cz (Ondrej Certik)
Date: Fri, 26 Mar 2010 16:07:02 -0700
Subject: [IPython-dev] make ipython work over web
Message-ID: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>

Hi,

it just occurred to me that it'd be a cool idea to be able to use
ipython as the Sage/FEMhub notebook client. In particular, you would use
it just like regular ipython in the terminal (only at the beginning you
would log in), and it would interface with the Sage/FEMhub server over
some API (I am playing with a json-rpc api at [0]). I guess it would
always create a new worksheet and only allow adding new cells at the
bottom (which is the way ipython works).

So it will be a nice thin client. I don't know how this fits in the
recent ipython refactoring. Essentially I am trying to figure out some
nice API for evaluating cells, doctests ("?"), code inspection ("??"),
code completion ("TAB"), and it takes some time to always implement
this in the web notebook directly, so I want to play with this in a
simple terminal client.

Essentially almost all ipython features could work remotely over some
API. And the web notebook would then use the exact same interface, so
it should be easy for people to write the web notebooks.

I guess some of you must have thought about this, but I am just
posting it here, as I like this idea (so far).

Ondrej


[0] http://groups.google.com/group/sympy/browse_thread/thread/849dd3e9811f5d62


From fperez.net at gmail.com  Fri Mar 26 22:15:37 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 26 Mar 2010 19:15:37 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
Message-ID: <o2sdb6b5ecc1003261915hf104f7bai603f5ec9f95d06bf@mail.gmail.com>

Hey,

On Fri, Mar 26, 2010 at 4:07 PM, Ondrej Certik <ondrej at certik.cz> wrote:
> Hi,
>
> it just occurred to me that it'd be a cool idea to be able to use
> ipython as the Sage/FEMhub notebook, in particular, you would use it
> just like regular ipython in the terminal (only at the beginning you
> would log in) and it would interface the Sage/FEMhub server over some
> API (I am playing with json-rpc api at [0]) and I guess it would
> always create a new worksheet and only allow to add new cells at the
> bottom (which is the way ipython works).
>
> So it will be a nice thin client. I don't know how this fits in the
> recent ipython refactoring. Essentially I am trying to figure out some
> nice API for evaluating cells, doctests ("?"), code inspection ("??"),
> code completion ("TAB"), and it takes some time to always implement
> this in the web notebook directly, so I want to play with this in a
> simple terminal client.
>
> Essentially almost all ipython features could work remotely over some
> API. And the web notebook would then use the exact same interface, so
> it should be easy for people to write the web notebooks.
>
> I guess some of you must have thought about this, but I am just
> posting it here, as I like this idea (so far).
>
> Ondrej

Sure!  The recent code Brian and I put up:

http://github.com/ellisonbg/pyzmq/tree/completer

already has even tab-completion implemented, it uses json for the
messaging, so it's precisely that idea.  Just go to

http://github.com/ellisonbg/pyzmq/tree/completer/examples/kernel/

run 'kernel' in one window, and as many 'frontend.py' as you want.
They all tab-complete, send input and get output from the same kernel.

We're making sure we build the whole thing with multi-client support
from the get-go, so we don't get bitten later by issues we hadn't
thought of.

We deliberately made this tiny prototype *outside* of ipython to get
the api right and see the design issues in isolation.  Once it's
finished, we can build a real system out of it (once we get 0.11 out
:)

Cheers,

f


From ellisonbg at gmail.com  Fri Mar 26 23:17:55 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 26 Mar 2010 20:17:55 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
Message-ID: <fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>

Ondrej,

Yes, we definitely want people to be able to use IPython in this
manner.  As Fernando mentioned, earlier this week he and I did a short
2 day sprint to create a first prototype of a 2 process
kernel+frontend model using 0MQ.  It worked even better than we hoped
and we are convinced that this is the future for IPython.  The idea is
that the IPython "kernel" would run in a separate process and listen
on a few 0MQ sockets.  A "frontend" (which could be a web server)
would talk to the kernel using JSON based messages and a thin 0MQ
based API.

> it just occurred to me that it'd be a cool idea to be able to use
> ipython as the Sage/FEMhub notebook, in particular, you would use it
> just like regular ipython in the terminal (only at the beginning you
> would log in) and it would interface the Sage/FEMhub server over some
> API (I am playing with json-rpc api at [0]) and I guess it would
> always create a new worksheet and only allow to add new cells at the
> bottom (which is the way ipython works).
>
> So it will be a nice thin client. I don't know how this fits in the
> recent ipython refactoring. Essentially I am trying to figure out some
> nice API for evaluating cells, doctests ("?"), code inspection ("??"),
> code completion ("TAB"), and it takes some time to always implement
> this in the web notebook directly, so I want to play with this in a
> simple terminal client.

Currently this new stuff is just a prototype.  Two things (still not
small) need to happen:

* We need to make the prototype kernel work for real with IPython.
* We need to solidify the frontend API so that others can start to use it.

But, both Fernando and I are feeling that these things are doable now
whereas before the 0MQ stuff, it felt semi-hopeless.

> Essentially almost all ipython features could work remotely over some
> API. And the web notebook would then use the exact same interface, so
> it should be easy for people to write the web notebooks.

This is our vision.

> I guess some of you must have thought about this, but I am just
> posting it here, as I like this idea (so far).

I definitely encourage you to have a look at the demo Fernando linked
to.  It does some very non-trivial things that will be important for a
web based interface.  The most important thing is how it handles
stdout/stderr/displayhook asynchronously.

In the demo try:

import time
for i in range(10):
  time.sleep(1)
  print i
  i**2  # this triggers displayhook

The print and the displayhook will happen async.  And if there are
multiple frontends connected, they will *all* see the results.  I
bring up these things because I saw that the sympy alpha does not
handle printing asynchronously like the Sage notebook.

Cheers,

Brian
-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Fri Mar 26 23:49:06 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 26 Mar 2010 20:49:06 -0700
Subject: [IPython-dev] What is the status of iPython+wx?
In-Reply-To: <4BABF8CE.6020203@noaa.gov>
References: <4BA8F0B9.8080600@noaa.gov>
	<fa8579a41003241010q52bb2241o93019fba6718cc6e@mail.gmail.com>
	<4BABF8CE.6020203@noaa.gov>
Message-ID: <fa8579a41003262049g4a97ed45n4b645137854e9036@mail.gmail.com>

Chris,

A warning: we are not GUI (or wx) experts, so it is likely you know
more about wx than we do...

> not so well -- I can start up a wx app and have a nice interactive
> command line, but there doesn't appear to be any way to re-run it.

I should say more about how the %gui magic works.  If you do:

%gui wx

All that happens is that we do what is needed to start the event
loop.  We do not in this case create a wx Application or do anything
else.  If you do this, it will be your responsibility to create and
manage an Application object.  BUT, don't start the event loop
yourself, it is already running.  Warning: you may have to pass very
specific options to the wx App when you create it.  See our app
creation logic here for details of what you will likely have to do:

inputhook.InputHookManager.enable_wx

If you do:

%gui -a wx

We start the event loop AND create a wx Application passing reasonable
options.  In this case, you should not create an App, but rather just
get the one IPython created using wx.GetApp().

What I don't know is what wx does if the App gets closed down.  We
don't do anything unusual though that would mess with how wx handles
this type of thing.

> If I close the frame, then call run gui-wx.py -- it is unstable,
> freezing up on me fairly quickly.

I am guessing the App gets shut down and that kills any further wx goodness.

> If I don't close the frame, It opens up a second frame (have you hooked
> in to have re-run wx.App.OnInit?), but then it's also unstable.

No, we don't do anything other than create the App and start the event
loop.  Are you using %gui -a or non-(-a)?

> I haven't looked yet at the ipython code to see what you are doing in
> appstart_wx. I'll try to do that soon.

Yes, also look at enable_wx.  We don't do much at all.

> Also, I found wx.App.SetExitOnFrameDelete(False) which should keep the
> app running, even when all the Windows have closed. That may end up
> being helpful.

Yes, definitely.

> Maybe appstart_wx could close all the top level windows if there is an
> app already running.

Have a look at what we are doing - it is basically \epsilon, so for
the most part wx should be doing what you tell it to.  BUT, this is
way different from the older IPython.  There we used to do a lot to
hijack/monkeypatch wx so many things happened automagically.  But
monkeypatching = crashing.

Cheers,

Brian

>
> NOTE:
>
> Python 2.6.5
>
> IPython 0.11.alpha1.bzr.r1223
>
> OS-X 10.5 PPC
>
> -Chris
>
>
>
>
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ondrej at certik.cz  Sat Mar 27 03:10:13 2010
From: ondrej at certik.cz (Ondrej Certik)
Date: Sat, 27 Mar 2010 00:10:13 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
Message-ID: <85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>

Hi Fernando and Brian!

this looks very exciting! Some comments below:

On Fri, Mar 26, 2010 at 8:17 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> Ondrej,
>
> Yes, we definitely want people to be able to use IPython in this
> manner.  As Fernando mentioned, earlier this week he and I did a short
> 2 day sprint to create a first prototype of a 2 process
> kernel+frontend model using 0MQ.  It worked even better than we hoped
> and we are convinced that this is the future for IPython.  The idea is
> that the IPython "kernel" would run in a separate process and listen
> on a few 0MQ sockets.  A "frontend" (which could be a web server)
> would talk to the kernel using JSON based messages and a thin 0MQ
> based API.
>
>> it just occurred to me that it'd be a cool idea to be able to use
>> ipython as the Sage/FEMhub notebook, in particular, you would use it
>> just like regular ipython in the terminal (only at the beginning you
>> would log in) and it would interface the Sage/FEMhub server over some
>> API (I am playing with json-rpc api at [0]) and I guess it would
>> always create a new worksheet and only allow to add new cells at the
>> bottom (which is the way ipython works).
>>
>> So it will be a nice thin client. I don't know how this fits in the
>> recent ipython refactoring. Essentially I am trying to figure out some
>> nice API for evaluating cells, doctests ("?"), code inspection ("??"),
>> code completion ("TAB"), and it takes some time to always implement
>> this in the web notebook directly, so I want to play with this in a
>> simple terminal client.
>
> Currently this new stuff is just a prototype.  Two things (still not
> small) need to happen:
>
> * We need to make the prototype kernel work for real with IPython.
> * We need to solidify the frontend API so that others can start to use it.
>
> But, both Fernando and I are feeling that these things are doable now
> whereas before the 0MQ stuff, it felt semi-hopeless.
>
>> Essentially almost all ipython features could work remotely over some
>> API. And the web notebook would then use the exact same interface, so
>> it should be easy for people to write the web notebooks.
>
> This is our vision.
>
>> I guess some of you must have thought about this, but I am just
>> posting it here, as I like this idea (so far).
>
> I definitely encourage you to have a look at the demo Fernando linked
> to.  It does some very non-trivial things that will be important for a
> web based interface.  The most important thing is how it handles
> stdout/stderr/displayhook asynchronously.
>
> In the demo try:
>
> import time
> for i in range(10):
>   time.sleep(1)
>   print i
>   i**2  # this triggers displayhook
>
> The print and the displayhook will happen async.  And if there are
> multiple frontends connected, they will *all* see the results.  I
> bring up these things because I saw that the sympy alpha does not
> handle printing asynchronously like the Sage notebook.

Indeed it doesn't yet. Let me see how you did that. I would imagine
that instead of using StringIO for stdout, I can use my own subclass
of it that would send stuff to the client on the fly. I have to
study how the sage notebook did that too.

When compiling pyzmq, I had to apply the following patch:

diff --git a/setup.py b/setup.py
index 86283c6..7d9f1fc 100644
--- a/setup.py
+++ b/setup.py
@@ -49,7 +49,9 @@ else:
 zmq = Extension(
     'zmq._zmq',
     sources = [zmq_source],
-    libraries = [libzmq]
+    libraries = [libzmq],
+    include_dirs=["/home/ondrej/usr/include"],
+    library_dirs=["/home/ondrej/usr/lib"],
 )

 #-----------------------------------------------------------------------------


Is there some way to do this easier? I've installed zmq into ~/usr.


In general it looks really awesome, the tab completion works fine. I
am now figuring out an API for handling sessions and logins. How do you
handle those? Sagenotebook uses cookies I think. What is the canonical
way to handle that? The kernel would return some hash (key) that
you can (=have to) use in subsequent RPC method calls to authenticate?
Let me study how cookies work.

I will try to get things working too, and of course I'll be happy to
change the API, so that it's ipython compatible, once you figure it
out and stabilize it.

So in order to use your stuff, I would use json-rpc to communicate
between the browser and the server, and then the server would use
pyzmq to communicate between the server and the ipython kernel?

Ondrej


From ellisonbg at gmail.com  Sat Mar 27 13:04:13 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sat, 27 Mar 2010 10:04:13 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
Message-ID: <fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>

Ondrej,

> Indeed it doesn't yet. Let me see how you did that. I would imagine
> that instead of using StringIO for stdout, I can use my own subclass
> of it, that would send some stuff the client on the fly. I have to
> study how the sage notebook did that too.

Yep that is how we handle it.
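A minimal sketch of that idea: a file-like stdout replacement that hands each write to a callback, which in a real kernel would publish the text over the PUB socket. The class and names here are illustrative, not IPython's actual implementation:

```python
import sys

class ForwardingStream(object):
    """File-like object that forwards every write to a callback."""
    def __init__(self, send):
        self.send = send  # e.g. a function that publishes over a PUB socket

    def write(self, text):
        self.send(text)

    def flush(self):
        pass  # nothing is buffered locally

# Capture prints and "send" them (here: just collect them in a list).
chunks = []
old_stdout = sys.stdout
sys.stdout = ForwardingStream(chunks.append)
try:
    print("hello from the kernel")
finally:
    sys.stdout = old_stdout

output = "".join(chunks)
```

Because every write is forwarded immediately rather than accumulated in a StringIO, the client sees partial output as it is produced, which is what makes the asynchronous printing above work.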

> When compiling pyzmq, I had to apply the following patch:
>
> diff --git a/setup.py b/setup.py
> index 86283c6..7d9f1fc 100644
> --- a/setup.py
> +++ b/setup.py
> @@ -49,7 +49,9 @@ else:
>  zmq = Extension(
>      'zmq._zmq',
>      sources = [zmq_source],
> -    libraries = [libzmq]
> +    libraries = [libzmq],
> +    include_dirs=["/home/ondrej/usr/include"],
> +    library_dirs=["/home/ondrej/usr/lib"],
>  )
>
>  #-----------------------------------------------------------------------------
>
>
> Is there some way to do this easier? I've installed zmq into ~/usr.

We recommend adding those paths to setup.cfg, but it is the same info.
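For reference, the setup.cfg equivalent of that patch is a `[build_ext]` section (the paths shown are Ondrej's prefix from the diff above):

```
[build_ext]
include_dirs = /home/ondrej/usr/include
library_dirs = /home/ondrej/usr/lib
```

distutils picks this up automatically at build time, so no source change to setup.py is needed.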

>
> In general it looks really awesome, the tab completion works fine. I
> am now figuring some API for handling sessions and logins. How do you
> handle those? Sagenotebook uses cookies I think. What is the canonical
> way to handle that? The kernel would return you some hash (key), that
> you can (=have to) use in subsequent RPC method calls to authenticate?
> Let me study how cookies work.

We don't handle it yet, but here is our plan.  When the kernel starts
it will create a security key that looks like this:

tcp://ip:port/324lkj4fss90lkj234l5sdflj4

The last part is the security key.  Clients that want to connect will
have to include the security key in each message.  For user/password
style login and sessions I would implement that at the browser level.
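To make the plan concrete, a client-side request might carry the key in every message. This is a hedged sketch only: the field names are invented for illustration and are not IPython's actual wire format:

```python
import json

KEY = "324lkj4fss90lkj234l5sdflj4"  # the key part of tcp://ip:port/<key>

def make_request(msg_type, content, key=KEY):
    """Wrap a request so every message carries the kernel's security key.

    Field names ("key", "msg_type", "content") are hypothetical.
    """
    return json.dumps({"key": key, "msg_type": msg_type, "content": content})

raw = make_request("execute", {"code": "1+1"})
msg = json.loads(raw)
```

The kernel would then drop any message whose "key" field does not match the one it generated at startup.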

> I will try to get things working too, and of course I'll be happy to
> change the API, so that it's ipython compatible, once you figure it
> out and stabilize it.
>
> So in order to use your stuff, I would use json-rpc to communicate
> between the browser and the server, and then the server would use
> pyzmq to communicate between the server and the ipython kernel?

Exactly.  We are more than willing to change our JSON message format
if it makes sense.
Have a look at how we are structuring our messages.  We thought about
it quite a bit so it could be general and extensible.

Cheers,

Brian


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ondrej at certik.cz  Sat Mar 27 13:27:53 2010
From: ondrej at certik.cz (Ondrej Certik)
Date: Sat, 27 Mar 2010 10:27:53 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>
Message-ID: <85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>

On Sat, Mar 27, 2010 at 10:04 AM, Brian Granger <ellisonbg at gmail.com> wrote:
> Ondrej,
>
>> Indeed it doesn't yet. Let me see how you did that. I would imagine
>> that instead of using StringIO for stdout, I can use my own subclass
>> of it, that would send some stuff the client on the fly. I have to
>> study how the sage notebook did that too.
>
> Yep that is how we handle it.
>
>> When compiling pyzmq, I had to apply the following patch:
>>
>> diff --git a/setup.py b/setup.py
>> index 86283c6..7d9f1fc 100644
>> --- a/setup.py
>> +++ b/setup.py
>> @@ -49,7 +49,9 @@ else:
>>  zmq = Extension(
>>      'zmq._zmq',
>>      sources = [zmq_source],
>> -    libraries = [libzmq]
>> +    libraries = [libzmq],
>> +    include_dirs=["/home/ondrej/usr/include"],
>> +    library_dirs=["/home/ondrej/usr/lib"],
>>  )
>>
>>  #-----------------------------------------------------------------------------
>>
>>
>> Is there some way to do this easier? I've installed zmq into ~/usr.
>
> We recommend adding those paths to setup.cfg, but it is the same info.
>
>>
>> In general it looks really awesome, the tab completion works fine. I
>> am now figuring some API for handling sessions and logins. How do you
>> handle those? Sagenotebook uses cookies I think. What is the canonical
>> way to handle that? The kernel would return you some hash (key), that
>> you can (=have to) use in subsequent RPC method calls to authenticate?
>> Let me study how cookies work.
>
> We don't handle it yet, but here is our plan.  When the kernel starts
> it will create a security key that looks like this:
>
> tcp://ip:port/324lkj4fss90lkj234l5sdflj4
>
> The last part is the security key.  Clients that want to connect will
> have to include the security key in each message.  For user/password
> style login and sessions I would implement that at the browser level.

I would like both the browser and the command line to use the exact
same API, so that I can easily test the server part using unittests.

>
>> I will try to get things working too, and of course I'll be happy to
>> change the API, so that it's ipython compatible, once you figure it
>> out and stabilize it.
>>
>> So in order to use your stuff, I would use json-rpc to communicate
>> between the browser and the server, and then the server would use
>> pyzmq to communicate between the server and the ipython kernel?
>
> Exactly.  We are more than willing to change our JSON message format
> if it makes sense.
> Have a look at how we are structuring our messages.  We thought about
> it quite a bit so it could be general and extensible.

Changing the format is not a big deal for me either. I am using what
appears to be the standard way to handle json-rpc, i.e. invoking
methods over it; it works for me, and I can always change it later if
needed.

Ondrej


From ellisonbg at gmail.com  Sat Mar 27 13:57:06 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sat, 27 Mar 2010 10:57:06 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>
	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>
Message-ID: <fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>

Ondrej,

> I would like both the browser and the command line to use the exact
> same API, so that I can easily test the server part using unittests.

I am not quite sure I understand what you mean by the "same API".  Do
you mean the wire-level JSON messages, the programmatic API
(classes+methods, etc.), or something else?

I don't think that what you want is possible, actually.  The reason is
that a browser/server architecture is pure request/reply (one way
only).  You can fake server->browser requests, but it still requires
quite a bit of magic on both the server and browser sides.

With our 0MQ based kernel, the kernel opens two types of 0MQ sockets:

1. XREP.  This is a request reply socket that multiple clients can use
to interact with the kernel in a traditional request/reply manner.
This is how a client can execute code, do tab completion.

2. PUB.  This is a publication socket.  It is one way outbound from
the server to the client.  You can think of a PUB socket as a radio
transmission.  The client subscribes to topics on the socket (using a
SUB socket) and gets the messages that matches the subscribed topics.
We use the PUB channel for asynchronos
stdout/stderr/displayhook/status changes.

Thus a frontend has to be able to:

* Do traditional request/replies.
* Watch for incoming messages on the PUB/SUB channel.

It has to do both at the same time, and the only sane way of doing
that is with an event loop.  Our current frontend.py does not use an
event loop, and thus has to use subtle and buggy logic to fake it.
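A minimal pyzmq sketch may make the two-channel layout concrete (illustrative only: the message shapes and inproc endpoint names are mine, and I use a plain REP socket in place of the XREP socket the kernel actually opens):

```python
import time
import zmq

ctx = zmq.Context.instance()

# "kernel" side: a reply socket plus a publish socket
rep = ctx.socket(zmq.REP)
rep.bind("inproc://shell")
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://iopub")

# "frontend" side: a request socket plus a subscriber
req = ctx.socket(zmq.REQ)
req.connect("inproc://shell")
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://iopub")
sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to every topic
time.sleep(0.1)                      # let the subscription propagate

# frontend sends an execute request; the kernel replies on the REP
# socket and publishes the side effect (stdout) on the PUB socket
req.send_json({"method": "execute", "code": "print(2+3)"})
request = rep.recv_json()
pub.send_json({"type": "stream", "name": "stdout", "data": "5\n"})
rep.send_json({"type": "execute_reply", "status": "ok"})

# the frontend's event loop watches both channels at once
poller = zmq.Poller()
poller.register(req, zmq.POLLIN)
poller.register(sub, zmq.POLLIN)

received = []
deadline = time.time() + 2.0
while len(received) < 2 and time.time() < deadline:
    for sock, _ in poller.poll(timeout=100):
        received.append(sock.recv_json())

for s in (req, sub, rep, pub):
    s.close(0)
```

The poller is the point: one loop services the reply channel and the broadcast channel together, which is exactly what the frontend event loop has to do.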

Hope this helps more.  But it would help if you could explain what you
mean by the "same API".

>>
>>> I will try to get things working too, and of course I'll be happy to
>>> change the API, so that it's ipython compatible, once you figure it
>>> out and stabilize it.
>>>
>>> So in order to use your stuff, I would use json-rpc to communicate
>>> between the browser and the server, and then the server would use
>>> pyzmq to communicate between the server and the ipython kernel?
>>
>> Exactly.  We are more than willing to change our JSON message format
>> if it makes sense.
>> Have a look at how we are structuring our messages.  We thought about
>> it quite a bit so it could be general and extensible.
>
> That is not a big deal for me either to change the format. I am using,
> what appears to be the standard way to handle json-rpc, e.g. invoke
> methods over it, it works for me, and I can always change it later if
> needed.

I will look more at the JSON RPC format, but I doubt it will work
entirely for us.  This is because the messages sent on the PUB/SUB
sockets are not of the request/reply format.

Cheers,

Brian


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ondrej at certik.cz  Sat Mar 27 14:27:34 2010
From: ondrej at certik.cz (Ondrej Certik)
Date: Sat, 27 Mar 2010 11:27:34 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>
	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>
	<fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>
Message-ID: <85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>

On Sat, Mar 27, 2010 at 10:57 AM, Brian Granger <ellisonbg at gmail.com> wrote:
> Ondrej,
>
>> I would like both the browser and the command line to use the exact
>> same API, so that I can easily test the server part using unittests.
>
> I am not quite sure I understand what you mean by the "same API".  Do you mean
> the wire level JSON messages?  The programatic API (classes+methods,
> etc), or something else.
>
> I don't that what you want is possible actually though.  The reason is
> that a browser/server architecture is pure request/reply (1 way only).
>  You can fake server->browser requests but it still required quite a
> bit of magic on server and browser sides.
>
> With our 0MQ based kernel, the kernel opens two types of 0MQ sockets:
>
> 1. XREP.  This is a request reply socket that multiple clients can use
> to interact with the kernel in a traditional request/reply manner.
> This is how a client can execute code, do tab completion.
>
> 2. PUB.  This is a publication socket.  It is one way outbound from
> the server to the client.  You can think of a PUB socket as a radio
> transmission.  The client subscribes to topics on the socket (using a
> SUB socket) and gets the messages that matches the subscribed topics.
> We use the PUB channel for asynchronos
> stdout/stderr/displayhook/status changes.
>
> Thus a frontend has to be able to:
>
> * Do traditional request/replies.
> * Watch for incoming messages on the PUB/SUB channel.
>
> It has to do this at the same time and the only sane was of doing that
> is with an event loop.  Our current frontend.py does not use an event
> loop, and thus has to use subtle and buggy logic to fake it.
>
> Hope this helps more.  But it would help if you could explain what you
> mean by the "same API".

Ok. Here is my API (so far I have no sessions there):

In [1]: import jsonrpclib

In [2]: s = jsonrpclib.SimpleServerProxy("http://2.latest.sympy-gamma.appspot.com/test-service/")

In [3]: s.eval_cell("2+3")
Out[3]: '5'

In [4]: s.eval_cell("""\
   ...: from sympy import sin, integrate, var
   ...: var("x")
   ...: integrate(sin(x), x)
   ...: """)
Out[4]: '-cos(x)'

In [5]: s.eval_cell("""\
   ...: from sympy import sin, integrate, var
   ...: var("x")
   ...: a = integrate(sin(x), x)
   ...: """)
Out[5]: ''

In [6]: s.eval_cell("a.diff(x)")
Out[6]: 'sin(x)'



and this works from a terminal. The web browser that runs javascript
uses the *exact* same json-rpc messages as the terminal version.
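For reference, the wire format behind a call like s.eval_cell("2+3") is roughly the following (a stdlib-json sketch of a JSON-RPC 2.0 exchange; the exact envelope jsonrpclib emits may differ in detail):

```python
import json

def make_request(method, params, req_id):
    # A JSON-RPC 2.0 request; jsonrpclib's envelope may differ slightly.
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "params": params, "id": req_id})

request = make_request("eval_cell", ["2+3"], 1)

# The server answers with a response carrying the same id:
response = json.dumps({"jsonrpc": "2.0", "result": "5", "id": 1})
```

The same two payloads go over the wire whether the caller is a Python terminal or javascript in the browser, which is what makes the two clients interchangeable for testing.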

So if I understand you correctly, you want a richer API, but this
cannot be handled by the browser, right?

My main goal is to figure out a nice API that the browser can use to
communicate with the server. For testing purposes, I'll also implement
it in a terminal. The end goal is to have a notebook in the browser.

Ondrej


From ellisonbg at gmail.com  Sat Mar 27 14:42:52 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sat, 27 Mar 2010 11:42:52 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>
	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>
	<fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>
	<85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>
Message-ID: <fa8579a41003271142o46c976edyd78d5fa30e9d1a1c@mail.gmail.com>

Ondrej,

> Ok. Here is my API (so far I have no sessions there):
>
> In [1]: import jsonrpclib
>
> In [2]: s = jsonrpclib.SimpleServerProxy("http://2.latest.sympy-gamma.appspot.com/test-service/")
>
> In [3]: s.eval_cell("2+3")
> Out[3]: '5'
>
> In [4]: s.eval_cell("""\
>    ...: from sympy import sin, integrate, var
>    ...: var("x")
>    ...: integrate(sin(x), x)
>    ...: """)
> Out[4]: '-cos(x)'
>
> In [5]: s.eval_cell("""\
>    ...: from sympy import sin, integrate, var
>    ...: var("x")
>    ...: a = integrate(sin(x), x)
>    ...: """)
> Out[5]: ''
>
> In [6]: s.eval_cell("a.diff(x)")
> Out[6]: 'sin(x)'

OK, if this is the only API you want, it is possible.  BUT, a few points:

* It is completely blocking and synchronous.  We can create such an
API using 0MQ, but it obviously has limitations.
* It handles stdout in a completely synchronous manner.  I think we
can do this too (again limitations apply).

You are going to have to work *very* hard using json-rpc alone to get
asynchronous stdout/stderr/displayhook.  Here is the design that will
do entirely what you want:

client<--jsonrpc-->bridge<--0MQ-->kernel

The bridge would be a fully asynchronous 0MQ client and would receive
the asynchronous 0MQ stdout/stderr/displayhook messages and simply put
them in queues.  The bridge would also be a json-rpc server, with
methods like:

eval_cell  # submit but don't block; get back an object that lets you
see whether the cell is done evaluating.
complete  # again, submit but don't block.  Similar logic.
get_stdout  # get all the stdout that has been written thus far.
get_stderr  # get all the stderr that has been written thus far.

You basically want to put a blocking API on top of the asynchronous 0MQ API.

This type of thing shouldn't be too difficult to write using 0MQ, and
we can help out.  If you are serious about this, let me know and
Fernando and I can come up with a plan.
We could probably develop this in the pyzmq tree for now.
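A rough sketch of what such a bridge could look like (all names hypothetical; a real bridge would send eval_cell's code over the kernel's request socket and feed _on_pub_message from the 0MQ SUB channel):

```python
import itertools
import queue

class KernelBridge:
    """Blocking facade over an asynchronous kernel connection.

    Illustrative sketch only: it drains the asynchronous side channels
    into queues and exposes poll-style methods on top, which a json-rpc
    server could then publish to the browser.
    """
    def __init__(self):
        self._stdout = queue.Queue()
        self._stderr = queue.Queue()
        self._pending = {}
        self._ids = itertools.count(1)

    def _on_pub_message(self, msg):
        # Called for every message arriving on the PUB/SUB channel.
        if msg["type"] == "stream":
            target = self._stdout if msg["name"] == "stdout" else self._stderr
            target.put(msg["data"])
        elif msg["type"] == "execute_reply":
            self._pending[msg["id"]] = True   # mark that cell as done

    def eval_cell(self, code):
        # Submit but don't block: hand back a ticket the client can poll.
        ticket = next(self._ids)
        self._pending[ticket] = False
        # ...here the code would go to the kernel, tagged with `ticket`...
        return ticket

    def is_done(self, ticket):
        return self._pending[ticket]

    def get_stdout(self):
        # Return all the stdout written thus far.
        chunks = []
        while not self._stdout.empty():
            chunks.append(self._stdout.get())
        return "".join(chunks)

bridge = KernelBridge()
t = bridge.eval_cell("print(2+3)")
# Simulate the kernel's asynchronous PUB messages arriving later:
bridge._on_pub_message({"type": "stream", "name": "stdout", "data": "5\n"})
bridge._on_pub_message({"type": "execute_reply", "id": t})
```

The queues are what turn the push-style 0MQ traffic into the pull-style get_stdout/get_stderr calls a request/reply client can make.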

Cheers,

Brian

>
>
> and this works from a terminal. The web browser that runs javascript
> uses the *exact* same json-rpc messages as the terminal version.
>
> So if I understand you correctly, you want a more rich API, but this
> cannot be handled by the browser, right?
>
> My main goal is to figure out a nice API, that the browser can use to
> communicate with the server. For testing purposes, I'll also implement
> this in a terminal too. My main goal is to have a notebook in the
> browser.
>
> Ondrej
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ondrej at certik.cz  Sat Mar 27 15:20:03 2010
From: ondrej at certik.cz (Ondrej Certik)
Date: Sat, 27 Mar 2010 12:20:03 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <fa8579a41003271142o46c976edyd78d5fa30e9d1a1c@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>
	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>
	<fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>
	<85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>
	<fa8579a41003271142o46c976edyd78d5fa30e9d1a1c@mail.gmail.com>
Message-ID: <85b5c3131003271220p777605eai45944a9c325da869@mail.gmail.com>

On Sat, Mar 27, 2010 at 11:42 AM, Brian Granger <ellisonbg at gmail.com> wrote:
> Ondrej,
>
>> Ok. Here is my API (so far I have no sessions there):
>>
>> In [1]: import jsonrpclib
>>
>> In [2]: s = jsonrpclib.SimpleServerProxy("http://2.latest.sympy-gamma.appspot.com/test-service/")
>>
>> In [3]: s.eval_cell("2+3")
>> Out[3]: '5'
>>
>> In [4]: s.eval_cell("""\
>>    ...: from sympy import sin, integrate, var
>>    ...: var("x")
>>    ...: integrate(sin(x), x)
>>    ...: """)
>> Out[4]: '-cos(x)'
>>
>> In [5]: s.eval_cell("""\
>>    ...: from sympy import sin, integrate, var
>>    ...: var("x")
>>    ...: a = integrate(sin(x), x)
>>    ...: """)
>> Out[5]: ''
>>
>> In [6]: s.eval_cell("a.diff(x)")
>> Out[6]: 'sin(x)'
>
> OK, if this is the only API you want, it is possible.  BUT, a few points:
>
> * It is completely blocking and synchronous.  We can create such an
> API using 0MQ, but it obviously has limitations.
> * It handles stdout in a completely synchronous manner.  I think we
> can do this too (again limitations apply).
>
> You are going to have to work *very* hard using json-rpc alone to get
> asynchronous stdout/stderr/displayhook.  Here is the design that you
> want that will entirely do what you want:
>
> client<--jsonrpc-->bridge<--0MQ-->kernel
>
> The bridge would be a fully asynchronous 0MQ client and would receive
> the asynchronous 0MQ stdout/stderr/displayhook and simply put them in
> queues.  The bridge would also be a json-rpc server.  with methods
> like:
>
> eval_cell  # submit but don't block. get back an object that would
> allow you to see if the cell was done evaluating.
> complete # again submit but don't return.  Similar logic.
> get_stdout  # get all the stdout that has been written thus far
> get_stderr  # get all stderr that has been written thus far.
>
> You basically want to put a blocking API on top of the asynchronous 0MQ API.
>
> This type of thing should be be too difficult to write using 0MQ and
> we can help out.  If you are serious about this, let me know and
> Fernando and I can come up with a plan.
> We could probably develop this in the pyzmq tree for now.

My primary concern is the notebook. What is your idea for implementing
the asynchronous output update? Let me look at how Sage does it.

As to json-rpc, it is not blocking; that's just how I like to use it
in ipython. Under the hood, it works exactly as you said, i.e. you
get some id back immediately and then your method gets called (in
pyjamas) when the result is back. I don't know how this is done
internally. I think it's just how AJAX works (the browser calls your
javascript method when the result is back).
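That callback pattern can be mimicked in plain Python (a toy sketch with a thread standing in for the browser's AJAX machinery; all names are illustrative):

```python
import threading
import time

def eval_cell_async(code, on_result):
    """Return a request id immediately; call on_result(value) later."""
    req_id = 1  # would come from the json-rpc layer
    def worker():
        time.sleep(0.01)               # stand-in for the network round trip
        on_result(str(eval(code)))     # "server" computes; callback fires
    threading.Thread(target=worker).start()
    return req_id

results = []
rid = eval_cell_async("2 + 3", results.append)
# rid is available at once; results fills in asynchronously later
```

The caller never blocks on the evaluation; it only learns the answer when the callback runs, which is the shape the browser imposes.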

So I need to do more studying myself now. I know what I want (the
notebook working fine, and a nice API); I just don't know how to do it yet.

Ondrej


From ellisonbg at gmail.com  Sun Mar 28 23:41:06 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 28 Mar 2010 20:41:06 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <85b5c3131003271220p777605eai45944a9c325da869@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>
	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>
	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>
	<fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>
	<85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>
	<fa8579a41003271142o46c976edyd78d5fa30e9d1a1c@mail.gmail.com>
	<85b5c3131003271220p777605eai45944a9c325da869@mail.gmail.com>
Message-ID: <fa8579a41003282041w6d476e50m7d6bb59a71c34b5c@mail.gmail.com>

Ondrej,

On Sat, Mar 27, 2010 at 12:20 PM, Ondrej Certik <ondrej at certik.cz> wrote:
> On Sat, Mar 27, 2010 at 11:42 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>> Ondrej,
>>
>>> Ok. Here is my API (so far I have no sessions there):
>>>
>>> In [1]: import jsonrpclib
>>>
>>> In [2]: s = jsonrpclib.SimpleServerProxy("http://2.latest.sympy-gamma.appspot.com/test-service/")
>>>
>>> In [3]: s.eval_cell("2+3")
>>> Out[3]: '5'
>>>
>>> In [4]: s.eval_cell("""\
>>>    ...: from sympy import sin, integrate, var
>>>    ...: var("x")
>>>    ...: integrate(sin(x), x)
>>>    ...: """)
>>> Out[4]: '-cos(x)'
>>>
>>> In [5]: s.eval_cell("""\
>>>    ...: from sympy import sin, integrate, var
>>>    ...: var("x")
>>>    ...: a = integrate(sin(x), x)
>>>    ...: """)
>>> Out[5]: ''
>>>
>>> In [6]: s.eval_cell("a.diff(x)")
>>> Out[6]: 'sin(x)'
>>
>> OK, if this is the only API you want, it is possible.  BUT, a few points:
>>
>> * It is completely blocking and synchronous.  We can create such an
>> API using 0MQ, but it obviously has limitations.
>> * It handles stdout in a completely synchronous manner.  I think we
>> can do this too (again limitations apply).
>>
>> You are going to have to work *very* hard using json-rpc alone to get
>> asynchronous stdout/stderr/displayhook.  Here is the design that you
>> want that will entirely do what you want:
>>
>> client<--jsonrpc-->bridge<--0MQ-->kernel
>>
>> The bridge would be a fully asynchronous 0MQ client and would receive
>> the asynchronous 0MQ stdout/stderr/displayhook and simply put them in
>> queues.  The bridge would also be a json-rpc server.  with methods
>> like:
>>
>> eval_cell  # submit but don't block. get back an object that would
>> allow you to see if the cell was done evaluating.
>> complete # again submit but don't return.  Similar logic.
>> get_stdout  # get all the stdout that has been written thus far
>> get_stderr  # get all stderr that has been written thus far.
>>
>> You basically want to put a blocking API on top of the asynchronous 0MQ API.
>>
>> This type of thing should be be too difficult to write using 0MQ and
>> we can help out.  If you are serious about this, let me know and
>> Fernando and I can come up with a plan.
>> We could probably develop this in the pyzmq tree for now.
>
> My primary concern is the notebook. What is your idea to implement the
> asynchronous output update? Let me look at how Sage does it.

I would also look at how we are handling it in the 0MQ prototype.  The
challenge is translating that to a browser.

> As to json-rpc, it is not blocking, that's just how I like to use it
> in ipython. But below the hood, it works exactly as you said, e.g. you
> get some id back immediately and then your method gets called (in
> pyjamas) when the result is back. I don't know how this is done
> internally. I think it's just how AJAX works (the browser calls your
> javascript method when the result is back).

I need to look at jsonrpclib to better see how it works.  I also see
that the author has something that works with tornado.

> So I need to do more studying myself now, I know what I want (notebook
> working fine, and a nice API), I don't know how to do it.

Cheers,

Brian


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From robert.kern at gmail.com  Mon Mar 29 10:47:06 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 29 Mar 2010 09:47:06 -0500
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <fa8579a41003282041w6d476e50m7d6bb59a71c34b5c@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>	<fa8579a41003262017o3e005098p572958bf4da60f3e@mail.gmail.com>	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>	<fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>	<85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>	<fa8579a41003271142o46c976edyd78d5fa30e9d1a1c@mail.gmail.com>	<85b5c3131003271220p777605eai45944a9c325da869@mail.gmail.com>
	<fa8579a41003282041w6d476e50m7d6bb59a71c34b5c@mail.gmail.com>
Message-ID: <hoqeha$6ma$1@dough.gmane.org>

On 2010-03-28 22:41 PM, Brian Granger wrote:
> Ondrej,
>
> On Sat, Mar 27, 2010 at 12:20 PM, Ondrej Certik<ondrej at certik.cz>  wrote:
>> On Sat, Mar 27, 2010 at 11:42 AM, Brian Granger<ellisonbg at gmail.com>  wrote:
>>> Ondrej,
>>>
>>>> Ok. Here is my API (so far I have no sessions there):
>>>>
>>>> In [1]: import jsonrpclib
>>>>
>>>> In [2]: s = jsonrpclib.SimpleServerProxy("http://2.latest.sympy-gamma.appspot.com/test-service/")
>>>>
>>>> In [3]: s.eval_cell("2+3")
>>>> Out[3]: '5'
>>>>
>>>> In [4]: s.eval_cell("""\
>>>>    ...: from sympy import sin, integrate, var
>>>>    ...: var("x")
>>>>    ...: integrate(sin(x), x)
>>>>    ...: """)
>>>> Out[4]: '-cos(x)'
>>>>
>>>> In [5]: s.eval_cell("""\
>>>>    ...: from sympy import sin, integrate, var
>>>>    ...: var("x")
>>>>    ...: a = integrate(sin(x), x)
>>>>    ...: """)
>>>> Out[5]: ''
>>>>
>>>> In [6]: s.eval_cell("a.diff(x)")
>>>> Out[6]: 'sin(x)'
>>>
>>> OK, if this is the only API you want, it is possible.  BUT, a few points:
>>>
>>> * It is completely blocking and synchronous.  We can create such an
>>> API using 0MQ, but it obviously has limitations.
>>> * It handles stdout in a completely synchronous manner.  I think we
>>> can do this too (again limitations apply).
>>>
>>> You are going to have to work *very* hard using json-rpc alone to get
>>> asynchronous stdout/stderr/displayhook.  Here is the design that you
>>> want that will entirely do what you want:
>>>
>>> client<--jsonrpc-->bridge<--0MQ-->kernel
>>>
>>> The bridge would be a fully asynchronous 0MQ client and would receive
>>> the asynchronous 0MQ stdout/stderr/displayhook and simply put them in
>>> queues.  The bridge would also be a json-rpc server.  with methods
>>> like:
>>>
>>> eval_cell  # submit but don't block. get back an object that would
>>> allow you to see if the cell was done evaluating.
>>> complete # again submit but don't return.  Similar logic.
>>> get_stdout  # get all the stdout that has been written thus far
>>> get_stderr  # get all stderr that has been written thus far.
>>>
>>> You basically want to put a blocking API on top of the asynchronous 0MQ API.
>>>
>>> This type of thing should be be too difficult to write using 0MQ and
>>> we can help out.  If you are serious about this, let me know and
>>> Fernando and I can come up with a plan.
>>> We could probably develop this in the pyzmq tree for now.
>>
>> My primary concern is the notebook. What is your idea to implement the
>> asynchronous output update? Let me look at how Sage does it.
>
> I would also look at how we are handling it in the 0MQ prototype.  Th
> challenge is translating that to a browser.
>
>> As to json-rpc, it is not blocking, that's just how I like to use it
>> in ipython. But below the hood, it works exactly as you said, e.g. you
>> get some id back immediately and then your method gets called (in
>> pyjamas) when the result is back. I don't know how this is done
>> internally. I think it's just how AJAX works (the browser calls your
>> javascript method when the result is back).
>
> I need to look at jsonrpclib so better see how it works.  I also see
> that the author has something that works with tornado.

You may be interested in using the new HTML5 WebSocket API. There is a 
compatibility library for older browsers and Python integration:

   http://orbited.org/

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From Chris.Barker at noaa.gov  Mon Mar 29 12:38:38 2010
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Mon, 29 Mar 2010 09:38:38 -0700
Subject: [IPython-dev] What is the status of iPython+wx?
In-Reply-To: <fa8579a41003262049g4a97ed45n4b645137854e9036@mail.gmail.com>
References: <4BA8F0B9.8080600@noaa.gov>
	<fa8579a41003241010q52bb2241o93019fba6718cc6e@mail.gmail.com>
	<4BABF8CE.6020203@noaa.gov>
	<fa8579a41003262049g4a97ed45n4b645137854e9036@mail.gmail.com>
Message-ID: <4BB0D78E.1060600@noaa.gov>

Brian Granger wrote:
> A warning: we are not GUI (or wx) experts, so it is likely you know
> more about wx than we do...

OK -- if I get stuck, hopefully I can get Robin Dunn interested...

> I should say more about how the %gui magic works.  If you do:
> 
> %gui wx
> 
> All that happens is that we do what is needed to start the event
> loop.  We do not in this case create a wx Application or do anything
> else.

I'm really confused as to how you can start an event loop without an App 
-- but I guess I'll dig into the code to figure that out.

> If you do this, it will be your responsibility to create and
> manage an Application object.  BUT, don't start the event loop
> yourself, it is already running.
 > A warning: you may have to pass very
> specific options to the wx App when you create it.  See our app
> creation logic here for details of what you will likely have to do:
> 
> inputhook.InputHookManager.enable_wx
> 
> If you do:
> 
> %gui -a wx
> 
> We start the event loop AND create a wx Application passing reasonable
> options.  In this case, you should not create an App, but rather just
> get the one IPython created using wx.GetApp().

Got it -- I had only seen the "-a" option in the docs, so that is what I 
was messing with.


> What I don't know is what wx does if the App gets closed down.  We
> don't do anything unusual though that would mess with how wx handles
> this type of thing.
> 
>> If I close the frame, then call run gui-wx.py -- it is unstable,
>> freezing up on me fairly quickly.
> 
> I am guessing the App gets shutdown and that kills any further wx goodness.

yup -- wx doesn't support stopping and restarting an App.


>> I haven't looked yet at the ipython code to see what you are doing in
>> appstart_wx. I'll try to do that soon.
> 
> Yes, also look at enable_wx.  We don't do much at all.

will do.

> Have a look at what we are doing - it is basically \epsilon, so for
> the most part wx should be doing what you tell it to.  BUT, this is
> way different from the older IPython.  There we used to do a lot to
> hijack/monkeypatch wx so many thing happened automagically.  but
> monkeypatching = crashing.

yes, it can mean that.

I think I'm envisioning having an "IpythonWxApp" that would act like a 
normal wx app when run on its own, and do special stuff when run under 
IPython -- ideally it would live with wx, but that's not too big a deal. 
Hopefully I'll get a bit of time to try to write such a beast.

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov


From ellisonbg at gmail.com  Mon Mar 29 13:15:59 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 29 Mar 2010 10:15:59 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <hoqeha$6ma$1@dough.gmane.org>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>
	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>
	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>
	<fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>
	<85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>
	<fa8579a41003271142o46c976edyd78d5fa30e9d1a1c@mail.gmail.com>
	<85b5c3131003271220p777605eai45944a9c325da869@mail.gmail.com>
	<fa8579a41003282041w6d476e50m7d6bb59a71c34b5c@mail.gmail.com>
	<hoqeha$6ma$1@dough.gmane.org>
Message-ID: <fa8579a41003291015x535f22e6xced1bcb115774ca2@mail.gmail.com>

Robert,

On Mon, Mar 29, 2010 at 7:47 AM, Robert Kern <robert.kern at gmail.com> wrote:
> On 2010-03-28 22:41 PM, Brian Granger wrote:
>> Ondrej,
>>
>> On Sat, Mar 27, 2010 at 12:20 PM, Ondrej Certik<ondrej at certik.cz>  wrote:
>>> On Sat, Mar 27, 2010 at 11:42 AM, Brian Granger<ellisonbg at gmail.com>  wrote:
>>>> Ondrej,
>>>>
>>>>> Ok. Here is my API (so far I have no sessions there):
>>>>>
>>>>> In [1]: import jsonrpclib
>>>>>
>>>>> In [2]: s = jsonrpclib.SimpleServerProxy("http://2.latest.sympy-gamma.appspot.com/test-service/")
>>>>>
>>>>> In [3]: s.eval_cell("2+3")
>>>>> Out[3]: '5'
>>>>>
>>>>> In [4]: s.eval_cell("""\
>>>>>    ...: from sympy import sin, integrate, var
>>>>>    ...: var("x")
>>>>>    ...: integrate(sin(x), x)
>>>>>    ...: """)
>>>>> Out[4]: '-cos(x)'
>>>>>
>>>>> In [5]: s.eval_cell("""\
>>>>>    ...: from sympy import sin, integrate, var
>>>>>    ...: var("x")
>>>>>    ...: a = integrate(sin(x), x)
>>>>>    ...: """)
>>>>> Out[5]: ''
>>>>>
>>>>> In [6]: s.eval_cell("a.diff(x)")
>>>>> Out[6]: 'sin(x)'
>>>>
>>>> OK, if this is the only API you want, it is possible.  BUT, a few points:
>>>>
>>>> * It is completely blocking and synchronous.  We can create such an
>>>> API using 0MQ, but it obviously has limitations.
>>>> * It handles stdout in a completely synchronous manner.  I think we
>>>> can do this too (again limitations apply).
>>>>
>>>> You are going to have to work *very* hard using json-rpc alone to get
>>>> asynchronous stdout/stderr/displayhook.  Here is the design that you
>>>> want that will entirely do what you want:
>>>>
>>>> client<--jsonrpc-->bridge<--0MQ-->kernel
>>>>
>>>> The bridge would be a fully asynchronous 0MQ client and would receive
>>>> the asynchronous 0MQ stdout/stderr/displayhook and simply put them in
>>>> queues.  The bridge would also be a json-rpc server.  with methods
>>>> like:
>>>>
>>>> eval_cell  # submit but don't block. get back an object that would
>>>> allow you to see if the cell was done evaluating.
>>>> complete # again submit but don't return.  Similar logic.
>>>> get_stdout  # get all the stdout that has been written thus far
>>>> get_stderr  # get all stderr that has been written thus far.
>>>>
>>>> You basically want to put a blocking API on top of the asynchronous 0MQ API.
>>>>
>>>> This type of thing should be be too difficult to write using 0MQ and
>>>> we can help out.  If you are serious about this, let me know and
>>>> Fernando and I can come up with a plan.
>>>> We could probably develop this in the pyzmq tree for now.
>>>
>>> My primary concern is the notebook. What is your idea to implement the
>>> asynchronous output update? Let me look at how Sage does it.
>>
>> I would also look at how we are handling it in the 0MQ prototype.  Th
>> challenge is translating that to a browser.
>>
>>> As to json-rpc, it is not blocking, that's just how I like to use it
>>> in ipython. But below the hood, it works exactly as you said, e.g. you
>>> get some id back immediately and then your method gets called (in
>>> pyjamas) when the result is back. I don't know how this is done
>>> internally. I think it's just how AJAX works (the browser calls your
>>> javascript method when the result is back).
>>
>> I need to look at jsonrpclib so better see how it works.  I also see
>> that the author has something that works with tornado.
>
> You may be interested in using the new HTML5 WebSocket API. There is a
> compatibility library for older browsers and Python integration:
>
>    http://orbited.org/

Thanks!  I have not been keeping up with HTML5.  I will talk to the
0MQ team about implementing the 0MQ wire protocol with this.  Orbited
looks interesting, but relies on Twisted, which is going to hold back
the python 3 transition.  This stuff is exactly what we need in the
browser though.

Cheers,

Brian

> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
> ?that is made terrible by our own mad attempt to interpret it as though it had
> ?an underlying truth."
> ? -- Umberto Eco
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Mar 29 13:23:03 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 29 Mar 2010 10:23:03 -0700
Subject: [IPython-dev] What is the status of iPython+wx?
In-Reply-To: <4BB0D78E.1060600@noaa.gov>
References: <4BA8F0B9.8080600@noaa.gov>
	<fa8579a41003241010q52bb2241o93019fba6718cc6e@mail.gmail.com>
	<4BABF8CE.6020203@noaa.gov>
	<fa8579a41003262049g4a97ed45n4b645137854e9036@mail.gmail.com>
	<4BB0D78E.1060600@noaa.gov>
Message-ID: <fa8579a41003291023i1272c0cfo9960c24761ddd932@mail.gmail.com>

Chris,

> I'm really confused as to how you can start an event loop without an App
> -- but I guess I'll dig into the code to figure that out.

It is a bit subtle - it took us years to understand that this was
possible.  Here is a sketch of how it works:

* The C API of Python has a hook called PyOS_InputHook.  It is a
pointer to a function.
* The function gets called when Python enters raw_input.  When
raw_input is called, Python enters its own mini event loop.  This is
where readline interactions take place.
* While this mini event loop is running, Python calls the
PyOS_InputHook function.  The sole purpose of this call is to enable
other event loops to integrate with Python.
* This is how the event loop integration with tk works in regular Python.
* We have implemented a hook for wx.  If you look at our hook though,
it checks to see if a wx App has been created.  If no App has been
created, the hook is a no-op.  But the second you create an App, our
function picks that up and iterates the event loop while raw_input is
being called.

HTH.

Cheers,

Brian

>> If you do this, it will be your responsibility to create and
>> manage an Application object.  BUT, don't start the event loop
>> yourself, it is already running.
>> A warning: you may have to pass very
>> specific options to the wx App when you create it.  See our app
>> creation logic here for details of what you will likely have to do:
>>
>> inputhook.InputHookManager.enable_wx
>>
>> If you do:
>>
>> %gui -a wx
>>
>> We start the event loop AND create a wx Application passing reasonable
>> options. ?In this case, you should not create an App, but rather just
>> get the one IPython created using wx.GetApp().
>
> Got it -- I had only seen the "-a" option in the docs, so that is what I
> was messing with.
>
>
>> What I don't know is what wx does if the App gets closed down.  We
>> don't do anything unusual though that would mess with how wx handles
>> this type of thing.
>>
>>> If I close the frame, then call run gui-wx.py -- it is unstable,
>>> freezing up on me fairly quickly.
>>
>> I am guessing the App gets shutdown and that kills any further wx goodness.
>
> yup -- wx doesn't support stopping and restarting an App.
>
>
>>> I haven't looked yet at the ipython code to see what you are doing in
>>> appstart_wx. I'll try to do that soon.
>>
>> Yes, also look at enable_wx.  We don't do much at all.
>
> will do.
>
>> Have a look at what we are doing - it is basically \epsilon, so for
>> the most part wx should be doing what you tell it to.  BUT, this is
>> way different from the older IPython.  There we used to do a lot to
>> hijack/monkeypatch wx so many things happened automagically.  But
>> monkeypatching = crashing.
>
> yes, it can mean that.
>
> I think I'm envisioning having an "IpythonWxApp" that would act like a
> normal wx app when run on its own, and do special stuff when run under
> IPython -- ideally it would live with wx, but that's not too big a deal.
> Hopefully I'll get a bit of time to try to write such a beast.
>
> -Chris
>
>
> --
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From robert.kern at gmail.com  Mon Mar 29 13:25:06 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 29 Mar 2010 12:25:06 -0500
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <fa8579a41003291015x535f22e6xced1bcb115774ca2@mail.gmail.com>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>	<85b5c3131003270010g7008efb7xf3d8bbfc4234e73d@mail.gmail.com>	<fa8579a41003271004o233b672di72ad0940840e2b31@mail.gmail.com>	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>	<fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>	<85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>	<fa8579a41003271142o46c976edyd78d5fa30e9d1a1c@mail.gmail.com>	<85b5c3131003271220p777605eai45944a9c325da869@mail.gmail.com>	<fa8579a41003282041w6d476e50m7d6bb59a71c34b5c@mail.gmail.com>	<hoqeha$6ma$1@dough.gmane.org>
	<fa8579a41003291015x535f22e6xced1bcb115774ca2@mail.gmail.com>
Message-ID: <hoqnpi$d5t$1@dough.gmane.org>

On 2010-03-29 12:15 PM, Brian Granger wrote:
> Robert,
>
> On Mon, Mar 29, 2010 at 7:47 AM, Robert Kern<robert.kern at gmail.com>  wrote:

>> You may be interested in using the new HTML5 WebSocket API. There is a
>> compatibility library for older browsers and Python integration:
>>
>>    http://orbited.org/
>
> Thanks!  I have not been keeping up with HTML5.  I will talk to the
> 0MQ team about implementing the 0MQ wire protocol with this.  Orbited
> looks interesting, but relies on Twisted, which is going to hold back
> the python 3 transition.  This stuff is exactly what we need in the
> browser though.

Orbited is just a relay hub. It's a separate application. While it would be nice 
if it ran on the same Python version as the IPython process, it doesn't have to. 
Any Python 3 system you deploy to will have Python 2 for a long, long time. I 
don't think it would hold up IPython (or even a web notebook component of 
IPython) from transitioning to Python 3.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From ellisonbg at gmail.com  Mon Mar 29 13:27:56 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 29 Mar 2010 10:27:56 -0700
Subject: [IPython-dev] make ipython work over web
In-Reply-To: <hoqnpi$d5t$1@dough.gmane.org>
References: <85b5c3131003261607h27be34e8r92067f81a70ac88d@mail.gmail.com>
	<85b5c3131003271027vd003019u620134af007d8ca@mail.gmail.com>
	<fa8579a41003271057y1cecd1a3vd3343da521e200f8@mail.gmail.com>
	<85b5c3131003271127n7841980fn8916a27477b75d81@mail.gmail.com>
	<fa8579a41003271142o46c976edyd78d5fa30e9d1a1c@mail.gmail.com>
	<85b5c3131003271220p777605eai45944a9c325da869@mail.gmail.com>
	<fa8579a41003282041w6d476e50m7d6bb59a71c34b5c@mail.gmail.com>
	<hoqeha$6ma$1@dough.gmane.org>
	<fa8579a41003291015x535f22e6xced1bcb115774ca2@mail.gmail.com>
	<hoqnpi$d5t$1@dough.gmane.org>
Message-ID: <fa8579a41003291027l1f60ea5em256fa4a6cf72db17@mail.gmail.com>

Robert,

On Mon, Mar 29, 2010 at 10:25 AM, Robert Kern <robert.kern at gmail.com> wrote:
> On 2010-03-29 12:15 PM, Brian Granger wrote:
>> Robert,
>>
>> On Mon, Mar 29, 2010 at 7:47 AM, Robert Kern<robert.kern at gmail.com>  wrote:
>
>>> You may be interested in using the new HTML5 WebSocket API. There is a
>>> compatibility library for older browsers and Python integration:
>>>
>>>    http://orbited.org/
>>
>> Thanks!  I have not been keeping up with HTML5.  I will talk to the
>> 0MQ team about implementing the 0MQ wire protocol with this.  Orbited
>> looks interesting, but relies on Twisted, which is going to hold back
>> the python 3 transition.  This stuff is exactly what we need in the
>> browser though.
>
> Orbited is just a relay hub. It's a separate application. While it would be nice
> if it ran on the same Python version as the IPython process, it doesn't have to.
> Any Python 3 system you deploy to will have Python 2 for a long, long time. I
> don't think it would hold up IPython (or even a web notebook component of
> IPython) from transitioning to Python 3.

Good point - even if our core code moves away from Twisted, Python 2.x
isn't going away anytime soon.

Cheers,

Brian

> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
>  that is made terrible by our own mad attempt to interpret it as though it had
>  an underlying truth."
>   -- Umberto Eco
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com