From wardefar at iro.umontreal.ca  Fri Oct  1 14:19:36 2010
From: wardefar at iro.umontreal.ca (David Warde-Farley)
Date: Fri, 1 Oct 2010 14:19:36 -0400
Subject: [IPython-dev] New zeromq-based console now in trunk
In-Reply-To: <AANLkTimPNS0Ua2g05pk4c2CciTRy7iQJvCUze1Txd3N1@mail.gmail.com>
References: <AANLkTimPNS0Ua2g05pk4c2CciTRy7iQJvCUze1Txd3N1@mail.gmail.com>
Message-ID: <20101001181936.GA29394@ravage>

On Tue, Sep 28, 2010 at 04:40:18PM -0700, Fernando Perez wrote:
> Hi all,
> 
> We've just pushed the 'newkernel' branch into trunk, so all of the
> recent work is now fully merged.  The big user-facing news is a new
> application, called 'ipython-qtconsole', that provides a rich widget
> with the feel of a terminal but many improved features such as
> multiline editing, syntax highlighting, graphical tooltips, inline
> plots, the ability to restart the kernel without losing user history,
> and more.  Attached is a screenshot showing some of these features.
> 
> At this point we know there are still rough edges, but we think the
> code is in a shape good enough for developers to start testing it, and
> filing tickets on our bug tracker
> (http://github.com/ipython/ipython/issues) for any problems you
> encounter.

Hey Fernando,

I shared this (incl the screenshot) around my new lab (very heavy users
of Python already) and there were oohs and ahhs from all directions. 
Nice work!

I did try it out last night and ran into trouble: the kernel died almost
immediately and a backtrace was printed to the console.
This was in a virtualenv; is it known to play nice with virtualenv? (It was
on a pretty standard Ubuntu 10.04 netbook.)


> If you want to start playing with this code, you will need (in
> addition to IPython trunk):
> 
> ZeroMQ and its Python bindings, pyzmq, version 2.0.8:
> 
>     * http://www.zeromq.org/local--files/area:download/zeromq-2.0.8.tar.gz
>     * http://github.com/downloads/zeromq/pyzmq/pyzmq-2.0.8.tar.gz

I was using zeromq and pyzmq from their respective trunks, and that
was a bad idea. I'll try it out with stable and file a bug if I'm still
having problems. 

Again, it's fantastic to see it coming together.

David


From fperez.net at gmail.com  Fri Oct  1 15:07:38 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 1 Oct 2010 12:07:38 -0700
Subject: [IPython-dev] New zeromq-based console now in trunk
In-Reply-To: <20101001181936.GA29394@ravage>
References: <AANLkTimPNS0Ua2g05pk4c2CciTRy7iQJvCUze1Txd3N1@mail.gmail.com>
	<20101001181936.GA29394@ravage>
Message-ID: <AANLkTimVy=K7UxshY2Ztza151V3_LCQC5w6ijz2GZ9XN@mail.gmail.com>

On Fri, Oct 1, 2010 at 11:19 AM, David Warde-Farley
<wardefar at iro.umontreal.ca> wrote:
>
> I shared this (incl the screenshot) around my new lab (very heavy users
> of Python already) and there were oohs and ahhs from all directions.
> Nice work!

Thanks for the kind words!

> I did try it out last night and was having trouble to the effect of the
> kernel dying almost immediately and a backtrace being printed to the console.
> This was in a virtualenv, is it known to play nice with virtualenv? (it was
> on a pretty standard Ubuntu 10.04 netbook.)

I'm pretty sure Brian uses virtualenv all the time, and there's no
reason why it shouldn't work with venv.

>
>
>> If you want to start playing with this code, you will need (in
>> addition to IPython trunk):
>>
>> ZeroMQ and its Python bindings, pyzmq, version 2.0.8:
>>
>>     * http://www.zeromq.org/local--files/area:download/zeromq-2.0.8.tar.gz
>>     * http://github.com/downloads/zeromq/pyzmq/pyzmq-2.0.8.tar.gz
>
> I was using zeromq and pyzmq from their respective trunks, and that
> was a bad idea. I'll try it out with stable and file a bug if I'm still
> having problems.

I think that could have been the problem.  Use the stable released
versions for the zmq tools, not the trunks, as those are changing a
bit too much.

Let us know if you still have problems with the 2.0.8 versions, and in
that case we'll dig deeper.

Regards,

f


From takowl at gmail.com  Fri Oct  1 19:46:26 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sat, 2 Oct 2010 00:46:26 +0100
Subject: [IPython-dev] Status of py3k ipython
Message-ID: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>

Hi,

An update on where I am with py3k ipython: the frontend/kernel system has
various external dependencies that don't seem to support Python 3 (e.g.
twisted), along with some that do but are a hassle to install (PyQt is only
packaged for Ubuntu for Python 2), so I've not attempted to get that
working. The core interpreter seems to be working OK, and is now passing
nearly all of its tests. I wonder if I could get some advice on the last
couple of tests:

In the core module, there are two tests that check that the magic %run
command doesn't change the id of __builtins__. These fail, but when I attempt
to repeat them in the interpreter, the id seems to stay the same however I
try to test it. Any bright ideas?

Also in core.tests.test_run, there's a "Test that object's __del__ methods
are called on exit." Some code is written to a temporary file and run, where
it apparently fails to find print (NameError). This is probably to do with
print becoming a function in Python 3, but I wondered if anyone had a flash
of inspiration?
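
To make this concrete, the failing scenario boils down to something like
this plain-Python sketch (not the actual test code, just the shape of it):

class A(object):
    def __del__(self):
        # If 'print' has already been torn down by the time this runs,
        # you get the NameError described above.
        print('object A deleted')

a = A()   # only collected when the shell namespace or interpreter shuts down

Run directly under plain Python 3, this prints the message at exit as
expected.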

Both errors almost look as if the tests are somehow being run in the wrong
shell. Is that possible? I'm using a virtualenv, so it should be isolated
(although it's an unofficial py3k fork of virtualenv, so that could be at
fault).

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101002/f43475b9/attachment.html>

From fperez.net at gmail.com  Sun Oct  3 15:28:19 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 3 Oct 2010 12:28:19 -0700
Subject: [IPython-dev] Status of py3k ipython
In-Reply-To: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
References: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
Message-ID: <AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>

Hi Thomas,

On Fri, Oct 1, 2010 at 4:46 PM, Thomas Kluyver <takowl at gmail.com> wrote:
> Hi,
>
> An update on where I am with py3k ipython: The frontend/kernel system has
> various external dependencies that don't seem to support python 3 (e.g.
> twisted), along with some that do, but are a hassle to install (PyQT is only
> packaged for Ubuntu for Python 2), so I've not attempted to get that
> working. The core interpreter seems to be working OK, and is now passing

Don't worry about twisted *at all*.  It will be a long time before
they port to py3, and we will move our infrastructure from twisted to
zmq before that.  So feel free to simply ignore twisted.

Pyqt will be important later on, but for now you can focus on the
terminal-based tools.  Hopefully as py3 uptake increases, the qt tools
will be more easily available for py3.

> nearly all of its tests. I wonder if I could get some advice on the last
> couple of tests:
> In the core module, there are two tests to check that the magic %run command
> doesn't change the id of __builtins__. These fail, but when I attempt to
> repeat them in the interpreter, the id seems to stay the same however I try
> to test it. Any bright ideas?
> Also in core.tests.test_run, there's a "Test that object's __del__ methods
> are called on exit." Some code is written to a temporary file and run, where
> it apparently fails to find print (NameError). This is probably to do with
> print becoming a function in Python 3, but I wondered if anyone had a flash
> of inspiration?
>
> Both the errors almost seem as if the test is somehow running them in the
> wrong shell. Is that possible? I'm using a virtualenv, so it should be
> isolated (although it's an unofficial py3k fork of virtualenv, so it could
> be that at fault).

These two little devils are very peculiar and unpleasant.  They are
tests that I managed to write to catch certain obscure edge cases, but
they could probably be better written.

Why don't you do the following:

1. Mark them on py3 as known failures:

import sys
from IPython.testing import decorators as dec

skip_known_failure_py3 = dec.skip('This test is known to fail on Python 3 '
                                  '- needs fixing')
if sys.version_info[0] >= 3:   # i.e. we are running under py3
    badtest = skip_known_failure_py3(badtest)

that is, you make this new decorator for py3-specific known failures,
and apply it manually (since the @ syntax can't be used easily in an
if statement).

2. Make a ticket so I can look into these later and hopefully find a
cleaner solution.


That will let you move forward without completely ignoring the problem.

Thanks!

f


From takowl at gmail.com  Sun Oct  3 16:26:39 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 3 Oct 2010 21:26:39 +0100
Subject: [IPython-dev] Status of py3k ipython
In-Reply-To: <AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>
References: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
	<AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>
Message-ID: <AANLkTi=-FVyq20F=AigaVVcJ8dwgnx1=1qknbWX2Rr7F@mail.gmail.com>

On 3 October 2010 20:28, Fernando Perez <fperez.net at gmail.com> wrote:

> Hi Thomas,
>
> Don't worry about twisted *at all*.  It will be a long time before
> they port to py3, and we will move our infrastructure from twisted to
> zmq before that.  So feel free to simply ignore twisted.
>

Thanks, MinRK told me this as well. I knew that zmq was coming in, but I
hadn't twigged that the plan was to replace twisted. I'll ignore it.


> Pyqt will be important later on, but for now you can focus on the
> terminal-based tools.  Hopefully as py3 uptake increases, the qt tools
> will be more easily available for py3.
>

After sending that, I did get PyQt installed, once I realised I could get
the source of the necessary version using apt-get. After some fiddling with
pyzmq and Cython, I got the zmq bindings installed as well, and the
IPython.frontend test suite passes on my machine, but ipython-qtconsole
doesn't really work (I guess the twisted dependency needs to be removed from
the kernel).


> These two little devils are very peculiar and unpleasant.  They are
> tests that I managed to write to catch certain obscure edge cases, but
> they could probably be better written.
>

In fact, I think they're picking up meaningful errors in the code, just not
the errors they were intended to find. I'd got into a mess with
__builtins__, builtins, and __builtin__, which was causing the first problem
(in fact, the %run command under test restored __builtins__ correctly, but it
was already wrong before the test started).

In the second case, ipython seems to lose references to built-in functions
on exit, before it calls the __del__ method of the object. Could you
describe what happens, and in what order, as ipython exits?

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101003/068c02f7/attachment.html>

From fperez.net at gmail.com  Sun Oct  3 16:43:30 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 3 Oct 2010 13:43:30 -0700
Subject: [IPython-dev] Status of py3k ipython
In-Reply-To: <AANLkTi=-FVyq20F=AigaVVcJ8dwgnx1=1qknbWX2Rr7F@mail.gmail.com>
References: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
	<AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>
	<AANLkTi=-FVyq20F=AigaVVcJ8dwgnx1=1qknbWX2Rr7F@mail.gmail.com>
Message-ID: <AANLkTi=cGggaD0UsAr=a+Ri6vyxNXUK9kdDN7-xXoRVJ@mail.gmail.com>

Hi Thomas,

On Sun, Oct 3, 2010 at 1:26 PM, Thomas Kluyver <takowl at gmail.com> wrote:
> After sending that, I did get PyQT installed, when I realised I could get
> the source of the necessary version using apt-get. After some fiddling with
> pyzmq and Cython, I got zmq bindings installed as well, and the
> IPython.frontend test suite passes on my machine, but ipython-qtconsole
> doesn't really work (I guess the twisted dependency needs to be removed from
> the kernel).

Very interesting...  In fact, the twisted dependency shouldn't matter
*at all* for the ipython-qtconsole code; that code uses strictly zmq
and has no twisted dependency:

In [2]: 'twisted' in sys.modules
Out[2]: False

It would be very cool to get the qt console running on py3 (even if
it's only for adventurous users willing to build pyqt themselves).  So
if you show us what the problems are, we may be able to help.  And
getting any fixes you may have made back into pyzmq would be great.
All of the pyzmq/ipython-zmq code is brand new, so the earlier we
catch any py3-noncompliance, the better off we'll be.

>> These two little devils are very peculiar and unpleasant.  They are
>> tests that I managed to write to catch certain obscure edge cases, but
>> they could probably be better written.
>
> In fact, I think they're picking up meaningful errors in the code, just not
> the errors they were intended to find. I'd got into a mess with
> __builtins__, builtins, and __builtin__, which was causing the first problem
> (in fact, the %run command tested restored __builtins__ correctly, but they
> were wrong before it started).
>
> In the second case, ipython seems to lose references to built-in functions
> as it exits before it calls the __del__ method of the object. Could you
> describe what happens in what order as ipython exits?

When ipython exits the only code that is meant to run is whatever we
registered via atexit().  Just grep for atexit and you'll find those.

But the real problem is not what happens to ipython, but to the
*python* interpreter.  When *that* is truly being shut down (i.e.
after atexit calls happen, which occur while the interpreter is still
intact and fully operational), then various objects (including modules
and possibly builtins) start getting torn down and may be in
inconsistent state.  So __del__ calls that attempt to make use of
other machinery may well find themselves trying to call things that
have become None, and thus blow up.
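
To make it concrete, here is a tiny stand-alone sketch (nothing
IPython-specific, all names made up) contrasting the two phases:

import atexit

def on_exit():
    # atexit callbacks run while the interpreter is still fully intact.
    print('atexit: interpreter still operational')

atexit.register(on_exit)

class Holder(object):
    def __del__(self):
        # If this only fires during final interpreter teardown, the module
        # globals it relies on may already have been set to None, and the
        # call blows up.
        print('Holder.__del__ called')

h = Holder()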

I hope this helps

f


From takowl at gmail.com  Sun Oct  3 19:33:18 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Mon, 4 Oct 2010 00:33:18 +0100
Subject: [IPython-dev] Status of py3k ipython
In-Reply-To: <AANLkTi=cGggaD0UsAr=a+Ri6vyxNXUK9kdDN7-xXoRVJ@mail.gmail.com>
References: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
	<AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>
	<AANLkTi=-FVyq20F=AigaVVcJ8dwgnx1=1qknbWX2Rr7F@mail.gmail.com>
	<AANLkTi=cGggaD0UsAr=a+Ri6vyxNXUK9kdDN7-xXoRVJ@mail.gmail.com>
Message-ID: <AANLkTikZsptGKm6B+idV87wfddKbfRYdS+VUXpbzunWK@mail.gmail.com>

On 3 October 2010 21:43, Fernando Perez <fperez.net at gmail.com> wrote:

> Hi Thomas,
>
> Very interesting...  In fact, the twisted dependency shouldn't matter
> *at all* for the ipython-qtconsole code, that code uses strictly zmq
> and has no twisted dependency:
>

Hmm, interesting. I'd tried to import IPython.kernel in a shell session, and
it fell over trying to import twisted, so I assumed that the frontend code
needed the kernel code.

What it does: The Qt app starts up, and I get the banner message printed
(Python version, copyright etc., IPython version, pointers to help systems).
There's enough blank space that I can just scroll down to show a blank view.
However, there's no prompt of any sort, and typing doesn't seem to do
anything. At the terminal where I started it, I see some KSharedDataCache
messages (related to icons--I'm running KDE), "Starting the kernel at...",
details of four channels, and "To connect another client...".  There were
previously some error messages at the terminal too, but I tracked them down
and fixed them easily enough.

> And getting any fixes you may have made back into pyzmq would be great.
> All of the pyzmq/ipython-zmq code is brand new, so the earlier we
> catch any py3-noncompliance, the better off we'll be.
>

You can see my changes at http://github.com/takowl/pyzmq/tree/py3zmq (look
particularly at this commit, after I'd realised that I should change the
.pyx files, not the .c files:
http://github.com/takowl/pyzmq/commit/8261e19189c6733f312e248bf77ee485286634d8).

In particular, there are a couple of places where you test for Python 3 to
decide how to do something. When this is converted to C and compiled, the
compiler can't find the relevant symbols for the Python 2 alternative. I
don't know if Cython allows you to do the equivalent of C preprocessor code,
so to get it working, I just commented out the Python 2 sections.

For the change to Cython that's needed at present, see the attached patch.

> When ipython exits the only code that is meant to run is whatever we
> registered via atexit().  Just grep for atexit and you'll find those.
>
> But the real problem is not what happens to ipython, but to the
> *python* interpreter.  When *that* is truly being shut down (i.e.
> after atexit calls happen, which occur while the interpreter is still
> intact and fully operational), then various objects (including modules
> and possibly builtins) start getting torn down and may be in
> inconsistent state.  So __del__ calls that attempt to make use of
> other machinery may well find themselves trying to call things that
> have become None, and thus blow up.
>

Well, atexit triggers .reset() of the InteractiveShell object, which looks
like it should delete locally created variables. And it does; I've just
tried that a=A() example, and calling ip.reset() gives me the same "ignored"
NameError as exiting the shell. Which is odd, because if I manually do the
first step in .reset, clear()-ing each dictionary in .ns_refs_table, the
"object A deleted" message pops out flawlessly. Thanks for the information,
although I still can't work out exactly where the problem is.

For what it's worth, I did try running the same snippet of code in plain
python 3.1, and it works as it should.

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101004/66947395/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Cython_PyUnicode.patch
Type: application/octet-stream
Size: 671 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101004/66947395/attachment.obj>

From benjaminrk at gmail.com  Mon Oct  4 00:26:06 2010
From: benjaminrk at gmail.com (MinRK)
Date: Sun, 3 Oct 2010 21:26:06 -0700
Subject: [IPython-dev] Status of py3k ipython
In-Reply-To: <AANLkTikZsptGKm6B+idV87wfddKbfRYdS+VUXpbzunWK@mail.gmail.com>
References: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
	<AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>
	<AANLkTi=-FVyq20F=AigaVVcJ8dwgnx1=1qknbWX2Rr7F@mail.gmail.com>
	<AANLkTi=cGggaD0UsAr=a+Ri6vyxNXUK9kdDN7-xXoRVJ@mail.gmail.com>
	<AANLkTikZsptGKm6B+idV87wfddKbfRYdS+VUXpbzunWK@mail.gmail.com>
Message-ID: <AANLkTinKxHcZF7F6O1Rdx_xqz3raLNLSxoybO_tPMrqF@mail.gmail.com>

On Sun, Oct 3, 2010 at 16:33, Thomas Kluyver <takowl at gmail.com> wrote:

> On 3 October 2010 21:43, Fernando Perez <fperez.net at gmail.com> wrote:
>
>> Hi Thomas,
>>
>> Very interesting...  In fact, the twisted dependency shouldn't matter
>> *at all* for the ipython-qtconsole code, that code uses strictly zmq
>> and has no twisted dependency:
>>
>
> Hmm, interesting. I'd tried to import IPython.kernel in a shell session,
> and it fell over trying to import twisted, so I assumed that the frontend
> code needed the kernel code.
>
> What it does: The Qt app starts up, and I get the banner message printed
> (Python version, copyright etc., IPython version, pointers to help systems).
> There's enough blank space that I can just scroll down to show a blank view.
> However, there's no prompt of any sort, and typing doesn't seem to do
> anything. At the terminal where I started it, I see some KSharedDataCache
> messages (related to icons--I'm running KDE), "Starting the kernel at...",
> details of four channels, and "To connect another client...".  There were
> previously some error messages at the terminal too, but I tracked them down
> and fixed them easily enough.
>
>> And getting any fixes you may have made back into pyzmq would be great.
>> All of the pyzmq/ipython-zmq code is brand new, so the earlier we
>> catch any py3-noncompliance, the better off we'll be.
>>
>
> You can see my changes at http://github.com/takowl/pyzmq/tree/py3zmq (look
> particularly at this commit, after I'd realised that I should change the
> .pyx files, not the .c files:
> http://github.com/takowl/pyzmq/commit/8261e19189c6733f312e248bf77ee485286634d8).
>
> In particular, there are a couple of places where you test for Python 3 to
> decide how to do something. When this is converted to C and compiled, the
> compiler can't find the relevant symbols for the Python 2 alternative. I
> don't know if Cython allows you to do the equivalent of C preprocessor code,
> so to get it working, I just commented out the Python 2 sections.
>

Thanks for figuring this out, but there are a couple of issues.  We actually
need the buffer code to work on *both* Python 2 and 3, so commenting things
out doesn't work.  It does help find the real issues elsewhere, though.
That file, as it started in mpi4py, works on Pythons 2.3-3.2, but I have
clearly broken support for some of them when I made my adjustments bringing
it into pyzmq.  I will work these issues out.

As for the PyUnicode instead of PyString: We actually want to enforce
PyBytes in Python 3, not PyUnicode.  It's critically important that pyzmq
never has to deal with Python Unicode objects except through _unicode
convenience methods, due to heinous memory performance issues I won't get
into here (but have gotten into plenty with Brian and Fernando).
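
Roughly, the usage pattern I have in mind looks like this (an illustrative
sketch only; take send_unicode as standing for the convenience wrappers
mentioned above rather than as a spec):

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUB)
sock.bind('tcp://127.0.0.1:5556')

sock.send(b'raw bytes go straight onto the wire')        # the preferred path
sock.send_unicode(u'text is encoded explicitly here')    # _unicode wrapper

sock.close()
ctx.term()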

Thanks,
-MinRK


>
> For the change to Cython that's needed at present, see the attached patch.
>
>> When ipython exits the only code that is meant to run is whatever we
>> registered via atexit().  Just grep for atexit and you'll find those.
>>
>> But the real problem is not what happens to ipython, but to the
>> *python* interpreter.  When *that* is truly being shut down (i.e.
>> after atexit calls happen, which occur while the interpreter is still
>> intact and fully operational), then various objects (including modules
>> and possibly builtins) start getting torn down and may be in
>> inconsistent state.  So __del__ calls that attempt to make use of
>> other machinery may well find themselves trying to call things that
>> have become None, and thus blow up.
>>
>
> Well, atexit triggers .reset() of the InteractiveShell object, which looks
> like it should delete locally created variables. And it does; I've just
> tried that a=A() example, and calling ip.reset() gives me the same "ignored"
> NameError as exiting the shell. Which is odd, because if I manually do the
> first step in .reset, clear()-ing each dictionary in .ns_refs_table, the
> "object A deleted" message pops out flawlessly. Thanks for the information,
> although I still can't work out exactly where the problem is.
>
> For what it's worth, I did try running the same snippet of code in plain
> python 3.1, and it works as it should.
>
> Thanks,
> Thomas
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101003/3fe7dc26/attachment.html>

From takowl at gmail.com  Mon Oct  4 08:14:26 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Mon, 4 Oct 2010 13:14:26 +0100
Subject: [IPython-dev] Status of py3k ipython
In-Reply-To: <AANLkTinKxHcZF7F6O1Rdx_xqz3raLNLSxoybO_tPMrqF@mail.gmail.com>
References: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
	<AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>
	<AANLkTi=-FVyq20F=AigaVVcJ8dwgnx1=1qknbWX2Rr7F@mail.gmail.com>
	<AANLkTi=cGggaD0UsAr=a+Ri6vyxNXUK9kdDN7-xXoRVJ@mail.gmail.com>
	<AANLkTikZsptGKm6B+idV87wfddKbfRYdS+VUXpbzunWK@mail.gmail.com>
	<AANLkTinKxHcZF7F6O1Rdx_xqz3raLNLSxoybO_tPMrqF@mail.gmail.com>
Message-ID: <AANLkTimcUhFFvynfeWhZWkry6NWNHdGhEpwqxovKgu9k@mail.gmail.com>

On 4 October 2010 05:26, MinRK <benjaminrk at gmail.com> wrote:

> Thanks for figuring this out, but there are a couple issues.  We actually
> need the buffer code to work on *both* Python 2 and 3, so commenting things
> out doesn't work.  It does help find the real issues elsewhere, though.
>  That file, as it started in mpi4py, works on Pythons 2.3-3.2, but I have
> clearly broken some of them when I made my adjustments bringing it into
> pyzmq.  I will work these issues out.
>

I quite agree, but never having touched Cython code before, I wanted to get
it working before trying to resolve compatibility.

Where Cython is converting Python functions to C code, it will create
preprocessor if/else sections where Python 2 and Python 3 API code needs to
be different. However, if we cimport API functions (PyString... etc.), it
will try to link them regardless of which Python version it is compiled for.


> As for the PyUnicode instead of PyString: We actually want to enforce
> PyBytes in Python 3, not PyUnicode.  It's critically important that pyzmq
> never has to deal with Python Unicode objects except through _unicode
> convenience methods, due to heinous memory performance issues I won't get
> into here (but have gotten into plenty with Brian and Fernando).
>

OK, I'll get onto that this evening. In fact, if we use PyBytes, it looks
like we can make a version that works for 2.6 and 3.x, although I think that
wouldn't work for anything before 2.6.
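
As a plain-Python illustration of why bytes is the right common ground
(this isn't the Cython code itself, just the idea):

data = b'message payload'      # b'' literals exist in 2.6+ and in 3.x
assert isinstance(data, bytes)
# On 2.6/2.7 'bytes' is simply an alias for 'str'; on 3.x it is a distinct
# type, but code that only ever handles b'' data works unchanged on both.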

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101004/479bc447/attachment.html>

From takowl at gmail.com  Mon Oct  4 20:00:58 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Tue, 5 Oct 2010 01:00:58 +0100
Subject: [IPython-dev] Status of py3k ipython
In-Reply-To: <AANLkTimcUhFFvynfeWhZWkry6NWNHdGhEpwqxovKgu9k@mail.gmail.com>
References: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
	<AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>
	<AANLkTi=-FVyq20F=AigaVVcJ8dwgnx1=1qknbWX2Rr7F@mail.gmail.com>
	<AANLkTi=cGggaD0UsAr=a+Ri6vyxNXUK9kdDN7-xXoRVJ@mail.gmail.com>
	<AANLkTikZsptGKm6B+idV87wfddKbfRYdS+VUXpbzunWK@mail.gmail.com>
	<AANLkTinKxHcZF7F6O1Rdx_xqz3raLNLSxoybO_tPMrqF@mail.gmail.com>
	<AANLkTimcUhFFvynfeWhZWkry6NWNHdGhEpwqxovKgu9k@mail.gmail.com>
Message-ID: <AANLkTikk4zubo-gKr7v9SpBadVRuSOBd-ZZ=paCdP7OC@mail.gmail.com>

>
> OK, I'll get onto that this evening. In fact, if we use PyBytes, it looks
> like we can make a version that works for 2.6 and 3.x, although I think that
> wouldn't work for anything before 2.6.
>

I've made the changes to use bytes, and updated the code with recent
changes. Issues it still faces:
- Buffers: It won't compile on Python 3 with references to old-style buffer
methods (e.g. PyBuffer_FromObject, Py_END_OF_BUFFER). This is the remaining
bit commented out. The new MemoryView method isn't in Python 2.6 (it comes
in with 2.7), so that falls over at runtime with the section commented out.
- There's a circular import between zmq/core/socket and zmq/core/context,
which goes into endless recursion when trying to import either of them in
Python 3.

http://github.com/takowl/pyzmq/tree/new-py3zmq

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101005/66712f80/attachment.html>

From fperez.net at gmail.com  Tue Oct  5 00:44:25 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 4 Oct 2010 21:44:25 -0700
Subject: [IPython-dev] Status of py3k ipython
In-Reply-To: <AANLkTikk4zubo-gKr7v9SpBadVRuSOBd-ZZ=paCdP7OC@mail.gmail.com>
References: <AANLkTimgwtoEbg5TNHdsmCoC7-qcO5+4DTVKv90rwty+@mail.gmail.com>
	<AANLkTi=CkfYLO=ds01vPDDUE13LZ8ueYbr118kjbVKWn@mail.gmail.com>
	<AANLkTi=-FVyq20F=AigaVVcJ8dwgnx1=1qknbWX2Rr7F@mail.gmail.com>
	<AANLkTi=cGggaD0UsAr=a+Ri6vyxNXUK9kdDN7-xXoRVJ@mail.gmail.com>
	<AANLkTikZsptGKm6B+idV87wfddKbfRYdS+VUXpbzunWK@mail.gmail.com>
	<AANLkTinKxHcZF7F6O1Rdx_xqz3raLNLSxoybO_tPMrqF@mail.gmail.com>
	<AANLkTimcUhFFvynfeWhZWkry6NWNHdGhEpwqxovKgu9k@mail.gmail.com>
	<AANLkTikk4zubo-gKr7v9SpBadVRuSOBd-ZZ=paCdP7OC@mail.gmail.com>
Message-ID: <AANLkTimMmww3=1zF7pngkaLbk2qfC__6mULEDQZE==F8@mail.gmail.com>

Hi Thomas,

On Mon, Oct 4, 2010 at 5:00 PM, Thomas Kluyver <takowl at gmail.com> wrote:
> I've made the changes to use bytes, and updated the code with recent
> changes. Issues it still faces:
> - Buffers: It won't compile on Python 3 with references to old-style buffer
> methods (e.g. PyBuffer_FromObject, Py_END_OF_BUFFER). This is the remaining
> bit commented out. The new MemoryView method isn't in Python 2.6 (it comes
> in with 2.7), so that falls over at runtime with the section commented out.
> - There's a circular import between zmq/core/socket and zmq/core/context,
> which goes into endless recursion when trying to import either of them in
> Python 3.
>
> http://github.com/takowl/pyzmq/tree/new-py3zmq

Many thanks!  I'll let Brian and Min work with you on the pyzmq
improvements, but it's really, really great to see our infrastructure
get beat into shape quickly for py3.

I should add that at this point, since we're working in trunk, feel
free to start making pull requests for the pre-py3-cleanup branch back
into trunk whenever it works well for your schedule.  All that can go
into trunk now, and that way we'll minimize the delta from trunk into
the 'real' py3 branches as much as possible.

Thanks again for your good work!


f

ps - I just edited our credits file to acknowledge your contributions,
sorry I hadn't done this before...


From butterw at gmail.com  Tue Oct  5 14:56:30 2010
From: butterw at gmail.com (Peter Butterworth)
Date: Tue, 5 Oct 2010 20:56:30 +0200
Subject: [IPython-dev] IPython 0.10.1 -pylab
Message-ID: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>

Hi,

I have the following issue with IPython 0.10.1/IPython 0.10 with
Python 2.6, and only on Windows 32/64-bit in pylab mode (it works fine
in regular ipython):
I can't cd to a directory whose name contains an accented character.

>>> cd c:\Python_tests\001\bé
[Error 2] Le fichier spécifié est introuvable: 'c:/Python_tests/001/b\xc3\xa9'
c:\Python_tests\001

I hope this can be solved as it is really quite annoying.
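
For what it's worth, the bytes in the error message look like UTF-8; this
quick check (illustrative only, under Python 2) decodes them back to the
expected name, so my guess is the path reaches the filesystem call as UTF-8
bytes where Windows wants unicode (or bytes in the local code page):

raw = 'c:/Python_tests/001/b\xc3\xa9'   # the bytes shown in the error above
print(repr(raw.decode('utf-8')))        # -> u'c:/Python_tests/001/b\xe9' (bé)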

-- 
thanks,
peter butterworth


From erik.tollerud at gmail.com  Tue Oct  5 19:59:16 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Tue, 5 Oct 2010 16:59:16 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
Message-ID: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>

I just switched over to the trunk with the merged-in newkernel and
have been playing around with the ipythonqt console.  Let me begin by
saying: wow!  Thanks to all those involved in implementing this - it's
a huge step forward in usability and looks really great.  I'd love to
switch over to it as my primary ipython environment.  However, there
are some things I haven't quite been able to figure out.  So consider
each of these either a "how can I do this?" or a feature request (or
at least a request as to whether or not it's on the todo list, in case
I am inclined to try to implement it myself).

* Is it possible to execute the new-style ipython profiles when
ipythonqt starts?  Looking at the ipythonqt main, it doesn't look to
me like it's in there, but seems potentially pretty straightforward.
This is the most crucial change for me as I have a lot of things that
I want loaded by default (especially "from __future__ import division"
!)
* Is there a way to adjust any of the keybindings, or add new ones?
* If not, is it possible to tie ctrl-D to the exit() function?  I
think a lot of people used to the python terminal expect ctrl-D to be
available to quit the terminal, so it might be a nice option to add
in.  Ideally, it would also just quit, skipping over the yes/no dialog
that exit() triggers with an automatic yes.  I understand, though, if
that would just be too easy to do accidentally.
* Unlike the terminal ipython, it seems that command history does not
persist if I close out of ipythonqt and come back in.  Is this
intentional?  I find it to be a highly useful feature...
* Is the parallel computing environment (ipcluster, ipcontroller, etc.)
now based on the zmq kernel?  My understanding was that one of the
motivations for zmq was to get rid of the twisted dependency - is that
now in place, or is that still a work in progress?

Thanks again for the hard work - I can already tell this will be a
wonderful tool!

-- 
Erik Tollerud


From fperez.net at gmail.com  Tue Oct  5 20:16:03 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 5 Oct 2010 17:16:03 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
Message-ID: <AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>

Hi Erik,

On Tue, Oct 5, 2010 at 4:59 PM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
> I just switched over to the trunk with the merged-in newkernel and
> have been playing around with the ipythonqt console.  Let me begin by
> saying: wow!  Thanks to all those involved in implementing this - it's
> a huge step forward in usability and looks really great.  I'd love to
> switch over to it as my primary ipython environment.  However, there
> are some things I haven't quite been able to figure out.  So consider
> each of these either a "how can I do this?" or a feature request (or
> at least a request as to whether or not it's on the todo list, in case
> I am inclined to try to implement it myself).

Thanks for the kind words!

> * Is it possible to execute the new-style ipython profiles when
> ipythonqt starts?  Looking at the ipythonqt main, it doesn't look to
> me like it's in there, but seems potentially pretty straightforward.
> This is the most crucial change for me as I have a lot of things that
> I want loaded by default (especially "from __future__ import division"
> !)

Not yet, it's very high on the todo list.  We simply haven't
implemented *any* configurability yet, I'm afraid.

> * Is there a way to adjust any of the keybindings, or add new ones?

No, and I don't know how easy that will be to do.  That's on the Qt
side of things, which I know much less about.  I imagine we'd have to make an
API for matching keybinding descriptions to methods implemented in the
object, and a declarative syntax to state your bindings in the config
file.  Nothing fundamentally difficult, but probably won't happen for
the first release.

> * If not, is it possible to tie ctrl-D to the exit() function?  I
> think a lot of people used to the python terminal expect ctrl-D to be
> available to quit the terminal, so it might be a nice option to add
> in.  Ideally, it would also just quit, skipping over the yes/no dialog
> that exit() triggers with an automatic yes.  I understand, though, if
> that would just be too easy to do accidentally.

Well, since the environment is now much more of a multiline editing
system, we made the keybindings mimic the Emacs ones, so C-D does
character delete.  I imagine it would be possible to see if it's at
the end of the buffer and trigger exit in that case, but I don't know
how easy that will be on the Qt side.
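
Something along these lines might work, though this is an untested sketch
using a plain QPlainTextEdit rather than our actual console widget:

import sys
from PyQt4 import QtCore, QtGui

class MiniConsole(QtGui.QPlainTextEdit):
    def keyPressEvent(self, event):
        ctrl_d = (event.key() == QtCore.Qt.Key_D and
                  event.modifiers() & QtCore.Qt.ControlModifier)
        if ctrl_d and self.textCursor().atEnd():
            # Nothing left to delete: treat Ctrl-D as a request to exit.
            QtGui.QApplication.instance().quit()
        else:
            # Everywhere else, fall back to the default key handling.
            QtGui.QPlainTextEdit.keyPressEvent(self, event)

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    w = MiniConsole()
    w.show()
    sys.exit(app.exec_())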

As for confirmation, we definitely want to refine the exit code to
allow unconditional exiting.  My take on it is that exit from a close
window event should ask (an accidental mis-click can easily happen)
but exit from typing explicitly 'exit' should be unconditional (nobody
types a full word by accident).

Would you mind making a ticket for this?  I think I know roughly how
to do it, but I don't have time for it right now.  I made a
'qtconsole' label to apply to these tickets so we can sort them
easily.

> * Unlike the terminal ipython, it seems that command history does not
> persist if I close out of ipythonqt and come back in.  Is this
> intentional?  I find it to be a highly useful feature...

Please file a ticket so we don't forget this one also, it's important
and I miss it too.

> * Is the parallel computing environment (ipcluster, ipcontroller, etc.)
> now based on the zmq kernel?  My understanding was that one of the
> motivations for zmq was to get rid of the twisted dependency - is that
> now in place, or is that still a work in progress?

In progress, advancing well, but not finished yet.  But it's much
farther along than we'd originally thought, thanks to Min's amazing
dedication.

> Thanks again for the hard work - I can already tell this will be a
> wonderful tool!

Thanks for the feedback, keep it coming!

Cheers,

f


From tomspur at fedoraproject.org  Wed Oct  6 08:36:58 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Wed, 6 Oct 2010 14:36:58 +0200
Subject: [IPython-dev] IPython 0.10.1 release candidate up for final
 testing
In-Reply-To: <AANLkTi=Lmvd5w=WPybo=9r30j-1XL8FjN6tx9_2Ov6k=@mail.gmail.com>
References: <AANLkTi=Lmvd5w=WPybo=9r30j-1XL8FjN6tx9_2Ov6k=@mail.gmail.com>
Message-ID: <20101006143658.322caea7@earth>

On Wed, 29 Sep 2010 10:39:48 -0700
Fernando Perez wrote:

> Hi folks,
> 
> while most of our recent energy has gone into all the zeromq-based
> work for 0.11 (the code in trunk and the two GSoC projects), we
> haven't abandoned the stable 0.10 series.  A lot of small bugfixes
> have gone in, and recently Justin Riley, Satra Ghosh and Matthieu
> Brucher have contributed Sun Grid Engine scheduling support for the
> parallel computing machinery.
> 
> For those of you using the 0.10 series, here is a release candidate:
> 
> http://ipython.scipy.org/dist/testing/
> 
> Unless any problems are found with it, I will tag it as final and
> release the otherwise identical code as 0.10.1 by next Monday (give or
> take a day or two).
> 
> We'd appreciate testing feedback, and if anyone wants to do detailed
> testing (for example a release manager for a distribution) but you
> need more time, just let me know and we can wait a few extra days.

This Red Hat bug is fixed in the new version:
https://bugzilla.redhat.com/show_bug.cgi?id=640578

I had some problems applying the Fedora patch for unbundling the
libraries, but that works now too. Maybe you want to apply it too
before doing the release, but later on git should be enough for
now... ;-)

	Thomas


From erik.tollerud at gmail.com  Wed Oct  6 16:37:46 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Wed, 6 Oct 2010 13:37:46 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
Message-ID: <AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>

Thanks for the quick response!  I have a few responses in-line below
(also a question about the issue tracker), but I also thought of one
more thing that I was unclear about:

As I understand it, the new 2-process model is in place to decouple
the console from the execution kernel.  Given this, is there a
straightforward way to close the ipythonqt terminal (or a standard
ipython terminal) while leaving the execution kernel operating?  I was
able to connect to the kernel with a second ipythonqt console, but
don't see a way to disconnect the first terminal from the execution
kernel without killing the execution kernel... Or is that conversion
process for the parallel computing work that you said was still in
progress? It seems like this is a valuable capability that would
finally let me get away from using gnu screen whenever I want to be
able to check in on a python process remotely (a common use-case for
me)...


>> * Is it possible to execute the new-style ipython profiles when
>> ipythonqt starts?  Looking at the ipythonqt main, it doesn't look to
>> me like it's in there, but seems potentially pretty straightforward.
>> This is the most crucial change for me as I have a lot of things that
>> I want loaded by default (especially "from __future__ import division"
>> !)
>
> Not yet, it's very high on the todo list.  We simply haven't
> implemented *any* configurability yet, I'm afraid.

Ok, I might take a crack at this if I have a chance - seems pretty
straightforward.

>> * If not, is it possible to tie ctrl-D to the exit() function?  I
>> think a lot of people used to the python terminal expect ctrl-D to be
>> available to quit the terminal, so it might be a nice option to add
>> in.  Ideally, it would also just quit, skipping over the yes/no dialog
>> that exit() triggers with an automatic yes.  I understand, though, if
>> that would just be too easy to do accidentally.
>
> Well, since the environment is now much more of a multiline editing
> system, we made the keybindings mimic the Emacs ones, so C-D does
> character delete.  I imagine it would be possible to see if it's at
> the end of the buffer and trigger exit in that case, but I don't know
> how easy that will be on the Qt side.

Perhaps some other key combination might be easier (perhaps C-c is the
next-most-natural choice?) - mainly I just want a single-keystroke way
of exiting...

>
> As for confirmation, we definitely want to refine the exit code to
> allow unconditional exiting.  My take on it is that exit from a close
> window event should ask (an accidental mis-click can easily happen)
> but exit from typing explicitly 'exit' should be unconditional (nobody
> types a full word by accident).
>
> Would you mind making a ticket for this?  I think I know roughly how
> to do it, but I don't have time for it right now.  I made a
> 'qtconsole' label to apply to these tickets so we can sort them
> easily.

Done (ticket 161)... Although I can't figure out how to apply the
"qtconsole" label to an issue - do I have to include it in the text of
the issue somehow?  Or do I need some sort of elevated access
privileges?

>
>> * Unlike the terminal ipython, it seems that command history does not
>> persist if I close out of ipythonqt and come back in.  Is this
>> intentional?  I find it to be a highly useful feature...
>
> Please file a ticket so we don't forget this one also, it's important
> and I miss it too.

Done (ticket 162), although see my comment above about some confusion
with the issue tracker.


-- 
Erik Tollerud


From ellisonbg at gmail.com  Wed Oct  6 16:53:40 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 6 Oct 2010 13:53:40 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
Message-ID: <AANLkTikXcRR5DhphYHVseKhb2p_nHsdVPQG9_2ApJQbP@mail.gmail.com>

On Wed, Oct 6, 2010 at 1:37 PM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
> Thanks for the quick response!  I have a few responses in-line below
> (also a question about the issue tracker), but I also thought of one
> more thing that I was unclear about:
>
> As I understand it, the new 2-process model is in place to decouple
> the console from the execution kernel.  Given this, is there a
> straightforward way to close the ipythonqt terminal (or a standard
> ipython terminal) while leaving the execution kernel operating?  I was
> able to connect to the kernel with a second ipythonqt console, but
> don't see a way to disconnect the first terminal from the execution
> kernel without killing the execution kernel... Or is that conversion
> process for the parallel computing work that you said was still in
> progress? It seems like this is a valuable capability that would
> finally let me get away from using gnu screen whenever I want to be
> able to check in on a python process remotely (a common use-case for
> me)...
>
>
>>> * Is it possible to execute the new-style ipython profiles when
>>> ipythonqt starts?  Looking at the ipythonqt main, it doesn't look to
>>> me like it's in there, but seems potentially pretty straightforward.
>>> This is the most crucial change for me as I have a lot of things that
>>> I want loaded by default (especially "from __future__ import division"
>>> !)
>>
>> Not yet, it's very high on the todo list.  We simply haven't
>> implemented *any* configurability yet, I'm afraid.
>
> Ok, I might take a crack at this if I have a chance - seems pretty
> straightforward.

Erik,

I just wanted to let you know that I am planning on working on this
(the configuration stuff) over the next few weeks.  We implemented a
new configuration system last summer and it needs some updating and
cleanup that I am planning on doing as part of this work.  I will keep
you posted on the progress.

Cheers,

Brian

>>> * If not, is it possible to tie ctrl-D to the exit() function?  I
>>> think a lot of people used to the python terminal expect ctrl-D to be
>>> available to quit the terminal, so it might be a nice option to add
>>> in.  Ideally, it would also just quit, skipping over the yes/no dialog
>>> that exit() triggers with an automatic yes.  I understand, though, if
>>> that would just be too easy to do accidentally.
>>
>> Well, since the environment is now much more of a multiline editing
>> system, we made the keybindings mimic the Emacs ones, so C-D does
>> character delete.  I imagine it would be possible to see if it's at
>> the end of the buffer and trigger exit in that case, but I don't know
>> how easy that will be on the Qt side.
>
> Perhaps some other key combination might be easier (perhaps C-c is the
> next-most-natural choice?) - mainly I just want a single-keystroke way
> of exiting...
>
>>
>> As for confirmation, we definitely want to refine the exit code to
>> allow unconditional exiting.  My take on it is that exit from a close
>> window event should ask (an accidental mis-click can easily happen)
>> but exit from typing explicitly 'exit' should be unconditional (nobody
>> types a full word by accident).
>>
>> Would you mind making a ticket for this?  I think I know roughly how
>> to do it, but I don't have time for it right now.  I made a
>> 'qtconsole' label to apply to these tickets so we can sort them
>> easily.
>
> Done (ticket 161)... Although I can't figure out how to apply the
> "qtconsole" label to an issue - do I have to include it in the text of
> the issue somehow?  Or do I need some sort of elevated access
> privileges?
>
>>
>>> * Unlike the terminal ipython, it seems that command history does not
>>> persist if I close out of ipythonqt and come back in. ?Is this
>>> intentional? ?I find it to be a highly useful feature...
>>
>> Please file a ticket so we don't forget this one also, it's important
>> and I miss it too.
>
> Done (ticket 162), although see my comment above about some confusion
> with the issue tracker.
>
>
> --
> Erik Tollerud
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From benjaminrk at gmail.com  Wed Oct  6 17:30:37 2010
From: benjaminrk at gmail.com (MinRK)
Date: Wed, 6 Oct 2010 14:30:37 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
Message-ID: <AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>

On Wed, Oct 6, 2010 at 13:37, Erik Tollerud <erik.tollerud at gmail.com> wrote:

> Thanks for the quick response!  I have a few responses in-line below
> (also a question about the issue tracker), but I also thought of one
> more thing that I was unclear about:
>
> As I understand it, the new 2-process model is in place to decouple
> the console from the execution kernel.  Given this, is there a
> straightforward way to close the ipythonqt terminal (or a standard
> ipython terminal) while leaving the execution kernel operating?  I was
> able to connect to the kernel with a second ipythonqt console, but
> don't see a way to disconnect the first terminal from the execution
> kernel without killing the execution kernel... Or is that conversion
> process for the parallel computing work that you said was still in
> progress? It seems like this is a valuable capability that would
> finally let me get away from using gnu screen whenever I want to be
> able to check in on a python process remotely (a common use-case for
> me)...
>

This is definitely doable with the underlying machinery; it's really a
matter of startup scripts.  There could easily be a script that starts
*just* an ipython kernel bound to no frontends, and then you would start
every frontend just like you currently start the second one. That script
isn't written yet, though. It would presumably be the new 'ipkernel' script.

For now, I whipped up an example which just adds a '--kernel-only' flag to
ipythonqt that skips the frontend setup:
http://github.com/minrk/ipython/tree/kernelonly


>
>
> >> * Is it possible to execute the new-style ipython profiles when
> >> ipythonqt starts?  Looking at the ipythonqt main, it doesn't look to
> >> me like it's in there, but seems potentially pretty straightforward.
> >> This is the most crucial change for me as I have a lot of things that
> >> I want loaded by default (especially "from __future__ import division"
> >> !)
> >
> > Not yet, it's very high on the todo list.  We simply haven't
> > implemented *any* configurability yet, I'm afraid.
>
> Ok, I might take a crack at this if I have a chance - seems pretty
> straightforward.
>
> >> * If not, is it possible to tie ctrl-D to the exit() function?  I
> >> think a lot of people used to the python terminal expect ctrl-D to be
> >> available to quit the terminal, so it might be a nice option to add
> >> in.  Ideally, it would also just quit, skipping over the yes/no dialog
> >> that exit() triggers with an automatic yes.  I understand, though, if
> >> that would just be too easy to do accidentally.
> >
> > Well, since the environment is now much more of a multiline editing
> > system, we made the keybindings mimic the Emacs ones, so C-D does
> > character delete.  I imagine it would be possible to see if it's at
> > the end of the buffer and trigger exit in that case, but I don't know
> > how easy that will be on the Qt side.
>
> Perhaps some other key combination might be easier (perhaps C-c is the
> next-most-natural choice?) - mainly I just want a single-keystroke way
> of exiting...
>

As soon as you have a standalone GUI that feels like a terminal, exposing a
keybinding API becomes important.  We should probably ape someone else's
model for this, so as to minimize relearning for users. Do you have any
examples of nicely customizable apps for us to look at?


>
> >
> > As for confirmation, we definitely want to refine the exit code to
> > allow unconditional exiting.  My take on it is that exit from a close
> > window event should ask (an accidental mis-click can easily happen)
> > but exit from typing explicitly 'exit' should be unconditional (nobody
> > types a full word by accident).
> >
> > Would you mind making a ticket for this?  I think I know roughly how
> > to do it, but I don't have time for it right now.  I made a
> > 'qtconsole' label to apply to these tickets so we can sort them
> > easily.
>
> Done (ticket 161)... Although I can't figure out how to apply the
> "qtconsole" label to an issue - do I have to include it in the text of
> the issue somehow?  Or do I need some sort of elevated access
> privileges?
>

I think you need privileges for this.  I added the tag to 161,162.


>
> >
> >> * Unlike the terminal ipython, it seems that command history does not
> >> persist if I close out of ipythonqt and come back in.  Is this
> >> intentional?  I find it to be a highly useful feature...
> >
> > Please file a ticket so we don't forget this one also, it's important
> > and I miss it too.
>
> Done (ticket 162), although see my comment above about some confusion
> with the issue tracker.
>

Thanks!
-MinRK


>
>
> --
> Erik Tollerud
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101006/550ff307/attachment.html>

From robert.kern at gmail.com  Wed Oct  6 19:27:13 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 06 Oct 2010 18:27:13 -0500
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
Message-ID: <i8j0kh$381$2@dough.gmane.org>

On 10/6/10 4:30 PM, MinRK wrote:

> As soon as you have a standalone GUI that feels like a terminal, exposing a
> keybinding API becomes important.  We should probably ape someone else's model
> for this, so as to minimize relearning for users. Do you have any examples of
> nicely customizable apps for us to look at?

http://code.enthought.com/projects/traits/docs/html/TUIUG/factories_advanced_extra.html#keybindingeditor

<wink>  ;-)

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Wed Oct  6 23:22:56 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 6 Oct 2010 20:22:56 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <i8j0kh$381$2@dough.gmane.org>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
Message-ID: <AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>

On Wed, Oct 6, 2010 at 4:27 PM, Robert Kern <robert.kern at gmail.com> wrote:
>
> http://code.enthought.com/projects/traits/docs/html/TUIUG/factories_advanced_extra.html#keybindingeditor
>
> <wink>  ;-)

You evil one...

:)

f


From fperez.net at gmail.com  Thu Oct  7 19:18:50 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 7 Oct 2010 16:18:50 -0700
Subject: [IPython-dev] Mostly for Evan: shift-right not working?
Message-ID: <AANLkTi=Hj+z9B4GqM6Yx7Gy0266ZJK=s8BUG3OY=6LOa@mail.gmail.com>

Hi Evan,

I've noticed on several machines that shift-right doesn't seem to
highlight text for cut/copy/paste, but shift-up/down/left work OK.
I've been trying to find anything in the code that might account for
this behavior but so far I've come up empty.

Any ideas?  I've seen the problem only on Linux, I haven't tested on
other platforms looking for this yet...

Thanks,

f


From benjaminrk at gmail.com  Thu Oct  7 19:36:09 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 7 Oct 2010 16:36:09 -0700
Subject: [IPython-dev] Mostly for Evan: shift-right not working?
In-Reply-To: <AANLkTi=Hj+z9B4GqM6Yx7Gy0266ZJK=s8BUG3OY=6LOa@mail.gmail.com>
References: <AANLkTi=Hj+z9B4GqM6Yx7Gy0266ZJK=s8BUG3OY=6LOa@mail.gmail.com>
Message-ID: <AANLkTinAc=ni9SqziWsjMQ39pXa2VSXxjLkN-vCMyZYc@mail.gmail.com>

Confirmed on OSX with current qt binaries.

On Thu, Oct 7, 2010 at 16:18, Fernando Perez <fperez.net at gmail.com> wrote:

> Hi Evan,
>
> I've noticed on several machines that shift-right doesn't seem to
> highlight text for cut/copy/paste, but shift-up/down/left work OK.
> I've been trying to find anything in the code that might account for
> this behavior but so far I've come up empty.
>
> Any ideas?  I've seen the problem only on Linux, I haven't tested on
> other platforms looking for this yet...
>
> Thanks,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101007/fd674126/attachment.html>

From benjaminrk at gmail.com  Thu Oct  7 20:10:04 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 7 Oct 2010 17:10:04 -0700
Subject: [IPython-dev] Mostly for Evan: shift-right not working?
In-Reply-To: <AANLkTinAc=ni9SqziWsjMQ39pXa2VSXxjLkN-vCMyZYc@mail.gmail.com>
References: <AANLkTi=Hj+z9B4GqM6Yx7Gy0266ZJK=s8BUG3OY=6LOa@mail.gmail.com>
	<AANLkTinAc=ni9SqziWsjMQ39pXa2VSXxjLkN-vCMyZYc@mail.gmail.com>
Message-ID: <AANLkTi=WLqZZaUvwzWmv4ALjzdUyn6eihx7iGfqrbvg5@mail.gmail.com>

There was also an issue with selection continuing from one line to another.

Both seem to be fixed with this:
http://github.com/minrk/ipython/commit/66596366de6ccf5e7f56f9db434b877fd93539d0
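
(For the curious: a minimal sketch of the idea -- not the literal diff, and
the function/variable names below are just illustrative -- is to move the
cursor while keeping the selection anchor, along these lines:

from PyQt4 import QtGui

def extend_selection_right(edit):
    # 'edit' is a QTextEdit/QPlainTextEdit.  Moving with KeepAnchor grows
    # the selection instead of collapsing it, which is what shift-right
    # should do.
    cursor = edit.textCursor()
    cursor.movePosition(QtGui.QTextCursor.Right, QtGui.QTextCursor.KeepAnchor)
    edit.setTextCursor(cursor)

The real fix is in the commit above.)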

-MinRK

On Thu, Oct 7, 2010 at 16:36, MinRK <benjaminrk at gmail.com> wrote:

> Confirmed on OSX with current qt binaries.
>
>
> On Thu, Oct 7, 2010 at 16:18, Fernando Perez <fperez.net at gmail.com> wrote:
>
>> Hi Evan,
>>
>> I've noticed on several machines that shift-right doesn't seem to
>> highlight text for cut/copy/paste, but shift-up/down/left work OK.
>> I've been trying to find anything in the code that might account for
>> this behavior but so far I've come up empty.
>>
>> Any ideas?  I've seen the problem only on Linux, I haven't tested on
>> other platforms looking for this yet...
>>
>> Thanks,
>>
>> f
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101007/0eb29576/attachment.html>

From fperez.net at gmail.com  Thu Oct  7 20:25:09 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 7 Oct 2010 17:25:09 -0700
Subject: [IPython-dev] Mostly for Evan: shift-right not working?
In-Reply-To: <AANLkTi=WLqZZaUvwzWmv4ALjzdUyn6eihx7iGfqrbvg5@mail.gmail.com>
References: <AANLkTi=Hj+z9B4GqM6Yx7Gy0266ZJK=s8BUG3OY=6LOa@mail.gmail.com>
	<AANLkTinAc=ni9SqziWsjMQ39pXa2VSXxjLkN-vCMyZYc@mail.gmail.com>
	<AANLkTi=WLqZZaUvwzWmv4ALjzdUyn6eihx7iGfqrbvg5@mail.gmail.com>
Message-ID: <AANLkTikWFZ8_q_wmmCfF=2vZAP+=Y9Td8P1xP1YC5eBe@mail.gmail.com>

On Thu, Oct 7, 2010 at 5:10 PM, MinRK <benjaminrk at gmail.com> wrote:
> There was also an issue with selection continuing from one line to another.
> Both seem to be fixed with this:
> http://github.com/minrk/ipython/commit/66596366de6ccf5e7f56f9db434b877fd93539d0

Great, thanks!  Reviewed, tested, merged and pushed.

If Evan sees any issues we can fine-tune it, but I read a little bit
about the anchor mode and it looks totally OK to me.  Testing also
shows that it does fix the behavior.

Great work,

f


From ellisonbg at gmail.com  Thu Oct  7 20:30:34 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 7 Oct 2010 17:30:34 -0700
Subject: [IPython-dev] Mostly for Evan: shift-right not working?
In-Reply-To: <AANLkTikWFZ8_q_wmmCfF=2vZAP+=Y9Td8P1xP1YC5eBe@mail.gmail.com>
References: <AANLkTi=Hj+z9B4GqM6Yx7Gy0266ZJK=s8BUG3OY=6LOa@mail.gmail.com>
	<AANLkTinAc=ni9SqziWsjMQ39pXa2VSXxjLkN-vCMyZYc@mail.gmail.com>
	<AANLkTi=WLqZZaUvwzWmv4ALjzdUyn6eihx7iGfqrbvg5@mail.gmail.com>
	<AANLkTikWFZ8_q_wmmCfF=2vZAP+=Y9Td8P1xP1YC5eBe@mail.gmail.com>
Message-ID: <AANLkTik0EdP8fT4Z+ZUXhT1vyDysQGBq0Tt56P0+_PPu@mail.gmail.com>

Yes, thanks Min!

Brian

On Thu, Oct 7, 2010 at 5:25 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Thu, Oct 7, 2010 at 5:10 PM, MinRK <benjaminrk at gmail.com> wrote:
>> There was also an issue with selection continuing from one line to another.
>> Both seem to be fixed with this:
>> http://github.com/minrk/ipython/commit/66596366de6ccf5e7f56f9db434b877fd93539d0
>
> Great, thanks! ?Reviewed, tested, merged and pushed.
>
> If Evan sees any issues we can fine-tune it, but I read a little bit
> about the anchor mode and it looks totally OK to me. ?Testing also
> shows that it does fix the behavior.
>
> Great work,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From erik.tollerud at gmail.com  Thu Oct  7 21:08:04 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Thu, 7 Oct 2010 18:08:04 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
Message-ID: <AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>

> I just wanted to let you know that I am planning on working on this
> (the configuration stuff) over the next few weeks. ?We implemented a
> new configuration system last summer and it needs some updating and
> cleanup that I am planning on doing as part of this work. ?I will keep
> you posted on the progress.

Ah, great - in that case I'll just wait to see how that shapes up
before trying to hack on anything myself.


> This is definitely doable with the underlying machinery; it's really a
> matter of startup scripts. ?There could easily be a script for starting
> *just* an ipython kernel bound to no frontends, then start all frontends
> just like you currently do with the second frontend. That script isn't
> written yet, though. It would presumably be the new 'ipkernel' script.
> For now, I whipped up an example which just adds a '--kernel-only' flag to
> ipythonqt that skips the frontend setup:
> http://github.com/minrk/ipython/tree/kernelonly

This makes sense, although the more typical use case for me is wanting
to start up ipythonqt normally, executing something that will take a
while, and realizing this only *after* it's running.  So an additional
key-binding along the lines of the "detach" function in screen is
what I was thinking of, e.g. something that will close the qt
frontend without closing the kernel.  Of course, I can do this using
the --kernel-only flag now and then connect to that kernel, but this
requires planning ahead, something I do my best to avoid :)

At any rate, as you say, this will hopefully become much more natural
to implement when the parallel framework is in place - I just wanted
to point out this particular use-case that I would find very useful.

>> As soon as you have a standalone GUI that feels a terminal, exposing a
>> keybinding API becomes important. ?We should probably ape someone else's model
>> for this, so as to minimize relearning for users. Do you have any examples of
>> nicely customizable apps for us to look at?
>
> http://code.enthought.com/projects/traits/docs/html/TUIUG/factories_advanced_extra.html#keybindingeditor
>
> <wink> ?;-)

This is indeed the sort of thing I had in mind, but it seems natural
to me to expose it as an option for an ipython profile file or
something similar in the .ipython directory given that's currently
where all the configuration goes. Presumably the Traits Qt backend can
be examined to figure out how to do this?


-- 
Erik Tollerud


From fperez.net at gmail.com  Thu Oct  7 21:15:58 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 7 Oct 2010 18:15:58 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
Message-ID: <AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>

On Thu, Oct 7, 2010 at 6:08 PM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
> This makes sense, although the more typical use case for me is wanting
> to start up ipythonqt normally, executing something that will take a
> while, and realizing this only *after* it's running. ?So an additional
> key-binding along the lines of the "detatch" function in screen is
> what I was thinking of. ?e.g. something that will close the qt
> frontend without closing the kernel. ?Of course, I can do this using
> the --kernel-only flag now and then connect to that kernel, but this
> requires planning ahead, something I do my best to avoid :)

Actually your wish is not hard to implement: it would just take a
little bit of Qt code to add a checkbox to the exit confirmation
dialog, saying 'shut kernel down on exit'.  To simply detach from the
running kernel you would just uncheck this box and that would be it.
The code would then avoid calling the kernel shutdown on exit.
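
Something along these lines (an untested sketch; the widget and names are
just illustrative, not the actual console code):

from PyQt4 import QtGui

class CloseDialog(QtGui.QDialog):
    def __init__(self, parent=None):
        super(CloseDialog, self).__init__(parent)
        layout = QtGui.QVBoxLayout(self)
        layout.addWidget(QtGui.QLabel('Close this console?'))
        # Unchecking this would mean 'detach': close the frontend but
        # leave the kernel running.
        self.shutdown_kernel = QtGui.QCheckBox('Shut kernel down on exit')
        self.shutdown_kernel.setChecked(True)
        layout.addWidget(self.shutdown_kernel)
        buttons = QtGui.QDialogButtonBox(QtGui.QDialogButtonBox.Ok |
                                         QtGui.QDialogButtonBox.Cancel)
        buttons.accepted.connect(self.accept)
        buttons.rejected.connect(self.reject)
        layout.addWidget(buttons)

On accept, the frontend would only request a kernel shutdown if
shutdown_kernel.isChecked() returns True.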

I know Evan did some work to ensure that under normal circumstances a
kernel doesn't survive the lifespan of the client (to prevent zombie
kernel processes when started from a GUI), but it's possible to make
this logic optional.

In summary: very reasonable wishlist item, just not implemented yet.

>
> At any rate, as you say, this will hopefully become much more natural
> to implement when the parallel framework is in place - I just wanted
> to point out this particular use-case that I would find very useful.
>
>>> As soon as you have a standalone GUI that feels a terminal, exposing a
>>> keybinding API becomes important. ?We should probably ape someone else's model
>>> for this, so as to minimize relearning for users. Do you have any examples of
>>> nicely customizable apps for us to look at?
>>
>> http://code.enthought.com/projects/traits/docs/html/TUIUG/factories_advanced_extra.html#keybindingeditor
>>
>> <wink> ?;-)
>
> This is indeed the sort of thing I had in mind, but it seems natural
> to me to expose it as an option for an ipython profile file or
> something similar in the .ipython directory given that's currently
> where all the configuration goes. Presumably the Traits Qt backend can
> be examined to figure out how to do this?

Yes, I hope that even before anything that fancy, we'll at least have
basic text-file-based configuration available.  Keybindings should be
part of that (though they may require a bit more work than other,
simpler options; we'll see).  Git branches from enterprising Qt users
are welcome :)

Cheers,

f


From ellisonbg at gmail.com  Thu Oct  7 21:20:23 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 7 Oct 2010 18:20:23 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
	<AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
Message-ID: <AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>

On Thu, Oct 7, 2010 at 6:15 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Thu, Oct 7, 2010 at 6:08 PM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
>> This makes sense, although the more typical use case for me is wanting
>> to start up ipythonqt normally, executing something that will take a
>> while, and realizing this only *after* it's running. ?So an additional
>> key-binding along the lines of the "detatch" function in screen is
>> what I was thinking of. ?e.g. something that will close the qt
>> frontend without closing the kernel. ?Of course, I can do this using
>> the --kernel-only flag now and then connect to that kernel, but this
>> requires planning ahead, something I do my best to avoid :)
>
> Actually your wish is not hard to implement: it would just take a
> little bit of Qt code to add a checkbox to the exit confirmation
> dialog, saying 'shut kernel down on exit'. ?To simply detach from the
> running kernel you would just uncheck this box and that would be it.
> The code would then avoid calling the kernel shutdown on exit.
>
> I know Evan did some work to ensure that under normal circumstances a
> kernel doesn't survive the lifespan of the client (to prevent zombie
> kernel processes when started from a GUI), but it's possible to make
> this logic optional.

The only difficulty is that the kernel has to know, when it is started,
whether it can outlive its frontend.  This is the logic that makes sure we
don't get zombie kernels floating around - we don't want to turn that
logic off under normal circumstances.  We will have to think carefully
about this.

Cheers,

Brian

> In summary: very reasonable wishlist item, just not implemented yet.
>
>>
>> At any rate, as you say, this will hopefully become much more natural
>> to implement when the parallel framework is in place - I just wanted
>> to point out this particular use-case that I would find very useful.
>>
>>>> As soon as you have a standalone GUI that feels a terminal, exposing a
>>>> keybinding API becomes important. ?We should probably ape someone else's model
>>>> for this, so as to minimize relearning for users. Do you have any examples of
>>>> nicely customizable apps for us to look at?
>>>
>>> http://code.enthought.com/projects/traits/docs/html/TUIUG/factories_advanced_extra.html#keybindingeditor
>>>
>>> <wink> ?;-)
>>
>> This is indeed the sort of thing I had in mind, but it seems natural
>> to me to expose it as an option for an ipython profile file or
>> something similar in the .ipython directory given that's currently
>> where all the configuration goes. Presumably the Traits Qt backend can
>> be examined to figure out how to do this?
>
> Yes, I hope even before something that fancy, at least we'll have
> basic text-file-based configuration available. ?Keybindings should be
> part of that (though they may require a bit more work than other
> simpler options, we'll see). ?Git branches from enterprising Qt users
> welcome :)
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Thu Oct  7 21:26:59 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 7 Oct 2010 18:26:59 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
	<AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
	<AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>
Message-ID: <AANLkTimPzXVt5m8hgXE-OJnsS0RnC1evLGAcu43=czo=@mail.gmail.com>

On Thu, Oct 7, 2010 at 6:20 PM, Brian Granger <ellisonbg at gmail.com> wrote:
>
> The only difficulty is that the kernel has to know when it is started
> if is can outlive its frontend. ?This is the logic that makes sure we
> don't get zombie kernel floating around - we don't want to turn that
> logic off under normal curcumstances. ?We will have to think carefully
> about this.
>

We could allow the user to disable the auto-destruct behavior on
startup.  Basically advanced users who know they'll have to clean up
their kernels manually could disable this logic on startup. Then at
least they'd have the choice of disconnecting the frontend later on if
they so desire.

That wouldn't be much of an issue for kernels started manually at a
terminal, since you can always kill the kernel right there.  The
automatic logic is really important once clients are started from a
gui/menu, where there's no easy/obvious way to find a zombie kernel
short of low-level process identification.

But I think that making that logic optional with a startup flag is a
reasonable compromise: only advanced users will knowingly disable it,
and it won't cause zombie kernels under normal use.
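
As a rough illustration of such a flag (the name is made up, and our real
options go through IPython's config system rather than optparse):

import optparse

parser = optparse.OptionParser()
parser.add_option('--no-kernel-autoshutdown', action='store_true',
                  default=False,
                  help='advanced: do not shut the kernel down when the last '
                       'frontend exits; you must clean it up yourself')
opts, args = parser.parse_args()
# Only install the frontend-exit -> kernel-shutdown logic when the flag is
# not given, i.e. under normal use.
auto_shutdown = not opts.no_kernel_autoshutdown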

Cheers,

f


From benjaminrk at gmail.com  Thu Oct  7 22:20:36 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 7 Oct 2010 19:20:36 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTimPzXVt5m8hgXE-OJnsS0RnC1evLGAcu43=czo=@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
	<AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
	<AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>
	<AANLkTimPzXVt5m8hgXE-OJnsS0RnC1evLGAcu43=czo=@mail.gmail.com>
Message-ID: <AANLkTikhKK5Wr_+-apvzXAhR7hTK2H2zkk8Fvc+DWsBT@mail.gmail.com>

I don't think it's as hard as you make it sound.

I just changed the close dialog so it has 3 options:
a) full shutdown,
b) only close console,
c) Cancel; forget we ever met.

I only added case b), and it's just a few lines.

see here:
http://github.com/minrk/ipython/commit/cdb78a95f99540790cdf7960e52941d2ef1af2a3

The only thing that *doesn't* seem to work is that you have to ctrl-C or
some such to terminate the original Qt process if you close the original
console.  Other consoles can shut down the kernel later, but the process
doesn't die.

Note: this is really a proof of concept.  Yes/No/Cancel is generally not a
good dialog if you want things to be clear.

-MinRK

On Thu, Oct 7, 2010 at 18:26, Fernando Perez <fperez.net at gmail.com> wrote:

> On Thu, Oct 7, 2010 at 6:20 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> >
> > The only difficulty is that the kernel has to know when it is started
> > if is can outlive its frontend.  This is the logic that makes sure we
> > don't get zombie kernel floating around - we don't want to turn that
> > logic off under normal curcumstances.  We will have to think carefully
> > about this.
> >
>
> We could allow the user to disable the auto-destruct behavior on
> startup.  Basically advanced users who know they'll have to clean up
> their kernels manually could disable this logic on startup. Then at
> least they'd have the choice of disconnecting the frontend later on if
> they so desire.
>
> That wouldn't be much of an issue for kernels started manually at a
> terminal, since you can always kill the kernel right there.  The
> automatic logic is really important once clients are started from a
> gui/menu, where there's no easy/obvious way to find a zombie kernel
> short of low-level process identification.
>
> But I think that making that logic optional with a startup flag is a
> reasonable compromise: only advanced users will knowingly disable it,
> and it won't cause zombie kernels under normal use.
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101007/4c321fb6/attachment.html>

From hans_meine at gmx.net  Fri Oct  8 11:58:33 2010
From: hans_meine at gmx.net (Hans Meine)
Date: Fri, 8 Oct 2010 17:58:33 +0200
Subject: [IPython-dev] Problems with IPython's git repo
Message-ID: <201010081758.33452.hans_meine@gmx.net>

Hi everybody,

I have been using ipython from git for a long time already, but today when I 
wanted to try out the merged newkernel stuff, I got this:

# python setup.py develop --prefix `local_prefix`                           
Traceback (most recent call last):
  File "setup.py", line 50, in <module>
    from IPython.utils.path import target_update
  File "/informatik/home/meine/Programming/ipython/IPython/__init__.py", line 
40, in <module>
    from .config.loader import Config
  File "/informatik/home/meine/Programming/ipython/IPython/config/loader.py", 
line 26, in <module>
    from IPython.utils.path import filefind
OverflowError: modification time overflows a 4 byte field

It turns out that the 0-byte __init__.py's had very strange timestamps:

Jun 26  1927 ./IPython/frontend/qt/console/tests/__init__.py
Jun 26  1927 ./IPython/frontend/qt/console/__init__.py
Jun 26  1927 ./IPython/frontend/qt/__init__.py
Jun 26  1927 ./IPython/frontend/terminal/__init__.py
Jun 26  1927 ./IPython/frontend/terminal/tests/__init__.py
Jul 26  1978 ./IPython/frontend/__init__.py
Jul 26  1978 ./IPython/extensions/tests/__init__.py
Jul 26  1978 ./IPython/deathrow/tests/__init__.py
Jul 26  1978 ./IPython/deathrow/__init__.py
Jul 26  1978 ./IPython/core/__init__.py
Jul 26  1978 ./IPython/core/tests/__init__.py
Jul 26  1978 ./IPython/testing/plugin/__init__.py
Jun 26  1927 ./IPython/zmq/tests/__init__.py
Mär 10  1995 ./IPython/zmq/__init__.py
Jun 26  1927 ./IPython/zmq/pylab/__init__.py
Jul 26  1978 ./IPython/config/default/__init__.py
Jul 26  1978 ./IPython/config/profile/__init__.py
Jul 26  1978 ./IPython/config/tests/__init__.py
Jul 26  1978 ./IPython/lib/tests/__init__.py
Jul 26  1978 ./IPython/utils/tests/__init__.py
Jul 26  1978 ./IPython/utils/__init__.py
Jul 26  1978 ./IPython/quarantine/tests/__init__.py
Jul 26  1978 ./IPython/quarantine/__init__.py
Jul 26  1978 ./IPython/scripts/__init__.py

I deleted them, did a "git reset --hard", and got all of them re-created.. 
with an even better timestamp:

Sep 16  1964 ./IPython/frontend/qt/console/tests/__init__.py
Sep 16  1964 ./IPython/frontend/qt/console/__init__.py
Sep 16  1964 ./IPython/frontend/qt/__init__.py
Sep 16  1964 ./IPython/frontend/terminal/__init__.py
...

Of course I can simply touch them, but what's the point?  I am struggling 
enough with git itself, I don't need these funny errors (guess how many 
minutes it took me to find out why setup.py failed in the first place..)!

Best,
  Hans


From matthew.brett at gmail.com  Fri Oct  8 13:41:45 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 8 Oct 2010 10:41:45 -0700
Subject: [IPython-dev] Problems with IPython's git repo
In-Reply-To: <201010081758.33452.hans_meine@gmx.net>
References: <201010081758.33452.hans_meine@gmx.net>
Message-ID: <AANLkTi=__bxg+eqaKZYW4jDaiTWnBoUgiKM-VgWFXsHZ@mail.gmail.com>

Hi,

> It turns out that the 0-byte __init__.py's had very strange timestamps:
>
> Jun 26 ?1927 ./IPython/frontend/qt/console/tests/__init__.py

That's strange - when I do a checkout I get the current date and time
as I was expecting.  Is your clock OK?  What system are you on?

Best,

Matthew


From fperez.net at gmail.com  Fri Oct  8 13:45:59 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 8 Oct 2010 10:45:59 -0700
Subject: [IPython-dev] Problems with IPython's git repo
In-Reply-To: <AANLkTi=__bxg+eqaKZYW4jDaiTWnBoUgiKM-VgWFXsHZ@mail.gmail.com>
References: <201010081758.33452.hans_meine@gmx.net>
	<AANLkTi=__bxg+eqaKZYW4jDaiTWnBoUgiKM-VgWFXsHZ@mail.gmail.com>
Message-ID: <AANLkTin1TNNr1QzfKOYCTnHOmH_wvH=2P+La24wx7qVR@mail.gmail.com>

Hi Hans,

On Fri, Oct 8, 2010 at 10:41 AM, Matthew Brett <matthew.brett at gmail.com> wrote:
>
>> Jun 26 ?1927 ./IPython/frontend/qt/console/tests/__init__.py
>
> That's strange - when I do a checkout I get the current date and time
> as I was expecting. ?Is your clock OK? ?What system are you on?
>

I also get normal timestamps, both on my existing git repo and on a
freshly cloned one.

Is it possible you have a problem with your system/os that's not
actually git related at all?

Cheers,

f


From epatters at enthought.com  Fri Oct  8 15:43:43 2010
From: epatters at enthought.com (Evan Patterson)
Date: Fri, 8 Oct 2010 12:43:43 -0700
Subject: [IPython-dev] Mostly for Evan: shift-right not working?
In-Reply-To: <AANLkTikWFZ8_q_wmmCfF=2vZAP+=Y9Td8P1xP1YC5eBe@mail.gmail.com>
References: <AANLkTi=Hj+z9B4GqM6Yx7Gy0266ZJK=s8BUG3OY=6LOa@mail.gmail.com>
	<AANLkTinAc=ni9SqziWsjMQ39pXa2VSXxjLkN-vCMyZYc@mail.gmail.com>
	<AANLkTi=WLqZZaUvwzWmv4ALjzdUyn6eihx7iGfqrbvg5@mail.gmail.com>
	<AANLkTikWFZ8_q_wmmCfF=2vZAP+=Y9Td8P1xP1YC5eBe@mail.gmail.com>
Message-ID: <AANLkTi=-XuLUYc_ohxxg+tHxwY2=gJicP6PCK7QKHgrb@mail.gmail.com>

On Thu, Oct 7, 2010 at 5:25 PM, Fernando Perez <fperez.net at gmail.com> wrote:

> On Thu, Oct 7, 2010 at 5:10 PM, MinRK <benjaminrk at gmail.com> wrote:
> > There was also an issue with selection continuing from one line to
> another.
> > Both seem to be fixed with this:
> >
> http://github.com/minrk/ipython/commit/66596366de6ccf5e7f56f9db434b877fd93539d0
>
> Great, thanks!  Reviewed, tested, merged and pushed.
>
> If Evan sees any issues we can fine-tune it, but I read a little bit
> about the anchor mode and it looks totally OK to me.  Testing also
> shows that it does fix the behavior.
>

Looks fine to me. Thanks for fixing this, Min.

Evan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101008/0fd4f946/attachment.html>

From fperez.net at gmail.com  Fri Oct  8 18:37:22 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 8 Oct 2010 15:37:22 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTikhKK5Wr_+-apvzXAhR7hTK2H2zkk8Fvc+DWsBT@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
	<AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
	<AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>
	<AANLkTimPzXVt5m8hgXE-OJnsS0RnC1evLGAcu43=czo=@mail.gmail.com>
	<AANLkTikhKK5Wr_+-apvzXAhR7hTK2H2zkk8Fvc+DWsBT@mail.gmail.com>
Message-ID: <AANLkTinPr9Kd_hbOnP4JbX_0snPizK8UsKARzbWRGkuy@mail.gmail.com>

On Thu, Oct 7, 2010 at 7:20 PM, MinRK <benjaminrk at gmail.com> wrote:
> I don't think it's as hard as you make it sound.
> I just changed the close dialog so it has 3 options:
> a) full shutdown,
> b) only close console,
> c) Cancel; forget we ever met.
> I only added case b), and it's just a few lines.
> see here:
> http://github.com/minrk/ipython/commit/cdb78a95f99540790cdf7960e52941d2ef1af2a3
> The only thing that *doesn't* seem to work, is that you have to ctrl-C or
> some such to terminate the original Qt process if you close the original
> console. ?Other consoles can shutdown the kernel later, but the process
> doesn't die.
> Note: this is really a proof of concept. ?Yes/No/Cancel is generally not a
> good dialog if you want things to be clear.

Thanks for the test.  The problem you note could perhaps be that you're
not emitting the right signal?  I'm not sure, my Qt-fu is pretty
limited.

But the reason I said it could be a startup flag was because I had
understood that something had to be done differently *at
initialization* if we wanted to bypass the logic Evan had added.

Was I mistaken in that understanding?  I didn't write that code so I'm
not sure right now...

Cheers,

f


From benjaminrk at gmail.com  Fri Oct  8 18:41:01 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 8 Oct 2010 15:41:01 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTinPr9Kd_hbOnP4JbX_0snPizK8UsKARzbWRGkuy@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
	<AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
	<AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>
	<AANLkTimPzXVt5m8hgXE-OJnsS0RnC1evLGAcu43=czo=@mail.gmail.com>
	<AANLkTikhKK5Wr_+-apvzXAhR7hTK2H2zkk8Fvc+DWsBT@mail.gmail.com>
	<AANLkTinPr9Kd_hbOnP4JbX_0snPizK8UsKARzbWRGkuy@mail.gmail.com>
Message-ID: <AANLkTim5uXfBA0546nSzUaWiJcAAc7604gwXwXo5Rd4Z@mail.gmail.com>

On Fri, Oct 8, 2010 at 15:37, Fernando Perez <fperez.net at gmail.com> wrote:

> On Thu, Oct 7, 2010 at 7:20 PM, MinRK <benjaminrk at gmail.com> wrote:
> > I don't think it's as hard as you make it sound.
> > I just changed the close dialog so it has 3 options:
> > a) full shutdown,
> > b) only close console,
> > c) Cancel; forget we ever met.
> > I only added case b), and it's just a few lines.
> > see here:
> >
> http://github.com/minrk/ipython/commit/cdb78a95f99540790cdf7960e52941d2ef1af2a3
> > The only thing that *doesn't* seem to work, is that you have to ctrl-C or
> > some such to terminate the original Qt process if you close the original
> > console.  Other consoles can shutdown the kernel later, but the process
> > doesn't die.
> > Note: this is really a proof of concept.  Yes/No/Cancel is generally not
> a
> > good dialog if you want things to be clear.
>
> Thanks for the test.  The problem you note may be perhaps that you're
> not emitting the right signal?  I'm not sure, my Qt-fu is pretty
> limited.
>
> But the reason I said it could be a startup flag was because I had
> understood that something had to be done differently *at
> initialization* if we wanted to bypass the logic Evan had added.
>
> Was I mistaken in that understanding?  I didn't write that code so I'm
> not sure right now...
>

It doesn't necessarily have to be done differently at startup, because you
can 'destroy' a widget at any point, leaving the process alive.

I just updated my keepkernel branch with a couple things:

1) fixed error when resetting pykernel (it still reset, but printed an
error)
2) fixed issue where closing a frontend, even secondary ones, always
shut down the kernel
3) shutdown_reply now goes out on the pub socket, so all clients are
notified
   3.a) this means that all clients can (and do) refresh the screen when a
reset is called, just like the master frontend
4) kernel can stay alive after consoles are shut down, and can be shut down
by any frontend at any point
   4.a) this means that a shutdown request from any frontend can close all
open frontends and the kernel, even if the kernel is detached, leaving no
processes zombified.
   4.b) 4.a required that a 'reset' element be added to
shutdown_request/reply messages to identify the difference between a real
shutdown message and stage 1 of a reset (a rough sketch of such a message
follows below).

http://github.com/minrk/ipython/commits/keepkernel/
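
To make 4.b concrete, here is roughly what such a message might carry (the
field names are my paraphrase of the branch, not a spec):

shutdown_request = {
    'msg_type': 'shutdown_request',
    # True when this is just stage 1 of a reset (the kernel comes back),
    # False when the whole session should really go away.
    'content': {'reset': False},
}

The reply carries the same flag, so every frontend listening on the pub
socket can tell a reset apart from a real shutdown.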

-MinRK



>
> Cheers,
>
> f
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101008/ba5f3eb6/attachment.html>

From fperez.net at gmail.com  Fri Oct  8 18:52:22 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 8 Oct 2010 15:52:22 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTim5uXfBA0546nSzUaWiJcAAc7604gwXwXo5Rd4Z@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
	<AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
	<AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>
	<AANLkTimPzXVt5m8hgXE-OJnsS0RnC1evLGAcu43=czo=@mail.gmail.com>
	<AANLkTikhKK5Wr_+-apvzXAhR7hTK2H2zkk8Fvc+DWsBT@mail.gmail.com>
	<AANLkTinPr9Kd_hbOnP4JbX_0snPizK8UsKARzbWRGkuy@mail.gmail.com>
	<AANLkTim5uXfBA0546nSzUaWiJcAAc7604gwXwXo5Rd4Z@mail.gmail.com>
Message-ID: <AANLkTimck5+Yob=0gJBbRZ=n4_udkzNo7wUG8w0NpnWe@mail.gmail.com>

On Fri, Oct 8, 2010 at 3:41 PM, MinRK <benjaminrk at gmail.com> wrote:
> It doesn't necessarily have to be done differently at startup, because you
> can 'destroy' a widget at any point, leaving the process alive.

Sorry, do you mean leaving the client process or the kernel process alive?

> I just updated my keepkernel branch with a couple things:
> 1) fixed error when resetting pykernel (it still reset, but printed an
> error)
> 2) fixed issue where closing a frontend, even secondary ones, always
> shutdown the kernel
> 3) shutdown_reply now goes out on the pub socket, so all clients are
> notified
> ?? 3.a) this means that all clients can (and do) refresh the screen when a
> reset is called, just like the master frontend
> 4) kernel can stay alive after consoles are shutdown, and can be shutdown by
> any frontend at any point
> ?? 4.a) this means that a shutdown request from any frontend can close all
> open frontends and the kernel, even if the kernel is detached, leaving no
> processes zombified.
> ?? 4.b) 4.a required that a 'reset' element be added to
> shutdown_request/reply messages to identify the difference between a real
> shutdown message and stage 1 of a reset.
> http://github.com/minrk/ipython/commits/keepkernel/

Great!  Do you want this for inclusion now?  Initially you meant it
purely as a proof of concept, but at this point it's getting to be
useful functionality :)

I'll trade you a review for a review of my current pull request, which
fixes and cleans up a ton of nasty execution logic:

http://github.com/ipython/ipython/pull/163

:)  The key question in yours is whether it leaves the Qt client
process zombified or not, I'm not quite clear on that point yet.

cheers,

f


From benjaminrk at gmail.com  Fri Oct  8 19:09:34 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 8 Oct 2010 16:09:34 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTimck5+Yob=0gJBbRZ=n4_udkzNo7wUG8w0NpnWe@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
	<AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
	<AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>
	<AANLkTimPzXVt5m8hgXE-OJnsS0RnC1evLGAcu43=czo=@mail.gmail.com>
	<AANLkTikhKK5Wr_+-apvzXAhR7hTK2H2zkk8Fvc+DWsBT@mail.gmail.com>
	<AANLkTinPr9Kd_hbOnP4JbX_0snPizK8UsKARzbWRGkuy@mail.gmail.com>
	<AANLkTim5uXfBA0546nSzUaWiJcAAc7604gwXwXo5Rd4Z@mail.gmail.com>
	<AANLkTimck5+Yob=0gJBbRZ=n4_udkzNo7wUG8w0NpnWe@mail.gmail.com>
Message-ID: <AANLkTi=MV1DmGAY7WpEceDfV2K56agKRQiSZgDDdWWym@mail.gmail.com>

On Fri, Oct 8, 2010 at 15:52, Fernando Perez <fperez.net at gmail.com> wrote:

> On Fri, Oct 8, 2010 at 3:41 PM, MinRK <benjaminrk at gmail.com> wrote:
> > It doesn't necessarily have to be done differently at startup, because
> you
> > can 'destroy' a widget at any point, leaving the process alive.
>
> Sorry, do you mean leaving the client process or the kernel process alive?
>

It leaves the parent process alive, but destroys the widget.  What would
require some different behavior at the top level would be to allow the
kernel to persist beyond its parent process.  The original KernelManager
lives in that process. I haven't changed this behavior at all, I just
destroy the window *without* ending the process.  Then, when another window
issues a shutdown command, the original parent process gets shut down as well
as the kernel subprocess.


>
> > I just updated my keepkernel branch with a couple things:
> > 1) fixed error when resetting pykernel (it still reset, but printed an
> > error)
> > 2) fixed issue where closing a frontend, even secondary ones, always
> > shutdown the kernel
> > 3) shutdown_reply now goes out on the pub socket, so all clients are
> > notified
> >    3.a) this means that all clients can (and do) refresh the screen when
> a
> > reset is called, just like the master frontend
> > 4) kernel can stay alive after consoles are shutdown, and can be shutdown
> by
> > any frontend at any point
> >    4.a) this means that a shutdown request from any frontend can close
> all
> > open frontends and the kernel, even if the kernel is detached, leaving no
> > processes zombified.
> >    4.b) 4.a required that a 'reset' element be added to
> > shutdown_request/reply messages to identify the difference between a real
> > shutdown message and stage 1 of a reset.
> > http://github.com/minrk/ipython/commits/keepkernel/
>
> Great!  Do you want this for inclusion now?  Initially you meant it
> purely as a proof of concept, but at this point it's getting to be
> useful functionality :)
>

It can be merged now. The main thing that's not solid is the close dialog.
 It's not obvious to me how to present the options clearly and concisely.

Any suggestions for prompt and button text?
Cancel: do nothing (easy)
Option 1: close only the console
Option 2: shutdown the entire session, including the kernel and all other
frontends.


> I'll trade you a review for a review of my current pull request, that
> fixes and cleans up ton of nasty execution logic:
>
> http://github.com/ipython/ipython/pull/163


Sure, I'll start looking it over now.  Mine's here:

http://github.com/ipython/ipython/pull/164



>
> :)  The key question in yours is whether it leaves the Qt client
> process zombified or not, I'm not quite clear on that point yet.
>

Nothing really gets zombified, because the original kernel does not get
detached from its parent.  If you close all the consoles, you can still stop
the kernel with ctrl-C in the original terminal window.

However, if you start the original kernel with a GUI launch method (i.e. not
attached to a terminal), you can close all of the consoles, leaving the
kernel around waiting for frontends.  This is sort of like a zombie since
there is little evidence that it's still around, but it's still fully
functional and fully valid (you can connect frontends to it later). Note
that you have to specifically choose to leave the kernel alive every time
you close a console to do this, so I'm okay with that level of clarity.

There could be something wrong in my code, since this (and the fixes
yesterday) is my first Qt code ever, but it seems pretty straightforward.

-MinRK


> cheers,
>
> f
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101008/b4236d5c/attachment.html>

From fperez.net at gmail.com  Sat Oct  9 02:32:48 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 8 Oct 2010 23:32:48 -0700
Subject: [IPython-dev] New ipythonqt console questions/feedback
In-Reply-To: <AANLkTi=MV1DmGAY7WpEceDfV2K56agKRQiSZgDDdWWym@mail.gmail.com>
References: <AANLkTi=1QzLZ+MME=Mq8TC+chG-Awqf3+rk96DCCwSk2@mail.gmail.com>
	<AANLkTi=ikkLpRVzztVYK4Q3bBXX=qeMD+iF04mbcK9_6@mail.gmail.com>
	<AANLkTikGxLfnF9Tu8vAooGOJTHShrcz++h5Hagc4PLTg@mail.gmail.com>
	<AANLkTimWUz35FFuCzH-wq7Lt6J_476KxnkCs6+aOnF_m@mail.gmail.com>
	<i8j0kh$381$2@dough.gmane.org>
	<AANLkTimFCiQJOXkfsQ0ZUc3SnTKfD+wQ3yBJg=hnNSoR@mail.gmail.com>
	<AANLkTimvAkQqq_M5xTqcYGtLP2H8AGvdnEFXFu7rAw5P@mail.gmail.com>
	<AANLkTimO0jzKOSFu3GojMNcKhM=31syPt+UjuGwWdBpt@mail.gmail.com>
	<AANLkTikATmPFwgfVRsXKmdn-ZcOXdkGTpuKX91jZViO3@mail.gmail.com>
	<AANLkTimPzXVt5m8hgXE-OJnsS0RnC1evLGAcu43=czo=@mail.gmail.com>
	<AANLkTikhKK5Wr_+-apvzXAhR7hTK2H2zkk8Fvc+DWsBT@mail.gmail.com>
	<AANLkTinPr9Kd_hbOnP4JbX_0snPizK8UsKARzbWRGkuy@mail.gmail.com>
	<AANLkTim5uXfBA0546nSzUaWiJcAAc7604gwXwXo5Rd4Z@mail.gmail.com>
	<AANLkTimck5+Yob=0gJBbRZ=n4_udkzNo7wUG8w0NpnWe@mail.gmail.com>
	<AANLkTi=MV1DmGAY7WpEceDfV2K56agKRQiSZgDDdWWym@mail.gmail.com>
Message-ID: <AANLkTi=FZom_VdKo1+P0fOtX6OBUJqzHAqRgLU=FB27z@mail.gmail.com>

Hey,

On Fri, Oct 8, 2010 at 4:09 PM, MinRK <benjaminrk at gmail.com> wrote:
>
>
> On Fri, Oct 8, 2010 at 15:52, Fernando Perez <fperez.net at gmail.com> wrote:
>>
>> On Fri, Oct 8, 2010 at 3:41 PM, MinRK <benjaminrk at gmail.com> wrote:
>> > It doesn't necessarily have to be done differently at startup, because
>> > you
>> > can 'destroy' a widget at any point, leaving the process alive.
>>
>> Sorry, do you mean leaving the client process or the kernel process alive?
>
> It leaves the parent process alive, but destroys the widget. ?What would
> require some different behavior at the top level would be to allow the
> kernel to persist beyond its parent process. ?The original KernelManager
> lives in that process. I haven't changed this behavior at all, I just
> destroy the window *without* ending the process. ?Then, when another window
> issues a shutdown command, the original parent process gets shutdown as well
> as the kernel subprocess.

OK, thanks for the clarification.

That actually sounds like a reasonable approach to me.  No user will
do this by accident, and it's a very useful feature to have.  I'll go
over the code tomorrow to review it.

>> Great! ?Do you want this for inclusion now? ?Initially you meant it
>> purely as a proof of concept, but at this point it's getting to be
>> useful functionality :)
>
> It can be merged now. The main thing that's not solid is the close dialog.
> ?It's not obvious to me how to present the options clearly and concisely.
> Any suggestions for prompt and button text?
> Cancel: do nothing (easy)
> Option 1: close only the console
> Option 2: shutdown the entire session, including the kernel and all other
> frontends.

How about the dialog:

#
"Close console and/or kernel?"

[Cancel] [Close console] [Close both]
#
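
In Qt terms that could be as simple as the following (a sketch only; the
button text is just my suggestion above, not existing code):

from PyQt4 import QtGui

def confirm_close(parent=None):
    box = QtGui.QMessageBox(parent)
    box.setText('Close console and/or kernel?')
    box.addButton(QtGui.QMessageBox.Cancel)
    close_console = box.addButton('Close console', QtGui.QMessageBox.AcceptRole)
    close_both = box.addButton('Close both', QtGui.QMessageBox.DestructiveRole)
    box.setDefaultButton(close_console)
    box.exec_()
    if box.clickedButton() is close_both:
        return 'both'      # shut down the kernel and all frontends
    elif box.clickedButton() is close_console:
        return 'console'   # close only this frontend
    return None            # cancelled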

>> :) ?The key question in yours is whether it leaves the Qt client
>> process zombified or not, I'm not quite clear on that point yet.
>
> Nothing really gets zombified, because the original kernel does not get
> detached from its parent. ?If you close all the consoles, you can still stop
> the kernel with ctrl-C in the original terminal window.
> However, if you start the original kernel with a GUI launch method (i.e. not
> attached to a terminal), you can close all of the consoles, leaving the
> kernel around waiting for frontends. ?This is sort of like a zombie since
> there is little evidence that it's still around, but it's still fully
> functional and fully valid (you can connect frontends to it later). Note
> that you have to specifically choose to leave the kernel alive every time
> you close a console to do this, so I'm okay with that level of clarity.

I'm thinking each kernel should leave a file in ~/.ipython with its
pid and port info.  This would make it easy to later have a tool that
can read this info and either connect to existing kernels or clean
them up.  What do you think?
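
Something as simple as this would do (the file name and keys below are made
up for illustration, not an existing format):

import json, os

def write_kernel_record(ports, ipython_dir=os.path.expanduser('~/.ipython')):
    # ports: dict mapping socket name -> port number for this kernel
    record = dict(ports, pid=os.getpid())
    path = os.path.join(ipython_dir, 'kernel-%i.json' % os.getpid())
    with open(path, 'w') as f:
        json.dump(record, f)
    return path

A companion tool could then list these files, check whether each pid is still
alive, and offer to reconnect to or kill the corresponding kernel.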

Cheers,

f

> There could be something wrong in my code, since this (and the fixes
> yesterday) is my first Qt code ever, but it seems pretty straightforward.


From fperez.net at gmail.com  Sat Oct  9 02:47:27 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 8 Oct 2010 23:47:27 -0700
Subject: [IPython-dev] Printing support enabled
Message-ID: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>

Hi folks,

from my presentation at UC Berkeley, we received right away a
contribution to enable printing!  I just now got some time to review
the patch, and it works great, so I've pushed it:

http://github.com/ipython/ipython/commit/0784973fad497ae51da3d70830d1188179bb237a

You can now print an entire session, graphs, syntax highlights and
all, to PDF (see attached).

Many thanks to Mark Voorhies!

Cheers,

f
-------------- next part --------------
A non-text attachment was scrubbed...
Name: job_1-untitled_document.pdf
Type: application/pdf
Size: 68058 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101008/712ee46e/attachment.pdf>

From ellisonbg at gmail.com  Sat Oct  9 10:22:10 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sat, 9 Oct 2010 07:22:10 -0700
Subject: [IPython-dev] Printing support enabled
In-Reply-To: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>
References: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>
Message-ID: <AANLkTikQ8yCqUMZ7MJgXPQqz6BPrQDPmhk1QYTBebDCc@mail.gmail.com>

Fernando and Mark,

> from my presentation at UC Berkeley, we received right away a
> contribution to enable printing! ?I just now got some time to review
> the patch, and it works great, so I've pushed it:
>
> http://github.com/ipython/ipython/commit/0784973fad497ae51da3d70830d1188179bb237a
>
> You can now print an entire session, graphs, syntax highlights and
> all, to PDF (see attached).

This is fantastic and will be used a ton by everyone!

> Many thanks to Mark Voorhies!

Definitely, thanks Mark!

Brian

> Cheers,
>
> f
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From takowl at gmail.com  Sat Oct  9 12:45:59 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sat, 9 Oct 2010 17:45:59 +0100
Subject: [IPython-dev] Bug with __del__methods on exit
Message-ID: <AANLkTikuXtmtLW8CPphXK_6pJ2ovoSTibcbSizCtsdL9@mail.gmail.com>

Hi,

I recently found a problem in my python 3 port of ipython, where the __del__
method of objects, called as the program was exiting, could not find global
functions.

On a hunch, I've just tested this in standard ipython, both 0.10 (in Ubuntu)
and trunk. The problem exists in both cases (it only came to light in Python
3 because print is a function). The minimal code to reproduce it is:

class A(object):
    def __del__(self):
        input("ABC")

a = A()
exit()

Which gives: Exception NameError: "global name 'input' is not defined" in
<bound method A.__del__ of <__main__.A object at 0x98634cc>> ignored

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101009/a68dd1ea/attachment.html>

From fperez.net at gmail.com  Sat Oct  9 15:31:19 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 12:31:19 -0700
Subject: [IPython-dev] Bug with __del__methods on exit
In-Reply-To: <AANLkTikuXtmtLW8CPphXK_6pJ2ovoSTibcbSizCtsdL9@mail.gmail.com>
References: <AANLkTikuXtmtLW8CPphXK_6pJ2ovoSTibcbSizCtsdL9@mail.gmail.com>
Message-ID: <AANLkTim16v7p_Br8ysMdEzwhUc+mHODM+TwComd3ej-4@mail.gmail.com>

Hi Thomas,

On Sat, Oct 9, 2010 at 9:45 AM, Thomas Kluyver <takowl at gmail.com> wrote:
> I recently found a problem in my python 3 port of ipython, where the __del__
> method of objects, called as the program was exiting, could not find global
> functions.
>
> On a hunch, I've just tested this in standard ipython, both 0.10 (in Ubuntu)
> and trunk. The problem exists in both cases (it only came to light in Python
> 3 because print is a function). The minimal code to reproduce it is:
>
> class A(object):
> ??? def __del__(self):
> ??????? input("ABC")
>
> a = A()
> exit()
>
> Which gives: Exception NameError: "global name 'input' is not defined" in
> <bound method A.__del__ of <__main__.A object at 0x98634cc>> ignored

This isn't an ipython bug, but a reality of python itself:

dreamweaver[test]> cat objdel.py
class A(object):
    def __del__(self):
        input("ABC")

a = A()
exit()
dreamweaver[test]> python objdel.py
Exception ValueError: 'I/O operation on closed file' in <bound method
A.__del__ of <__main__.A object at 0x7f47551bcf50>> ignored
ABCdreamweaver[test]>

Basically, on exit the state of the interpreter is mostly undefined.
__del__ methods should limit themselves to closing resources they had
acquired:

self.whatever.close()

But they can't expect to access any globals, or even objects in other modules.

If you need to perform actions on exit but that require the
interpreter to be fully functional, use the atexit module and register
your callbacks (ipython uses that).
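
For example (plain Python 2, nothing ipython-specific):

import atexit

def cleanup():
    # Runs before interpreter teardown, so globals and imported modules
    # are still available here, unlike in __del__ during shutdown.
    print "closing resources"

atexit.register(cleanup)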

Cheers,

f


From fperez.net at gmail.com  Sat Oct  9 15:41:11 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 12:41:11 -0700
Subject: [IPython-dev] Printing support enabled
In-Reply-To: <AANLkTikQ8yCqUMZ7MJgXPQqz6BPrQDPmhk1QYTBebDCc@mail.gmail.com>
References: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>
	<AANLkTikQ8yCqUMZ7MJgXPQqz6BPrQDPmhk1QYTBebDCc@mail.gmail.com>
Message-ID: <AANLkTikRvBLdYxoaGy5cu5kaySDUBswY84ricB7o2tu=@mail.gmail.com>

On Sat, Oct 9, 2010 at 7:22 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>
>
> This is fantastic and will be used a ton by everyone!
>
>> Many thanks to Mark Voorhies!
>
> Definitely, thanks Mark!

Indeed. By the way, Mark: do you know if it's equally easy to enable
html generation?  That would be great to have as well...

Cheers,

f


From takowl at gmail.com  Sat Oct  9 19:41:33 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 10 Oct 2010 00:41:33 +0100
Subject: [IPython-dev] Bug with __del__methods on exit
In-Reply-To: <AANLkTim16v7p_Br8ysMdEzwhUc+mHODM+TwComd3ej-4@mail.gmail.com>
References: <AANLkTikuXtmtLW8CPphXK_6pJ2ovoSTibcbSizCtsdL9@mail.gmail.com>
	<AANLkTim16v7p_Br8ysMdEzwhUc+mHODM+TwComd3ej-4@mail.gmail.com>
Message-ID: <AANLkTikZwF6fJozvWeUE-Vqk6V7Zuu+NZJu=PV3QzYTs@mail.gmail.com>

Hi Fernando,

The exception you're seeing is a different one, however. In fact, input was
a bad example: the difference can be better seen with str("something"). The
attached test file runs under python 2.6, but shows a NameError in ipython.

The relevant code is already being called from an atexit callback.
Specifically, it triggers the .reset() method of the
TerminalInteractiveShell object. You can verify this with the following code
in ipython trunk:

class A(object):
    def __del__(self):
        str("Hi")
a = A()
get_ipython().reset()
# Gives: Exception NameError: "global name 'str' is not defined" in <bound
method A.__del__ of <__main__.A object at 0x985004c>> ignored

Thanks,
Thomas

On 9 October 2010 20:31, Fernando Perez <fperez.net at gmail.com> wrote:

> Hi Thomas,
>
> On Sat, Oct 9, 2010 at 9:45 AM, Thomas Kluyver <takowl at gmail.com> wrote:
> > I recently found a problem in my python 3 port of ipython, where the
> __del__
> > method of objects, called as the program was exiting, could not find
> global
> > functions.
> >
> > On a hunch, I've just tested this in standard ipython, both 0.10 (in
> Ubuntu)
> > and trunk. The problem exists in both cases (it only came to light in
> Python
> > 3 because print is a function). The minimal code to reproduce it is:
> >
> > class A(object):
> >     def __del__(self):
> >         input("ABC")
> >
> > a = A()
> > exit()
> >
> > Which gives: Exception NameError: "global name 'input' is not defined" in
> > <bound method A.__del__ of <__main__.A object at 0x98634cc>> ignored
>
> This isn't an ipython bug, but a reality of python itself:
>
> dreamweaver[test]> cat objdel.py
> class A(object):
>    def __del__(self):
>        input("ABC")
>
> a = A()
> exit()
> dreamweaver[test]> python objdel.py
> Exception ValueError: 'I/O operation on closed file' in <bound method
> A.__del__ of <__main__.A object at 0x7f47551bcf50>> ignored
> ABCdreamweaver[test]>
>
> Basically, on exit the sate of the interpreter is mostly undefined.
> Del methods should limit themselves to closing resources they had
> acquired:
>
> self.whatever.close()
>
> But they can't expect to access any globals, or even objects in other
> modules.
>
> If you need to perform actions on exit but that require the
> interpreter to be fully functional, use the atexit module and register
> your callbacks (ipython uses that).
>
> Cheers,
>
> f
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101010/ecf1b355/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test.py
Type: text/x-python
Size: 68 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101010/ecf1b355/attachment.py>

From fperez.net at gmail.com  Sat Oct  9 19:45:38 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 16:45:38 -0700
Subject: [IPython-dev] Printing support enabled
In-Reply-To: <201010091629.42837.mark.voorhies@ucsf.edu>
References: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>
	<AANLkTikQ8yCqUMZ7MJgXPQqz6BPrQDPmhk1QYTBebDCc@mail.gmail.com>
	<AANLkTikRvBLdYxoaGy5cu5kaySDUBswY84ricB7o2tu=@mail.gmail.com>
	<201010091629.42837.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTikC-Zo+iV1Hi-XmB=OtvCku5cCi2FK2S-FeAuXD@mail.gmail.com>

On Sat, Oct 9, 2010 at 4:29 PM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
>
> I think this would be easy. QTextEdit.toHtml() can dump the iPython
> console as HTML, preserving mark-up. ?The tricky bit is the images,
> which get dumped like this:
>
> <img src="867583393794" />
>
> I'm assuming the number is Qt's internal ID for the image -- anyone
> know how to map that back to the SVG/PNG objects in iPython?
>
> Given the image references, should be easy to either embed PNG or SVG
> or do something like Firefox's "Web Page, complete" (obviously, embedding or
> linking SVGs would be prettier, but runs into more browser compatibility
> issues).
>
> If I can figure out how to resolve the IDs, I'll take a crack at this.

Great, thanks!  You may want to look inside the payload handler that
places the inline svgs.  That handler could perhaps get the qt id at
that point, and store it in a dict member of the main object.  Upon
saving, the dict would then easily be used to get back the reference
to the image object, hence being able to write it out to disk in a
foo_files/ directory like firefox does (or embedded in the html for
single-file printouts, even better).

Looking forward to your contributions!

Cheers,

f


From fperez.net at gmail.com  Sat Oct  9 21:28:56 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 18:28:56 -0700
Subject: [IPython-dev] Bug with __del__methods on exit
In-Reply-To: <AANLkTikZwF6fJozvWeUE-Vqk6V7Zuu+NZJu=PV3QzYTs@mail.gmail.com>
References: <AANLkTikuXtmtLW8CPphXK_6pJ2ovoSTibcbSizCtsdL9@mail.gmail.com>
	<AANLkTim16v7p_Br8ysMdEzwhUc+mHODM+TwComd3ej-4@mail.gmail.com>
	<AANLkTikZwF6fJozvWeUE-Vqk6V7Zuu+NZJu=PV3QzYTs@mail.gmail.com>
Message-ID: <AANLkTinQro=0gP6OH1f0UMTGeTYhk8HamxkWuMTFzFXu@mail.gmail.com>

On Sat, Oct 9, 2010 at 4:41 PM, Thomas Kluyver <takowl at gmail.com> wrote:
> The relevant code is already being called from an atexit callback.
> Specifically, it triggers the .reset() method of the
> TerminalInteractiveShell object. You can verify this with the following code
> in ipython trunk:

Thanks for the test case, it proved fairly tricky to track down.
Could you try again with updated trunk?  I think I got it.  And thanks
for the report!

Cheers,

f


From epatters at enthought.com  Sat Oct  9 21:39:20 2010
From: epatters at enthought.com (Evan Patterson)
Date: Sat, 9 Oct 2010 18:39:20 -0700
Subject: [IPython-dev] Printing support enabled
In-Reply-To: <AANLkTikC-Zo+iV1Hi-XmB=OtvCku5cCi2FK2S-FeAuXD@mail.gmail.com>
References: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>
	<AANLkTikQ8yCqUMZ7MJgXPQqz6BPrQDPmhk1QYTBebDCc@mail.gmail.com>
	<AANLkTikRvBLdYxoaGy5cu5kaySDUBswY84ricB7o2tu=@mail.gmail.com>
	<201010091629.42837.mark.voorhies@ucsf.edu>
	<AANLkTikC-Zo+iV1Hi-XmB=OtvCku5cCi2FK2S-FeAuXD@mail.gmail.com>
Message-ID: <AANLkTik4Aq4oY_gpLu69qjMF5S6KCXmNVHdfmYPahz5R@mail.gmail.com>

Fernando has the right idea. Look at the _process_execute_payload,
_add_image, and _get_image methods of RichIPythonWidget to see how QImages
and SVG data are indexed. Since the QTextDocument does not have a mechanism
for retrieving all of its resources, the image IDs will have to be stored
manually in '_add_image'.
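
Roughly, the bookkeeping could look like this (a hedged sketch, not the
actual RichIPythonWidget code; the free function and the name_to_image dict
are only illustrative):

from PyQt4 import QtCore, QtGui

def add_image(document, name_to_image, image):
    # Register the QImage with the QTextDocument and remember it by name,
    # since the document cannot enumerate its resources afterwards.
    name = str(image.cacheKey())              # assumed unique per image
    document.addResource(QtGui.QTextDocument.ImageResource,
                         QtCore.QUrl(name), image)
    name_to_image[name] = image               # the manual index mentioned above
    fmt = QtGui.QTextImageFormat()
    fmt.setName(name)                         # toHtml() later emits <img src=name>
    return fmt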

If you prefer, I can implement this some time in the next week or so, but
feel free to take a stab at it.

Evan

On Sat, Oct 9, 2010 at 4:45 PM, Fernando Perez <fperez.net at gmail.com> wrote:

> On Sat, Oct 9, 2010 at 4:29 PM, Mark Voorhies <mark.voorhies at ucsf.edu>
> wrote:
> >
> > I think this would be easy. QTextEdit.toHtml() can dump the iPython
> > console as HTML, preserving mark-up.  The tricky bit is the images,
> > which get dumped like this:
> >
> > <img src="867583393794" />
> >
> > I'm assuming the number is Qt's internal ID for the image -- anyone
> > know how to map that back to the SVG/PNG objects in iPython?
> >
> > Given the image references, should be easy to either embed PNG or SVG
> > or do something like Firefox's "Web Page, complete" (obviously, embedding
> or
> > linking SVGs would be prettier, but runs into more browser compatibility
> > issues).
> >
> > If I can figure out how to resolve the IDs, I'll take a crack at this.
>
> Great, thanks!  You may want to look inside the payload handler that
> places the inline svgs.  That handler could perhaps get the qt id at
> that point, and store it in a dict member of the main object.  Upon
> saving, the dict would then easily be used to get back the reference
> to the image object, hence being able to write it out to disk in a
> foo_files/ directory like firefox does (or embedded in the html for
> single-file printouts, even better).
>
> Looking forward to your contributions!
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>

From fperez.net at gmail.com  Sat Oct  9 22:48:15 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 19:48:15 -0700
Subject: [IPython-dev] Printing support enabled
In-Reply-To: <201010091905.19622.mark.voorhies@ucsf.edu>
References: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>
	<AANLkTikC-Zo+iV1Hi-XmB=OtvCku5cCi2FK2S-FeAuXD@mail.gmail.com>
	<AANLkTik4Aq4oY_gpLu69qjMF5S6KCXmNVHdfmYPahz5R@mail.gmail.com>
	<201010091905.19622.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTimi6HzXYC6J-Qi6rrwgJJTkp-Tzo64-7=ParO=7@mail.gmail.com>

On Sat, Oct 9, 2010 at 7:05 PM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
>
>
> Yes, looks like format.name() inside of _process_execute_payload gives the ID that I want to map from.
> I think I can do up a decent first pass this weekend.
>

Fantastic, thanks to both of you for this!  I know that this will be a
tremendously useful feature.

Let us know when it's ready to test; I'm eager to use this while
teaching: a tiny amount of code would let the client update an html
file automatically after each execution, and one could trivially serve
that over http to 'broadcast' an interactive session to students.
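
For instance, something along these lines would do it (a rough sketch with
hypothetical names, assuming the client can hand over its current HTML):

import SimpleHTTPServer, SocketServer, threading

def publish(html, path='session.html'):
    # Rewrite the page after every execution; students simply reload it.
    with open(path, 'w') as f:
        f.write(html)

def serve(port=8000):
    # Serve the current directory over http in a background thread.
    handler = SimpleHTTPServer.SimpleHTTPRequestHandler
    httpd = SocketServer.TCPServer(('', port), handler)
    threading.Thread(target=httpd.serve_forever).start()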

John Hunter and I are teaching a workshop in 2 weeks for 2 days at
Claremont University, it would be a blast to have something like this
to play with...

Cheers,

f


From fperez.net at gmail.com  Sun Oct 10 01:04:54 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 22:04:54 -0700
Subject: [IPython-dev] git question
In-Reply-To: <4BF416D2.2000400@bostream.nu>
References: <4BF416D2.2000400@bostream.nu>
Message-ID: <AANLkTikW83UpFZfe81Wjbo-6bQFh_REfR=vPqYvEwy_K@mail.gmail.com>

Hi Jorgen,

On Wed, May 19, 2010 at 9:50 AM, Jörgen Stenarson
<jorgen.stenarson at bostream.nu> wrote:
> I'm trying to get to know git. I have made my on fork on github
> following the instructions in the gitwash document. But when I try to
> commit I get the following error message. What is the recommended way to
> fix this? Should I just set i18n.commitencoding to utf-8? Or should it
> be something else?

I'm sorry that I completely ignored this for a long time.

Did you ever resolve it?  Are you properly set up on git now, or do
you still have problems?  I now have a windows box I can test on and
know my way better around both git and unicode, so I may be able to
help.

Now that it seems we're picking up good momentum on the git/github
workflow, I hope it doesn't cause you any undue problems.

Regards,

f


From fperez.net at gmail.com  Sun Oct 10 01:18:37 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 22:18:37 -0700
Subject: [IPython-dev] Fwd: [GitHub] ipy_vimserver traceback on Windows
	[ipython/ipython GH-153]
In-Reply-To: <4c9bbf69ec2e5_4bbf3fddf33e207c222@fe4.rs.github.com.tmail>
References: <4c9bbf69ec2e5_4bbf3fddf33e207c222@fe4.rs.github.com.tmail>
Message-ID: <AANLkTikJ+mB3od01P=gq_EyCxocHUWvoSpfrZmJfLQN_@mail.gmail.com>

Hi folks,

does the vimserver code run on Windows?  We have  a bug report about
it on the site, and I have no idea what to say, see below...

Thanks

f


---------- Forwarded message ----------
From: GitHub <noreply at github.com>
Date: Thu, Sep 23, 2010 at 1:58 PM
Subject: [GitHub] ipy_vimserver traceback on Windows [ipython/ipython GH-153]
To: fperez.net at gmail.com


whitelynx reported an issue:

On Windows, I receive the following traceback when attempting to load
the ipy_vimserver module in my ipy_user_conf.py:

<pre>
ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (49, 0))

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)

C:\Python27\lib\site-packages\IPython\ipmaker.pyc in
force_import(modname, force_reload)
     61         reload(sys.modules[modname])
     62     else:
---> 63         __import__(modname)
     64
     65

C:\Users\davidbron\_ipython\ipy_user_conf.pyc in <module>()
    412     ip.ex('execfile("%s")' % os.path.expanduser(fname))
    413
--> 414 main()
    415
    416

C:\Users\davidbron\_ipython\ipy_user_conf.pyc in main()
     55
     56
---> 57     import ipy_vimserver
     58     from subprocess import Popen, PIPE
     59

C:\Python27\lib\site-packages\IPython\Extensions\ipy_vimserver.pyc in <module>()
     74 import re
     75
---> 76 ERRCONDS = select.POLLHUP|select.POLLERR
     77 SERVER = None
     78 ip = IPython.ipapi.get()

AttributeError: 'module' object has no attribute 'POLLHUP'
WARNING: Loading of ipy_user_conf failed.
</pre>

Platform: Windows 7 64-bit
Python version: 2.7
IPython version: 0.10
Vim version: 7.2

View Issue: http://github.com/ipython/ipython/issues#issue/153


From fperez.net at gmail.com  Sun Oct 10 01:23:18 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 22:23:18 -0700
Subject: [IPython-dev] IPython 0.10.1 release candidate up for final
	testing
In-Reply-To: <20101006143658.322caea7@earth>
References: <AANLkTi=Lmvd5w=WPybo=9r30j-1XL8FjN6tx9_2Ov6k=@mail.gmail.com>
	<20101006143658.322caea7@earth>
Message-ID: <AANLkTi=FLcyMDUcRX1s4euBUuv_pgVHsksHDL=Smi3T+@mail.gmail.com>

On Wed, Oct 6, 2010 at 5:36 AM, Thomas Spura <tomspur at fedoraproject.org> wrote:
> This Red Hat bug is fixed in the new version:
> https://bugzilla.redhat.com/show_bug.cgi?id=640578
>
> I had some problems with applying the fedora patch for unbundling the
> libraries, but that worked now too. Maybe you want to apply it too,
> before doing the release, but later on git should be enough for
> now... ;-)

I'm afraid it's too late to mess with the code.  I'm going to run one
last set of tests on the rc and ship it if all passes.  I don't want
last-minute changes which always end up breaking something.

I'm sure we'll get a trickle of little things for a 0.10.2 eventually...

Cheers,

f


From fperez.net at gmail.com  Sun Oct 10 01:29:39 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 9 Oct 2010 22:29:39 -0700
Subject: [IPython-dev] IPython handles code input as latin1 instead of
 the system encoding
In-Reply-To: <20100617212840.GO14947@blackpad.lan.raisama.net>
References: <20100617212840.GO14947@blackpad.lan.raisama.net>
Message-ID: <AANLkTi=tOHDRzhz54=gN1gYX0SpsxMP44fuTz=3yfbR3@mail.gmail.com>

Hi Eduardo,

On Thu, Jun 17, 2010 at 2:28 PM, Eduardo Habkost <ehabkost at raisama.net> wrote:
>
> Hi,
>
> I just noticed a problem with non-ascii input in ipython, that can be
> seen below:
>
> Python behavior (expected):
>
> ---------
> $ python
> Python 2.6 (r26:66714, Nov  3 2009, 17:33:38)
> [GCC 4.4.1 20090725 (Red Hat 4.4.1-2)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>>>> import sys, locale
>>>> print sys.stdin.encoding,locale.getdefaultlocale()
> UTF-8 ('en_US', 'UTF8')
>>>> print repr(u'áé')
> u'\xe1\xe9'
> -------------
> (two unicode characters as result, as expected)
>
>
> IPython behavior:
>
> ------------------
> $ ipython
> Python 2.6 (r26:66714, Nov  3 2009, 17:33:38)
> Type "copyright", "credits" or "license" for more information.
>
> IPython 0.11.alpha1.git -- An enhanced Interactive Python.
> ?         -> Introduction and overview of IPython's features.
> %quickref -> Quick reference.
> help      -> Python's own help system.
> object?   -> Details about 'object'. ?object also works, ?? prints more.
>
> In [1]: import sys, locale
>
> In [2]: print sys.stdin.encoding,locale.getdefaultlocale()
> UTF-8 ('en_US', 'UTF8')
>
> In [3]: print repr(u'áé')
> u'\xc3\xa1\xc3\xa9'

Thanks for the report.  We've made a lot of improvements to our
unicode handling recently, and I think it's all OK now.  With current
trunk:

IPython 0.11.alpha1.git -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import sys, locale

In [2]: print repr(u'áé')
u'\xe1\xe9'


Let us know again if you have any remaining problems.

Cheers,

f


From fperez.net at gmail.com  Sun Oct 10 03:38:38 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 00:38:38 -0700
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
References: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
Message-ID: <AANLkTikq5cOaNEncLf9WTAb3DrNWmOv-eQQN9hQJEmVL@mail.gmail.com>

Hi Peter,

On Tue, Oct 5, 2010 at 11:56 AM, Peter Butterworth <butterw at gmail.com> wrote:
> Hi,
>
> I have the following issue with IPython 0.10.1 /IPython 0.10 with
> python 2.6 and only on Windows 32/64bits in pylab mode (it works fine
> in regular ipython) :
> I can't cd to a directory with an accentuated character.
>
>>>> cd c:\Python_tests\001\bé
> [Error 2] Le fichier spécifié est introuvable: 'c:/Python_tests/001/b\xc3\xa9'
> c:\Python_tests\001
>
> I hope this can be solved as it is really quite annoying.

Unfortunately for the 0.10 series, we're pretty much down to
maintenance mode by accepting user contributions (such as the recent
SGE work).  We simply don't have the resources to actively develop the
main series and backport all work to 0.10.

I think we've fixed all unicode related problems we know of in 0.11,
so if you can run from that, you might be OK (many of us use now 0.11
for full production work).  If you see the problem still in 0.11 let
us know, and we'll definitely work on it.

If you need the fix for 0.10 and can send a patch or pull request
(from you or anyone else) we'll be happy to include it, but I'm afraid
we won't be able to work on it.  At least not myself.

Regards,

f


From butterw at gmail.com  Sun Oct 10 06:20:53 2010
From: butterw at gmail.com (Peter Butterworth)
Date: Sun, 10 Oct 2010 12:20:53 +0200
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <AANLkTikq5cOaNEncLf9WTAb3DrNWmOv-eQQN9hQJEmVL@mail.gmail.com>
References: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
	<AANLkTikq5cOaNEncLf9WTAb3DrNWmOv-eQQN9hQJEmVL@mail.gmail.com>
Message-ID: <AANLkTikQG+PPy_-AWe3tuS1aVr6za4w4Oi1uCEbCtJpU@mail.gmail.com>

Hi,

I've downloaded the 0.11alpha1 source code and run ipython.py without
installing it ... and the Windows issue I reported still exists in
-pylab mode, and this time also in regular ipython.

Could someone please confirm?


On Sun, Oct 10, 2010 at 9:38 AM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hi Peter,
>
> On Tue, Oct 5, 2010 at 11:56 AM, Peter Butterworth <butterw at gmail.com> wrote:
>> Hi,
>>
>> I have the following issue with IPython 0.10.1 /IPython 0.10 with
>> python 2.6 and only on Windows 32/64bits in pylab mode (it works fine
>> in regular ipython) :
>> I can't cd to a directory with an accentuated character.
>>
>>>>> cd c:\Python_tests\001\bé
>> [Error 2] Le fichier spécifié est introuvable: 'c:/Python_tests/001/b\xc3\xa9'
>> c:\Python_tests\001
>>
>> I hope this can be solved as it is really quite annoying.
>
> Unfortunately for the 0.10 series, we're pretty much down to
> maintenance mode by accepting user contributions (such as the recent
> SGE work).  We simply don't have the resources to actively develop the
> main series and backport all work to 0.10.
>
> I think we've fixed all unicode related problems we know of in 0.11,
> so if you can run from that, you might be OK (many of us use now 0.11
> for full production work).  If you see the problem still in 0.11 let
> us know, and we'll definitely work on it.
>
> If you need the fix for 0.10 and can send a patch or pull request
> (from you or anyone else) we'll be happy to include it, but I'm afraid
> we won't be able to work on it.  At least not myself.
>
> Regards,
>
> f
>



-- 
thanks,
peter butterworth


From jorgen.stenarson at bostream.nu  Sun Oct 10 12:41:42 2010
From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=)
Date: Sun, 10 Oct 2010 18:41:42 +0200
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
References: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
Message-ID: <4CB1ECC6.5020000@bostream.nu>

Peter Butterworth wrote 2010-10-05 20:56:
> Hi,
>
> I have the following issue with IPython 0.10.1 /IPython 0.10 with
> python 2.6 and only on Windows 32/64bits in pylab mode (it works fine
> in regular ipython) :
> I can't cd to a directory with an accentuated character.
>
>>>> cd c:\Python_tests\001\bé
> [Error 2] Le fichier spécifié est introuvable: 'c:/Python_tests/001/b\xc3\xa9'
> c:\Python_tests\001
>
> I hope this can be solved as it is really quite annoying.
>

For me there are some issues with the default codepage that is set in 
the console window when you launch python.

If you are having the same problem I have experienced then 
<http://packages.python.org/pyreadline/usage.html#international-characters> 
may help you resolve your issues.
If not please let me know and I will try to improve the explanation.

/Jörgen


From jorgen.stenarson at bostream.nu  Sun Oct 10 12:47:24 2010
From: jorgen.stenarson at bostream.nu (=?ISO-8859-1?Q?J=F6rgen_Stenarson?=)
Date: Sun, 10 Oct 2010 18:47:24 +0200
Subject: [IPython-dev] git question
In-Reply-To: <AANLkTikW83UpFZfe81Wjbo-6bQFh_REfR=vPqYvEwy_K@mail.gmail.com>
References: <4BF416D2.2000400@bostream.nu>
	<AANLkTikW83UpFZfe81Wjbo-6bQFh_REfR=vPqYvEwy_K@mail.gmail.com>
Message-ID: <4CB1EE1C.4030002@bostream.nu>

Fernando Perez wrote 2010-10-10 07:04:
> Hi Jorgen,
>
> On Wed, May 19, 2010 at 9:50 AM, Jörgen Stenarson
> <jorgen.stenarson at bostream.nu>  wrote:
>> I'm trying to get to know git. I have made my own fork on github
>> following the instructions in the gitwash document. But when I try to
>> commit I get the following error message. What is the recommended way to
>> fix this? Should I just set i18n.commitencoding to utf-8? Or should it
>> be something else?
>
> I'm sorry that I completely ignored this for a long time.
>
> Did you ever resolve it?  Are you properly set up on git now, or do
> you still have problems?  I now have a windows box I can test on and
> know my way better around both git and unicode, so I may be able to
> help.
>
> Now that it seems we're picking up good momentum on the git/github
> workflow, I hope it doesn't cause you any undue problems.
>

I haven't really tried it since then so no I haven't resolved it yet.
Right now I do not have the time but I'll get back to you if I have any 
problems once I get the time.

/Jörgen


From fperez.net at gmail.com  Sun Oct 10 13:47:19 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 10:47:19 -0700
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <4CB1ECC6.5020000@bostream.nu>
References: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
	<4CB1ECC6.5020000@bostream.nu>
Message-ID: <AANLkTi=7-YVj6c_jz0Y3doWxiB7kfubDgro2Uqnp6WFr@mail.gmail.com>

On Sun, Oct 10, 2010 at 9:41 AM, Jörgen Stenarson
<jorgen.stenarson at bostream.nu> wrote:
>
> For me there are some issues with the default codepage that is set in
> the console window when you launch python.
>
> If you are having the same problem I have experienced then
> <http://packages.python.org/pyreadline/usage.html#international-characters>
> may help you resolve your issues.
> If not please let me know and I will try to improve the explanation.

Thanks for the feedback, Jorgen.

What puzzles me about Peter's problem is that I don't get why the
--pylab switch should in any way affect the behavior related to
unicode/codepages...  Does anyone have an idea on this front?

Peter, could you test using

--pylab wx
--pylab gtk
--pylab tk
--pylab qt (if you have pyqt installed)

and let us know if there's any difference?  I wonder if the gui
toolkit is messing with stdin in any way.

I'm honestly shooting in the dark here, so if anyone has a better
idea, by all means speak up.

Cheers,

f


From fperez.net at gmail.com  Sun Oct 10 13:48:22 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 10:48:22 -0700
Subject: [IPython-dev] git question
In-Reply-To: <4CB1EE1C.4030002@bostream.nu>
References: <4BF416D2.2000400@bostream.nu>
	<AANLkTikW83UpFZfe81Wjbo-6bQFh_REfR=vPqYvEwy_K@mail.gmail.com>
	<4CB1EE1C.4030002@bostream.nu>
Message-ID: <AANLkTim-V_P=zdcJcm1HBuDSRN+AJ3MT8_fY_X6BjWM2@mail.gmail.com>

Hi Jorgen,

On Sun, Oct 10, 2010 at 9:47 AM, Jörgen Stenarson
<jorgen.stenarson at bostream.nu> wrote:
> I haven't really tried it since then so no I haven't resolved it yet.
> Right now I do not have the time but I'll get back to you if I have any
> problems once I get the time.

OK, let me know and I'll do my best to help out.

Regards,

f


From takowl at gmail.com  Sun Oct 10 14:10:31 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 10 Oct 2010 19:10:31 +0100
Subject: [IPython-dev] Bug with __del__methods on exit
In-Reply-To: <AANLkTinQro=0gP6OH1f0UMTGeTYhk8HamxkWuMTFzFXu@mail.gmail.com>
References: <AANLkTikuXtmtLW8CPphXK_6pJ2ovoSTibcbSizCtsdL9@mail.gmail.com>
	<AANLkTim16v7p_Br8ysMdEzwhUc+mHODM+TwComd3ej-4@mail.gmail.com>
	<AANLkTikZwF6fJozvWeUE-Vqk6V7Zuu+NZJu=PV3QzYTs@mail.gmail.com>
	<AANLkTinQro=0gP6OH1f0UMTGeTYhk8HamxkWuMTFzFXu@mail.gmail.com>
Message-ID: <AANLkTimbZsXwnEv_0AXNFPYCZNkkmh-UiowpnP7uWhpL@mail.gmail.com>

On 10 October 2010 06:05, <ipython-dev-request at scipy.org> wrote:

> Thanks for the test case, it proved fairly tricky to track down.
> Could you try again with updated trunk?  I think I got it.  And thanks
> for the report!
>

Almost works. You dropped __builtin__ from the items to delete. The test
case works for me if I change that to __builtins__. I've committed this
change in ipy3-preparation (I know it's technically separate). If that
doesn't work in other cases, we could avoid deleting both __builtin__ and
__builtins__.

With __builtins__ protected, the same problem is resolved in my py3k
version, which now passes all the test suites it attempts, apart from
IPython.frontend (I'm a bit stuck with the pyzmq stuff).

I've removed the commented-out lines from ipy3-preparation, as you suggested
on my pull request.

Thanks,
Thomas

From efiring at hawaii.edu  Sun Oct 10 14:56:14 2010
From: efiring at hawaii.edu (Eric Firing)
Date: Sun, 10 Oct 2010 08:56:14 -1000
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <AANLkTi=7-YVj6c_jz0Y3doWxiB7kfubDgro2Uqnp6WFr@mail.gmail.com>
References: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
	<4CB1ECC6.5020000@bostream.nu>
	<AANLkTi=7-YVj6c_jz0Y3doWxiB7kfubDgro2Uqnp6WFr@mail.gmail.com>
Message-ID: <4CB20C4E.1020703@hawaii.edu>

On 10/10/2010 07:47 AM, Fernando Perez wrote:
> On Sun, Oct 10, 2010 at 9:41 AM, Jörgen Stenarson
> <jorgen.stenarson at bostream.nu>  wrote:
>>
>> For me there are some issues with the default codepage that is set in
>> the console window when you launch python.
>>
>> If you are having the same problem I have experienced then
>> <http://packages.python.org/pyreadline/usage.html#international-characters>
>> may help you resolve your issues.
>> If not please let me know and I will try to improve the explanation.
>
> Thanks for the feedback, Jorgen.
>
> What puzzles me about Peter's problem is that I don't get why the
> --pylab switch should in any way affect the behavior related to
> unicode/codepages...  Does anyone have an idea on this front?

This sounds dimly familiar--could the fact that mpl imports locale (in 
cbook.py) be causing this?  A quick google did not turn anything up, but 
I think there was something like this reported earlier, where importing 
pylab was subtly changing the environment.
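
A quick way to test that hypothesis from a plain python prompt (an
illustrative check, nothing more):

import sys, locale
print sys.stdin.encoding, locale.getpreferredencoding()
import pylab   # the import under suspicion
print sys.stdin.encoding, locale.getpreferredencoding()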

Eric

>
> Peter, could you test using
>
> --pylab wx
> --pylab gtk
> --pylab tk
> --pylab qt (if you have pyqt installed)
>
> and let us know if there's any difference?  I wonder if the gui
> toolkit is messing with stdin in any way.
>
> I'm honestly shooting in the dark here, so if anyone has a better
> idea, by all means speak up.
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev



From fperez.net at gmail.com  Sun Oct 10 16:27:41 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 13:27:41 -0700
Subject: [IPython-dev] Bug with __del__methods on exit
In-Reply-To: <AANLkTimbZsXwnEv_0AXNFPYCZNkkmh-UiowpnP7uWhpL@mail.gmail.com>
References: <AANLkTikuXtmtLW8CPphXK_6pJ2ovoSTibcbSizCtsdL9@mail.gmail.com>
	<AANLkTim16v7p_Br8ysMdEzwhUc+mHODM+TwComd3ej-4@mail.gmail.com>
	<AANLkTikZwF6fJozvWeUE-Vqk6V7Zuu+NZJu=PV3QzYTs@mail.gmail.com>
	<AANLkTinQro=0gP6OH1f0UMTGeTYhk8HamxkWuMTFzFXu@mail.gmail.com>
	<AANLkTimbZsXwnEv_0AXNFPYCZNkkmh-UiowpnP7uWhpL@mail.gmail.com>
Message-ID: <AANLkTikmhxyYPXFHz_GFjUKyhwzakWYEgFQOBAm6LV7Z@mail.gmail.com>

Hi,

On Sun, Oct 10, 2010 at 11:10 AM, Thomas Kluyver <takowl at gmail.com> wrote:
>
> With __builtins__ protected, the same problem is resolved in my py3k
> version, which now passes all the test suites it attempts, apart from
> IPython.frontend (I'm a bit stuck with the pyzmq stuff).

I committed a fix that pulls both forms (with and without the 's') and
added a unit test.  Thanks for help with tracking this down.
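
In rough terms, the idea is the following (a hedged sketch, not the committed
code; user_ns stands for the interactive namespace dict):

# Keep both spellings of the builtins entry when clearing the namespace,
# so objects whose __del__ runs at exit can still see builtins.
protected = set(['__builtin__', '__builtins__'])
for name in list(user_ns):
    if name not in protected:
        del user_ns[name]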

> I've removed the commented-out lines from ipy3-preparation, as you suggested
> on my pull request.

I committed a slightly different form with a more explicit comment,
and leaving the original code still commented out, I think your idea
of having it handy for someone tracking a problem is a good one for a
while.  I'd rather save one of us time in a debug session than be a
stickler about code purity :)

Thanks again for the great 2to3 work!  Now all of that is merged, and
hopefully your py3 branch can remain as auto-generated from trunk as
possible.

In the future, while we will *try* to not re-introduce things that you
need to clean up, we may do so accidentally (none of us is running
2to3 regularly yet).  So I hope you'll  be able to point us to any
inadvertent slips in that direction when they happen, and we'll fix
them quickly.  From my review of your branch, I had the impression
most things were pretty easy (like using iter* methods on dicts so the
semantics are unambiguous for the 2to3 tool).

Cheers,

f


From fperez.net at gmail.com  Sun Oct 10 16:32:25 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 13:32:25 -0700
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <4CB20C4E.1020703@hawaii.edu>
References: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
	<4CB1ECC6.5020000@bostream.nu>
	<AANLkTi=7-YVj6c_jz0Y3doWxiB7kfubDgro2Uqnp6WFr@mail.gmail.com>
	<4CB20C4E.1020703@hawaii.edu>
Message-ID: <AANLkTi=vojMn_mwx1z9Q7B2W3uaMiWF0J7=CtTBuquWM@mail.gmail.com>

Hi Eric,

On Sun, Oct 10, 2010 at 11:56 AM, Eric Firing <efiring at hawaii.edu> wrote:
> This sounds dimly familiar--could the fact that mpl imports locale (in
> cbook.py) be causing this?  A quick google did not turn anything up, but
> I think there was something like this reported earlier, where importing
> pylab was subtly changing the environment.

thanks for the tip, that does sound like it could point in the right
direction...

If the OP is willing to test it out, this might clarify things.  I had
a look in matplotlib, and it's easy: comment out the locale import and
change the code below in matplotlib's cbook.py:

try:
    preferredencoding = locale.getpreferredencoding().strip()
    if not preferredencoding:
        preferredencoding = None
except (ValueError, ImportError, AttributeError):
    preferredencoding = None

to:

try:
    preferredencoding = locale.getpreferredencoding().strip()
    if not preferredencoding:
        preferredencoding = None
except (ValueError, ImportError, AttributeError, NameError):
    preferredencoding = None


The change is just adding NameError to the list of exceptions.

If this makes the problem go away, we'll at least know what the origin
of the issue is, and we can think about a solution.

Regards,

f


From fperez.net at gmail.com  Sun Oct 10 17:01:03 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 14:01:03 -0700
Subject: [IPython-dev] Pull request workflow...
Message-ID: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>

Hi all,

We're starting to get into a very good grove with github-based pull
requests.  With a cleanly constructed request, I can pretty much
review it, merge it and push with about *1 minute* total worth of
tool-based overhead.  That is, the time I need now for a review is
pretty much whatever the actual code requires in terms of
reading/thinking/testing/discussing, plus a negligible amount to apply
the merge, close the ticket and push.  Emphasis on 'cleanly
constructed'.  If the review isn't cleanly constructed, the overhead
balloons to an unbounded amount.  I recently had to review some code
on the numpy datarray project where I spent *multiple hours* wading
through a complex review.  In that case I did it because the original
author had already kindly waited for too long on me, so I decided to
take the hit and not impose further work on his part, but what should
have taken at most 30 minutes burned multiple hours this week.

So from this experience, I'd like to summarize what I've learned so
far, so that we can all hit a good stride in terms of workflow, that
balances the burden on the person offering the contribution and the
time it takes to accept it.  My impression is that if you want to
propose a pull request that will go down very easily and will be
merged with minimal pain on all sides, you want to:

- keep the work on your branch completely confined to one specific
topic, bugfix or feature implementation.  Git branches are cheap and
easy to make, do *not* mix in one branch more than one topic.  If you
do, you force the reviewer to disentangle unrelated functionality
scattered across multiple commits.  This doesn't mean that a branch
can't touch multiple files or have many commits, simply that all the
work in a branch should be related to one specific 'task', be it
refactoring, cleanup, bugfix, feature implementation, whatever.  But:
'one task, one branch'.

- name your branches sensibly: the merge commit message will have the
name of the branch by default.  It's better to read 'merge branch
john-print-support' or 'merge branch john-fix-gh-123' than 'merge
branch john-master'.

- *Never* merge back from trunk into your feature branch.  When you
merge from trunk (sometimes repeatedly), it makes the history graph
extremely, and unnecessarily, complicated.  For example for this pull
request (and I'm *not* picking on Thomas, he did a great job, this is
just part of the learning process for everybody):

http://github.com/ipython/ipython/pull/159

his repeated merge commits simply created a lot of unnecessary noise
and complexity in the graph.  I was able to rebase his branch on top
of trunk and then merge it with a clean graph, but that step would
have been saved by Thomas not merging from trunk in the first place.

Obviously you want to make sure that your work will merge into trunk
cleanly before submitting it, but you can test that in your local repo
by simply merging your work into your personal master (or any
throw-away that tracks trunk) and testing.  But don't publish in your
request your merges *from* trunk.  Since your branch is meant to go
back *into* trunk, doing that will just create a criss-cross mess.

If you absolutely need to merge something from trunk (because it has
fixes you need for your own work), then rebase on top of trunk before
making your pull request, so that your branch applies cleanly on top
of trunk as a self-contained unit without criss-crossing.

For reviewers:

- edit the merge message before you push, by adding a short
explanation of what the branch did, along with a final 'Closes
gh-XXX.' string, so the pull request is auto-closed and the ticket and
closing commit get automatically linked.


Feedback on these notes is welcome, and even more welcome will be when
someone updates our dev documents with a clean pull request that
includes our collective wisdom, guiding by example :)

Cheers,

f


From takowl at gmail.com  Sun Oct 10 18:30:35 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 10 Oct 2010 23:30:35 +0100
Subject: [IPython-dev] Bug with __del__methods on exit
In-Reply-To: <AANLkTikmhxyYPXFHz_GFjUKyhwzakWYEgFQOBAm6LV7Z@mail.gmail.com>
References: <AANLkTikuXtmtLW8CPphXK_6pJ2ovoSTibcbSizCtsdL9@mail.gmail.com>
	<AANLkTim16v7p_Br8ysMdEzwhUc+mHODM+TwComd3ej-4@mail.gmail.com>
	<AANLkTikZwF6fJozvWeUE-Vqk6V7Zuu+NZJu=PV3QzYTs@mail.gmail.com>
	<AANLkTinQro=0gP6OH1f0UMTGeTYhk8HamxkWuMTFzFXu@mail.gmail.com>
	<AANLkTimbZsXwnEv_0AXNFPYCZNkkmh-UiowpnP7uWhpL@mail.gmail.com>
	<AANLkTikmhxyYPXFHz_GFjUKyhwzakWYEgFQOBAm6LV7Z@mail.gmail.com>
Message-ID: <AANLkTim2g4MZroAWzCuno-vJYKokPSWcie1oMbbcn8-6@mail.gmail.com>

On 10 October 2010 21:27, Fernando Perez <fperez.net at gmail.com> wrote:

> I committed a fix that pulls both forms (with and without the 's') and
> added a unit test.  Thanks for help with tracking this down.
>

That's great. I've tweaked it for py3k, and it all seems to be working.

> I committed a slightly different form with a more explicit comment,
> and leaving the original code still commented out, I think your idea
> of having it handy for someone tracking a problem is a good one for a
> while.  I'd rather save one of us time in a debug session than be a
> stickler about code purity :)
>

Sounds like a good plan.

> In the future, while we will *try* to not re-introduce things that you
> need to clean up, we may do so accidentally (none of us is running
> 2to3 regularly yet).  So I hope you'll  be able to point us to any
> inadvertent slips in that direction when they happen, and we'll fix
> them quickly.  From my review of your branch, I had the impression
> most things were pretty easy (like using iter* methods on dicts so the
> semantics are unambiguous for the 2to3 tool).
>

OK, that's great. At present, my strategy is to have one branch with the
results of running 2to3 on trunk, which I merge into the branch containing
my changes (now renamed to ipython-py3k).

Yes, I've really just gone for best practices 2.6 code, which mostly
converts neatly. Using iter* methods is tidier, but not essential; without
them, 2to3 just wraps the method call in a list(...) to get the equivalent
behaviour.
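
For example (illustrative only, not code from the IPython tree):

config = {'answer': 42}             # made-up data for the example
for key, value in config.iteritems():
    print key, value
# 2to3 rewrites the loop header to: for key, value in config.items():
# whereas a call whose result is kept, e.g. pairs = config.items(),
# becomes pairs = list(config.items()) instead.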

Thanks,
Thomas

From fperez.net at gmail.com  Sun Oct 10 18:32:27 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 15:32:27 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
Message-ID: <AANLkTinajJhrDKZ8iBJDWd8KOpP7XR+fddT5P_z9xzC=@mail.gmail.com>

I would like to add, from today's experience and thinking:

On Sun, Oct 10, 2010 at 2:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> For reviewers:

- When possible, rebase the branch you're about to merge into trunk
before applying the merge.  If the rebase works, it will make the
feature branch appear in all displays of the log that are
topologically sorted as a contiguous set of commits, which makes it
much nicer to look at and inspect later, as related changes are
grouped together.

Consider for example when I merged Min's cursor fixes, a tiny
one-commit branch, where I did *not* rebase:

* |   b286d0e Merge branch 'cursor' of git://github.com/minrk/ipython into trunk
|\ \
| * | 6659636 fixed some cursor selection behavior
* | | f0980fb Add Ctrl-+/- to increase/decrease font size.
* | | 8ea3b50 Acknowledge T. Kluyver's work in credits.
| |/
|/|
* | 78f7387 Minor robustness/speed improvements to process handling

The threads of multiple branches there make the graph harder to read
than is necessary (with a gui tool the lines are easier to see, but
the problem remains).

Contrast that with today, when I merged Thomas' 2to3 preparation
branch.  He had done multiple merges *from* trunk, that made the graph
horrible.  But rather than bouncing it back to him, I tried simply
calling rebase onto trunk, and in 10 seconds git had it ready to merge
in a nice, clean and easy to read set of grouped commits:

*   572d3d7 Merge branch 'takowl-ipy3-preparation' into trunk
|\
| * 468a14f Rename misleading use of raw_input so it's not
automatically converted to input by 2to3.
| * 91a11d5 Revert "Option for testing local copy, rather than global.
Testing needs more work." (Doesn't work; will use virtualenv instead)
| * b0a6ac5 Option for testing local copy, rather than global. Testing
needs more work.
| * be0324d Update md5 calls.
| * 10494fd Change to pass tests in IPython.extensions
| * edb052c Replacing some .items() calls with .iteritems() for
cleaner conversion with 2to3.
| * d426657 Tidy up dictionary-style methods of Config (2to3 converts
'has_key' calls to use the 'in' operator, which will be that of the
parent dict unless we define it).
| * b2dc4fa Update use of rjust
| * 3fd347a Ignore .bak files
| * 50876ac Ignore .py~ files
| * 760c432 Further updating.
| * 663027f Cleaning up old code to simplify 2to3 conversion.
|/
* 263d8f1 Complete fix for __del__ errors with .reset(), add unit test.

This is the kind of graph that is a pleasure to read, and makes it
easy to inspect how a feature evolved.  Note that all commits retain
their original timestamps, it's just the topological sort that
changes.

Obviously there will be cases where a rebase may fail, and at that
point there's a judgment call to be made, between accepting a messy
DAG and asking the original author to rebase, resolve conflicts and
resubmit.  Common sense should prevail: we want a clean history, but
not at the cost of making it a pain to contribute to ipython, so a
good contribution should be accepted even if it causes a messy history
sometimes.

It's just that via good practices, we can reduce to a minimum the need
for such messes, and we'll have in the long run a much more
comprehensible project evolution.

Regards,

f


From fperez.net at gmail.com  Sun Oct 10 18:47:50 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 15:47:50 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTinajJhrDKZ8iBJDWd8KOpP7XR+fddT5P_z9xzC=@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTinajJhrDKZ8iBJDWd8KOpP7XR+fddT5P_z9xzC=@mail.gmail.com>
Message-ID: <AANLkTikiQ=ViwqH8-dccmJ0sUdZRWk_jt2UVqgJ4eKZB@mail.gmail.com>

On Sun, Oct 10, 2010 at 3:32 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> - When possible, rebase the branch you're about to merge into trunk
> before applying the merge.

Oh, and I should add: if you rebase and it works, when you merge into
trunk to apply, make sure you merge with '--no-ff', so that we get a
standalone branch grouping the related commits together.  Otherwise
the rebase will allow git to fast-forward, losing the logical grouping
altogether.

Cheers,

f


From butterw at gmail.com  Sun Oct 10 20:42:39 2010
From: butterw at gmail.com (Peter Butterworth)
Date: Mon, 11 Oct 2010 02:42:39 +0200
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
References: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
Message-ID: <AANLkTik3qUsJR56BX7=XEh7puYZCaQQH-LnbB7zuo8A0@mail.gmail.com>

The suggested modification in matplotlib/cbook.py doesn't solve the issue:
>> import locale
>> x=locale.getpreferredencoding()
>> print repr(x)
'cp1252'

On ipython v0.10/0.10.1 -pylab the problem disappears if I change the
matplotlib backend from Qt4Agg to TkAgg. I'm pretty sure I've seen
this problem previously using TkAgg though, so I will have to check
this again on some other windows machines.

In v0.10/0.10.1 launching regular ipython and then doing >> from pylab import *
does work.
In v0.11 the problem occurs also in regular ipython.

On Tue, Oct 5, 2010 at 8:56 PM, Peter Butterworth <butterw at gmail.com> wrote:
> Hi,
>
> I have the following issue with IPython 0.10.1 /IPython 0.10 with
> python 2.6 and only on Windows 32/64bits in pylab mode (it works fine
> in regular ipython) :
> I can't cd to a directory with an accentuated character.
>
>>>>> cd c:\Python_tests\001\bé
>> [Error 2] Le fichier spécifié est introuvable: 'c:/Python_tests/001/b\xc3\xa9'
> c:\Python_tests\001
>
> I hope this can be solved as it is really quite annoying.
>
> --
> thanks,
> peter butterworth


From fperez.net at gmail.com  Sun Oct 10 20:46:20 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 17:46:20 -0700
Subject: [IPython-dev] Ctrl-C regression with current git master and
	-q4thread
In-Reply-To: <201005191454.25748.hans_meine@gmx.net>
References: <201005191454.25748.hans_meine@gmx.net>
Message-ID: <AANLkTikDoWXGNSJ-+nzq5m1a2thq61SkzdOuv5WgdSE1@mail.gmail.com>

Hi Hans,

On Wed, May 19, 2010 at 5:54 AM, Hans Meine <hans_meine at gmx.net> wrote:
> I am just trying out the current ipython from github, and I noticed that I
> cannot clear the commandline using Ctrl-C anymore when using -q4thread.
> Even worse, the next command that I confirm using [enter] is getting a delayed
> KeyboardInterrupt.

Very late in this thread, but if you still remember ... :)

It may not be ideal, but Ctrl-U does clear the line completely.
That's what I use now.  As Brian said, there's no really easy way,
that we know of, of implementing the behavior we had before re. Ctrl-C
and typing, while having the new design that avoids threads.

Ctrl-U does the job fine (it's the readline 'clear line' keybinding)
and I've become used to it.  If you can propose an improvement within
the current design, we'll be happy to include it though.  We just
haven't found any.

Cheers,

f


From fperez.net at gmail.com  Sun Oct 10 20:48:35 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 17:48:35 -0700
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <AANLkTik3qUsJR56BX7=XEh7puYZCaQQH-LnbB7zuo8A0@mail.gmail.com>
References: <AANLkTin=2PoFg-BCxehDmSAoBYCfVTGJ-GidNVS4CoB8@mail.gmail.com>
	<AANLkTik3qUsJR56BX7=XEh7puYZCaQQH-LnbB7zuo8A0@mail.gmail.com>
Message-ID: <AANLkTikGSp0w7xpvmO8Hco-U9zS7R8Pych-Q+BKPs7Kz@mail.gmail.com>

On Sun, Oct 10, 2010 at 5:42 PM, Peter Butterworth <butterw at gmail.com> wrote:
>
> In v0.11 the problem occurs also in regular ipython.

OK, I see it here.  I'll get on it, maybe not *right now*, but I'll work on it.

Thanks!

f


From butterw at gmail.com  Sun Oct 10 20:55:57 2010
From: butterw at gmail.com (Peter Butterworth)
Date: Mon, 11 Oct 2010 02:55:57 +0200
Subject: [IPython-dev] IPython 0.10.1 -pylab
Message-ID: <AANLkTimyKt2VXVEgja9M7OmV3eF5c76O7YYs2G5N6zO6@mail.gmail.com>

Sure there are issues with the default cmd.exe codepage, but I don't
think it is the problem here, as I copy/paste the paths into ipython.
I changed chcp from cp850 to cp1252



>> sys.stdout.encoding
Out[11]: 'cp437'


cp 437


<quote author="J?rgen Stenarson-2">
Peter Butterworth wrote 2010-10-05 20:56:
> Hi,
>
> I have the following issue with IPython 0.10.1 /IPython 0.10 with
> python 2.6 and only on Windows 32/64bits in pylab mode (it works fine
> in regular ipython) :
> I can't cd to a directory with an accentuated character.
>
>>>> cd c:\Python_tests\001\bé
> [Error 2] Le fichier spécifié est introuvable: 'c:/Python_tests/001/b\xc3\xa9'
> c:\Python_tests\001
>
> I hope this can be solved as it is really quite annoying.
>

For me there are some issues with the default codepage that is set in
the console window when you launch python.

If you are having the same problem I have experienced then
<http://packages.python.org/pyreadline/usage.html#international-characters>
may help you resolve your issues.
If not please let me know and I will try to improve the explanation.

/Jörgen

-- 
thanks,
peter butterworth


From butterw at gmail.com  Sun Oct 10 21:13:07 2010
From: butterw at gmail.com (Peter Butterworth)
Date: Mon, 11 Oct 2010 03:13:07 +0200
Subject: [IPython-dev] IPython 0.10.1 -pylab
In-Reply-To: <AANLkTimyKt2VXVEgja9M7OmV3eF5c76O7YYs2G5N6zO6@mail.gmail.com>
References: <AANLkTimyKt2VXVEgja9M7OmV3eF5c76O7YYs2G5N6zO6@mail.gmail.com>
Message-ID: <AANLkTimpusQwLpZX68jQWVt39fkWvvUSxix5RkJOofRs@mail.gmail.com>

here's the full message (previous version got sent by mistake):

>> sys.stdout.encoding
Out[11]: 'cp437'

Sure there are print issues with the default cmd.exe codepage (cp850
or cp437), but I don't
think it is the problem here, as I copy/paste the paths into ipython;
I don't type them in. I have already tried chcp 1252.


<quote author="J?rgen Stenarson-2">
Peter Butterworth wrote 2010-10-05 20:56:
> Hi,
>
> I have the following issue with IPython 0.10.1 /IPython 0.10 with
> python 2.6 and only on Windows 32/64bits in pylab mode (it works fine
> in regular ipython) :
> I can't cd to a directory with an accentuated character.
>
>>>> cd c:\Python_tests\001\bé
> [Error 2] Le fichier spécifié est introuvable: 'c:/Python_tests/001/b\xc3\xa9'
> c:\Python_tests\001
>
> I hope this can be solved as it is really quite annoying.
>

For me there are some issues with the default codepage that is set in
the console window when you launch python.

If you are having the same problem I have experienced then
<http://packages.python.org/pyreadline/usage.html#international-characters>
may help you resolve your issues.
If not please let me know and I will try to improve the explanation.

/Jörgen

--
thanks,
peter butterworth


From fperez.net at gmail.com  Sun Oct 10 21:23:48 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 18:23:48 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
Message-ID: <AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>

Hi Darren,

I'm bouncing back this reply to the list, because this is a good
question that deserves clarification.  I'm hoping in the end, this
conversation will be summarized in our guidelines, so I'd rather have
it all publicly archived.

On Sun, Oct 10, 2010 at 5:43 PM, Darren Dale <dsdale24 at gmail.com> wrote:

> I should thank you for putting me onto git at scipy 2009. Its an
> amazing tool, I don't want to work with anything else.

Glad you've liked it!  I have to admit that back then, it was mostly a
hunch based on liking its abstract design a lot and early
experimentation, but I didn't really have too much hands-on experience
with complex tasks yet (mostly handling very linear repos only).  Now
that I've used it 'in rage' for a while, I really, really love it :)

>> If you absolutely need to merge something from trunk (because it has
>> fixes you need for your own work), then rebase on top of trunk before
>> making your pull request, so that your branch applies cleanly on top
>> of trunk as a self-contained unit without criss-crossing.
>
> I'm seeking clarification concerning your comments on pull request
> workflow. I'm not working with pull requests as intimately as you are,
> and some of your comments really surprised me. I thought it was bad
> practice to rebase on a branch that was published anywhere, because if
> anyone else is tracking that branch, it makes an unholy mess of the
> history in their local checkout. Rebasing actually replays the changes
> on a new reference point, and anyone else who has made changes on the
> old reference point will be out for blood.
>
> I thought the right thing to do was to merge from master before filing
> a pull request. I can see that this would yield some diamonds in the
> history graph, but it minimizes the deltas while avoiding the problems
> inherent in rebasing on a public branch, especially on a complicated
> branch with multiple contributors.

Good question. The way I see it, feature branches proposed for
inclusion should never be considered 'stable' in the sense of anyone
following them and building on top of them.  In that case, the cleaner
history for merge into trunk trumps other considerations, I think.
The problem is that the merges lead to more than simple diamonds: they
spread the history of the branch all over the place.

I should note that even if the proposer doesn't rebase, they still
have to contend with the possibility that the reviewer may rebase upon
merging, just like I did with Thomas' py3-preparation branch. I did it
to give us a much cleaner history and a cleanly grouped view of his
work (otherwise the merge had created a *massive* amount of lines in
the DAG, because he had merged multiple times from trunk).

So in this case, even though I didn't touch the contents of any of
Thomas' commits, he will still discard his local branch and pull from
trunk, so that he can have the official version of the history (same
contents and timestamps, different SHAs, because of my rebase).

The approach I propose depends on one idea and assumption: that the
branches aren't terribly long-lived, and that there isn't a
complicated hierarchy of branches-from-branches.  It encourages the
development of small, self-contained feature branches.

Note that the above doesn't preclude anyone from benefiting from a
given feature branch and the history in trunk: they can use their
master, or any other integration branch, where they track trunk and
merge the feature *into*.  This branch will be their personal way of
tracking trunk and using their feature (or features, this can be done
with multiple branches) until the feature is merged upstream.  In fact
I have done this multiple times already, and it works fine.

Does this sound reasonable?  I'm no git expert though, so it's quite
possible there are still cases I haven't considered or where the above
is insufficient.  We'll have to find the patterns that fit them as we
encounter them.

Cheers,

f


From fperez.net at gmail.com  Sun Oct 10 21:56:43 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 18:56:43 -0700
Subject: [IPython-dev] ipy_user_conf is imported when running iptest
In-Reply-To: <49E77356.6090700@bostream.nu>
References: <49E77356.6090700@bostream.nu>
Message-ID: <AANLkTi=NTLyvuifEXi=MzMRoAJyEALNDZZf-=1JUp0gd@mail.gmail.com>

On Thu, Apr 16, 2009 at 11:05 AM, Jörgen Stenarson
<jorgen.stenarson at bostream.nu> wrote:
> I have seen that the users own ipy_user_conf is imported when running
> iptest. I guess this could cause some differences in the testing
> environment between different users.
>
> Shall I add a bug on launchpad for this?

Late, I know :)

This is OK now in trunk, we don't load any user config for the tests.

Cheers,

f


From mark.voorhies at ucsf.edu  Sun Oct 10 23:30:50 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Sun, 10 Oct 2010 20:30:50 -0700
Subject: [IPython-dev] Printing support enabled
In-Reply-To: <201010091905.19622.mark.voorhies@ucsf.edu>
References: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>
	<AANLkTik4Aq4oY_gpLu69qjMF5S6KCXmNVHdfmYPahz5R@mail.gmail.com>
	<201010091905.19622.mark.voorhies@ucsf.edu>
Message-ID: <201010102030.51170.mark.voorhies@ucsf.edu>

On Saturday, October 09, 2010 07:05:19 pm Mark Voorhies wrote:
> On Saturday, October 09, 2010 06:39:20 pm Evan Patterson wrote:
> > Fernando has the right idea. Look at the _process_execute_payload, _add_image, and _get_image methods of RichIPythonWidget to see how QImages and SVG data are indexed. Since the QTextDocument does not have a mechanism for retrieving all of it's resources, the image IDs will have to be stored manually in '_add_image'.
> > 
> > If you prefer, I can implement this some time in the next week or so, but feel free to take a stab at it.
> > 
> > Evan
> 
> Yes, looks like format.name() inside of _process_execute_payload gives the ID that I want to map from.
> I think I can do up a decent first pass this weekend.

Okay, I've got my first pass for HTML export up on github at
http://github.com/markvoorhies/ipython/commit/badeab10d3254c484342a73467674f665261dfa8

I tried three approaches (available as three context menu options):

1) Export HTML (external PNGs):
   This mimics Firefox's "Save as Web Page, complete" behavior.
   Saving "mypath/test.htm" gives an HTML file with links to PNGs in
   "mypath/test_files/".  The PNGs are named relative to format.name()
   to avoid collisions.

   Works in Firefox 3.6.10, Konqueror 4.4.2/KHTML, and Konqueror 4.4.2/WebKit

2) Export HTML (inline PNGs):
   Saves a single HTML file with images as inline base64 PNGs
   (c.f. http://en.wikipedia.org/wiki/Data_URI_scheme#HTML)

   Works in Firefox 3.6.10, Konqueror 4.4.2/KHTML, and Konqueror 4.4.2/WebKit

3) Export XHTML (inline SVGs):
   Saves a single XHTML file with images as inline SVG.  The "XML" is generated
   by overwriting the Qt-generated document header, so it is not guaranteed to
   be valid XML (but Firefox does validate my test case).

   Works in Firefox 3.6.10 and Konqueror 4.4.2/WebKit.
   Image placement is incorrect for Konqueror 4.4.2/KHTML.

(all tests run on a Dell Latitude D630 w/ Kubuntu Lucid:
mvoorhie at virgil:~$ uname -a
Linux virgil 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:17:33 UTC 2010 i686 GNU/Linux)

It may be possible to link external SVG images via an <embed> or <object> tag,
but I couldn't find a clean/portable way to do this.
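
For reference, the inline-PNG idea in option 2 boils down to something like
this (a hedged sketch, not the exact code in the branch):

import base64

def png_data_uri(png_bytes):
    # Embed the PNG directly in the HTML instead of linking an external file.
    return "data:image/png;base64," + base64.b64encode(png_bytes)

# e.g.  html = html.replace('src="%s"' % qt_resource_name,
#                           'src="%s"' % png_data_uri(png_bytes))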

Current issues:
* I'm doing lots of string coercion in order to use re.sub on Qt's HTML.  I mostly
  get away with it, but we wind up with a few bad character encodings in the
  output (e.g., the tabs for multi-line inputs).  Would be good for someone who
  knows more about unicode to take a look at this...

* The file name generation for "Export HTML (external PNGs)" is a bit hacky.  Should
  probably be rewritten to use os.path.

* Haven't tested with anything other than the Qt front end.  In theory, other front
  ends will export HTML with the images stripped, unless they implement their own
  version of imagetag().

Feel free to take/hack what you like and ditch the rest.

Happy hacking,

--Mark


From fperez.net at gmail.com  Sun Oct 10 23:42:23 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 20:42:23 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
Message-ID: <AANLkTimiUBVRv+LgPvc0BMr5gfJTR+bjnh2PxuyRzWBN@mail.gmail.com>

On Sun, Oct 10, 2010 at 6:23 PM, Fernando Perez <fperez.net at gmail.com> wrote:
>
>
> Good question. The way I see it, feature branches proposed for
> inclusion should never be considered 'stable' in the sense of anyone
> following them and building on top of them.  In that case, the cleaner
> history for merge into trunk trumps other considerations, I think.
> The problem is that the merges lead to more than simple diamonds: they
> spread the history of the branch all over the place.

[...]

On this topic, the classic writeup is probably this about how the
process works in the Linux kernel:

http://kerneltrap.org/Linux/Git_Management

As you can see, I am proposing that the 'grunt' branches are rebased
(those for merge into trunk), but *never* that we'll rebase trunk
itself.  So when a 'manager' (to use Linus' lingo in that page)
rebases the branch he's about to push, he's just acting as a 'grunt',
he's not touching the trunk at all.

HTH,

f


From fperez.net at gmail.com  Sun Oct 10 23:49:36 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 10 Oct 2010 20:49:36 -0700
Subject: [IPython-dev] Printing support enabled
In-Reply-To: <201010102030.51170.mark.voorhies@ucsf.edu>
References: <AANLkTikiSAWx8E=OSXzV=UnPWH7pm1f-5-0czbHC=BVW@mail.gmail.com>
	<AANLkTik4Aq4oY_gpLu69qjMF5S6KCXmNVHdfmYPahz5R@mail.gmail.com>
	<201010091905.19622.mark.voorhies@ucsf.edu>
	<201010102030.51170.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTin_r_yBimDzHEASG2fFAc_rMP9gzoiNT1TrTor9@mail.gmail.com>

Hi Mark,

On Sun, Oct 10, 2010 at 8:30 PM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
> I tried three approaches (available as three context menu options):
>

This is fantastic, many thanks!.

I encourage others to try it out and help out with feedback on the
review page, since Mark already made a pull request for this:

http://github.com/ipython/ipython/pull/167

Cheers,

f


From matthew.brett at gmail.com  Mon Oct 11 02:29:09 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Sun, 10 Oct 2010 23:29:09 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
Message-ID: <AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>

Yo,

> I'm bouncing back this reply to the list, because this is a good
> question that deserves clarification.  I'm hoping in the end, this
> conversation will be summarized in our guidelines, so I'd rather have
> it all publicly archived.

So just to check.  Let's say you have:

        A---B---C topic
        /
   D---E---F---G master

You (Fernando) would prefer the pull request to be from a rebased
version of 'topic':

                         Adash---Bdash---Cdash topic-rebased
                        /
   D---E---F---G master

I must say, if it were me, I'd prefer the original in that case,
because it's a clearer indication of the history, and because rebasing
does have some cost.

The cost is that rebasing orphans the original 'topic' branch so that
it becomes actively dangerous to have around.   If you try and merge
'topic' after you've merged 'rebased-topic' you'll get lots of
conflicts that will be confusing.   That means that, if you've put
'topic' up on your github site, and anyone's fetched from you, then
you've got to track who fetched and warn them they are going to run
into trouble if they use anything based on it.
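
Concretely, the failure mode is something like this (branch names are the ones
from the diagram; whether you actually hit conflicts depends on how much the
rebased copies diverged):

    git checkout master
    git merge topic-rebased    # the rebased copies of A, B, C land in master
    git merge topic            # someone later merges the *old* branch: git now
                               # has to reconcile two copies of the same work,
                               # which is where the confusing conflicts come from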

Well - anyway - you know all that - but I think - if you are
suggesting rebasing even for the clean 'coffee cup handle' type of
branches, that would be unusual practice no?

On the other hand, I agree with you and Linus (!) that it's very
confusing if someone's merged the main branch into their own before a
pull request and that would be a good thing to discourage in general.

Sorry - maybe the fever's got to me ;)

Mattthew


From dsdale24 at gmail.com  Mon Oct 11 08:12:53 2010
From: dsdale24 at gmail.com (Darren Dale)
Date: Mon, 11 Oct 2010 08:12:53 -0400
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
Message-ID: <AANLkTin7QV6KN+BuZuHW48BbcCuCzoy9PbhvX_WvdkM8@mail.gmail.com>

On Sun, Oct 10, 2010 at 9:23 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hi Darren,
>
> I'm bouncing back this reply to the list, because this is a good
> question that deserves clarification.  I'm hoping in the end, this
> conversation will be summarized in our guidelines, so I'd rather have
> it all publicly archived.

That's fine. I sent it off-list because I didn't want to risk muddying
the waters unnecessarily.

> The approach I propose depends on one idea and assumption: that the
> branches aren't terribly long-lived, and that there isn't a
> complicated hierarchy of branches-from-branches.  It encourages the
> development of small, self-contained feature branches.
>
> Note that the above doesn't preclude anyone from benefiting from a
> given feature branch and the history in trunk: they can use their
> master, or any other integration branch, where they track trunk and
> merge the feature *into*.  This branch will be their personal way of
> tracking trunk and using their feature (or features, this can be done
> with multiple branches) until the feature is merged upstream.  In fact
> I have done this multiple times already, and it works fine.
>
> Does this sound reasonable?  I'm no git expert though, so it's quite
> possible there are still cases I haven't considered or where the above
> is insufficient.  We'll have to find the patterns that fit them as we
> encounter them.
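
A rough sketch of that integration-branch pattern, with made-up branch and
remote names:

    git checkout -b integration origin/master   # personal branch starting at trunk
    git merge my-feature         # use the feature locally while it is under review
    git pull origin master       # keep following trunk as it moves
    # the feature branch itself stays untouched; it is what gets proposed upstream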

It does. Do we assume that the only branches that are safe to track
are the ones published at the ipython page at github?

Darren


From ehabkost at raisama.net  Mon Oct 11 09:06:49 2010
From: ehabkost at raisama.net (Eduardo Habkost)
Date: Mon, 11 Oct 2010 10:06:49 -0300
Subject: [IPython-dev] IPython handles code input as latin1 instead of
 the system encoding
In-Reply-To: <AANLkTi=tOHDRzhz54=gN1gYX0SpsxMP44fuTz=3yfbR3@mail.gmail.com>
References: <20100617212840.GO14947@blackpad.lan.raisama.net>
	<AANLkTi=tOHDRzhz54=gN1gYX0SpsxMP44fuTz=3yfbR3@mail.gmail.com>
Message-ID: <20101011130649.GQ24658@blackpad.lan.raisama.net>

On Sat, Oct 09, 2010 at 10:29:39PM -0700, Fernando Perez wrote:
<snip>
> > In [1]: import sys, locale
> >
> > In [2]: print sys.stdin.encoding,locale.getdefaultlocale()
> > UTF-8 ('en_US', 'UTF8')
> >
> > In [3]: print repr(u'áé')
> > u'\xc3\xa1\xc3\xa9'
> 
> Thanks for the report.  We've made a lot of improvements to our
> unicode handling recently, and I think it's all OK now.  With current
> trunk:
> 
> IPython 0.11.alpha1.git -- An enhanced Interactive Python.
> ?         -> Introduction and overview of IPython's features.
> %quickref -> Quick reference.
> help      -> Python's own help system.
> object?   -> Details about 'object', use 'object??' for extra details.
> 
> In [1]: import sys, locale
> 
> In [2]: print repr(u'áé')
> u'\xe1\xe9'
> 
> 
> Let us know again if you have any remaining problems.

Hi,

I just built and installed from latest git (commit
4e2d3af2a82b31fb523497eccb7ca0cfebd9d169). Things look worse. Crash report is
below.


[ipython/master]$ ipython
Python 2.6.2 (r262:71600, Jun  4 2010, 18:28:04)
Type "copyright", "credits" or "license" for more information.

IPython 0.11.alpha1.git -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: repr("áé")
ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (82, 0))

---------------------------------------------------------------------------
UnicodeDecodeError                            Python 2.6.2: /usr/bin/python
                                                   Mon Oct 11 09:55:38 2010
A problem occured executing Python code.  Here is the sequence of function
calls leading up to the error, with the most recent (innermost) call last.
/usr/bin/ipython in <module>()
      1
      2
      3
      4
      5
      6
      7 #!/usr/bin/python
      8 """Terminal-based IPython entry point.
      9
---> 10 Note: this is identical to IPython/frontend/terminal/scripts/ipython for now.
        global launch_new_instance = <function launch_new_instance at 0x91e95a4>
     11 Once 0.11 is closer to release, we will likely need to reorganize the script
     12 entry points."""
     13
     14 from IPython.frontend.terminal.ipapp import launch_new_instance
     15
     16 launch_new_instance()
     17
     18
     19
     20
     21
     22
     23
     24
     25
     26
     27
     28
     29
     30
     31

/usr/lib/python2.6/site-packages/IPython/frontend/terminal/ipapp.pyc in launch_new_instance()
    646 def load_default_config(ipython_dir=None):
    647     """Load the default config file from the default ipython_dir.
    648
    649     This is useful for embedded shells.
    650     """
    651     if ipython_dir is None:
    652         ipython_dir = get_ipython_dir()
    653     cl = PyFileConfigLoader(default_config_file_name, ipython_dir)
    654     config = cl.load_config()
    655     return config
    656
    657
    658 def launch_new_instance():
    659     """Create and run a full blown IPython instance"""
    660     app = IPythonApp()
--> 661     app.start()
    662
    663
    664 if __name__ == '__main__':
    665     launch_new_instance()
    666
    667
    668
    669
    670
    671
    672
    673
    674
    675
    676

/usr/lib/python2.6/site-packages/IPython/core/application.pyc in start(self=<IPython.frontend.terminal.ipapp.IPythonApp object at 0xb769a4ec>)
    196         # Merge all config objects into a single one the app can then use
    197         self.merge_configs()
    198         self.log_master_config()
    199
    200         # Construction phase
    201         self.pre_construct()
    202         self.construct()
    203         self.post_construct()
    204
    205         # Done, flag as such and
    206         self._initialized = True
    207
    208     def start(self):
    209         """Start the application."""
    210         self.initialize()
--> 211         self.start_app()
    212
    213     #-------------------------------------------------------------------------
    214     # Various stages of Application creation
    215     #-------------------------------------------------------------------------
    216
    217     def create_crash_handler(self):
    218         """Create a crash handler, typically setting sys.excepthook to it."""
    219         self.crash_handler = self.crash_handler_class(self)
    220         sys.excepthook = self.crash_handler
    221
    222     def create_default_config(self):
    223         """Create defaults that can't be set elsewhere.
    224
    225         For the most part, we try to set default in the class attributes
    226         of Configurables.  But, defaults the top-level Application (which is

/usr/lib/python2.6/site-packages/IPython/frontend/terminal/ipapp.pyc in start_app(self=<IPython.frontend.terminal.ipapp.IPythonApp object at 0xb769a4ec>)
    626         try:
    627             fname = self.extra_args[0]
    628         except:
    629             pass
    630         else:
    631             try:
    632                 self._exec_file(fname)
    633             except:
    634                 self.log.warn("Error in executing file in user namespace: %s" %
    635                               fname)
    636                 self.shell.showtraceback()
    637
    638     def start_app(self):
    639         if self.master_config.Global.interact:
    640             self.log.debug("Starting IPython's mainloop...")
--> 641             self.shell.mainloop()
    642         else:
    643             self.log.debug("IPython not interactive, start_app is no-op...")
    644
    645
    646 def load_default_config(ipython_dir=None):
    647     """Load the default config file from the default ipython_dir.
    648
    649     This is useful for embedded shells.
    650     """
    651     if ipython_dir is None:
    652         ipython_dir = get_ipython_dir()
    653     cl = PyFileConfigLoader(default_config_file_name, ipython_dir)
    654     config = cl.load_config()
    655     return config
    656

/usr/lib/python2.6/site-packages/IPython/frontend/terminal/interactiveshell.pyc in mainloop(self=<IPython.frontend.terminal.interactiveshell.TerminalInteractiveShell object at 0x8fd62ac>, display_banner=None)
    183     def mainloop(self, display_banner=None):
    184         """Start the mainloop.
    185
    186         If an optional banner argument is given, it will override the
    187         internally created default banner.
    188         """
    189
    190         with nested(self.builtin_trap, self.display_trap):
    191
    192             # if you run stuff with -c <cmd>, raw hist is not updated
    193             # ensure that it's in sync
    194             self.history_manager.sync_inputs()
    195
    196             while 1:
    197                 try:
--> 198                     self.interact(display_banner=display_banner)
        [... some 270 spurious 'global <name> = undefined' lines emitted by the
        verbose traceback have been omitted here ...]
    199                     #self.interact_with_readline()
    200                     # XXX for testing of a readline-decoupled repl loop, call
    201                     # interact_with_readline above
    202                     break
    203                 except KeyboardInterrupt:
    204                     # this should not be necessary, but KeyboardInterrupt
    205                     # handling seems rather unpredictable...
    206                     self.write("\nKeyboardInterrupt in interact()\n")
    207
    208     def interact(self, display_banner=None):
    209         """Closely emulate the interactive Python console."""
    210
    211         # batch run -> do not interact
    212         if self.exit_now:
    213             return

/usr/lib/python2.6/site-packages/IPython/frontend/terminal/interactiveshell.pyc in interact(self=<IPython.frontend.terminal.interactiveshell.TerminalInteractiveShell object at 0x8fd62ac>, display_banner=False)
    270                      'Because of how pdb handles the stack, it is impossible\n'
    271                      'for IPython to properly format this particular exception.\n'
    272                      'IPython will resume normal operation.')
    273             except:
    274                 # exceptions here are VERY RARE, but they can be triggered
    275                 # asynchronously by signal handlers, for example.
    276                 self.showtraceback()
    277             else:
    278                 self.input_splitter.push(line)
    279                 more = self.input_splitter.push_accepts_more()
    280                 if (self.SyntaxTB.last_syntax_error and
    281                     self.autoedit_syntax):
    282                     self.edit_syntax_error()
    283                 if not more:
    284                     source_raw = self.input_splitter.source_raw_reset()[1]
--> 285                     self.run_cell(source_raw)
    286
    287         # We are off again...
    288         __builtin__.__dict__['__IPYTHON__active'] -= 1
    289
    290         # Turn off the exit flag, so the mainloop can be restarted if desired
    291         self.exit_now = False
    292
    293     def raw_input(self, prompt='', continue_prompt=False):
    294         """Write a prompt and read a line.
    295
    296         The returned line does not include the trailing newline.
    297         When the user enters the EOF key sequence, EOFError is raised.
    298
    299         Optional inputs:
    300

/usr/lib/python2.6/site-packages/IPython/core/interactiveshell.pyc in run_cell(self=<IPython.frontend.terminal.interactiveshell.TerminalInteractiveShell object at 0x8fd62ac>, cell='repr("\xc3\xa1\xc3\xa9")\n')
   2078         # - increment the global execution counter (we need to pull that out
   2079         # from outputcache's control; outputcache should instead read it from
   2080         # the main object).
   2081         # - do any logging of input
   2082         # - update histories (raw/translated)
   2083         # - then, call plain run_source (for single blocks, so displayhook is
   2084         # triggered) or run_code (for multiline blocks in exec mode).
   2085         #
   2086         # Once this is done, we'll be able to stop using runlines and we'll
   2087         # also have a much cleaner separation of logging, input history and
   2088         # output cache management.
   2089         #################################################################
   2090
   2091         # We need to break up the input into executable blocks that can be run
   2092         # in 'single' mode, to provide comfortable user behavior.
-> 2093         blocks = self.input_splitter.split_blocks(cell)
   2094
   2095         if not blocks:
   2096             return
   2097
   2098         # Store the 'ipython' version of the cell as well, since that's what
   2099         # needs to go into the translated history and get executed (the
   2100         # original cell may contain non-python syntax).
   2101         ipy_cell = ''.join(blocks)
   2102
   2103         # Store raw and processed history
   2104         self.history_manager.store_inputs(ipy_cell, cell)
   2105
   2106         self.logger.log(ipy_cell, cell)
   2107         # dbg code!!!
   2108         if 0:

/usr/lib/python2.6/site-packages/IPython/core/inputsplitter.pyc in split_blocks(self=<IPython.core.inputsplitter.IPythonInputSplitter object at 0x91f002c>, lines=[])
    514                 # block.  Thus, we must put the line back into the input buffer
    515                 # so that it starts a new block on the next pass.
    516                 #
    517                 # 2. the second case is detected in the line before the actual
    518                 # dedent happens, so , we consume the line and we can break out
    519                 # to start a new block.
    520
    521                 # Case 1, explicit dedent causes a break.
    522                 # Note: check that we weren't on the very last line, else we'll
    523                 # enter an infinite loop adding/removing the last line.
    524                 if  _full_dedent and lines and not next_line.startswith(' '):
    525                     lines.append(next_line)
    526                     break
    527
    528                 # Otherwise any line is pushed
--> 529                 self.push(next_line)
    530
    531                 # Case 2, full dedent with full block ready:
    532                 if _full_dedent or \
    533                        self.indent_spaces==0 and not self.push_accepts_more():
    534                     break
    535             # Form the new block with the current source input
    536             blocks.append(self.source_reset())
    537
    538         #return blocks
    539         # HACK!!! Now that our input is in blocks but guaranteed to be pure
    540         # python syntax, feed it back a second time through the AST-based
    541         # splitter, which is more accurate than ours.
    542         return split_blocks(''.join(blocks))
    543
    544     #------------------------------------------------------------------------

/usr/lib/python2.6/site-packages/IPython/core/inputsplitter.pyc in push(self=<IPython.core.inputsplitter.IPythonInputSplitter object at 0x91f002c>, lines='repr("\xc3\xa1\xc3\xa9")')
    981         # class by hand line by line, we need to temporarily switch out to
    982         # 'line' mode, do a single manual reset and then feed the lines one
    983         # by one.  Note that this only matters if the input has more than one
    984         # line.
    985         changed_input_mode = False
    986
    987         if self.input_mode == 'cell':
    988             self.reset()
    989             changed_input_mode = True
    990             saved_input_mode = 'cell'
    991             self.input_mode = 'line'
    992
    993         # Store raw source before applying any transformations to it.  Note
    994         # that this must be done *after* the reset() call that would otherwise
    995         # flush the buffer.
--> 996         self._store(lines, self._buffer_raw, 'source_raw')
    997
    998         try:
    999             push = super(IPythonInputSplitter, self).push
   1000             for line in lines_list:
   1001                 if self._is_complete or not self._buffer or \
   1002                    (self._buffer and self._buffer[-1].rstrip().endswith(':')):
   1003                     for f in transforms:
   1004                         line = f(line)
   1005
   1006                 out = push(line)
   1007         finally:
   1008             if changed_input_mode:
   1009                 self.input_mode = saved_input_mode
   1010         return out
   1011

/usr/lib/python2.6/site-packages/IPython/core/inputsplitter.pyc in _store(self=<IPython.core.inputsplitter.IPythonInputSplitter object at 0x91f002c>, lines='repr("\xc3\xa1\xc3\xa9")', buffer=['repr("\xc3\xa1\xc3\xa9")\n'], store='source_raw')
    592                 self.indent_spaces, self._full_dedent = self._find_indent(line)
    593
    594     def _store(self, lines, buffer=None, store='source'):
    595         """Store one or more lines of input.
    596
    597         If input lines are not newline-terminated, a newline is automatically
    598         appended."""
    599
    600         if buffer is None:
    601             buffer = self._buffer
    602
    603         if lines.endswith('\n'):
    604             buffer.append(lines)
    605         else:
    606             buffer.append(lines+'\n')
--> 607         setattr(self, store, self._set_source(buffer))
    608
    609     def _set_source(self, buffer):
    610         return ''.join(buffer).encode(self.encoding)
    611
    612
    613 #-----------------------------------------------------------------------------
    614 # Functions and classes for IPython-specific syntactic support
    615 #-----------------------------------------------------------------------------
    616
    617 # RegExp for splitting line contents into pre-char//first word-method//rest.
    618 # For clarity, each group in on one line.
    619
    620 line_split = re.compile("""
    621              ^(\s*)              # any leading space
    622              ([,;/%]|!!?|\?\??)  # escape character or characters

/usr/lib/python2.6/site-packages/IPython/core/inputsplitter.pyc in _set_source(self=<IPython.core.inputsplitter.IPythonInputSplitter object at 0x91f002c>, buffer=['repr("\xc3\xa1\xc3\xa9")\n'])
    595         """Store one or more lines of input.
    596
    597         If input lines are not newline-terminated, a newline is automatically
    598         appended."""
    599
    600         if buffer is None:
    601             buffer = self._buffer
    602
    603         if lines.endswith('\n'):
    604             buffer.append(lines)
    605         else:
    606             buffer.append(lines+'\n')
    607         setattr(self, store, self._set_source(buffer))
    608
    609     def _set_source(self, buffer):
--> 610         return ''.join(buffer).encode(self.encoding)
    611
    612
    613 #-----------------------------------------------------------------------------
    614 # Functions and classes for IPython-specific syntactic support
    615 #-----------------------------------------------------------------------------
    616
    617 # RegExp for splitting line contents into pre-char//first word-method//rest.
    618 # For clarity, each group in on one line.
    619
    620 line_split = re.compile("""
    621              ^(\s*)              # any leading space
    622              ([,;/%]|!!?|\?\??)  # escape character or characters
    623              \s*(%?[\w\.\*]*)    # function/method, possibly with leading %
    624                                  # to correctly treat things like '?%magic'
    625              (\s+.*$|$)          # rest of line

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 6: ordinal not in range(128)

Hit <Enter> to quit this message (your terminal may close):
-----------------------------

-- 
Eduardo


From ellisonbg at gmail.com  Mon Oct 11 13:14:21 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 11 Oct 2010 10:14:21 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
Message-ID: <AANLkTinXbAB8dbYyh_nY-VOvJo=TB52cBFHuwvdx7DJ4@mail.gmail.com>

> - keep the work on your branch completely confined to one specific
> topic, bugfix or feature implementation.  Git branches are cheap and
> easy to make, do *not* mix in one branch more than one topic.  If you
> do, you force the reviewer to disentangle unrelated functionality
> scattered across multiple commits.  This doesn't mean that a branch
> can't touch multiple files or have many commits, simply that all the
> work in a branch should be related to one specific 'task', be it
> refactoring, cleanup, bugfix, feature implementation, whatever.  But:
> 'one task, one branch'.

+1

This is perhaps the most important thing.

> - *Never* merge back from trunk into your feature branch.

along with:

> If you absolutely need to merge something from trunk (because it has
> fixes you need for your own work), then rebase on top of trunk before
> making your pull request, so that your branch applies cleanly on top
> of trunk as a self-contained unit without criss-crossing.

+1

Cheers,

Brian


From ellisonbg at gmail.com  Mon Oct 11 13:22:47 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 11 Oct 2010 10:22:47 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTinajJhrDKZ8iBJDWd8KOpP7XR+fddT5P_z9xzC=@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTinajJhrDKZ8iBJDWd8KOpP7XR+fddT5P_z9xzC=@mail.gmail.com>
Message-ID: <AANLkTi=HC5q0f8vJcpCe_5OnbpfhWDDbjzAqTK5nKjk5@mail.gmail.com>

> - When possible, rebase the branch you're about to merge into trunk
> before applying the merge.  If the rebase works, it will make the
> feature branch appear in all displays of the log that are
> topologically sorted as a contiguous set of commits, which makes it
> much nicer to look at and inspect later, as related changes are
> grouped together.

I think this is going a bit far.  As long as topic branches are clean
"coffee cup handles" I don't see why doing a --no-ff merge without
rebasing is that bad of thing.  I am not saying that rebasing is a bad
idea, but to always use it seems to miss the main point of using a
good DVCS: the ease of merging just about anything.
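
For the simple case, the flow being defended here amounts to (with a
hypothetical branch name):

    git checkout master
    git merge --no-ff my-feature   # force a merge commit so the branch stays
                                   # visible as its own "handle" in the DAG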

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Oct 11 13:25:37 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 11 Oct 2010 10:25:37 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
Message-ID: <AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>

On Sun, Oct 10, 2010 at 11:29 PM, Matthew Brett <matthew.brett at gmail.com> wrote:
> Yo,
>
>> I'm bouncing back this reply to the list, because this is a good
>> question that deserves clarification.  I'm hoping in the end, this
>> conversation will be summarized in our guidelines, so I'd rather have
>> it all publicly archived.
>
> So just to check.  Let's say you have:
>
>         A---B---C topic
>         /
>    D---E---F---G master
>
> You (Fernando) would prefer the pull request to be from a rebased
> version of 'topic':
>
>                          Adash---Bdash---Cdash topic-rebased
>                         /
>    D---E---F---G master
>
> I must say, if it were me, I'd prefer the original in that case,
> because it's a clearer indication of the history, and because rebasing
> does have some cost.

I agree with this.

> The cost is that rebasing orphans the original 'topic' branch so that
> it becomes actively dangerous to have around.   If you try and merge
> 'topic' after you've merged 'rebased-topic' you'll get lots of
> conflicts that will be confusing.   That means that, if you've put
> 'topic' up on your github site, and anyone's fetched from you, then
> you've got to track who fetched and warn them they are going to run
> into trouble if they use anything based on it.

Yes, rebasing definitely creates a lot of overhead in the form of
extra branches that you have to manage. For new devs (even for me
sometimes) that is a significant cost.

> Well - anyway - you know all that - but I think - if you are
> suggesting rebasing even for the clean 'coffee cup handle' type of
> branches, that would be unusual practice no?

I know some projects like to rebase absolutely everything to have a
perfectly clean DAG. I don't feel that way and think the non-rebased
merges of coffee cup handle branches are just fine.

> On the other hand, I agree with you and Linus (!) that it's very
> confusing if someone's merged the main branch into their own before a
> pull request and that would be a good thing to discourage in general.

Yep.

Cheers,

Brian

> Sorry - maybe the fever's got to me ;)
>
> Mattthew
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Mon Oct 11 13:53:14 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 11 Oct 2010 10:53:14 -0700
Subject: [IPython-dev] IPython handles code input as latin1 instead of
 the system encoding
In-Reply-To: <20101011130649.GQ24658@blackpad.lan.raisama.net>
References: <20100617212840.GO14947@blackpad.lan.raisama.net>
	<AANLkTi=tOHDRzhz54=gN1gYX0SpsxMP44fuTz=3yfbR3@mail.gmail.com>
	<20101011130649.GQ24658@blackpad.lan.raisama.net>
Message-ID: <AANLkTik_wr0K5ySN6KC9xr2EGnJVB9w2HR-WE3rM3Kig@mail.gmail.com>

Hi,

On Mon, Oct 11, 2010 at 6:06 AM, Eduardo Habkost <ehabkost at raisama.net> wrote:
>
>
> I just built and installed from latest git (commit
> 4e2d3af2a82b31fb523497eccb7ca0cfebd9d169). Things look worse. Crash report is
> below.

yes, I saw that, it's terrible, and it's my fault: it comes from the
recent big refactoring I did of the execution flow.  We do have
unicode unit tests, but unfortunately some of the code paths only get
exercised by a human at the console, so they are hard to catch in
automated testing.
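
For reference, the failing call in the traceback,
''.join(buffer).encode(self.encoding), is the classic Python 2 trap of calling
.encode() on a byte string that already holds UTF-8 data: Python first decodes
it implicitly with the ASCII codec, which is exactly the UnicodeDecodeError
reported.  A minimal sketch of the same failure, independent of IPython:

    $ python -c 's = u"caf\xe9".encode("utf-8"); s.encode("utf-8")'
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)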

I unfortunately have something urgent right now and won't be able to
deal with this until tomorrow at best.   In the meantime, I pushed a
fix that at least prevents the crash, but unicode input is ignored:

In [1]: "?"

In [1]:

Highly sub-optimal, but better than crashing, until I can work on this
(hopefully tomorrow).  I'll think better of how to put in automated
tests that mimic user input at a console, so that we get these issues
in our test suite and not in user experience.

Sorry...

Cheers,

f


From fperez.net at gmail.com  Mon Oct 11 22:01:55 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 11 Oct 2010 19:01:55 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
Message-ID: <AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>

Hi folks,

thanks for the feedback! I just got a bit of time now...

On Mon, Oct 11, 2010 at 10:25 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>
>> So just to check.  Let's say you have:
>>
>>         A---B---C topic
>>         /
>>    D---E---F---G master
>>
>> You (Fernando) would prefer the pull request to be from a rebased
>> version of 'topic':
>>
>>                          Adash---Bdash---Cdash topic-rebased
>>                         /
>>    D---E---F---G master
>>
>> I must say, if it were me, I'd prefer the original in that case,
>> because it's a clearer indication of the history, and because rebasing
>> does have some cost.
>
> I agree with this.
>
>> The cost is that rebasing orphans the original 'topic' branch so that
>> it becomes actively dangerous to have around.   If you try and merge
>> 'topic' after you've merged 'rebased-topic' you'll get lots of
>> conflicts that will be confusing.   That means that, if you've put
>> 'topic' up on your github site, and anyone's fetched from you, then
>> you've got to track who fetched and warn them they are going to run
>> into trouble if they use anything based on it.
>
> Yes, rebasing definitely creates a lot of overhead in the form of
> extra branches that you have to manage. For new devs (even for me
> sometimes) that is a significant cost.
>
>> Well - anyway - you know all that - but I think - if you are
>> suggesting rebasing even for the clean 'coffee cup handle' type of
>> branches, that would be unusual practice no?
>
> I know some projects like to rebase absolutely everything to have a
> perfectly clean DAG. I don't feel that way and think the non-rebased
> merges of coffee cup handle branches are just fine.

Those are valid points.  Let me try to clarify my perspective and why
I suggested the rebasing.  Compare the two screenshots:

- http://imgur.com/nBZI2: merged branch where I rebased right before pushing
- http://imgur.com/7bNOy: merged branch (yellow) where I did NOT
rebase before pushing.

I find the former much easier to follow than the latter, because all
related commits are topologically together.

These branches aren't meant for third-parties to follow, since they
are being proposed for merging into trunk, so I don't see the rebasing
as an issue for third-parties.  In fact, even without rebasing,
following these branches is never a good idea since people are likely
to regularly prune their repo from obsolete branches (I know I do, and
I've seen others do it as well).  So I think for these types of
branches, the argument of possible headaches for downstream users of
the branches isn't very compelling.

I also don't think the rebased version is a much less clear reflection
of the original history, as all commits in the rebased version retain
their original message and timestamp, so one can easily see when
things happened.

But if everyone prefers the alternative, I won't push particularly
hard in this direction.  I think that in this instance, making the
group happy is more important than making me happy :)  I'd just like
to understand what actual downsides you guys see in rebasing *in these
specific circumstances* (I'm not advocating rebasing trunk, for
example).

Cheers,

f


From fperez.net at gmail.com  Mon Oct 11 22:04:55 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 11 Oct 2010 19:04:55 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=HC5q0f8vJcpCe_5OnbpfhWDDbjzAqTK5nKjk5@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTinajJhrDKZ8iBJDWd8KOpP7XR+fddT5P_z9xzC=@mail.gmail.com>
	<AANLkTi=HC5q0f8vJcpCe_5OnbpfhWDDbjzAqTK5nKjk5@mail.gmail.com>
Message-ID: <AANLkTimtpkmphN--YeqOr8NOcDM0v-0nu7BqsF0buKS1@mail.gmail.com>

On Mon, Oct 11, 2010 at 10:22 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>
> I think this is going a bit far.  As long as topic branches are clean
> "coffee cup handles" I don't see why doing a --no-ff merge without
> rebasing is that bad of thing.  I am not saying that rebasing is a bad
> idea, but to always use it seems to miss the main point of using a
> good DVCS: the ease of merging just about anything.
>

Hopefully the example I sent clarifies: it wasn't for a --no-ff case,
but rather for a case that would have produced a horrid DAG.  Since
Thomas had merged multiple times *from* trunk, the original merge in
the pull request was ~11 levels deep.  The DAG was completely
incomprehensible.  A rebase was trivial (no content changes) and it
produced the much more understandable picture here:

http://imgur.com/nBZI2

Cheers,

f


From matthew.brett at gmail.com  Mon Oct 11 22:16:04 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 11 Oct 2010 19:16:04 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
Message-ID: <AANLkTikeMbZTp9RsvzoFFPVH+Q4Xt8KHqP_VhStYNkwq@mail.gmail.com>

Yo,

> Those are valid points.  Let me try to clarify my perspective and why
> I suggested the rebasing.  Compare the two screenshots:
>
> - http://imgur.com/nBZI2: merged branch where I rebased right before pushing
> - http://imgur.com/7bNOy: merged branch (yellow) where I did NOT
> rebase before pushing.
>
> I find the former much easier to follow than the latter, because all
> related commits are topologically together.

Yes - we may differ in taste here - I find both of these to be fine.

> These branches aren't meant for third-parties to follow, since they
> are being proposed for merging into trunk, so I don't see the rebasing
> as an issue for third-parties.  In fact, even without rebasing,
> following these branches is never a good idea since people are likely
> to regularly prune their repo from obsolete branches (I know I do, and
> I've seen others do it as well).  So I think for these types of
> branches, the argument of possible headaches for downstream users of
> the branches isn't very compelling.

Right - I guess all I was saying is that the contributors have to
remember more - i.e. - make sure that no-one could reasonably have
started to use one of your branches, make sure you delete them as soon
as they've been rebased and merged, and so on.  If anyone forgets or
doesn't know, then they are more likely to inject chaos than
otherwise.

> But if everyone prefers the alternative, I won't push particularly
> hard in this direction.  I think that in this instance, making the
> group happy is more important than making me happy :)  I'd just like
> to understand what actual downsides you guys see in rebasing *in these
> specific circumstances* (I'm not advocating rebasing trunk, for
> example).

Oh - ignore me - I've never made an ipython commit!

See you,

Matthew


From fperez.net at gmail.com  Tue Oct 12 03:10:40 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 12 Oct 2010 00:10:40 -0700
Subject: [IPython-dev] [ANN] IPython 0.10.1 is out.
Message-ID: <AANLkTind4nFQLTap28aWwAT3p=JHA96vbPsBPL0igoES@mail.gmail.com>

Hi all,

we've just released IPython 0.10.1, full release notes are below.

Downloads in source and windows binary form are available in the usual location:
http://ipython.scipy.org/dist/

But since our switch to github, we also get automatic distribution of
archives there:
http://github.com/ipython/ipython/archives/rel-0.10.1

and we've also started uploading archives to the Python Package Index
(which easy_install will use by default):
http://pypi.python.org/pypi/ipython

so at any time you should find a location with good download speeds.

You can find the full documentation at:
http://ipython.scipy.org/doc/rel-0.10.1/html/index.html

Enjoy!

Fernando (on behalf of the whole IPython team)

Release 0.10.1
==============

IPython 0.10.1 was released October 11, 2010, over a year after version 0.10.
This is mostly a bugfix release, since after version 0.10 was released, the
development team's energy has been focused on the 0.11 series.  We have
nonetheless tried to backport what fixes we could into 0.10.1, as it remains
the stable series that many users have in production systems they rely on.

Since the 0.11 series changes many APIs in backwards-incompatible ways, we are
willing to continue maintaining the 0.10.x series.  We don't really have time
to actively write new code for 0.10.x, but we are happy to accept patches and
pull requests on the IPython `github site`_.  If sufficient contributions are
made that improve 0.10.1, we will roll them into future releases.  For this
purpose, we will have a branch called 0.10.2 on github, on which you can base
your contributions.
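
For example, contributions could be based on that branch roughly as follows (a
sketch only; the clone URL is just the usual GitHub address)::

    git clone git://github.com/ipython/ipython.git
    cd ipython
    git checkout -b my-backport origin/0.10.2   # start backport work from the 0.10.2 branch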

.. _github site: http://github.com/ipython

For this release, we applied approximately 60 commits totaling a diff of over
7000 lines::

    (0.10.1)amirbar[dist]> git diff rel-0.10.. | wc -l
    7296

Highlights of this release:

- The only significant new feature is that IPython's parallel computing
  machinery now supports natively the Sun Grid Engine and LSF schedulers.  This
  work was a joint contribution from Justin Riley, Satra Ghosh and Matthieu
  Brucher, who put a lot of work into it.  We also improved traceback handling
  in remote tasks, as well as providing better control for remote task IDs.

- New IPython Sphinx directive.  You can use this directive to mark blocks in
  reStructuredText documents as containing IPython syntax (including figures),
  and they will be executed during the build::

  .. ipython::

      In [2]: plt.figure()  # ensure a fresh figure

      @savefig psimple.png width=4in
      In [3]: plt.plot([1,2,3])
      Out[3]: [<matplotlib.lines.Line2D object at 0x9b74d8c>]

- Various fixes to the standalone ipython-wx application.

- We now ship internally the excellent argparse library, graciously licensed
  under BSD terms by Steven Bethard.  Now (2010) that argparse has become part
  of Python 2.7 this will be less of an issue, but Steven's relicensing allowed
  us to start updating IPython to using argparse well before Python 2.7.  Many
  thanks!

- Robustness improvements so that IPython doesn't crash if the readline library
  is absent (though obviously a lot of functionality that requires readline
  will not be available).

- Improvements to tab completion in Emacs with Python 2.6.

- Logging now supports timestamps (see ``%logstart?`` for full details).

- A long-standing and quite annoying bug where parentheses would be added to
  ``print`` statements, under Python 2.5 and 2.6, was finally fixed.

- Improved handling of libreadline on Apple OSX.

- Fix ``reload`` method of IPython demos, which was broken.

- Fixes for the ipipe/ibrowse system on OSX.

- Fixes for Zope profile.

- Fix %timeit reporting when the time is longer than 1000s.

- Avoid lockups with ? or ?? in SunOS, due to a bug in termios.

- The usual assortment of miscellaneous bug fixes and small improvements.

The following people contributed to this release (please let us know if we
omitted your name and we'll gladly fix this in the notes for the future):

* Beni Cherniavsky
* Boyd Waters
* David Warde-Farley
* Fernando Perez
* Gökhan Sever
* Justin Riley
* Kiorky
* Laurent Dufrechou
* Mark E. Smith
* Matthieu Brucher
* Satrajit Ghosh
* Sebastian Busch
* Václav Šmilauer


From fperez.net at gmail.com  Tue Oct 12 03:19:27 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 12 Oct 2010 00:19:27 -0700
Subject: [IPython-dev] [ANN] IPython 0.10.1 is out.
In-Reply-To: <AANLkTind4nFQLTap28aWwAT3p=JHA96vbPsBPL0igoES@mail.gmail.com>
References: <AANLkTind4nFQLTap28aWwAT3p=JHA96vbPsBPL0igoES@mail.gmail.com>
Message-ID: <AANLkTik4cx8OhaKA6j=5JQ4tSaAm9iBQsNbUn-UuiBKp@mail.gmail.com>

2010/10/12 Fernando Perez <fperez.net at gmail.com>:
> Hi all,
>
> we've just released IPython 0.10.1, full release notes are below.

Here's to hoping we don't have to do a 0.10.2, because even the
relatively simple 0.10.1 still ended up taking me a solid 3 or 4 hours
of work...  But just in case, I've created a 0.10.2 dev branch:

http://github.com/ipython/ipython/tree/0.10.2

That way pull requests can be cleanly made off it, and we can accept
them without too much hassle.

I had to do a bunch of little tool updates for the github workflow
(this was the first time we had a release from git/github), so I'll
push those into trunk in a minute.

Cheers,

f


From hans_meine at gmx.net  Tue Oct 12 05:14:15 2010
From: hans_meine at gmx.net (Hans Meine)
Date: Tue, 12 Oct 2010 11:14:15 +0200
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
Message-ID: <201010121114.16067.hans_meine@gmx.net>

Hi Fernando,

I did not get into the discussion so far, since I am very new to Git.  
However, I have a lot of experience with Mercurial, and I spent a lot of time 
thinking about equivalent problems with hg.

AFAICS, I am more with Brian than with your original proposal here; I also do 
believe that the DVCS should capture the original development process.

On Tuesday 12 October 2010 at 04:01:55, Fernando Perez wrote:
> Those are valid points.  Let me try to clarify my perspective and why
> I suggested the rebasing.  Compare the two screenshots:
> 
> - http://imgur.com/nBZI2: merged branch where I rebased right before pushing
> - http://imgur.com/7bNOy: merged branch (yellow) where I did NOT
> rebase before pushing.
> 
> I find the former much easier to follow than the latter, because all
> related commits are topologically together.

My opinion here is that one should separate the data (commits) and 
presentation (above graphs).  IMO it is the task of the commit log viewer to 
present a graph as nicely as possible - many of your complaints do not actually
concern the graph itself, but its linearized view, no?

Actually, I would even say that repeated merges should be fine and could be
presented in a much nicer manner, but I also see that we need to live with the
tools we've got.

The current state of Git GUIs and graph viewers is not clear to me at all, but 
the "bad" example you posted looks really stupid to me.  I seem to recall a 
discussion concerning grouped changesets in some HG repo viewer, but a quick 
test showed that the same zig-zag effect is presented (at least by default) by 
the hg tools I am using (Logilab's hgview, TortoiseHG, and hg serve's HTML).

> I also don't think the rebased version is a much less clear reflection
> of the original history, as all commits in the rebased version retain
> their original message and timestamp, so one can easily see when
> things happened.

It would even be possible to re-order the changesets without changing their 
topology or ID at all (in hg I would know how to do that), but obviously the 
log viewers all present the exact linear order of changesets found in the 
local repo storage. :-(

Have a nice day,
  Hans

PS: I used these commands to produce an example hg repository for testing:

mkdir foo
cd foo
hg init
echo hello > world
hg add world
hg ci -m "first"
echo first > extra
hg add extra
hg ci -m "created extra (topic branch)"
hg up 0
echo second.2 >> world
hg ci -m "second"
hg up 1
echo second >> extra
hg ci -m "expanded extra (topic branch)"
hg up 2
echo third.2 >> world
hg ci -m "third"
hg merge
hg ci -m "merged topic branch"


From takowl at gmail.com  Tue Oct 12 05:24:34 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Tue, 12 Oct 2010 10:24:34 +0100
Subject: [IPython-dev] Pull request workflow...
Message-ID: <AANLkTimuE74A_dPuuwqfN3RsGYfTWUdTr0sW+mJ_v0Ob@mail.gmail.com>

On 12 October 2010 08:11, <ipython-dev-request at scipy.org> wrote:

> Since Thomas had merged multiple times *from* trunk, the original merge in
> the pull request was ~11 levels deep.
>

For the record, my apologies for this. I've not really used a DVCS before,
and I was thinking in terms of diffs (i.e. minimising the difference between
my branch and trunk), rather than changesets. It's been a learning
experience. :-)

Thomas

From ellisonbg at gmail.com  Tue Oct 12 11:59:41 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 12 Oct 2010 08:59:41 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
Message-ID: <AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>

>> I know some projects like to rebase absolutely everything to have a
>> perfectly clean DAG. I don't feel that way and think the non-rebased
>> merges of coffee cup handle branches are just fine.
>
> Those are valid points.  Let me try to clarify my perspective and why
> I suggested the rebasing.  Compare the two screenshots:
>
> - http://imgur.com/nBZI2: merged branch where I rebased right before pushing
> - http://imgur.com/7bNOy: merged branch (yellow) where I did NOT
> rebase before pushing.
>
> I find the former much easier to follow than the latter, because all
> related commits are topologically together.

Definitely, but I do agree with Hans that this is really a problem
with the viewer, not the DAG itself.  But, I definitely agree with you
that the rebased version is much cleaner.

> These branches aren't meant for third-parties to follow, since they
> are being proposed for merging into trunk, so I don't see the rebasing
> as an issue for third-parties.  In fact, even without rebasing,
> following these branches is never a good idea since people are likely
> to regularly prune their repo from obsolete branches (I know I do, and
> I've seen others do it as well).  So I think for these types of
> branches, the argument of possible headaches for downstream users of
> the branches isn't very compelling.

I don't think the cost of rebasing is something that users/third
parties pay, but rather a cost that we, as developers, pay.  Consider
the following:

1. I work in branch foo, rebase it on top of master and then post on
github as a pull request.
2. People comment on the work and I have to make additional commits to
address the comments.
3. If we always try to rebase, I have to create a *new* foo2 branch
that has my recent commits rebased and post that to github.  But
because it is a new branch, I have to submit a new pull request and
the discussion has to continue in a manner that is disconnected from
the original foo pull request.
4. This creation of new branches has to be repeated for each
comment/edit/rebase cycle.

This is a significant cost for developers, and I simply don't think it
is worth the effort. Not to mention that creating/deleting lots of
branches is error prone.

This is not to say I don't think that sometimes rebasing is a great
idea.  It definitely is. But, I think we want to continue to use
non-rebased merges as a part of our regular workflow. I should say
that if rebasing didn't have this extra cost for developers, I would
be totally fine with it being the norm in most cases.

> I also don't think the rebased version is a much less clear reflection
> of the original history, as all commits in the rebased version retain
> their original message and timestamp, so one can easily see when
> things happened.

I agree.

> But if everyone prefers the alternative, I won't push particularly
> hard in this direction.  I think that in this instance, making the
> group happy is more important than making me happy :)  I'd just like
> to understand what actual downsides you guys see in rebasing *in these
> specific circumstances* (I'm not advocating rebasing trunk, for
> example).

For simple pull requests where there are no additional commits, I
think rebasing is just fine.  The cost shows up mostly in the more
complex review/commit/rebase cycles.

Cheers,

Brian

> Cheers,
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From mark.voorhies at ucsf.edu  Tue Oct 12 13:04:51 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Tue, 12 Oct 2010 10:04:51 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
	<AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
Message-ID: <201010121004.52326.mark.voorhies@ucsf.edu>

On Tuesday, October 12, 2010 08:59:41 am Brian Granger wrote:
> >> I know some projects like to rebase absolutely everything to have a
> >> perfectly clean DAG. I don't feel that way and think the non-rebased
> >> merges of coffee cup handle branches are just fine.
> >
> > Those are valid points.  Let me try to clarify my perspective and why
> > I suggested the rebasing.  Compare the two screenshots:
> >
> > - http://imgur.com/nBZI2: merged branch where I rebased right before pushing
> > - http://imgur.com/7bNOy: merged branch (yellow) where I did NOT
> > rebase before pushing.
> >
> > I find the former much easier to follow than the latter, because all
> > related commits are topologically together.
> 
> Definitely, but I do agree with Hans that this is really a problem
> with the viewer, not the DAG itself.  But, I definitely agree with you
> that the rebased version is much cleaner.

FWIW, gitk's default topological sort groups the related commits in the
http://imgur.com/7bNOy example (see attached PNG, generated with
gitk --all)

--Mark
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gitk.png
Type: image/png
Size: 30552 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101012/92916f06/attachment.png>

From benjaminrk at gmail.com  Tue Oct 12 13:36:55 2010
From: benjaminrk at gmail.com (MinRK)
Date: Tue, 12 Oct 2010 10:36:55 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
	<AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
Message-ID: <AANLkTinQOijc=1pj+i+FGHET+z7_eh7=NiaSQ5pnz6BC@mail.gmail.com>

On Tue, Oct 12, 2010 at 08:59, Brian Granger <ellisonbg at gmail.com> wrote:

> >> I know some projects like to rebase absolutely everything to have a
> >> perfectly clean DAG. I don't feel that way and think the non-rebased
> >> merges of coffee cup handle branches are just fine.
> >
> > Those are valid points.  Let me try to clarify my perspective and why
> > I suggested the rebasing.  Compare the two screenshots:
> >
> > - http://imgur.com/nBZI2: merged branch where I rebased right before
> pushing
> > - http://imgur.com/7bNOy: merged branch (yellow) where I did NOT
> > rebase before pushing.
> >
> > I find the former much easier to follow than the latter, because all
> > related commits are topologically together.
>
> Definitely, but I do agree with Hans that this is really a problem
> with the viewer, not the DAG itself.  But, I definitely agree with you
> that the rebased version is much cleaner.
>
> > These branches aren't meant for third-parties to follow, since they
> > are being proposed for merging into trunk, so I don't see the rebasing
> > as an issue for third-parties.  In fact, even without rebasing,
> > following these branches is never a good idea since people are likely
> > to regularly prune their repo from obsolete branches (I know I do, and
> > I've seen others do it as well).  So I think for these types of
> > branches, the argument of possible headaches for downstream users of
> > the branches isn't very compelling.
>
> I don't think the cost of rebasing is something that users/third
> parties pay, but rather a cost that we, as developers pay.  Consider
> the following:
>
> 1. I work in branch foo, rebase it on top of master and then post on
> github as a pull request.
> 2. People comment on the work and I have to make additional commits to
> address the comments.
> 3. If we always try to rebase, I have to create a *new* foo2 branch
> that has my recent commits rebased and post that to github.  But
> because it is a new branch, I have to submit a new pull request and
> the discussion has to continue in a manner that is disconnected from
> the original foo pull request.
>

Point 3 is not actually true.  You can rebase and push on top of the old branch,
preserving the pull-request/discussion. I did this with some of my branches
in pyzmq.  The commits change, so the ordering ends up different, but it
certainly works.  And, if you want to preserve the old commits in their
original state, you can check out 'mybranch' and push it as 'mybranch-save'
before rebasing.
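
In practice the sequence looks roughly like this (the branch names are made
up for the example, and 'origin' stands for your own fork on github):

git checkout mybranch
git branch mybranch-save        # optional: keep the pre-rebase commits around
git rebase master               # replay mybranch on top of current master
git push -f origin mybranch     # update the same branch behind the open pull request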

See http://github.com/zeromq/pyzmq/pull/31 for what a pull-request review looks like
where a rebase has happened in the middle.  What's lost is the ordering
relationship of commits/comments, which is certainly not ideal, but is quite
manageable for small feature branches.

-MinRK



> 4. This creation of new branches has to be repeated for each
> comment/edit/rebase cycle.
>
> This is a significant cost for developers, and I simply don't think it
> is worth the effort. Not to mention that creating/deleting lots of
> branches is error prone.
>
> This is not to say I don't think that sometimes rebasing is a great
> idea.  It definitely is. But, I think we want to continue to use
> non-rebased merges as a part of our regular workflow. I should say
> that if rebasing didn't have this extra cost for developers, I would
> be totally fine with it being the norm in most cases.
>
> > I also don't think the rebased version is a much less clear reflection
> > of the original history, as all commits in the rebased version retain
> > their original message and timestamp, so one can easily see when
> > things happened.
>
> I agree.
>
> > But if everyone prefers the alternative, I won't push particularly
> > hard in this direction.  I think that in this instance, making the
> > group happy is more important than making me happy :)  I'd just like
> > to understand what actual downsides you guys see in rebasing *in these
> > specific circumstances* (I'm not advocating rebasing trunk, for
> > example).
>
> For simple pull requests where there are no additional commits, I
> think rebasing is just fine.  It is mostly in the more complex
> review/commit/rebase cycles.
>
> Cheers,
>
> Brian
>
> > Cheers,
> >
> > f
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101012/dab0ec66/attachment.html>

From fperez.net at gmail.com  Tue Oct 12 14:51:58 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 12 Oct 2010 11:51:58 -0700
Subject: [IPython-dev] [ANN] IPython 0.10.1 is out.
In-Reply-To: <AANLkTind4nFQLTap28aWwAT3p=JHA96vbPsBPL0igoES@mail.gmail.com>
References: <AANLkTind4nFQLTap28aWwAT3p=JHA96vbPsBPL0igoES@mail.gmail.com>
Message-ID: <AANLkTineuAX825Z=J9ySbb174dY5SNbBMgaw91cA1Ae-@mail.gmail.com>

Hi all,

Illustrating the need to *always* remember to credit, in the commit
message, the name of the person who originally made a contribution...

2010/10/12 Fernando Perez <fperez.net at gmail.com>:
> Hi all,
> - New IPython Sphinx directive.  You can use this directive to mark blocks in
>   reStructuredText documents as containing IPython syntax (including figures),
>   and they will be executed during the build::
[...]
> The following people contributed to this release (please let us know if we
> omitted your name and we'll gladly fix this in the notes for the future):

...
I completely failed to note that this feature (one out of the only two
new features in 0.10.2!) was contributed by John Hunter.

John shall be generously compensated for this offense with fresh
coffee and tropical fruit candy from Colombia, so there's nothing to
worry about :)

But this is a good lesson for the committers.  I wrote the release
notes last night by scanning the full changelog and running this
function:

function gauthor() {
    # list the unique commit authors for whatever arguments get passed to git log
    git log "$@" | grep '^Author' | cut -d' ' -f 2- | sort | uniq
}
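
For instance, to list everyone who authored commits in a given range (the
tag name below is only illustrative):

gauthor rel-0.10..HEAD      # authors of everything committed since rel-0.10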

Since I failed to record John's name in the changelog when he sent
this, last night I simply forgot to credit him.  It's very, very hard
to remember months after the fact where any one piece of code came
from, so let's try to be disciplined about *always*:

- If the contribution is more or less ready to commit as sent, and the
committer only does absolutely minimal work, use

git commit --author="Original Author <original at author.com>"

- If the committer does significant amounts of rework, note the
original author in the long part of the commit message (after the
first summary line).  This will make it possible to find that
information later when writing the release notes.

Here are some examples from our log where I didn't screw up:

- Using --author:
commit 8323fa343e74a01394e85f3874249b955131976a
Author: Sebastian Busch <>
Date:   Sun Apr 25 10:57:39 2010 -0700

    Improvements to Vim support for visual mode.

    Patch by Sebastian Busch.

    Note: this patch was originally for the 0.10 series, I (fperez) minimally
    fixed it for 0.11 but it may still require some tweaking to work well with
    the refactored codebase.

    Closes https://bugs.launchpad.net/ipython/+bug/460359

-- Not using --author, but recording origin:
commit ffa96dbc431628218dec604d59bb80511af40751
Author: Fernando Perez <Fernando.Perez at berkeley.edu>
Date:   Sat Apr 24 20:35:08 2010 -0700

    Fix readline detection bug in OSX.

    Close https://bugs.launchpad.net/ipython/+bug/411599

    Thanks to a patch by Boyd Waters.


Ideally, when a significant new feature lands, we should immediately
summarize it in the whatsnew/ docs, but I know that is often hard to
do, as features continue to evolve for a while.  All the more reason
why commit messages with sufficient, accurate information are so
important.

Cheers,

f


From fperez.net at gmail.com  Tue Oct 12 15:12:32 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 12 Oct 2010 12:12:32 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
	<AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
Message-ID: <AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>

On Tue, Oct 12, 2010 at 8:59 AM, Brian Granger <ellisonbg at gmail.com> wrote:
> Definitely, but I do agree with Hans that this is really a problem
> with the viewer, not the DAG itself.  But, I definitely agree with you
> that the rebased version is much cleaner.

Yes, I agree, and Mark's mention of gitk shows that there are even
viewers that do a better job in this regard (I had looked with gitg and
qgit, but hadn't opened gitk in a few weeks and didn't realize it was
a bit smarter in its sorting).

>> These branches aren't meant for third-parties to follow, since they
>> are being proposed for merging into trunk, so I don't see the rebasing
>> as an issue for third-parties.  In fact, even without rebasing,
>> following these branches is never a good idea since people are likely
>> to regularly prune their repo from obsolete branches (I know I do, and
>> I've seen others do it as well). ?So I think for these types of
>> branches, the argument of possible headaches for downstream users of
>> the branches isn't very compelling.
>
> I don't think the cost of rebasing is something that users/third
> parties pay, but rather a cost that we, as developers pay. ?Consider
> the following:
>
> 1. I work in branch foo, rebase it on top of master and then post on
> github as a pull request.
> 2. People comment on the work and I have to make additional commits to
> address the comments.
> 3. If we always try to rebase, I have to create a *new* foo2 branch
> that has my recent commits rebased and post that to github.  But
> because it is a new branch, I have to submit a new pull request and
> the discussion has to continue in a manner that is disconnected from
> the original foo pull request.
> 4. This creation of new branches has to be repeated for each
> comment/edit/rebase cycle.
>
> This is a significant cost for developers, and I simply don't think it
> is worth the effort. Not to mention that creating/deleting lots of
> branches is error prone.
>
> This is not to say I don't think that sometimes rebasing is a great
> idea. ?It definitely is. But, I think we want to continue to use
> non-rebased merges as a part of our regular workflow. I should say
> that if rebasing didn't have this extra cost for developers, I would
> be totally fine with it being the norm in most cases.

Good points, and I think we're finding a good balance.  I'm more than
happy to go with what seems to be the consensus on this one, thanks a
lot for entertaining this discussion (by the way, this thread is
proving very useful for numpy, where more or less the same discussion
started a day later).  It's a good thing to go through this carefully
once, so that we can work smoothly for a long time.

As soon as I can find a spare minute, I'll try to summarize this and
other small points about our workflow in our docs, so that we can refer
to it more easily in the long run.

> For simple pull requests where there are no additional commits, I
> think rebasing is just fine. ?It is mostly in the more complex
> review/commit/rebase cycles.

Do we think the right balance is the following?

- From the proposer's side, rebase *before* you make your first pull
request.  After that, don't worry too much about any more rebasing.
That will give a reasonably clean starting point for the review, as
well as ensuring that the branch applies cleanly.  Developers who are
very comfortable with rebasing in-review (like Min, in his example)
can do so, but we shouldn't *ask* that they do.

- And no rebasing from the committer's side like I originally
proposed, except in cases where significant criss-cross merges need to
be cleaned up.

And we could make the idea of an initial rebase 100% optional, only to
be done by those who feel comfortable with git.  I know the word
'rebase' scares many people new to git, and we don't want to put up
artificial barriers to contribution.  So it may be best to say that
these are our suggestions for more advanced developers proposing
complex merges, but that we will still accept any non-rebased pull
request, as long as it doesn't merge *from trunk*.
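
For the proposer-side rebase above, the steps would be roughly the following
(remote and branch names are only an example, with 'upstream' pointing at the
main ipython repository and 'origin' at your own fork):

git fetch upstream
git checkout my-feature
git rebase upstream/master       # resolve any conflicts here, before review starts
git push origin my-feature       # then open the pull request from this branch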

How does this sound for our final take?

Cheers,

f


From fperez.net at gmail.com  Tue Oct 12 15:25:17 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 12 Oct 2010 12:25:17 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTimuE74A_dPuuwqfN3RsGYfTWUdTr0sW+mJ_v0Ob@mail.gmail.com>
References: <AANLkTimuE74A_dPuuwqfN3RsGYfTWUdTr0sW+mJ_v0Ob@mail.gmail.com>
Message-ID: <AANLkTi=ygMte7h7BDUT_-B8TNtxhodZYe88HBOKA7-Ab@mail.gmail.com>

On Tue, Oct 12, 2010 at 2:24 AM, Thomas Kluyver <takowl at gmail.com> wrote:
>
> For the record, my apologies for this. I've not really used a DVCS before,
> and I was thinking in terms of diffs (i.e. minimising the difference between
> my branch and trunk), rather than changesets. It's been a learning
> experience. :-)

There is no need to apologize; if anything, I feel bad for apparently
picking on you.  It's just that yours was a very recent and relevant
example (right at the top of the log).  If you want to see ugly, look
at the mess Brian and I made last year:

http://i.imgur.com/KGYgs.png

(and that's with gitg collapsing part of the tree with arrows!).

We've all made mistakes, and it's very important that the project
remains *open and friendly* to people making mistakes.  If we're
afraid to try anything new or a little crazy, we'll never come up with
anything interesting!

That doesn't mean being sloppy, but it does mean that we should be far
more tolerant of the occasional mistake that requires a little bit of
cleanup than create an atmosphere where people are afraid to try
things out because they will be beaten up by the enforcement police.

Particularly with a DVCS, it's much harder to cause real lasting damage.
And even if there is damage to the history, ultimately it's not that
big of a deal.  When we moved from svn to bzr we actually had a nasty
*break* in the history because I wasn't careful enough when
doing the final integration:

http://i.imgur.com/JOz8U.png

But at that point Ville had spent a bunch of time working with bzr,
and the benefits of moving over to a DVCS outweighed the cost of the
break in the DAG.  I didn't have time to learn how to replay the
history (I don't even know if it could be done in bzr), so we just
moved ahead.  The code written in that period and the time invested by
the developers have a lot more value than keeping the DAG police happy
:)

So as you can tell, if anything the worst mistakes we've had all have
my name somewhere nearby.  And that's OK.  Like in climbing: if you're
not falling, you're not trying anything hard :)

Regards,

f


From fperez.net at gmail.com  Tue Oct 12 15:39:10 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 12 Oct 2010 12:39:10 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
	<AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
	<AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
Message-ID: <AANLkTinfMddrvakfmVKhvDPVABmxGV4U=LZmYmxLW3SM@mail.gmail.com>

On Tue, Oct 12, 2010 at 12:12 PM, Fernando Perez <fperez.net at gmail.com> wrote:
>
>
> How does this sound for our final take?

One final note (again, not trying to pick on anyone, this time Brian,
just looking at the most recent example): for committers, I think we
should avoid actual merges for cases where it's just one (or even two)
commits.  Otherwise we'll end up with a ton of 'merge branch.. into
master' messages, one per single actual commit.  For example:

*   925c98c Merge branch 'master' into trunk
|\
| * ee8879c Uncommenting reply_socket.recvmore that was broken in 2.0.7.
* | 233d50f Stop-gap fix for crash with unicode input.
|/
* 4e2d3af merge review fperez-execution-refactor

In this case, ee8879 should have been rebased so that it was just
after 233d5, avoiding the need for the extra 925c merge commit.

My rule of thumb has become: for just one or two commits, rebase and
commit them as a linear set (basically I don't want 1/2 or 1/3 of the
metadata to be just the 'merging' commit).  Three or more are enough
to warrant keeping them as their own branch.
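
In other words, for the small case the committer can do something like this
instead of merging (the branch name is invented for the example):

git checkout small-fix           # the contributor's one- or two-commit branch
git rebase master                # put those commits right on top of master
git checkout master
git merge --ff-only small-fix    # fast-forward only: no extra merge commit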

But when keeping a branch, it's important to edit the merge
commit (before pushing) with 'commit --amend' to add:

- A brief summary of what the merge does (especially if the merge
includes a ton of commits)

- If there is an associated pull request, a line of the form

Closes gh-NNN (pull request).

so that GitHub automatically closes the request and links the commit
to the request on their webpages.
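
For example, the committer-side steps would look more or less like this (the
branch name and issue number are just placeholders):

git checkout master
git merge --no-ff some-feature   # keep the branch visible as a real merge
git commit --amend               # add the summary and 'Closes gh-NNN (pull request).'
git push origin master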

If y'all agree with this, I'll send it all to the summary later.

Cheers,

f


From fperez.net at gmail.com  Tue Oct 12 19:11:13 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 12 Oct 2010 16:11:13 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
	<AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
	<AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
Message-ID: <AANLkTiksKQiOwS9xg8ds4MWzq7NYxqWCnmh-h8FK+pKh@mail.gmail.com>

On Tue, Oct 12, 2010 at 12:12 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Do we think balance is the following?
>
> - From the proposer's side, rebase *before* you make your first pull
> request. ?After that, don't worry too much about any more rebasings.
> That will give a reasonably clean starting point for the review, as
> well as ensuring that the branch applies cleanly. ?For developers
> (like Min points out with his example) who are very comfortable with
> rebasing in-review they can do so (like he did), but we shouldn't
> *ask* that they do.
>
> - And no rebasing from the committer's side like I originally
> proposed, except in cases where significant criss-cross merges need to
> be cleaned up.
>
> And we could make the idea of an initial rebase 100% optional, only to
> be done by those who feel comfortable with git. ?I know the word
> 'rebase' scares many new to git, and we don't want to put artificial
> barriers to contribution. ?So it may be best to say that these are our
> suggestions for more advanced developers proposing complex merges, but
> that we will still accept any non-rebased pull request, as long as it
> doesn't merge *from trunk*.
>

Actually, a comment in this article about the PostgreSQL migration
from CVS to git:

http://lwn.net/SubscriberLink/409635/11a5197ddb2c46b8/

makes a good point in favor of my original requirement above:

"""
everything that you want integrated into the main repository has to be
rebased on the main repository branch on which you want it integrated.
This is to keep conflict resolution with the developer of the code,
the one who actually know how to fix the conflict, rather than with
the integrator.
It keeps integration cost very low and also keeps at linear history,
which is again perfect for keeping features (consisting of multiple
commits) together, which is again for for all kind of other things
like bisecting and reverting.
"""

By asking that the requester's tree be rebased at pull-request time,
any conflicts will be fixed first by the requester.  This makes total
sense, and will ultimately save everyone involved (requester and
reviewers/committers) time.
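
Concretely, with a rebase any conflict shows up on the requester's machine,
where it is cheapest to fix.  A rough sketch (the file name is made up):

git rebase master
# git stops and reports a conflict in, say, IPython/core/foo.py;
# edit the file to resolve it, then:
git add IPython/core/foo.py
git rebase --continue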

So unless anyone disagrees, I'll go with that approach when I write up
our guidelines.

Cheers,

f


From arokem at berkeley.edu  Wed Oct 13 00:38:38 2010
From: arokem at berkeley.edu (Ariel Rokem)
Date: Tue, 12 Oct 2010 21:38:38 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTim-T5kvRWsfUcSXnXMJJB2FQNAmUAm0iend3dK7@mail.gmail.com>
	<AANLkTi=B03ycWWhyzP53k_M=fr5+9QgjaWKnTLT4G7dp@mail.gmail.com>
	<AANLkTikkexnN=QnqoMcrcpR2xxJ8aDt3njV4JeZ9ptAy@mail.gmail.com>
	<AANLkTikErOCG5vdseQnVcqHqps4DnjR+0c0VKiBso3d1@mail.gmail.com>
	<AANLkTikfOQf1_o0PAFwt=OaAZq7yNezAK7Z1wC5BahVk@mail.gmail.com>
	<AANLkTi=qKVpynn=es+djYR6rBDxJwtt48DDm8HWGGqYb@mail.gmail.com>
	<AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
Message-ID: <AANLkTinSnLWvmFxi926Xv=y_weO_7NOVR8SfrHodrjSR@mail.gmail.com>

Hi Fernando and everyone,

Since we talked about this just yesterday in the context of nitime, I
thought I would pitch in with one more thought:

Do we think balance is the following?
>
> - From the proposer's side, rebase *before* you make your first pull
> request.  After that, don't worry too much about any more rebasings.
> That will give a reasonably clean starting point for the review, as
> well as ensuring that the branch applies cleanly.  For developers
> (like Min points out with his example) who are very comfortable with
> rebasing in-review they can do so (like he did), but we shouldn't
> *ask* that they do.
>
> - And no rebasing from the committer's side like I originally
> proposed, except in cases where significant criss-cross merges need to
> be cleaned up.
>
> And we could make the idea of an initial rebase 100% optional, only to
> be done by those who feel comfortable with git.  I know the word
> 'rebase' scares many new to git, and we don't want to put artificial
> barriers to contribution.  So it may be best to say that these are our
> suggestions for more advanced developers proposing complex merges, but
> that we will still accept any non-rebased pull request, as long as it
> doesn't merge *from trunk*.
>
>
In principle, this sounds balanced and good to me. However, after reading
the first email in this thread, I realized that what I hadn't considered
when we talked about this yesterday is the difficulty posed to code
reviewers by the messed-up history caused by merging into a branch, rather
than rebasing.

What I am currently thinking is that the reviewer can decide whether the
merge causes the history to be so messy that they cannot understand and
review the commits in the pull request. I think that this makes sense,
because if the history is messed up so badly that it can't be easily
reviewed now, it will only be more difficult to understand in a couple of
months, or in a year. This would be equivalent to some of the decisions that
reviewers make about code style and clarity. You might decide that the code
proposed is just too riddled with stylistic eye-sores to be merged (even if
it does what it is supposed to do) and ask a contributor to clean up the
code before pulling. On the other hand, you might decide to let a couple of
eye-sores slip by, in order to make a contributor's life a bit easier. I
think that the same should apply to the git history that would result from
the pull. If it causes a slight criss-crossing in the history that is easy
enough to figure out, let it go by. If it actually makes review of the code
difficult, send a message to the contributor, preferably with some direction
on how to fix it and how not to do it again, much the same as you would for
a contribution that contains stylistic errors. Just a thought.

Cheers,

Ariel



How does this sound for our final take?
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Ariel Rokem
Helen Wills Neuroscience Institute
University of California, Berkeley
http://argentum.ucbso.berkeley.edu/ariel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101012/d75044fb/attachment.html>

From hans_meine at gmx.net  Wed Oct 13 07:21:56 2010
From: hans_meine at gmx.net (Hans Meine)
Date: Wed, 13 Oct 2010 13:21:56 +0200
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTinSnLWvmFxi926Xv=y_weO_7NOVR8SfrHodrjSR@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
	<AANLkTinSnLWvmFxi926Xv=y_weO_7NOVR8SfrHodrjSR@mail.gmail.com>
Message-ID: <201010131321.57301.hans_meine@gmx.net>

On Wednesday 13 October 2010 at 06:38:38, Ariel Rokem wrote:
> What I am currently thinking is that the reviewer can decide whether the
> merge causes the history to be so messy such that they cannot understand
> and review the commits in the pull request. I think that this makes sense,
> because if the history is messed up so badly that it can't be easily
> reviewed now, it will only be more difficult to understand in a couple of
> months, or in a year.

Very good point.

> This would be equivalent to some of the decisions
> that reviewers make about code style and clarity.

..and a good analogy.

> I think that the same should apply to the git
> history that would result from the pull. [...] If it actually makes review
> of the code difficult, send a message to the contributor, preferably with 
> some direction on how to fix it and how not to do it again, much the same as
> you would for a contribution that contains stylistic errors. Just a thought.

Well spoken.

Let's apply the same reasoning to the DAG as to the code itself; try to accept 
only good stuff, but don't scare off newbies with valuable contributions 
(instead, educate them).

Have a nice day,
  Hans


From fperez.net at gmail.com  Wed Oct 13 14:37:44 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 13 Oct 2010 11:37:44 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <201010131321.57301.hans_meine@gmx.net>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
	<AANLkTinSnLWvmFxi926Xv=y_weO_7NOVR8SfrHodrjSR@mail.gmail.com>
	<201010131321.57301.hans_meine@gmx.net>
Message-ID: <AANLkTinDDbb_pNvBr7rLvBckMGP5a5AL4=OuRToC90B9@mail.gmail.com>

On Wed, Oct 13, 2010 at 4:21 AM, Hans Meine <hans_meine at gmx.net> wrote:
>
> Let's apply the same reasoning to the DAG as to the code itself; try to accept
> only good stuff, but don't scare off newbies with valuable contributions
> (instead, educate them).

Yes, that's the spirit we're trying to hit, well put.

Thanks a lot to everyone who took the time to provide feedback on
this.  I know it's a bit "meta", but it was a necessary discussion.  I'll do
my best to write up something concise and useful based on this thread,
and will post it when ready.

Cheers,

f


From ellisonbg at gmail.com  Wed Oct 13 14:46:37 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 13 Oct 2010 11:46:37 -0700
Subject: [IPython-dev] Pull request workflow...
In-Reply-To: <AANLkTinDDbb_pNvBr7rLvBckMGP5a5AL4=OuRToC90B9@mail.gmail.com>
References: <AANLkTi=HL=gVz84kjqYzYbp6F0LVqvAcF1gf3zKdvCN9@mail.gmail.com>
	<AANLkTikuu+CuPmebMFEPrhE2MiQp6CeCJwNVXhSctViv@mail.gmail.com>
	<AANLkTinSnLWvmFxi926Xv=y_weO_7NOVR8SfrHodrjSR@mail.gmail.com>
	<201010131321.57301.hans_meine@gmx.net>
	<AANLkTinDDbb_pNvBr7rLvBckMGP5a5AL4=OuRToC90B9@mail.gmail.com>
Message-ID: <AANLkTimF7mek_mr-Wiw9YQRJLaR+LREKKKsVcPi2z7h=@mail.gmail.com>

On Wed, Oct 13, 2010 at 11:37 AM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Wed, Oct 13, 2010 at 4:21 AM, Hans Meine <hans_meine at gmx.net> wrote:
>>
>> Let's apply the same reasoning to the DAG as to the code itself; try to accept
>> only good stuff, but don't scare off newbies with valuable contributions
>> (instead, educate them).
>
> Yes that's the spirit we're trying to hit, well put.

Yes, I agree.

Cheers,

Brian

> Thanks a lot to everyone who took the time to provide feedback on
> this. ?I know it's a bit "meta" but a necessary discussion. ?I'll do
> my best to write up something concise and useful based on this thread,
> and will post it when ready.
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Wed Oct 13 19:36:52 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 13 Oct 2010 16:36:52 -0700
Subject: [IPython-dev] Qt SVG clipping bug (it's NOT a matplotlib bug)- Qt
	won't fix...
Message-ID: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>

Hi folks,

[CC'ing mpl-dev just for reference so they know we're taking care of
this on our side]

I've been investigating further the bug where clipped paths in SVG
render incorrectly in our console.  It turns out the mpl team already fixed
some things on their side, and I can confirm that their SVGs now
render fine in Inkscape.  Here's an annotated example:

http://i.imgur.com/NCSEJ.png

However, the problem seems to be that the Qt SVG renderer simply
doesn't support clipping paths, period.  Furthermore, they seem to
have decided they won't fix this, as the bug was closed with "won't
fix":

http://bugreports.qt.nokia.com/browse/QTBUG-1865

From the Qt experts: am I misreading this?  Because if we're indeed
stuck with a half-broken SVG renderer from Qt, then we'll need to
reconsider our implementation of pastefig(), perhaps to support an
optional format flag so that users can send PNG if they prefer...
Bummer.

Cheers,

f


From fperez.net at gmail.com  Thu Oct 14 02:04:08 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 13 Oct 2010 23:04:08 -0700
Subject: [IPython-dev] Experiment: pull requests CC'd to the list...
Message-ID: <AANLkTikNsuDbd4z_HYy++_P1VOCawAaSchEwNf4RPOY2@mail.gmail.com>

Hi all,

I made a little experiment: I added the ipython-dev at scipy.org address
as the official email of the ipython organization at github, and
changed the email settings so that all pull requests would generate a
message (only once per request though, I turned off the notifications
of further discussions, and also of bug reports).

I hope this will encourage more people to participate in the project
also by commenting on pull requests: code review is open to anyone
(with a github login, but that's not a restriction imposed by us, just
by the nature of the github system), and I'd like more people beyond
just  a handful of us to get involved that way.  It's a good way to
start learning the codebase, participate in concrete design
discussions, and engage other developers on specific topics you may
care about or have expertise in.

Don't be shy: even if you haven't contributed code to IPython yet, you
may know about a particular topic and your input at the time of a pull
request may help make that code that much better.

I hope this will be a net positive for the project, with minimal noise
on-list.  But if these messages start getting annoying for anyone, let
me know and I'm happy to reconsider.

Regards,

f


From fperez.net at gmail.com  Thu Oct 14 03:10:08 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 14 Oct 2010 00:10:08 -0700
Subject: [IPython-dev] HTML export with PNG or SVG images added to Qt console
Message-ID: <AANLkTik8MfnE3r5XmKq6r=U3Maf2M=4AwPDa4MhhiXjZ@mail.gmail.com>

Hi folks,

Thanks to Mark Voorhies (who gave us PDF printing just a few days
ago), we now have full HTML export of the entire buffer of the Qt
console.

http://github.com/ipython/ipython/commit/f467f96827d11b2420e921308177517f1f8ce49a

[ note: that's just the merge commit, the original commits are in the
tree with Mark's credits.  But I just realized that when amending
merge commits from others, we should always use the --author flag,
because git credits the merge commit by default only to the committer
(and in this case I did precious little actual work beyond
reviewing/testing) ]

You can choose:

- inline png images: http://fperez.org/tmp/ipython-inline-png.htm
- pngs in an external directory: http://fperez.org/tmp/ipython-external-png.htm
- saving the original svgs: http://fperez.org/tmp/ipython-svg.xml

Many thanks to Mark for this contribution, it's fantastic!

Cheers,

f


From hans_meine at gmx.net  Thu Oct 14 04:55:20 2010
From: hans_meine at gmx.net (Hans Meine)
Date: Thu, 14 Oct 2010 10:55:20 +0200
Subject: [IPython-dev] Qt SVG clipping bug (it's NOT a matplotlib bug)-
	Qt won't fix...
In-Reply-To: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
Message-ID: <201010141055.20770.hans_meine@gmx.net>

Hi,

On Thursday 14 October 2010 at 01:36:52, Fernando Perez wrote:
> I've been investigating further the bug where clipped paths in SVG
> render wrong in our console.  It turns out the mpl team already fixed
> some things on their side, and I can confirm that their SVGs now
> render fine in inkscape.

Inkscape is an incredible piece of software, but it might be better to also test
with e.g. Firefox, since there are *many* SVGs out there which only display
fine in the almighty Inkscape.  (Could you attach the example SVG?  It looks
simple enough to serve as a testcase.)

> However, the problem seems to be that the Qt SVG renderer simply
> doesn't support clipping paths, period.

That's an unfortunate limitation, indeed. :-(

> Furthermore, they seem to
> have decided they won't fix this, as the bug was closed with "won't
> fix":
> 
> http://bugreports.qt.nokia.com/browse/QTBUG-1865

In this older tracker item (before migration), you can see that the "Won't 
Fix" was later changed to "Deferred", now "Expired":

http://qt.nokia.com/developer/task-tracker/index_html?method=entry&id=204966

At least that does not look like an official Nokia policy that this will 
never get fixed.

Anyhow, I am not so sure I fully understand the situation.  AFAICS, there are 
two separate issues here:
a) generation of the SVG
b) SVG rendering

Even if Qt's renderer does not support clipping, what does that have to do 
with a) (which is relevant for ipython export)?  Does MPL use Qt for SVG 
export?  ('Cause the tracker item is about QSvgGenerator, a QPaintDevice for 
*generating* SVGs.)

I wonder why QSvgGenerator is not fixed even without clipping support in Qt's 
SVG renderer.. I can understand if they say "we want to be able to render what 
we produce", but in this case this introduces seemingly unnecessary 
limitations.

(BTW: The viewbox will not help, this only sets a global SVG 'viewbox' 
property, which is not related to path clipping at all.)

Have a nice day,
  Hans

PS: Here are the current docs; I could not find a word about clipping in them:
  http://doc.qt.nokia.com/4.7/qtsvg.html
  http://doc.qt.nokia.com/4.7/qsvggenerator.html


From fperez.net at gmail.com  Thu Oct 14 10:46:52 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 14 Oct 2010 07:46:52 -0700
Subject: [IPython-dev] Qt SVG clipping bug (it's NOT a matplotlib bug)-
 Qt won't fix...
In-Reply-To: <201010141055.20770.hans_meine@gmx.net>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010141055.20770.hans_meine@gmx.net>
Message-ID: <AANLkTik+EuQgBj_TL=4Dgk2ZYuc6K5xf6xnfmBZihRUo@mail.gmail.com>

On Thu, Oct 14, 2010 at 1:55 AM, Hans Meine <hans_meine at gmx.net> wrote:
> Inkscape is an incredible piece of software, but maybe you should better test
> with e.g. Firefox, since there are *many* SVGs out there which only display
> fine in the almighty Inkscape.  (Could you attach the example SVG?  It looks
> simple enough to serve as a testcase.)

Attached.  Firefox shows the same as inkscape.

>
>> However, the problem seems to be that the Qt SVG renderer simply
>> doesn't support clipping paths, period.
>
> That's an unfortunate limitation, indeed. :-(

[...]

Thanks for the feedback.  As MD points out, perhaps "bug" isn't the
right word; it's just that Qt chose to target a simpler SVG spec level than
what mpl uses.

Cheers,

f
-------------- next part --------------
A non-text attachment was scrubbed...
Name: clipped.svg
Type: image/svg+xml
Size: 15823 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101014/4699f8fa/attachment.svg>

From robert.kern at gmail.com  Thu Oct 14 10:52:05 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 14 Oct 2010 09:52:05 -0500
Subject: [IPython-dev] Qt SVG clipping bug (it's NOT a matplotlib bug)-
	Qt won't fix...
In-Reply-To: <201010141055.20770.hans_meine@gmx.net>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010141055.20770.hans_meine@gmx.net>
Message-ID: <i975en$c8v$1@dough.gmane.org>

On 10/14/10 3:55 AM, Hans Meine wrote:

> PS: Here are the current docs; I could not find a word about clipping in them:
>    http://doc.qt.nokia.com/4.7/qtsvg.html
>    http://doc.qt.nokia.com/4.7/qsvggenerator.html

They have removed some information from their documentation in recent versions. 
As of 4.5, they are only claiming to support the static features of SVG 1.2 Tiny 
in both the rendering and generation:

   http://doc.qt.nokia.com/4.5/qtsvg.html

SVG 1.2 Tiny does not support clipping.

   http://www.w3.org/TR/SVGMobile12/

That's why the bug reports have been closed as "won't fix". They are out of 
scope for the standards they have decided to support.

It might be worthwhile for matplotlib to only use the SVG 1.2 Tiny standard for 
greater compatibility. Tiny is much easier to implement, so I suspect there are 
now more Tiny renderers than there are Full renderers.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From mdroe at stsci.edu  Thu Oct 14 11:27:10 2010
From: mdroe at stsci.edu (Michael Droettboom)
Date: Thu, 14 Oct 2010 11:27:10 -0400
Subject: [IPython-dev] Qt SVG clipping bug (it's NOT a matplotlib bug)-
 Qt won't fix...
In-Reply-To: <i975en$c8v$1@dough.gmane.org>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>	<201010141055.20770.hans_meine@gmx.net>
	<i975en$c8v$1@dough.gmane.org>
Message-ID: <4CB7214E.1090504@stsci.edu>

On 10/14/2010 10:52 AM, Robert Kern wrote:
> On 10/14/10 3:55 AM, Hans Meine wrote:
>
>    
>> PS: Here are the current docs; I could not find a word about clipping in them:
>>     http://doc.qt.nokia.com/4.7/qtsvg.html
>>     http://doc.qt.nokia.com/4.7/qsvggenerator.html
>>      
> They have removed some information from their documentation in recent versions.
> As of 4.5, they are only claiming to support the static features of SVG 1.2 Tiny
> in both the rendering and generation:
>
>     http://doc.qt.nokia.com/4.5/qtsvg.html
>
> SVG 1.2 Tiny does not support clipping.
>
>     http://www.w3.org/TR/SVGMobile12/
>
> That's why the bug reports have been closed as "won't fix". They are out of
> scope for the standards they have decided to support.
>
> It might be worthwhile for matplotlib to only use the SVG 1.2 Tiny standard for
> greater compatibility. Tiny is much easier to implement, so I suspect there are
> now more Tiny renderers now than there are Full renderers.
>    
This is true -- and we can probably remove some of the simpler-to-remove 
parts that are missing from SVG Tiny.  However, to support clipping, we 
have to either a) write the routines to do clipping in vector 
space or b) use rasterized fallbacks (as we do for some alpha issues in 
Postscript, for instance).  a) is non-trivial in the general case, 
particularly when accounting for line thicknesses, and b) is a hack.

Mike


-- 
Michael Droettboom
Science Software Branch
Space Telescope Science Institute
Baltimore, Maryland, USA



From mark.voorhies at ucsf.edu  Thu Oct 14 12:58:42 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Thu, 14 Oct 2010 09:58:42 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
	matplotlib bug)- Qt won't fix...
In-Reply-To: <4CB7214E.1090504@stsci.edu>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<i975en$c8v$1@dough.gmane.org> <4CB7214E.1090504@stsci.edu>
Message-ID: <201010140958.42475.mark.voorhies@ucsf.edu>

On Thursday, October 14, 2010 08:27:10 am Michael Droettboom wrote:
> On 10/14/2010 10:52 AM, Robert Kern wrote:
> > On 10/14/10 3:55 AM, Hans Meine wrote:
> > It might be worthwhile for matplotlib to only use the SVG 1.2 Tiny standard for
> > greater compatibility. Tiny is much easier to implement, so I suspect there are
> > now more Tiny renderers now than there are Full renderers.
> >    
> This is true -- and we can probably remove some of the simpler-to-remove 
> parts that are missing from SVG Tiny.  However, to support clipping, we 
> either have to either a) write the routines to do clipping in vector 
> space or b) use rasterized fallbacks (as we do for some alpha issues in 
> Postscript, for instance).  a) is non-trivial in the general case, 
> particularly when accounting for line thicknesses, and b) is a hack.

Just to clarify what's happening on the IPython Qt console side:

* We receive SVG (with clipping path) from Matplotlib

* We wrap the SVG in Qt's rasterizer
  (via rich_ipython_widget._process_execute_payload
   calling svg.svg_to_image) and drop that as a widget on the
  console canvas.

+ As noted, the Qt rasterizer doesn't implement clipping, so
  we get unclipped plots drawn on the console.

+ This strategy also results in rasterized plots in the PDF export.

* For the context menu "Save Image As" and "Export HTML"
  functions, we call save(filename, "PNG") on the Qt
  rasterizer, so we get the same lack-of-clipping artifact.

* For the context menu "Save SVG As" and "Export XHTML"
  we work from Matplotlib's original SVG, so the clipping path
  is retained (and we get correct clipping in Inkscape, Firefox,
  and WebKit).

* Saving as PNG via Matplotlib's GUI as launched from IPython
  gives a correctly clipped PNG.  I'm not sure if Matplotlib is
  rasterizing directly, or drawing to the GUI canvas and then
  asking the GUI to generate a PNG.  Also, not sure which
  GUI backend I'm looking at...

Would it be reasonable to bypass the Qt issue by asking Matplotlib
for a PNG at the time that we receive the SVG and put that on
our console canvas?  It seems like this would be a good general
strategy for supporting non-Qt frontends (e.g., James Gao might
want something like this for the HTML frontend to support older
browsers).

--Mark


From ellisonbg at gmail.com  Thu Oct 14 13:02:27 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 14 Oct 2010 10:02:27 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
 matplotlib bug)- Qt won't fix...
In-Reply-To: <201010140958.42475.mark.voorhies@ucsf.edu>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<i975en$c8v$1@dough.gmane.org> <4CB7214E.1090504@stsci.edu>
	<201010140958.42475.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTikcscJo9h_gaMK0TW8vTZAsc0KfYGHYsrkF615O@mail.gmail.com>

Mark,

I like the idea of getting the .PNG rather than the .SVG, but can we
still embed it in the HTML output?

Cheers,

Brian

On Thu, Oct 14, 2010 at 9:58 AM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
> On Thursday, October 14, 2010 08:27:10 am Michael Droettboom wrote:
>> On 10/14/2010 10:52 AM, Robert Kern wrote:
>> > On 10/14/10 3:55 AM, Hans Meine wrote:
>> > It might be worthwhile for matplotlib to only use the SVG 1.2 Tiny standard for
>> > greater compatibility. Tiny is much easier to implement, so I suspect there are
>> > now more Tiny renderers now than there are Full renderers.
>> >
>> This is true -- and we can probably remove some of the simpler-to-remove
>> parts that are missing from SVG Tiny. ?However, to support clipping, we
>> either have to either a) write the routines to do clipping in vector
>> space or b) use rasterized fallbacks (as we do for some alpha issues in
>> Postscript, for instance). ?a) is non-trivial in the general case,
>> particularly when accounting for line thicknesses, and b) is a hack.
>
> Just to clarify what's happening on the iPython Qt console side:
>
> * We receive SVG (with clipping path) from Matplotlib
>
> * We wrap the SVG in Qt's rasterizer
> ?(via rich_ipython_widget._process_execute_payload
> ? calling svg.svg_to_image) and drop that as a widget on the
> ?console canvas.
>
> + As noted, the Qt rasterizer doesn't implement clipping, so
> ?we get unclipped plots drawn on the console.
>
> + This strategy also results in rasterized plots in the PDF export.
>
> * For the context menu "Save Image As" and "Export HTML"
> ?functions, we call save(filename, "PNG") on the Qt
> ?rasterizer, so we get the same lack-of-clipping artifact.
>
> * For the context menu "Save SVG As" and "Export XHTML"
> ?we work from Matplotlib's original SVG, so the clipping path
> ?is retained (and we get correct clipping in Inkscape, Firefox,
> ?and WebKit).
>
> * Saving as PNG via Matplotlib's GUI as launched from iPython
> ?gives a correctly clipped PNG. ?I'm not sure if Matplotlib is
> ?rasterizing directly, or drawing to the GUI canvas and then
> ?asking the GUI to generate a PNG. ?Also, not sure which
> ?GUI backend I'm looking at...
>
> Would it be reasonable to bypass the Qt issue by asking Matplotlib
> for a PNG at the time that we receive the SVG and put that on
> our console canvas? ?It seems like this would be a good general
> strategy for supporting non-Qt frontends (e.g., James Gao might
> want something like this for the HTML frontend to support older
> browsers).
>
> --Mark
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From jdh2358 at gmail.com  Thu Oct 14 13:05:06 2010
From: jdh2358 at gmail.com (John Hunter)
Date: Thu, 14 Oct 2010 12:05:06 -0500
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
 matplotlib bug)- Qt won't fix...
In-Reply-To: <201010140958.42475.mark.voorhies@ucsf.edu>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<i975en$c8v$1@dough.gmane.org> <4CB7214E.1090504@stsci.edu>
	<201010140958.42475.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTimajY1=E-K2gfQELRYA8DkQr52qgdu=Bp3jEdBj@mail.gmail.com>

On Thu, Oct 14, 2010 at 11:58 AM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:

> * Saving as PNG via Matplotlib's GUI as launched from iPython
> ?gives a correctly clipped PNG. ?I'm not sure if Matplotlib is
> ?rasterizing directly, or drawing to the GUI canvas and then
> ?asking the GUI to generate a PNG. ?Also, not sure which
> ?GUI backend I'm looking at...

matplotlib can mix and match GUI frontends with renderers, e.g. we do
support native GTK or WX rendering into their respective canvases.
But we recommend that most people use one of the *Agg backends
(GTKAgg, TkAgg, WXAgg, QtAgg, etc.), in which case we will render the
graphics using Agg (antigrain) and then dump them into the GUI canvas
as a pixel buffer.  So if you ask for a PNG from any of the *Agg
backends, you will get a matplotlib-rendered PNG.  For more details
see

http://matplotlib.sourceforge.net/faq/installing_faq.html#backends

> Would it be reasonable to bypass the Qt issue by asking Matplotlib
> for a PNG at the time that we receive the SVG and put that on
> our console canvas? ?It seems like this would be a good general
> strategy for supporting non-Qt frontends (e.g., James Gao might
> want something like this for the HTML frontend to support older
> browsers).

It should be fairly trivial to get mpl to generate a PNG on the kernel
side by requesting backend Agg, and then shipping that along with your
SVG and embedding it into your widget.

JDH


From ellisonbg at gmail.com  Thu Oct 14 13:45:40 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 14 Oct 2010 10:45:40 -0700
Subject: [IPython-dev] HTML export with PNG or SVG images added to Qt
	console
In-Reply-To: <AANLkTik8MfnE3r5XmKq6r=U3Maf2M=4AwPDa4MhhiXjZ@mail.gmail.com>
References: <AANLkTik8MfnE3r5XmKq6r=U3Maf2M=4AwPDa4MhhiXjZ@mail.gmail.com>
Message-ID: <AANLkTinKv3RovdWaKQLM7dNudVxAvFarh3YwQPiwjMMn@mail.gmail.com>

Fernando and Mark,

> Thanks to Mark Voorhies (who gave us PDF printing just a few days
> ago), we now have full HTML export of the entire buffer of the Qt
> console.
>
> http://github.com/ipython/ipython/commit/f467f96827d11b2420e921308177517f1f8ce49a

This is definitely great to have.  Thanks for the work Mark!

Cheers,

Brian

> [ note: that's just the merge commit, the original commits are in the
> tree with Mark's credits.  But I just realized that when amending
> merge commits from others, we should always use the --author flag,
> because git credits the merge commit by default only to the committer
> (and in this case I did precious little actual work beyond
> reviewing/testing) ]
>
> You can choose:
>
> - inline png images: http://fperez.org/tmp/ipython-inline-png.htm
> - pngs in an external directory: http://fperez.org/tmp/ipython-external-png.htm
> - saving the original svgs: http://fperez.org/tmp/ipython-svg.xml
>
> Many thanks to Mark for this contribution, it's fantastic!
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From mark.voorhies at ucsf.edu  Thu Oct 14 14:50:18 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Thu, 14 Oct 2010 11:50:18 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
	matplotlib bug)- Qt won't fix...
In-Reply-To: <AANLkTikcscJo9h_gaMK0TW8vTZAsc0KfYGHYsrkF615O@mail.gmail.com>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010140958.42475.mark.voorhies@ucsf.edu>
	<AANLkTikcscJo9h_gaMK0TW8vTZAsc0KfYGHYsrkF615O@mail.gmail.com>
Message-ID: <201010141150.19160.mark.voorhies@ucsf.edu>

On Thursday, October 14, 2010 10:02:27 am Brian Granger wrote:
> Mark,
> 
> I like the idea of getting the .PNG rather than the .SVG, but can we
> still embed it in the html output?

The HTML export shouldn't care -- it's still seeing a QImage object with
a save method (hurray for encapsulation).

Some of the SVG functions may need a bit more overhead (since they're
no longer "first class") but I've already added some of this for the XHTML
export (via the _name_to_svg dict that Fernando suggested) so it shouldn't
be too bad.

--Mark

P.S. What would be really nice, long term, would be for the Qt figure widget
to hold SVG, PNG, and PDF from Matplotlib (or just a copy-on-write reference
to the figure if we want to be lazy) and return the appropriate representation
depending on context.  I think that this would be a way to get vector figures
in PDF output (e.g., if the widget can tell it's being called by a QPrinter rather
than some other type of QPainter), but this might require fairly deep Qt
hacking...

P.P.S. Tangentially, how easy is it to receive SVG payloads from sources other
than matplotlib (e.g., if I wanted to mix in SVG's from rpy or from a web resource
like Gbrowse)? 


From fperez.net at gmail.com  Thu Oct 14 15:08:55 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 14 Oct 2010 12:08:55 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
 matplotlib bug)- Qt won't fix...
In-Reply-To: <201010141150.19160.mark.voorhies@ucsf.edu>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010140958.42475.mark.voorhies@ucsf.edu>
	<AANLkTikcscJo9h_gaMK0TW8vTZAsc0KfYGHYsrkF615O@mail.gmail.com>
	<201010141150.19160.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTikuURE6NnPK+KJ=-_OpUfMGtHrrXdGgvEWqsEAp@mail.gmail.com>

On Thu, Oct 14, 2010 at 11:50 AM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
>> I like the idea of getting the .PNG rather than the .SVG, but can we
>> still embed it in the html output?
>
> The HTML export shouldn't care -- it's still seeing a QImage object with
> a save method (hurray for encapsulation).
>
> Some of the SVG functions may need a bit more overhead (since they're
> no longer "first class") but I've already added some of this for the XHTML
> export (via the _name_to_svg dict that Fernando suggested) so it shouldn't
> be too bad.

The trick will be sending to the client both the svg and the png at
pastefig() time.  Later on the figures may have been destroyed, so
unless we send that on the spot, the client would have no way of
reconstructing this.

Technically it's pretty easy to do: the payload can carry multiple
entries, the client can use the png for on-screen rendering (being
generated by mpl's Agg backend, it's guaranteed to be our "gold
standard" so anything on-screen should use that), and it can then use
the 'hidden' SVG (or pdf - see below) when printing to html/pdf.

The cost is time/bandwidth/memory.  So this should probably be
configurable, and even togglable during runtime.  Sending all three
formats instead of just a png is obviously more expensive, and users
may be OK with plain pngs in some cases (slow link when doing remote
collaboration, for example).

> P.S. What would be really nice, long term, would be for the Qt figure widget
> to hold SVG, PNG, and PDF from Matplotlib (or just a copy-on-write reference
> to the figure if we want to be lazy) and return the appropriate representation
> depending on context.  I think that this would be a way to get vector figures
> in PDF output (e.g., if the widget can tell it's being called by a QPrinter rather
> than some other type of QPainter), but this might require fairly deep Qt
> hacking...

We can't do copy-on-write, because the actual mpl figure and the
client are in *separate processes*.  Hence my note above about having
to send those other guys (svg/pdf) right on pastefig().  You either
send them right then and there, or for all intents and purposes they
are lost.

> P.P.S. Tangentially, how easy is it to receive SVG payloads from sources other
> than matplotlib (e.g., if I wanted to mix in SVG's from rpy or from a web resource
> like Gbrowse)?

Trivial.  It will take a tiny amount of code to be written, but
ultimately it's just a matter of calling

add_plot_payload('svg', svg_data).

That's the method whose interface we'll probably want to enhance to
allow for svg/png/pdf multi-format payloads.

Regards,


f


From takowl at gmail.com  Sat Oct 16 11:29:09 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sat, 16 Oct 2010 16:29:09 +0100
Subject: [IPython-dev] Shutdown __del__ methods bug
Message-ID: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>

Unfortunately, this commit appears to have undone Fernando's fix for the bug
with __del__ methods that I found:
http://github.com/ipython/ipython/commit/239d2ed6f44c3f6511ee1e9069a5a1aee9c20f9c

I can reproduce the bug in trunk. This also highlights that the doctest to
catch it evidently doesn't do so. Running iptest IPython.core shows the
error message on the console (among the ... of passed tests), but it doesn't
fail. I'm not a console ninja, but could it be that the message goes to
stderr, and the evaluation only checks stdout?

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101016/76cb6219/attachment.html>

From fperez.net at gmail.com  Sat Oct 16 12:16:05 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 16 Oct 2010 09:16:05 -0700
Subject: [IPython-dev] Shutdown __del__ methods bug
In-Reply-To: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
References: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
Message-ID: <AANLkTimNo9_+jeRkqOz9ZyYNi+9F7h3VfTf0z8kcv8=y@mail.gmail.com>

On Sat, Oct 16, 2010 at 8:29 AM, Thomas Kluyver <takowl at gmail.com> wrote:
> Unfortunately, this commit appears to have undone Fernando's fix for the bug
> with __del__ methods that I found:
> http://github.com/ipython/ipython/commit/239d2ed6f44c3f6511ee1e9069a5a1aee9c20f9c
>
> I can reproduce the bug in trunk. This also highlights that the doctest to
> catch it evidently doesn't do so. Running iptest IPython.core shows the
> error message on the console (among the ... of passed tests), but it doesn't
> fail. I'm not a console ninja, but could it be that the message goes to
> stderr, and the evaluation only checks stdout?

Ouch, thanks for catching that... Your intuition is indeed correct,
and the problem is that this exception is caught internally by python
itself, so it is not accessible to us by normal mechanisms.

We may try to hack either by redirecting sys.stderr to sys.stdout
temporarily, or replacing sys.excepthook.  If you can have a go at
these and find something interesting please let us know, I'm at a
conference this weekend with very limited time.
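
For the stderr idea, a minimal sketch (untested) would be something like:

    import sys
    from contextlib import contextmanager

    @contextmanager
    def stderr_to_stdout():
        # temporarily route stderr into stdout so the doctest machinery sees it
        saved = sys.stderr
        sys.stderr = sys.stdout
        try:
            yield
        finally:
            sys.stderr = saved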

Cheers,

f


From takowl at gmail.com  Sat Oct 16 14:25:22 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sat, 16 Oct 2010 19:25:22 +0100
Subject: [IPython-dev] IPython-dev Digest, Vol 81, Issue 25
In-Reply-To: <mailman.5.1287248401.7218.ipython-dev@scipy.org>
References: <mailman.5.1287248401.7218.ipython-dev@scipy.org>
Message-ID: <AANLkTi=FJ8sr7SfL9C8w2dSQYAh1maSmj+ayjT53Rr3R@mail.gmail.com>

On 16 October 2010 18:00, <ipython-dev-request at scipy.org> wrote:

> We may try to hack either by redirecting sys.stderr to sys.stdout
> temporarily, or replacing sys.excepthook.  If you can have a go at
> these and find something interesting please let us know, I'm at a
> conference this weekend with very limited time.


I've gone for a rather easier option: print str("Hi"). If it can't find the
str function, it can't print any output, and fails the test. I've checked
that the test fails, committed that, and then reapplied the fix for the bug
(and checked that the test passes again), in this branch:
http://github.com/takowl/ipython/tree/fix-del-method-exit-test
I've made a pull request for it.

I also noticed a few bits of code not converting cleanly, so I've made
another cleanup branch (cleanup-old-code).

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101016/2c4f3341/attachment.html>

From mark.voorhies at ucsf.edu  Sun Oct 17 16:56:58 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Sun, 17 Oct 2010 13:56:58 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
	matplotlib bug)- Qt won't fix...
In-Reply-To: <AANLkTimajY1=E-K2gfQELRYA8DkQr52qgdu=Bp3jEdBj@mail.gmail.com>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010140958.42475.mark.voorhies@ucsf.edu>
	<AANLkTimajY1=E-K2gfQELRYA8DkQr52qgdu=Bp3jEdBj@mail.gmail.com>
Message-ID: <201010171356.59099.mark.voorhies@ucsf.edu>

On Thursday, October 14, 2010 10:05:06 am John Hunter wrote:
> It should be fairly trivial to get mpl to generate a PNG on the kernel
> side by requesting backend Agg, and then shipping that along with your
> SVG and embedding it into your widget.
> 
> JDH
> 

Thanks!  I'm not sure if the IPython kernel has the same defaults on all systems,
but my system is defaulting to the TkAgg backend and
print_figure(string_io, format='png') does what I want.

--Mark


From mark.voorhies at ucsf.edu  Sun Oct 17 17:28:53 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Sun, 17 Oct 2010 14:28:53 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
	matplotlib bug)- Qt won't fix...
In-Reply-To: <AANLkTikuURE6NnPK+KJ=-_OpUfMGtHrrXdGgvEWqsEAp@mail.gmail.com>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010141150.19160.mark.voorhies@ucsf.edu>
	<AANLkTikuURE6NnPK+KJ=-_OpUfMGtHrrXdGgvEWqsEAp@mail.gmail.com>
Message-ID: <201010171428.54326.mark.voorhies@ucsf.edu>

On Thursday, October 14, 2010 12:08:55 pm Fernando Perez wrote:
> The trick will be sending to the client both the svg and the png at
> pastefig() time.  Later on the figures may have been destroyed, so
> unless we send that on the spot, the client would have no way of
> reconstructing this.

I tried a first pass at this (branch "pastefig" in my github repository.
Latest commit: 
http://github.com/markvoorhies/ipython/commit/3f3d3d2f6e1f457856ce7e5481aa681fddb72a82
)

The multi-image bundle is sent as type "multi", with data set to
a dict of "format"->data (so, currently, 
{"png" : PNG data from matplotlib,
 "svg" : SVG data from maptplotlib}
)
["multi" is probably not the best name choice -- any suggestions for
 something more descriptive/specific?]

Naively sending PNG data causes reply_socket.send_json(repy_msg)
in ipkernel.py to hang (clearing the eighth bit of the data fixes this,
does ZMQ require 7bit data?) -- I'm currently working around this by
base64 encoding the PNG, but this may not be the best choice wrt
bandwidth.

> 
> Technically it's pretty easy to do: the payload can carry multiple
> entries, the client can use the png for on-screen rendering (being
> generated by mpl's Agg backend, it's guaranteed to be our "gold
> standard" so anything on-screen should use that), and it can then use
> the 'hidden' SVG (or pdf - see below) when printing to html/pdf.

Generating the PNG from the mpl AGG backend does indeed give
nice clipping in the Qt console.  The one wrinkle is that the default
PNG images were larger than what we previously had in Qt.  Currently
working around this by requesting 70dpi PNGs from matplotlib, but
I'm not sure what the default should be (since this is a kernel-side
decision, it sets an upper limit on resolution for clients that don't
do their own re-rendering from the SVG; since the payload is broadcast,
this decision also has a bandwidth consequence for all clients).

> 
> The cost is time/bandwidth/memory.  So this should probably be
> configurable, and even togglable during runtime.  Sending all three
> formats instead of just a png is obviously more expensive, and users
> may be OK with plain pngs in some cases (slow link when doing remote
> collaboration, for example).

One question is how much of this can/should be configured in the kernel
vs. the client(s).  Are there situations where it would make sense to
supplement the main broadcast channel with a "high bandwidth" channel
for the subset of clients that want to receive, e.g., pdf data (or would the
complexity cost be too high to justify this)?

If the direction in my pastefig branch looks reasonable, let me know and
we can continue the discussion in the context of a pull request.  Otherwise,
it might be useful to have some more discussion on this thread wrt 
structuring/configuring the payload and wrapping it on the Qt side.

--Mark

P.S. For James Gao's HTML frontend, handling the "multi" payload should just be
a matter of grabbing the "svg" part in notebook.js::execute(code) and handling
it like a regular svg payload.


From fperez.net at gmail.com  Tue Oct 19 18:05:42 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 19 Oct 2010 15:05:42 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
 matplotlib bug)- Qt won't fix...
In-Reply-To: <201010171428.54326.mark.voorhies@ucsf.edu>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010141150.19160.mark.voorhies@ucsf.edu>
	<AANLkTikuURE6NnPK+KJ=-_OpUfMGtHrrXdGgvEWqsEAp@mail.gmail.com>
	<201010171428.54326.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTimJNZYqRZXUepCXDKCQGgVojUZDb8WMJan_ohLW@mail.gmail.com>

On Sun, Oct 17, 2010 at 2:28 PM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
> I tried a first pass at this (branch "pastefig" in my github repository.
> Latest commit:
> http://github.com/markvoorhies/ipython/commit/3f3d3d2f6e1f457856ce7e5481aa681fddb72a82
> )

Thanks!!!

> The multi-image bundle is sent as type "multi", with data set to
> a dict of "format"->data (so, currently,
> {"png" : PNG data from matplotlib,
> ?"svg" : SVG data from maptplotlib}
> )
> ["multi" is probably not the best name choice -- any suggestions for
> ?something more descriptive/specific?]

It may be time to stop for a minute to think about our payloads.  The
payload system works well but we've known all along that once we have
a clearer understanding of what we need, we'd want to refine its
design.  All along something has been telling me that we should move
to a full specification of payloads with mimetype information (plus
possibly ipython-specific extra data).  Python has a mimetype library,
and if our payloads are properly mimetype-encoded, web frontends would
have little to no extra work to do, as browsers are already tooled up
to handle gracefully mimetype-tagged data that comes in.
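
Concretely, a payload could end up looking something like this (just a
sketch: the 'source' key is an illustrative name, and png_data/svg_data
stand in for the rendered figure):

    import base64

    payload = {
        'source' : 'pastefig',                             # illustrative only
        'data'   : {
            'image/png'     : base64.b64encode(png_data),  # binary -> base64 for json
            'image/svg+xml' : svg_data,                    # already plain text
        },
    }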

What do people think of this approach?

> Naively sending PNG data causes reply_socket.send_json(repy_msg)
> in ipkernel.py to hang (clearing the eighth bit of the data fixes this,
> does ZMQ require 7bit data?) -- I'm currently working around this by
> base64 encoding the PNG, but this may not be the best choice wrt
> bandwidth.

That's very odd.  Brian, Min, do you know of any such restrictions in
zmq/pyzmq?  I thought that zmq would happily handle pretty much any
binary data...

> Generating the PNG from the mpl AGG backend does indeed give
> nice clipping in the Qt console.  The one wrinkle is that the default
> PNG images were larger than what we previously had in Qt.  Currently
> working around this by requesting 70dpi PNGs from matplotlib, but
> I'm not sure what the default should be (since this is a kernel-side
> decision, it sets an upper limit on resolution for clients that don't
> do their own re-rendering from the SVG; since the payload is broadcast,
> this decision also has a bandwidth consequence for all clients).

I think setting 72dpi for this (the default for low-resolution
printing) is probably OK for now.  Later we can expose this as a
user-visible parameter and even one that could be tuned at runtime.

> One question is how much of this can/should be configured in the kernel
> vs. the client(s).  Are there situations where it would make sense to
> supplement the main broadcast channel with a "high bandwidth" channel
> for the subset of clients that want to receive, e.g., pdf data (or would the
> complexity cost be too high to justify this)?

I think we simply want all payloads to move to the pub socket, and for
now keep a simple design.  We need the payloads on the pub sockets so
multiple clients can process them.  Once that is working, we can
consider finer-grained filtering.  One step at a time :)

> If the direction in my pastefig branch looks reasonable, let me know and
> we can continue the discussion in the context of a pull request.  Otherwise,
> it might be useful to have some more discussion on this thread wrt
> structuring/configuring the payload and wrapping it on the Qt side.

Let's see what comes back here about the mimetype and zmq questions,
and we'll then refine it for a pull request.

Cheers,

f


From benjaminrk at gmail.com  Tue Oct 19 18:34:07 2010
From: benjaminrk at gmail.com (MinRK)
Date: Tue, 19 Oct 2010 15:34:07 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
 matplotlib bug)- Qt won't fix...
In-Reply-To: <AANLkTimJNZYqRZXUepCXDKCQGgVojUZDb8WMJan_ohLW@mail.gmail.com>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010141150.19160.mark.voorhies@ucsf.edu>
	<AANLkTikuURE6NnPK+KJ=-_OpUfMGtHrrXdGgvEWqsEAp@mail.gmail.com>
	<201010171428.54326.mark.voorhies@ucsf.edu>
	<AANLkTimJNZYqRZXUepCXDKCQGgVojUZDb8WMJan_ohLW@mail.gmail.com>
Message-ID: <AANLkTikNVSCMSLMH+vhpuo+TkFFssZwwP_KX0cQEzCrX@mail.gmail.com>

On Tue, Oct 19, 2010 at 15:05, Fernando Perez <fperez.net at gmail.com> wrote:

> On Sun, Oct 17, 2010 at 2:28 PM, Mark Voorhies <mark.voorhies at ucsf.edu>
> wrote:
> > I tried a first pass at this (branch "pastefig" in my github repository.
> > Latest commit:
> >
> http://github.com/markvoorhies/ipython/commit/3f3d3d2f6e1f457856ce7e5481aa681fddb72a82
> > )
>
> Thanks!!!
>
> > The multi-image bundle is sent as type "multi", with data set to
> > a dict of "format"->data (so, currently,
> > {"png" : PNG data from matplotlib,
> >  "svg" : SVG data from maptplotlib}
> > )
> > ["multi" is probably not the best name choice -- any suggestions for
> >  something more descriptive/specific?]
>
> It may be time to stop for a minute to think about our payloads.  The
> payload system works well but we've known all along that once we have
> a clearer understanding of what we need, we'd want to refine its
> design.  All along something has been telling me that we should move
> to a full specification of payloads with mimetype information (plus
> possibly ipython-specific extra data).  Python has a mimetype library,
> and if our payloads are properly mimetype-encoded, web frontends would
> have little to no extra work to do, as browsers are already tooled up
> to handle gracefully mimetype-tagged data that comes in.
>
> What do people think of this approach?
>
> > Naively sending PNG data causes reply_socket.send_json(repy_msg)
> > in ipkernel.py to hang (clearing the eighth bit of the data fixes this,
> > does ZMQ require 7bit data?) -- I'm currently working around this by
> > base64 encoding the PNG, but this may not be the best choice wrt
> > bandwidth.
>
> That's very odd.  Brian, Min, do you know of any such restrictions in
> zmq/pyzmq?  I thought that zmq would happily handle pretty much any
> binary data...
>

Sorry, I sent this a few days ago, but failed to reply-all:

It's not zmq, but json that prevents sending raw data.  ZMQ can send any
bytes just fine (I tested with the code being used to deliver the payloads,
and it can send StringIO from a PNG canvas no problem), but json requires
encoded strings.  Arbitrary C-strings are not necessarily valid JSON
strings. This gets confusing, but essentially JSON has the same notion of
strings as Python-3 (str=unicode, bytes=C-str).  A string for them is a
series of *characters*, not any series of 8-bit numbers, which is the
C/Python<3 notion. Since not all series of arbitrary 8-bit numbers can be
interpreted as valid characters, JSON can't encode them for marshaling.
Zeroing out the 8th bit works because all 7-bit numbers *are* valid ASCII
characters (and thus also valid in almost all encodings).

JSON has no binary data format. The only valid data for JSON are: numbers,
encoded strings, lists, dicts, and lists/dicts of those 4 types, so if you
want to send binary data, you have to first turn it into an *encoded*
string, not a C-string.  Base64 is an example of such a thing, and I don't
know of a better way than that, if JSON is enforced. Obviously, if you used
pickle instead, there would be no problem.

This is why BSON (the data format used by MongoDB among others) exists. It
adds binary data support to JSON.
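
So the workaround looks something like this (a sketch, with png_bytes
standing in for the raw PNG data):

    import base64, json

    # raw bytes would make json.dumps() choke; the base64 form is plain ASCII
    wire = json.dumps({'format': 'png', 'data': base64.b64encode(png_bytes)})

    # and on the receiving end:
    png_again = base64.b64decode(json.loads(wire)['data'])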

-MinRK


>
> > Generating the PNG from the mpl AGG backend does indeed give
> > nice clipping in the Qt console.  The one wrinkle is that the default
> > PNG images were larger than what we previously had in Qt.  Currently
> > working around this by requesting 70dpi PNGs from matplotlib, but
> > I'm not sure what the default should be (since this is a kernel-side
> > decision, it sets an upper limit on resolution for clients that don't
> > do their own re-rendering from the SVG; since the payload is broadcast,
> > this decision also has a bandwidth consequence for all clients).
>
> I think setting 72dpi for this (the default for low-resolution
> printing) is probably OK for now.  Later we can expose this as a
> user-visible parameter and even one that could be tuned at runtime.
>
> > One question is how much of this can/should be configured in the kernel
> > vs. the client(s).  Are there situations where it would make sense to
> > supplement the main broadcast channel with a "high bandwidth" channel
> > for the subset of clients that want to receive, e.g., pdf data (or would
> the
> > complexity cost be too high to justify this)?
>
> I think we simply want all payloads to move to the pub socket, and for
> now keep a simple design.  We need the payloads on the pub sockets so
> multiple clients can process them.  Once that is working, we can
> consider finer-grained filtering.  One step at a time :)
>
> > If the direction in my pastefig branch looks reasonable, let me know and
> > we can continue the discussion in the context of a pull request.
>  Otherwise,
> > it might be useful to have some more discussion on this thread wrt
> > structuring/configuring the payload and wrapping it on the Qt side.
>
> Let's see what comes back here about the mimetype and zmq questions,
> and we'll then refine it for a pull request.
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101019/77b744a3/attachment.html>

From robert.kern at gmail.com  Tue Oct 19 19:17:01 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 19 Oct 2010 18:17:01 -0500
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
 matplotlib bug)- Qt won't fix...
In-Reply-To: <AANLkTikNVSCMSLMH+vhpuo+TkFFssZwwP_KX0cQEzCrX@mail.gmail.com>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>	<201010141150.19160.mark.voorhies@ucsf.edu>	<AANLkTikuURE6NnPK+KJ=-_OpUfMGtHrrXdGgvEWqsEAp@mail.gmail.com>	<201010171428.54326.mark.voorhies@ucsf.edu>	<AANLkTimJNZYqRZXUepCXDKCQGgVojUZDb8WMJan_ohLW@mail.gmail.com>
	<AANLkTikNVSCMSLMH+vhpuo+TkFFssZwwP_KX0cQEzCrX@mail.gmail.com>
Message-ID: <i9l8te$d20$1@dough.gmane.org>

On 2010-10-19 17:34 , MinRK wrote:
>
>
> On Tue, Oct 19, 2010 at 15:05, Fernando Perez <fperez.net at gmail.com> wrote:
>
>     On Sun, Oct 17, 2010 at 2:28 PM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
>      > I tried a first pass at this (branch "pastefig" in my github repository.
>      > Latest commit:
>      >
>     http://github.com/markvoorhies/ipython/commit/3f3d3d2f6e1f457856ce7e5481aa681fddb72a82
>      > )
>
>     Thanks!!!
>
>      > The multi-image bundle is sent as type "multi", with data set to
>      > a dict of "format"->data (so, currently,
>      > {"png" : PNG data from matplotlib,
>      > "svg" : SVG data from maptplotlib}
>      > )
>      > ["multi" is probably not the best name choice -- any suggestions for
>      >  something more descriptive/specific?]
>
>     It may be time to stop for a minute to think about our payloads.  The
>     payload system works well but we've known all along that once we have
>     a clearer understanding of what we need, we'd want to refine its
>     design.  All along something has been telling me that we should move
>     to a full specification of payloads with mimetype information (plus
>     possibly ipython-specific extra data).  Python has a mimetype library,
>     and if our payloads are properly mimetype-encoded, web frontends would
>     have little to no extra work to do, as browsers are already tooled up
>     to handle gracefully mimetype-tagged data that comes in.
>
>     What do people think of this approach?
>
>      > Naively sending PNG data causes reply_socket.send_json(repy_msg)
>      > in ipkernel.py to hang (clearing the eighth bit of the data fixes this,
>      > does ZMQ require 7bit data?) -- I'm currently working around this by
>      > base64 encoding the PNG, but this may not be the best choice wrt
>      > bandwidth.
>
>     That's very odd.  Brian, Min, do you know of any such restrictions in
>     zmq/pyzmq?  I thought that zmq would happily handle pretty much any
>     binary data...
>
>
> Sorry, I sent this a few days ago, but failed to reply-all:
>
> It's not zmq, but json that prevents sending raw data.  ZMQ can send any bytes
> just fine (I tested with the code being used to deliver the payloads, and it can
> send StringIO from a PNG canvas no problem), but json requires encoded strings.
>   Arbitrary C-strings are not necessarily valid JSON strings. This gets
> confusing, but essentially JSON has the same notion of strings as Python-3
> (str=unicode, bytes=C-str).  A string for them is a series of /characters/, not
> any series of 8-bit numbers, which is the C/Python<3 notion. Since not all
> series of arbitrary 8-bit numbers can be interpreted as valid characters, JSON
> can't encode them for marshaling. Zeroing out the 8th bit works because all
> 7-bit numbers /are/ valid ASCII characters (and thus also valid in almost all
> encodings).
>
> JSON has no binary data format. The only valid data for JSON are: numbers,
> encoded strings, lists, dicts, and lists/dicts of those 4 types, so if you want
> to send binary data, you have to first turn it into an *encoded* string, not a
> C-string.  Base64 is an example of such a thing, and I don't know of a better
> way than that, if JSON is enforced. Obviously, if you used pickle instead, there
> would be no problem
>
> This is why BSON (the data format used by MongoDB among others) exists. It adds
> binary data support to JSON.

The approach I advocated at SciPy was to use multipart messages. Send the header 
encoded in JSON (or whatever) and then follow that with a message part (or 
parts) containing the binary data. Don't try to encode the data inside any kind 
of markup requiring parsing, whether the format is binary-friendly or not. This 
lets the receiver parse just the smallish header and decide what to do with the 
largish data without touching the data. You don't want to parse all of a BSON 
message just to find out that it's a PNG when you want the SVG.
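
With pyzmq that could look roughly like this (a sketch; 'socket' is an
existing zmq socket, and the header contents are just for illustration):

    import json

    def send_figure(socket, png_bytes, svg_text):
        # cheap-to-parse JSON header first; binary parts follow in their own frames
        header = json.dumps({'type': 'figure',
                             'parts': ['image/png', 'image/svg+xml']})
        socket.send_multipart([header, png_bytes, svg_text])

    def recv_figure(socket):
        frames = socket.recv_multipart()
        return json.loads(frames[0]), frames[1:]   # touch only the part you want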

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From benjaminrk at gmail.com  Tue Oct 19 20:42:40 2010
From: benjaminrk at gmail.com (MinRK)
Date: Tue, 19 Oct 2010 17:42:40 -0700
Subject: [IPython-dev] [QUAR] Re: Qt SVG clipping bug (it's NOT a
 matplotlib bug)- Qt won't fix...
In-Reply-To: <i9l8te$d20$1@dough.gmane.org>
References: <AANLkTinaG2cZg6DKGDnHzbrUMz-oA+ce+RXsJ2KqDGOV@mail.gmail.com>
	<201010141150.19160.mark.voorhies@ucsf.edu>
	<AANLkTikuURE6NnPK+KJ=-_OpUfMGtHrrXdGgvEWqsEAp@mail.gmail.com>
	<201010171428.54326.mark.voorhies@ucsf.edu>
	<AANLkTimJNZYqRZXUepCXDKCQGgVojUZDb8WMJan_ohLW@mail.gmail.com>
	<AANLkTikNVSCMSLMH+vhpuo+TkFFssZwwP_KX0cQEzCrX@mail.gmail.com>
	<i9l8te$d20$1@dough.gmane.org>
Message-ID: <AANLkTikff73MEHHYMyaARaWOzqZgyt-_2DiHsmN1nu_n@mail.gmail.com>

Note that in the parallel code, I do exactly what you mention.
I added the buffers argument to session.send(), because it is critically
important for the parallel code to be able to send things like numpy arrays
without ever serializing or copying the raw data, and currently, I can do
that - there are zero in-memory copies of array data (even from Python->C
zmq); only over the network.  It also allows me to pickle arbitrary objects,
and send them without having to ever copy the pickled string.  Metadata is
sent via json, and on the back is a series of buffers containing any binary
data.  I imagine that my Session object will be merged with the existing
Session object once the Parallel code gets pulled into trunk, but that's a
little while off.

Perhaps with the payload system, it would make sense for the kernel to use
this new model.  Of course, it isn't perfectly universal, as web frontends
require mime-type header info in order to interpret binary data, so you
would probably fracture the portability of pure JSON, but I'm not sure.
Maybe the HTML header info can be in the JSON metadata in such a way that a
javascript side would be able to properly interpret the data.

-MinRK

On Tue, Oct 19, 2010 at 16:17, Robert Kern <robert.kern at gmail.com> wrote:

> On 2010-10-19 17:34 , MinRK wrote:
> >
> >
> > On Tue, Oct 19, 2010 at 15:05, Fernando Perez <fperez.net at gmail.com> wrote:
> >
> >     On Sun, Oct 17, 2010 at 2:28 PM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
> >      > I tried a first pass at this (branch "pastefig" in my github
> repository.
> >      > Latest commit:
> >      >
> >
> http://github.com/markvoorhies/ipython/commit/3f3d3d2f6e1f457856ce7e5481aa681fddb72a82
> >      > )
> >
> >     Thanks!!!
> >
> >      > The multi-image bundle is sent as type "multi", with data set to
> >      > a dict of "format"->data (so, currently,
> >      > {"png" : PNG data from matplotlib,
> >      > "svg" : SVG data from maptplotlib}
> >      > )
> >      > ["multi" is probably not the best name choice -- any suggestions
> for
> >      >  something more descriptive/specific?]
> >
> >     It may be time to stop for a minute to think about our payloads.  The
> >     payload system works well but we've known all along that once we have
> >     a clearer understanding of what we need, we'd want to refine its
> >     design.  All along something has been telling me that we should move
> >     to a full specification of payloads with mimetype information (plus
> >     possibly ipython-specific extra data).  Python has a mimetype
> library,
> >     and if our payloads are properly mimetype-encoded, web frontends
> would
> >     have little to no extra work to do, as browsers are already tooled up
> >     to handle gracefully mimetype-tagged data that comes in.
> >
> >     What do people think of this approach?
> >
> >      > Naively sending PNG data causes reply_socket.send_json(repy_msg)
> >      > in ipkernel.py to hang (clearing the eighth bit of the data fixes
> this,
> >      > does ZMQ require 7bit data?) -- I'm currently working around this
> by
> >      > base64 encoding the PNG, but this may not be the best choice wrt
> >      > bandwidth.
> >
> >     That's very odd.  Brian, Min, do you know of any such restrictions in
> >     zmq/pyzmq?  I thought that zmq would happily handle pretty much any
> >     binary data...
> >
> >
> > Sorry, I sent this a few days ago, but failed to reply-all:
> >
> > It's not zmq, but json that prevents sending raw data.  ZMQ can send any
> bytes
> > just fine (I tested with the code being used to deliver the payloads, and
> it can
> > send StringIO from a PNG canvas no problem), but json requires encoded
> strings.
> >   Arbitrary C-strings are not necessarily valid JSON strings. This gets
> > confusing, but essentially JSON has the same notion of strings as
> Python-3
> > (str=unicode, bytes=C-str).  A string for them is a series of
> /characters/, not
> > any series of 8-bit numbers, which is the C/Python<3 notion. Since not
> all
> > series of arbitrary 8-bit numbers can be interpreted as valid characters,
> JSON
> > can't encode them for marshaling. Zeroing out the 8th bit works because
> all
> > 7-bit numbers /are/ valid ASCII characters (and thus also valid in almost
> all
> > encodings).
> >
> > JSON has no binary data format. The only valid data for JSON are:
> numbers,
> > encoded strings, lists, dicts, and lists/dicts of those 4 types, so if
> you want
> > to send binary data, you have to first turn it into an *encoded* string,
> not a
> > C-string.  Base64 is an example of such a thing, and I don't know of a
> better
> > way than that, if JSON is enforced. Obviously, if you used pickle
> instead, there
> > would be no problem
> >
> > This is why BSON (the data format used by MongoDB among others) exists.
> It adds
> > binary data support to JSON.
>
> The approach I advocated at SciPy was to use multipart messages. Send the
> header
> encoded in JSON (or whatever) and then follow that with a message part (or
> parts) containing the binary data. Don't try to encode the data inside any
> kind
> of markup requiring parsing, whether the format is binary-friendly or not.
> This
> lets the receiver parse just the smallish header and decide what to do with
> the
> largish data without touching the data. You don't want to parse all of a
> BSON
> message just to find out that it's a PNG when you want the SVG.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma
>  that is made terrible by our own mad attempt to interpret it as though it
> had
>  an underlying truth."
>   -- Umberto Eco
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101019/b73b1cf1/attachment.html>

From erik.tollerud at gmail.com  Wed Oct 20 04:42:35 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Wed, 20 Oct 2010 01:42:35 -0700
Subject: [IPython-dev] exit magic vs exit function(s?) for qtconsole
Message-ID: <AANLkTinF2+Dy0TuvAjTJzzfQEg7_qB=_iV1BQxfUzBKj@mail.gmail.com>

I've been trying to re-work the "exit" command for the
ipython-qtconsole so that it doesn't bring up the window asking if the
console and/or kernel should be closed.  It's nice to have a console
command to do this quickly - as Fernando pointed out, presumably no
one accidentally types "exit" without meaning to do it.  I managed to
get this working great using the %exit magic command, and added a
"%exit -k" command that keeps the kernel running but kills the console
- see http://github.com/eteq/ipython/tree/qt-exiting if you're
interested.

The trouble is, if I do either "exit" or "exit()" at the console, it
goes to the exit *function* instead of the exit magic command.  In
fact, "exit()" and "exit" seem to follow very different code paths
(one is the IPython.core.quitter.Quitter class, and I'm not sure where
the other one gets hooked in).  Is there a particular reason why there
are three different versions of "exit"?  And if not, is there a
straightforward way to get the exit magic to override the other two by
default, so that only the magic command is used in the qtconsole?

Thanks!

-- 
Erik Tollerud


From gokhansever at gmail.com  Wed Oct 20 13:45:30 2010
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Wed, 20 Oct 2010 12:45:30 -0500
Subject: [IPython-dev] Using Ipython cache as file proxy
Message-ID: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>

Hello,

As in much of my code, I do a lot of data reading in this
analysis/plotting script (model computations are yet to be included):
http://code.google.com/p/ccnworks/source/browse/trunk/thesis/part2_saudi/airplot.py

When I first start IPython, the system monitor shows about 30 MB of
memory usage. After I first run this script, memory use jumps to around
200 MB, most of which comes from data reading -- 27 different file
streams in the 2 to 10 MB range, read as masked arrays. Now the issue is
that execution of this script has started taking about 30 secs with this
heavy reading and the plotting of 8 pages of figures. I am making slight
modifications and annotations to my figures to make them more readable,
and each run of airplot.py takes a while to bring the plots to the
screen and produce the final multipage PDF file. This is a 4 GB dual
core 2.5 GHz laptop. I understand that I am mostly bound by the data I/O
speed of my not-sure-how-many-spins HDD.

In the meantime, I wonder if I could get some help from IPython to
lower these file-read wait periods. Once I execute the script, the data
are readily available in the IPython shell, nicely responding to my whos
and variable access queries. About 99% of the time I leave my dataset as
is and only make changes to the processing/analysis code. As far as I
know there is no feature in IPython to look up the local namespace and
avoid duplicate reads of the same name (it sounds a bit like fantasy, I
know :). Anyways, could this be easily implemented? There might be many
exceptions to such a mechanism, but at least for me, let's say I would
list the names that I don't want to be re-read, or the types of objects,
and instead use the IPython cache [I really mean the local namespace
dictionary] to eliminate multiple readings. That would speed up the
execution of my script very much; instead of 30 secs it would most
likely be done in less than 10 secs.

What do you think?

-- 
Gökhan


From mark.voorhies at ucsf.edu  Wed Oct 20 15:30:33 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Wed, 20 Oct 2010 12:30:33 -0700
Subject: [IPython-dev] [QUAR]  Using Ipython cache as file proxy
In-Reply-To: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>
References: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>
Message-ID: <201010201230.34335.mark.voorhies@ucsf.edu>

On Wednesday, October 20, 2010 10:45:30 am Gökhan Sever wrote:
<snip>
> heavy reading and the plotting of 8 pages of figures. I am making slight
> modifications and annotations to my figures to make them more readable,
> and each run of airplot.py takes a while to bring the plots to the
> screen and produce the final multipage PDF file.
<snip>
> variable access queries. About 99% of the time I leave my dataset as
> is and only make changes to the processing/analysis code. As far as I
> know there is no feature in IPython to look up the local namespace and
> avoid duplicate reads of the same name (it sounds a bit like fantasy, I
> know :). Anyways, could this be easily implemented?

If you factor your script in to reading/analysis/plotting functions then
you can do:
1) Start iPython
2) import yourmodule and run full pipeline
3) hack on analysis/plotting functions
4) reload(yourmodule) and rerun just the analysis and/or plotting parts

I use an object oriented equivalent of this a lot, e.g.:

>>> import mymodule
# First load (e.g., 30 minutes of reading and pre-processing)
>>> data = mymodule.Data()
# First analysis (e.g. 30 seconds)
>>> data.analyze()
# hack on mymodule
>>> reload(mymodule)
# re-link methods rather than re-loading data
>>> data.__class__ = mymodule.Data
# Updated analysis
>>> data.analyze()

HTH,

Mark


From fperez.net at gmail.com  Thu Oct 21 03:29:39 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 21 Oct 2010 00:29:39 -0700
Subject: [IPython-dev] Robert - issue gh-177
Message-ID: <AANLkTinm6b7bjOdGDr3eTpP76XqTZ6+BXSO=DddUMR6U@mail.gmail.com>

Hey Robert,

github is down so I can't post code/branch there.  I've attached the
diff here in the meantime, in case you have any suggestions: I tried
from the code you posted on the gist (I guess I got in right before
github went down) but I still don't see a traceback.  Do you see any
obvious mistake with my approach?

It would be great if we could get your idea to work, much much better
than temp files!

Thanks,

f
-------------- next part --------------
A non-text attachment was scrubbed...
Name: tb.diff
Type: text/x-patch
Size: 1868 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101021/34a953df/attachment.bin>

From robert.kern at gmail.com  Thu Oct 21 12:13:35 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 21 Oct 2010 11:13:35 -0500
Subject: [IPython-dev] Robert - issue gh-177
In-Reply-To: <AANLkTinm6b7bjOdGDr3eTpP76XqTZ6+BXSO=DddUMR6U@mail.gmail.com>
References: <AANLkTinm6b7bjOdGDr3eTpP76XqTZ6+BXSO=DddUMR6U@mail.gmail.com>
Message-ID: <i9porg$l4c$1@dough.gmane.org>

On 10/21/10 2:29 AM, Fernando Perez wrote:
> Hey Robert,
>
> github is down so I can't post code/branch there.  I've attached the
> diff here in the meantime, in case you have any suggestions: I tried
> from the code you posted on the gist (I guess I got in right before
> github went down) but I still don't see a traceback.  Do you see any
> obvious mistake with my approach?
>
> It would be great if we could get your idea to work, much much better
> than temp files!

Hmm, ultratb.py calls linecache.checkcache() frequently to ensure that the cache 
is up-to-date. However, this function will delete entries that refer to files 
that are not on the file system. Perhaps ultratb.py can be modified to use a
modified copy of that function that leaves '<code ...>' entries alone.
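
Something like this, perhaps (a sketch of such a modified checkcache; the
'<...>' test is the assumption here):

    import os
    import linecache

    def checkcache_keep_pseudo_files():
        # like linecache.checkcache(), but leave '<code ...>' style entries alone
        for filename in list(linecache.cache.keys()):
            if filename.startswith('<') and filename.endswith('>'):
                continue                       # in-memory pseudo-file: keep it
            size, mtime, lines, fullname = linecache.cache[filename]
            try:
                stat = os.stat(fullname)
            except os.error:
                del linecache.cache[filename]  # the file really is gone
                continue
            if size != stat.st_size or mtime != stat.st_mtime:
                del linecache.cache[filename]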

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From gokhansever at gmail.com  Thu Oct 21 18:16:26 2010
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Thu, 21 Oct 2010 17:16:26 -0500
Subject: [IPython-dev] Using Ipython cache as file proxy
In-Reply-To: <A1722D15-F2E3-49B9-999E-0B464B01288A@informatik.uni-hamburg.de>
References: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>
	<A1722D15-F2E3-49B9-999E-0B464B01288A@informatik.uni-hamburg.de>
Message-ID: <AANLkTi=UWC1ma4tg0tdQH8-OY0Tj8o0ek3E_u81GYDXt@mail.gmail.com>

On Thu, Oct 21, 2010 at 2:09 AM, Hans Meine
<meine at informatik.uni-hamburg.de> wrote:
> Hi!
>
> I might have a solution for you.  This is based on %run's "-i" parameter, which retains the environment, similar to execfile().
>
> What I do looks sth. like this:
>
>
> # content of foo.py:
> import numpy
>
> if 'data' not in globals():
>     data = numpy.load("...")
>     # more memory-/IO-intensive setup code
>
>
> # ---- plotting part, always executed ----
> import pylab
>
> pylab.clf()
> pylab.plot(data[0]...)
> pylab.show()
>
>
> Then, you can do %run -i foo.py from within ipython, and the upper part will only get executed once.  (You can "del data" or use %run without -i to run it again.)
>
> HTH,
> Hans

Hello Hans,

This is indeed a super-short term solution for my analysis/plotting
case. I did a slight test modification in my code as you suggested:

if 'ccn02' not in globals():
    # Read data in as masked arrays
    ccn02 = NasaFile("./airborne/20090402_131020/09_04_02_13_10_20.dmtccnc.combined.raw")
    ccn02.mask_MVC()
    pcasp02 = NasaFile("./airborne/20090402_131020/09_04_02_13_10_20.conc.pcasp.raw")
    pcasp02.mask_MVC()
    # following 25 more similar data read-in.

and actually it is a very clever trick. All I need to do is check whether
one variable name is in the globals() dictionary, and do the reading
accordingly.

However this hasn't given me the speed-up I was initially expecting.
Doing a bit more investigation with:

I[151]: %time run airplot.py
CPU times: user 29.02 s, sys: 0.17 s, total: 29.19 s
Wall time: 30.44 s

while a simple manual timing test says it takes about 35-40 seconds for
the last figure page in my plots to actually show up on the screen
(possibly right after I save the multi-page PDF file). When I do a
profile run -- showing only some of the top results:

9900136 function calls (9652765 primitive calls) in 59.379 CPU seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       26    5.131    0.197   11.375    0.437 io.py:468(loadtxt)
      112    2.152    0.019    2.427    0.022 mathtext.py:569(__init__)
148944/148114    2.119    0.000    3.224    0.000 backend_pdf.py:128(pdfRepr)
   242859    1.845    0.000    3.637    0.000 io.py:574(split_line)
   244769    1.563    0.000    1.563    0.000 {zip}
799394/774449    1.381    0.000    1.469    0.000 {len}
   486984    1.371    0.000    1.371    0.000 {method 'split' of 'str' objects}
   638477    1.347    0.000    1.347    0.000 {isinstance}
   242080    1.259    0.000    1.758    0.000 __init__.py:656(__getitem__)
     4758    1.138    0.000    2.694    0.001
backend_pdf.py:1224(pathOperations)
    57101    1.063    0.000    1.623    0.000 path.py:190(iter_segments)

CPU time shows almost a minute of execution; I am guessing this is
probably due to profiling overhead. Anyways, the run is still within my
breath-holding time limit -- that's my rough approximation for an upper
bound on execution time. I usually start looking for ways to decrease
execution time if I can't hold my breath anymore after I hit
run script.py in IPython. The funny thing is that in this scheme I might
have died a few times (with wall times reaching over 20-25 minutes in
some modelling work); "Cython magic" was there to help revive me back
to life :)

-- 
Gökhan


From gokhansever at gmail.com  Thu Oct 21 18:21:51 2010
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Thu, 21 Oct 2010 17:21:51 -0500
Subject: [IPython-dev] [QUAR]  Using Ipython cache as file proxy
In-Reply-To: <201010201230.34335.mark.voorhies@ucsf.edu>
References: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>
	<201010201230.34335.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTin3RP4D9ghrdjtW_5JGucOw077gnmjY_Tt871gD@mail.gmail.com>

On Wed, Oct 20, 2010 at 2:30 PM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
> If you factor your script in to reading/analysis/plotting functions then
> you can do:
> 1) Start iPython
> 2) import yourmodule and run full pipeline
> 3) hack on analysis/plotting functions
> 4) reload(yourmodule) and rerun just the analysis and/or plotting parts
>
> I use an object oriented equivalent of this a lot, e.g.:
>
>>>> import mymodule
> # First load (e.g., 30 minutes of reading and pre-processing)
>>>> data = mymodule.Data()
> # First analysis (e.g. 30 seconds)
>>>> data.analyze()
> # hack on mymodule
>>>> reload(mymodule)
> # re-link methods rather than re-loading data
>>>> data.__class__ = mymodule.Data
> # Updated analysis
>>>> data.analyze()
>
> HTH,
>
> Mark
>

Hello Mark,

Thanks for your suggestion. This is a sturdier, longer-term
solution. I was initially thinking of basing my analysis on OOP
constructs. However, the number of cases that I analyse isn't that
many (only 4 so far), so the procedural approach works fine, except
for making me wait a bit too much.

That said, the number of cases might well jump to over 40 (with
much bigger data files) in the upcoming analysis work, so I will
definitely consider your approach, primarily to avoid suffocating
myself in front of the screen :) and to have better control over the
growing complexity.

-- 
Gökhan


From gael.varoquaux at normalesup.org  Thu Oct 21 18:21:39 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Fri, 22 Oct 2010 00:21:39 +0200
Subject: [IPython-dev] Using Ipython cache as file proxy
In-Reply-To: <AANLkTi=UWC1ma4tg0tdQH8-OY0Tj8o0ek3E_u81GYDXt@mail.gmail.com>
References: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>
	<A1722D15-F2E3-49B9-999E-0B464B01288A@informatik.uni-hamburg.de>
	<AANLkTi=UWC1ma4tg0tdQH8-OY0Tj8o0ek3E_u81GYDXt@mail.gmail.com>
Message-ID: <20101021222138.GC30989@phare.normalesup.org>

On Thu, Oct 21, 2010 at 05:16:26PM -0500, Gökhan Sever wrote:
> > I might have a solution for you.  This is based on %run's "-i" parameter, which retains the environment, similar to execfile().

In similar setting, I try to structure the corresponding part of my code
as functions with no side effects, and use joblib:
http://packages.python.org/joblib/

One big pro is that it persists across sessions (I get crashes, as I
tend to do nasty things with C extensions and the memory).
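
A minimal sketch of that pattern (the cache directory is arbitrary, and
read_case() stands in for the real reading/masking code):

    from joblib import Memory

    memory = Memory(cachedir='/tmp/airplot_cache', verbose=0)

    @memory.cache
    def read_case(path):
        # expensive, side-effect-free read; the result is memoized on disk,
        # so it survives crashes and fresh interpreter sessions
        return NasaFile(path)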

Gaël


From fperez.net at gmail.com  Thu Oct 21 19:54:52 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 21 Oct 2010 16:54:52 -0700
Subject: [IPython-dev] Using Ipython cache as file proxy
In-Reply-To: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>
References: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>
Message-ID: <AANLkTi=2r_4E1paJp=J6wQ094gxQhgecT3Ns34mpFOP6@mail.gmail.com>

On Wed, Oct 20, 2010 at 10:45 AM, Gökhan Sever <gokhansever at gmail.com> wrote:
> Hello,
>

[...]

In addition to the suggestions you've received so far, you may want to
look at Philip Guo's IncPy:

http://www.stanford.edu/~pgbovine/incpy.html

It's still a research project, but I know Philip (CC'd here) is very
interested in real-world feedback.  He has tested running ipython on
top of his modified incpy Python.  It's as easy as building a separate
incpy, and then running

/path/to/incpy `which ipython` ...

and you're running on top of incpy instead of plain python.  You
should be able to otherwise use your normal packages/libraries.

I have no idea how well it will work for you (I haven't tested it
myself) but am quite curious.  Hence the suggestion that you be our
guinea pig and report back ;)

Cheers,

f


From fperez.net at gmail.com  Thu Oct 21 20:01:56 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 21 Oct 2010 17:01:56 -0700
Subject: [IPython-dev] exit magic vs exit function(s?) for qtconsole
In-Reply-To: <AANLkTinF2+Dy0TuvAjTJzzfQEg7_qB=_iV1BQxfUzBKj@mail.gmail.com>
References: <AANLkTinF2+Dy0TuvAjTJzzfQEg7_qB=_iV1BQxfUzBKj@mail.gmail.com>
Message-ID: <AANLkTi=vk+J0ecimsdDZuTd6W2z-Cy2-FTjAS-45t-yH@mail.gmail.com>

On Wed, Oct 20, 2010 at 1:42 AM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
>
>
> The trouble is, if I do either "exit" or "exit()" at the console, it
> goes to the exit *function* instead of the exit magic command. ?In
> fact, "exit()" and "exit" seem to follow very different code paths
> (one is the IPython.core.quitter.Quitter class, and I'm not sure where
> the other one gets hooked in). ?Is there a particular reason why there
> are three different versions of "exit"? ?And if not, is there a
> straightforward way to get the exit magic to by default override the
> other two and use only the magic command in the qtconsole?

Not really, other than historical madness :)  Well, that and the fact
that exit/quit are actually injected into the builtin namespace by
python itself:

>>> import __builtin__ as b
>>> b.exit
Use exit() or Ctrl-D (i.e. EOF) to exit
>>> b.quit
Use quit() or Ctrl-D (i.e. EOF) to exit

As far as I'm concerned, this stuff is fair game for a cleanup.  In
the past I used to be overly sensitive to not doing *anything* that
differed in any way from the default python shell, but honestly the
above behavior is just nonsense, and we should do better without being
afraid.

My proposal:

- remove exit/quit from the builtin namespace
- stop adding our own exit/quit functions
- just have a single magic (aliased to exit/quit names) that does
unconditional exit just by being called, bonus points for having the
exit control options available in it as you suggested.
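
For the first point, something as small as this should do it (a sketch):

    import __builtin__

    for name in ('exit', 'quit'):
        try:
            delattr(__builtin__, name)   # drop the Quitter instances set by site.py
        except AttributeError:
            pass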

Unless someone disagrees with this approach, whip up the code and send
a pull request when ready!

Cheers,

f


From james at jamesgao.com  Thu Oct 21 20:49:36 2010
From: james at jamesgao.com (James Gao)
Date: Thu, 21 Oct 2010 17:49:36 -0700
Subject: [IPython-dev] IPython HTTP frontend
Message-ID: <AANLkTimYppgQB0cgo41216qz7sTUmvmuVOuYfi=00_LN@mail.gmail.com>

Hi everyone,
I've been coding up an HTTP frontend for the new ipython zmq kernel. This
gives a convenient interface to access the kernel directly from one web
client, or even multiple web clients across the network. Please see my pull
request, http://github.com/ipython/ipython/pull/179 and give me comments.
Thanks!

-James Gao
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101021/03026bda/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ipython-http.png
Type: image/png
Size: 58918 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101021/03026bda/attachment.png>

From andresete.chaos at gmail.com  Thu Oct 21 22:23:32 2010
From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=)
Date: Thu, 21 Oct 2010 21:23:32 -0500
Subject: [IPython-dev] IPython HTTP frontend
In-Reply-To: <AANLkTimYppgQB0cgo41216qz7sTUmvmuVOuYfi=00_LN@mail.gmail.com>
References: <AANLkTimYppgQB0cgo41216qz7sTUmvmuVOuYfi=00_LN@mail.gmail.com>
Message-ID: <AANLkTinX+c-58XujtU6jp344P8XMvpNweh_OcT94sSr4@mail.gmail.com>

Oh, that's great!!!
How can I download and test this software?
O.

2010/10/21 James Gao <james at jamesgao.com>

> Hi everyone,
> I've been coding up an HTTP frontend for the new ipython zmq kernel. This
> gives a convenient interface to access the kernel directly from one web
> client, or even multiple web clients across the network. Please see my pull
> request, http://github.com/ipython/ipython/pull/179 and give me comments.
> Thanks!
>
> -James Gao
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101021/cb5ca3f5/attachment.html>

From fperez.net at gmail.com  Thu Oct 21 23:16:16 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 21 Oct 2010 20:16:16 -0700
Subject: [IPython-dev] IPython HTTP frontend
In-Reply-To: <AANLkTinX+c-58XujtU6jp344P8XMvpNweh_OcT94sSr4@mail.gmail.com>
References: <AANLkTimYppgQB0cgo41216qz7sTUmvmuVOuYfi=00_LN@mail.gmail.com>
	<AANLkTinX+c-58XujtU6jp344P8XMvpNweh_OcT94sSr4@mail.gmail.com>
Message-ID: <AANLkTikwyt-sLeZmTLqz6DpSW2BxmBn5jeMNdC7H+hv_@mail.gmail.com>

Hi Omar,

2010/10/21 Omar Andrés Zapata Mesa <andresete.chaos at gmail.com>:
> Oh! that great!!!
> how I can download and test this softwar?

You can see the instructions at the bottom of the pull request:

http://github.com/ipython/ipython/pull/179

Basically to test it, you follow Step 1 and Step 2 (Step 3 is only for
those merging back into trunk the changes once ready).

And if you have any feedback on code, design, ideas, etc, please do
share it on the pull request page!  This is *fantastic* functionality
that James has implemented in just a few days of work, but obviously
it will benefit from review from others.  We want to make sure we all
give the overall design a good check before merging it in (though we
don't want to raise the bar so high that the code needs a year's worth
of fixes either, better now than never so we can improve it
incrementally).

So anyone with an interest or knowledge of this stuff, by all means pitch in!

Regards,


f


From benjaminrk at gmail.com  Fri Oct 22 02:53:28 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 21 Oct 2010 23:53:28 -0700
Subject: [IPython-dev] ZMQ Parallel IPython Performance preview
Message-ID: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>

I have my first performance numbers for throughput with the new parallel
code riding on ZeroMQ, and results are fairly promising.  Roundtrip time for
~512 tiny tasks submitted as fast as they can is ~100x faster than with
Twisted.

As a throughput test, I submitted a flood of many very small tasks that
should take ~no time:
new-style:
def wait(t=0):
    import time
    time.sleep(t)
submit:
client.apply(wait, args=(t,))

Twisted:
task = StringTask("import time; time.sleep(%f)"%t)
submit:
client.run(task)

Flooding the queue with these tasks with t=0, and then waiting for the
results, I tracked two times:
Sent: the time from the first submit until the last submit returns
Roundtrip: the time from the first submit to getting the last result
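
(In rough pseudocode, the timing loop is something like this -- just a
sketch; the blocking .get() on the returned handles is an assumption about
the result objects, and wait() is the function defined above:)

import time

def flood(client, n):
    start = time.time()
    results = [client.apply(wait, args=(0,)) for _ in range(n)]
    sent = time.time() - start        # Sent: first submit -> last submit returns
    for r in results:
        r.get()                       # assumed blocking accessor on the result handle
    roundtrip = time.time() - start   # Roundtrip: first submit -> last result
    return sent, roundtrip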

Plotting these times vs number of messages, we see some decent numbers:
* The pure ZMQ scheduler is fastest, 10-100 times faster than Twisted
roundtrip
* The Python scheduler is ~3x slower roundtrip than pure ZMQ, but no penalty
to the submission rate
* Twisted performance falls off very quickly as the number of tasks grows
* ZMQ performance is quite flat

Legend:
zmq: the pure ZMQ Device is used for routing tasks
lru/weighted: the simplest/most complicated routing schemes respectively in
the Python ZMQ Scheduler (which supports dependencies)
twisted: the old IPython.kernel

[image: roundtrip.png]
[image: sent.png]
Test system:
Core-i7 930, 4x2 cores (ht), 4-engine cluster all over tcp/loopback, Ubuntu
10.04, Python 2.6.5

-MinRK
http://github.com/minrk
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101021/cf4a3766/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: roundtrip.png
Type: image/png
Size: 30731 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101021/cf4a3766/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sent.png
Type: image/png
Size: 31114 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101021/cf4a3766/attachment-0001.png>

From fperez.net at gmail.com  Fri Oct 22 03:03:42 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 22 Oct 2010 00:03:42 -0700
Subject: [IPython-dev] ZMQ Parallel IPython Performance preview
In-Reply-To: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
References: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
Message-ID: <AANLkTi=yXqSRdpTBoCNB=EYaHCJJVrHfyo03dYbu0tLe@mail.gmail.com>

On Thu, Oct 21, 2010 at 11:53 PM, MinRK <benjaminrk at gmail.com> wrote:

> I have my first performance numbers for throughput with the new parallel
> code riding on ZeroMQ, and results are fairly promising.  Roundtrip time for
> ~512 tiny tasks submitted as fast as they can is ~100x faster than with
> Twisted.
>

This is *fantastic*!  Many thanks, Min!  I'll be mostly offline for a couple
of days (that workshop with jdh) but this is very, very cool.  I know you've
put a ton of work into it, so thanks a lot.

Are we still on for the p4s presentation next week?  Could you send me
off-list a title/abstract so I can post the announcement?  I think with the
architecture we have now and these preliminary results, it's more than
enough for a great talk.

I'm almost done with fixing the long-standing bug (basically since ipython's
birth) of not having any interactive tracebacks.  Thanks to Robert for the
linecache idea, it's now working great.  That will make the multiline client
vastly more usable for real work.
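
(The idea, very roughly -- a sketch, not the actual code: seed linecache's
cache with the interactively typed source under the synthetic filename it was
compiled with, so the traceback machinery can find the lines:)

import linecache, traceback

def register_source(source, filename):
    # Make interactively entered code visible to traceback/inspect by
    # caching the source under its synthetic filename (mtime=None keeps
    # checkcache from discarding the entry).
    linecache.cache[filename] = (len(source), None,
                                 source.splitlines(True), filename)

src = "1/0\n"
fname = "<ipython-input-1>"
register_source(src, fname)
try:
    exec(compile(src, fname, "exec"))
except ZeroDivisionError:
    traceback.print_exc()  # the frame now shows "1/0" as its source line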

Cheers,

f
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/ef79bbb9/attachment.html>

From fperez.net at gmail.com  Fri Oct 22 04:08:34 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 22 Oct 2010 01:08:34 -0700
Subject: [IPython-dev] Shutdown __del__ methods bug
In-Reply-To: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
References: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
Message-ID: <AANLkTimS+j-VyU-5r+HfSVVm8CJty3M4CPrjE5pHc45G@mail.gmail.com>

On Sat, Oct 16, 2010 at 8:29 AM, Thomas Kluyver <takowl at gmail.com> wrote:
> Unfortunately, this commit appears to have undone Fernando's fix for the bug
> with __del__ methods that I found:
> http://github.com/ipython/ipython/commit/239d2ed6f44c3f6511ee1e9069a5a1aee9c20f9c
>
> I can reproduce the bug in trunk. This also highlights that the doctest to
> catch it evidently doesn't do so. Running iptest IPython.core shows the
> error message on the console (among the ... of passed tests), but it doesn't
> fail. I'm not a console ninja, but could it be that the message goes to
> stderr, and the evaluation only checks stdout?

Thomas, thanks a lot for this fix, I've merged it since it was a total
no-brainer.  Great detective work on finding where it got reverted,
but I'm quite bothered by that having happened: do you have any idea
why that commit would have reverted those changes?  That commit:

http://github.com/ipython/ipython/commit/239d2ed6f44c3f6511ee1e9069a5a1aee9c20f9c

appears as a simple merge commit, but its diff is gigantic (the entire
merge).  I don't understand why the merge Min did reverted that
particular change.

And I am actually *very* worried that there might have been other
damage done in that commit that we didn't catch simply because it
didn't trigger any test failure...

Any thoughts?  I really don't want to leave dangling the notion that
we might have had a commit that undid a lot of other stuff we thought
was already in there...  I really have no idea what happened here, any
clarity will be welcome.

Cheers,

f


From takowl at gmail.com  Fri Oct 22 05:49:50 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Fri, 22 Oct 2010 10:49:50 +0100
Subject: [IPython-dev] Shutdown __del__ methods bug
In-Reply-To: <AANLkTimS+j-VyU-5r+HfSVVm8CJty3M4CPrjE5pHc45G@mail.gmail.com>
References: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
	<AANLkTimS+j-VyU-5r+HfSVVm8CJty3M4CPrjE5pHc45G@mail.gmail.com>
Message-ID: <AANLkTimN5AMOZEs4HVW6QHBU0uZS4gBJ4AgxGp+txqpU@mail.gmail.com>

On 22 October 2010 09:08, Fernando Perez <fperez.net at gmail.com> wrote:

> do you have any idea why that commit would have reverted those changes?
>  That commit:
>
>
> http://github.com/ipython/ipython/commit/239d2ed6f44c3f6511ee1e9069a5a1aee9c20f9c
>
> appears as a simple merge commit
>

A couple of other things strike me about it:
- In the ipython network graph on github, it doesn't show up as a merge
(there's no other branch leading into it). You have to scroll back a bit now
to see it, but it's the first in a cluster of three commits on trunk just
before the number 11 on the date line.
- Other merges usually mention branches in the commit message. This one just
mentions a file.

Perhaps Min can shed more light, if he remembers how he made the commit. It
does seem that something didn't go to plan. I've noticed a few places where
it's reverting to older forms of code (e.g. using exceptions.Exception), and
I've been tidying them up in my cleanup-old-code branch. I'll put in a pull
request.

If you're worried about possible regressions, I think the best thing is for
you and Min to go over the diff for that commit and work out which changes
were intentional.

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/85be1843/attachment.html>

From ellisonbg at gmail.com  Fri Oct 22 12:26:20 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 22 Oct 2010 09:26:20 -0700
Subject: [IPython-dev] ZMQ Parallel IPython Performance preview
In-Reply-To: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
References: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
Message-ID: <AANLkTik7xLLtwNw6eDobVF30y8E1xL7xWLTHBom1ppYS@mail.gmail.com>

Wow.

This is absolutely incredible.  We were hoping it would be better, but this
is even better than I had imagined (partially because the Twisted
performance is even worse than I thought).  In particular the stability of
all the zmq based queues is really impressive.  In my mind, this means that
to first order, we really are hitting the latency/throughput of the loopback
interface and that this limitation is stable under load.  This is really
significant, because it means the performance in a real cluster could be
improved by using a fast interconnect.  Also, this brings our task
granularity down to ~ 1 ms, which is a big deal.  This *really* sets us
apart from the traditional load balancing that a batch system does.  Can you
rerun this test keeping the CPU effort at 0, but sending a large buffer with
each task, and then vary the size of that buffer (512, 1024, ...)?  I want to
see how zmq scales in the throughput sense.  Twisted is especially horrible
in this respect.
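
(Roughly what I have in mind, as a sketch only -- echo here is just a
hypothetical no-work task, client.apply mirrors the submit call above, and
timing only the submission side is one option:)

import time
import numpy as np

def echo(a):
    # a task that does no CPU work; its only cost is shipping the buffer
    return a

def bandwidth_sweep(client, sizes=(512, 1024, 2048, 4096)):
    timings = {}
    for nbytes in sizes:
        a = np.random.random(nbytes // 8)   # 64-bit floats, ~nbytes of payload
        start = time.time()
        client.apply(echo, args=(a,))       # same submit style as the wait() test
        timings[nbytes] = time.time() - start
    return timings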

I am also quite impressed at how little we lose in moving to Python for the
actual scheduling.  That is still very good performance, especially taking
into account the dependency handling that is going on.  I have some ideas
about performance testing that will make a very good story for a paper.

This is really great and we definitely need to start reviewing this soon.

Cheers,

Brian

On Thu, Oct 21, 2010 at 11:53 PM, MinRK <benjaminrk at gmail.com> wrote:

> I have my first performance numbers for throughput with the new parallel
> code riding on ZeroMQ, and results are fairly promising.  Roundtrip time for
> ~512 tiny tasks submitted as fast as they can is ~100x faster than with
> Twisted.
>
> As a throughput test, I submitted a flood of many very small tasks that
> should take ~no time:
> new-style:
> def wait(t=0):
>     import time
>     time.sleep(t)
> submit:
> client.apply(wait, args=(t,))
>
> Twisted:
> task = StringTask("import time; time.sleep(%f)"%t)
> submit:
> client.run(task)
>
> Flooding the queue with these tasks with t=0, and then waiting for the
> results, I tracked two times:
> Sent: the time from the first submit until the last submit returns
> Roundtrip: the time from the first submit to getting the last result
>
> Plotting these times vs number of messages, we see some decent numbers:
> * The pure ZMQ scheduler is fastest, 10-100 times faster than Twisted
> roundtrip
> * The Python scheduler is ~3x slower roundtrip than pure ZMQ, but no
> penalty to the submission rate
> * Twisted performance falls off very quickly as the number of tasks grows
> * ZMQ performance is quite flat
>
> Legend:
> zmq: the pure ZMQ Device is used for routing tasks
> lru/weighted: the simplest/most complicated routing schemes respectively in
> the Python ZMQ Scheduler (which supports dependencies)
> twisted: the old IPython.kernel
>
> [image: roundtrip.png]
> [image: sent.png]
> Test system:
> Core-i7 930, 4x2 cores (ht), 4-engine cluster all over tcp/loopback, Ubuntu
> 10.04, Python 2.6.5
>
> -MinRK
> http://github.com/minrk
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/b4c4fa9e/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sent.png
Type: image/png
Size: 31114 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/b4c4fa9e/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: roundtrip.png
Type: image/png
Size: 30731 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/b4c4fa9e/attachment-0001.png>

From ellisonbg at gmail.com  Fri Oct 22 12:27:37 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 22 Oct 2010 09:27:37 -0700
Subject: [IPython-dev] ZMQ Parallel IPython Performance preview
In-Reply-To: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
References: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
Message-ID: <AANLkTi=CrceUHbxfPxVui4WZ-BSBkZuYYi25ePH+NvTn@mail.gmail.com>

Min,

Also, can you get memory consumption numbers for the controller and queues?
I want to see how much worse Twisted is in that respect.

Cheers,

Brian

On Thu, Oct 21, 2010 at 11:53 PM, MinRK <benjaminrk at gmail.com> wrote:

> I have my first performance numbers for throughput with the new parallel
> code riding on ZeroMQ, and results are fairly promising.  Roundtrip time for
> ~512 tiny tasks submitted as fast as they can is ~100x faster than with
> Twisted.
>
> As a throughput test, I submitted a flood of many very small tasks that
> should take ~no time:
> new-style:
> def wait(t=0):
>     import time
>     time.sleep(t)
> submit:
> client.apply(wait, args=(t,))
>
> Twisted:
> task = StringTask("import time; time.sleep(%f)"%t)
> submit:
> client.run(task)
>
> Flooding the queue with these tasks with t=0, and then waiting for the
> results, I tracked two times:
> Sent: the time from the first submit until the last submit returns
> Roundtrip: the time from the first submit to getting the last result
>
> Plotting these times vs number of messages, we see some decent numbers:
> * The pure ZMQ scheduler is fastest, 10-100 times faster than Twisted
> roundtrip
> * The Python scheduler is ~3x slower roundtrip than pure ZMQ, but no
> penalty to the submission rate
> * Twisted performance falls off very quickly as the number of tasks grows
> * ZMQ performance is quite flat
>
> Legend:
> zmq: the pure ZMQ Device is used for routing tasks
> lru/weighted: the simplest/most complicated routing schemes respectively in
> the Python ZMQ Scheduler (which supports dependencies)
> twisted: the old IPython.kernel
>
> [image: roundtrip.png]
> [image: sent.png]
> Test system:
> Core-i7 930, 4x2 cores (ht), 4-engine cluster all over tcp/loopback, Ubuntu
> 10.04, Python 2.6.5
>
> -MinRK
> http://github.com/minrk
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/647611e7/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: roundtrip.png
Type: image/png
Size: 30731 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/647611e7/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sent.png
Type: image/png
Size: 31114 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/647611e7/attachment-0001.png>

From benjaminrk at gmail.com  Fri Oct 22 12:47:53 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 22 Oct 2010 09:47:53 -0700
Subject: [IPython-dev] ZMQ Parallel IPython Performance preview
In-Reply-To: <AANLkTi=yXqSRdpTBoCNB=EYaHCJJVrHfyo03dYbu0tLe@mail.gmail.com>
References: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
	<AANLkTi=yXqSRdpTBoCNB=EYaHCJJVrHfyo03dYbu0tLe@mail.gmail.com>
Message-ID: <AANLkTik-s6+sgJr7FgivdL5vsoq3XKEQ1P1Ug_EUuwLu@mail.gmail.com>

On Fri, Oct 22, 2010 at 00:03, Fernando Perez <fperez.net at gmail.com> wrote:

> On Thu, Oct 21, 2010 at 11:53 PM, MinRK <benjaminrk at gmail.com> wrote:
>
>> I have my first performance numbers for throughput with the new parallel
>> code riding on ZeroMQ, and results are fairly promising.  Roundtrip time for
>> ~512 tiny tasks submitted as fast as they can is ~100x faster than with
>> Twisted.
>>
>
> This is *fantastic*!  Many thanks, Min!  I'll be mostly offline for a
> couple of days (that workshop with jdh) but this is very, very cool.  I know
> you've put a ton of work into it, so thanks a lot.
>
> Are we still on for the p4s presentation next week?  Could you send me
> off-list a title/abstract so I can post the announcement?  I think with the
> architecture we have now and these preliminary results, it's more than
> enough for a great talk.
>

Sure, I'll be happy to give a demo next week. I'll send you the abstract
today.


>
> I'm almost done with fixing the long-standing bug (basically since
> ipython's birth) of not having any interactive tracebacks.  Thanks to Robert
> for the linecache idea, it's now working great.  That will make the
> multiline client vastly more usable for real work.
>

That's very exciting!


>
> Cheers,
>
> f
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/178578df/attachment.html>

From benjaminrk at gmail.com  Fri Oct 22 12:52:39 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 22 Oct 2010 09:52:39 -0700
Subject: [IPython-dev] ZMQ Parallel IPython Performance preview
In-Reply-To: <AANLkTi=CrceUHbxfPxVui4WZ-BSBkZuYYi25ePH+NvTn@mail.gmail.com>
References: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
	<AANLkTi=CrceUHbxfPxVui4WZ-BSBkZuYYi25ePH+NvTn@mail.gmail.com>
Message-ID: <AANLkTinp-TzJEobf8xZf6npSVmt4O3f3TLickXmTcMpJ@mail.gmail.com>

I'll get on the new tests; I already have a bandwidth one written, so I'm
running it now.  As for Twisted's throughput performance, it's at least
partly our fault.  Since the receiving is in Python, every time we try to
send there are incoming results getting in the way.  If we wrote it such
that sending prevented the receipt of results, I'm sure the Twisted code
would be faster for large numbers of messages.  With ZMQ, though, we don't
have to be receiving in Python to get the results to the client process, so
they arrive in ZMQ and await simple memcpy/deserialization.

-MinRK

On Fri, Oct 22, 2010 at 09:27, Brian Granger <ellisonbg at gmail.com> wrote:

> Min,
>
> Also, can you get memory consumption numbers for the controller and queues.
>  I want to see how much worse Twisted is in that respect.
>
> Cheers,
>
> Brian
>
> On Thu, Oct 21, 2010 at 11:53 PM, MinRK <benjaminrk at gmail.com> wrote:
>
>> I have my first performance numbers for throughput with the new parallel
>> code riding on ZeroMQ, and results are fairly promising.  Roundtrip time for
>> ~512 tiny tasks submitted as fast as they can is ~100x faster than with
>> Twisted.
>>
>> As a throughput test, I submitted a flood of many very small tasks that
>> should take ~no time:
>> new-style:
>> def wait(t=0):
>>     import time
>>     time.sleep(t)
>> submit:
>> client.apply(wait, args=(t,))
>>
>> Twisted:
>> task = StringTask("import time; time.sleep(%f)"%t)
>> submit:
>> client.run(task)
>>
>> Flooding the queue with these tasks with t=0, and then waiting for the
>> results, I tracked two times:
>> Sent: the time from the first submit until the last submit returns
>> Roundtrip: the time from the first submit to getting the last result
>>
>> Plotting these times vs number of messages, we see some decent numbers:
>> * The pure ZMQ scheduler is fastest, 10-100 times faster than Twisted
>> roundtrip
>> * The Python scheduler is ~3x slower roundtrip than pure ZMQ, but no
>> penalty to the submission rate
>> * Twisted performance falls off very quickly as the number of tasks grows
>> * ZMQ performance is quite flat
>>
>> Legend:
>> zmq: the pure ZMQ Device is used for routing tasks
>> lru/weighted: the simplest/most complicated routing schemes respectively
>> in the Python ZMQ Scheduler (which supports dependencies)
>> twisted: the old IPython.kernel
>>
>> [image: roundtrip.png]
>> [image: sent.png]
>> Test system:
>> Core-i7 930, 4x2 cores (ht), 4-engine cluster all over tcp/loopback,
>> Ubuntu 10.04, Python 2.6.5
>>
>> -MinRK
>> http://github.com/minrk
>>
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/67a1c1e7/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: roundtrip.png
Type: image/png
Size: 30731 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/67a1c1e7/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sent.png
Type: image/png
Size: 31114 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/67a1c1e7/attachment-0001.png>

From gokhansever at gmail.com  Fri Oct 22 12:58:43 2010
From: gokhansever at gmail.com (=?UTF-8?Q?G=C3=B6khan_Sever?=)
Date: Fri, 22 Oct 2010 11:58:43 -0500
Subject: [IPython-dev] Using Ipython cache as file proxy
In-Reply-To: <20101021222138.GC30989@phare.normalesup.org>
References: <AANLkTimg9YAJ6GCra=6XEJrXgDG-q2001zmjz=tE2nQR@mail.gmail.com>
	<A1722D15-F2E3-49B9-999E-0B464B01288A@informatik.uni-hamburg.de>
	<AANLkTi=UWC1ma4tg0tdQH8-OY0Tj8o0ek3E_u81GYDXt@mail.gmail.com>
	<20101021222138.GC30989@phare.normalesup.org>
Message-ID: <AANLkTinYjeqFysFHzO5L8QieVeAKxF2d0io6YP-oXpu+@mail.gmail.com>

On Thu, Oct 21, 2010 at 5:21 PM, Gael Varoquaux
<gael.varoquaux at normalesup.org> wrote:
> On Thu, Oct 21, 2010 at 05:16:26PM -0500, Gökhan Sever wrote:
>> > I might have a solution for you.  This is based on %run's "-i" parameter, which retains the environment, similar to execfile().
>
> In similar setting, I try to structure the corresponding part of my code
> as functions with no side effects, and use joblib:
> http://packages.python.org/joblib/
>
> One big pro is that it persists across sessions (I get crashes, as I
> tend to do nasty things with C extensions and the memory).
>
> Gaël
>

Hi Gaël and all,

I was thinking of this pattern-remembering approach late last night. I
wonder if it would be possible to handle this transparently on the
library side instead of modifying the code on the user side -- say, for
instance, the matplotlib-side computations. I make slight modifications
to the code for annotations and re-run the whole script, and all of the
plotting code is re-executed whether it changed or not.

I will look into the joblib and IncPy approaches when my code evolves
into a more complex state, and especially when I bring in some
CPU-intensive computations -- so far it is still
reading/masking/masking/masking/annotating/plotting/saving.
Modelling/comparing/plotting cycles are yet to come.
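
(From the joblib docs, the basic pattern looks roughly like this; the cache
directory and the function are just placeholders, and the exact Memory
signature may differ between joblib versions:)

from joblib import Memory

mem = Memory('./joblib_cache', verbose=0)   # results persist on disk across sessions

@mem.cache
def expensive_step(n):
    # side-effect-free work; re-running with the same argument loads the
    # cached result from disk instead of recomputing
    return sum(i * i for i in range(n))

print(expensive_step(10 ** 6))   # computed once, read from the cache afterwards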

Thanks for all the nice feedback.

-- 
Gökhan


From fperez.net at gmail.com  Fri Oct 22 13:59:18 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 22 Oct 2010 10:59:18 -0700
Subject: [IPython-dev] Shutdown __del__ methods bug
In-Reply-To: <AANLkTimN5AMOZEs4HVW6QHBU0uZS4gBJ4AgxGp+txqpU@mail.gmail.com>
References: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
	<AANLkTimS+j-VyU-5r+HfSVVm8CJty3M4CPrjE5pHc45G@mail.gmail.com>
	<AANLkTimN5AMOZEs4HVW6QHBU0uZS4gBJ4AgxGp+txqpU@mail.gmail.com>
Message-ID: <AANLkTiny2ndiYmpeUk0+9+6XTQtHny7Szy2E9sz=9Rhk@mail.gmail.com>

On Fri, Oct 22, 2010 at 2:49 AM, Thomas Kluyver <takowl at gmail.com> wrote:
>
> A couple of other things strike me about it:
> - In the ipython network graph on github, it doesn't show up as a merge
> (there's no other branch leading into it). You have to scroll back a bit now
> to see it, but it's the first in a cluster of three commits on trunk just
> before the number 11 on the date line.
> - Other merges usually mention branches in the commit message. This one just
> mentions a file.
>
> Perhaps Min can shed more light, if he remembers how he made the commit. It
> does seem that something didn't go to plan. I've noticed a few places where
> it's reverting to older forms of code (e.g. using exceptions.Exception), and
> I've been tidying them up in my cleanup-old-code branch. I'll put in a pull
> request.
>
> If you're worried about possible regressions, I think the best thing is for
> you and Min to go over the diff for that commit and work out which changes
> were intentional.

Yes, I am worried.  Min, let's try to get together sometime next week
and do a little forensics on this one, to make sure nothing else
slipped by.  I'd also like to understand *what happened* so we don't
repeat this in the future, but I admit I'm puzzled right now.

Cheers,

f


From benjaminrk at gmail.com  Fri Oct 22 14:55:09 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 22 Oct 2010 11:55:09 -0700
Subject: [IPython-dev] Shutdown __del__ methods bug
In-Reply-To: <AANLkTiny2ndiYmpeUk0+9+6XTQtHny7Szy2E9sz=9Rhk@mail.gmail.com>
References: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
	<AANLkTimS+j-VyU-5r+HfSVVm8CJty3M4CPrjE5pHc45G@mail.gmail.com>
	<AANLkTimN5AMOZEs4HVW6QHBU0uZS4gBJ4AgxGp+txqpU@mail.gmail.com>
	<AANLkTiny2ndiYmpeUk0+9+6XTQtHny7Szy2E9sz=9Rhk@mail.gmail.com>
Message-ID: <AANLkTintzfACjuF4YYFadmh=AE-xBzNDGwRZSLth=KiK@mail.gmail.com>

Sure, we can meet to look into it.  If I recall correctly, I exactly
followed the merge flow provided by github at the bottom of the pull
request.

On Fri, Oct 22, 2010 at 10:59, Fernando Perez <fperez.net at gmail.com> wrote:

> On Fri, Oct 22, 2010 at 2:49 AM, Thomas Kluyver <takowl at gmail.com> wrote:
> >
> > A couple of other things strike me about it:
> > - In the ipython network graph on github, it doesn't show up as a merge
> > (there's no other branch leading into it). You have to scroll back a bit
> now
> > to see it, but it's the first in a cluster of three commits on trunk just
> > before the number 11 on the date line.
> > - Other merges usually mention branches in the commit message. This one
> just
> > mentions a file.
> >
> > Perhaps Min can shed more light, if he remembers how he made the commit.
> It
> > does seem that something didn't go to plan. I've noticed a few places
> where
> > it's reverting to older forms of code (e.g. using exceptions.Exception),
> and
> > I've been tidying them up in my cleanup-old-code branch. I'll put in a
> pull
> > request.
> >
> > If you're worried about possible regressions, I think the best thing is
> for
> > you and Min to go over the diff for that commit and work out which
> changes
> > were intentional.
>
> Yes, I am worried.  Min, let's try to get together sometime next week
> and do a little forensics on this one, to make sure nothing else
> slipped by.  I'd also like to understand *what happened* so we don't
> repeat this in the future, but I admit I'm puzzled right now.
>
> Cheers,
>
> f
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/86e85000/attachment.html>

From fperez.net at gmail.com  Fri Oct 22 14:59:18 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 22 Oct 2010 11:59:18 -0700
Subject: [IPython-dev] Shutdown __del__ methods bug
In-Reply-To: <AANLkTintzfACjuF4YYFadmh=AE-xBzNDGwRZSLth=KiK@mail.gmail.com>
References: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
	<AANLkTimS+j-VyU-5r+HfSVVm8CJty3M4CPrjE5pHc45G@mail.gmail.com>
	<AANLkTimN5AMOZEs4HVW6QHBU0uZS4gBJ4AgxGp+txqpU@mail.gmail.com>
	<AANLkTiny2ndiYmpeUk0+9+6XTQtHny7Szy2E9sz=9Rhk@mail.gmail.com>
	<AANLkTintzfACjuF4YYFadmh=AE-xBzNDGwRZSLth=KiK@mail.gmail.com>
Message-ID: <AANLkTikUCgbwOjyakHQv6iwcQ8tNC8AFoLt-R0p066OR@mail.gmail.com>

On Fri, Oct 22, 2010 at 11:55 AM, MinRK <benjaminrk at gmail.com> wrote:
> Sure, we can meet to look into it.  If I recall correctly, I exactly
> followed the merge flow provided by github at the bottom of the pull
> request.

Great, thanks.  Hopefully it didn't sound like I was blaming you in
any way, I just want to understand what happened (for the future) and
to review any possible regressions that might have slipped in
unnoticed (so we can re-apply them like Thomas did with his patch).

Let's talk on campus next week when I return from SoCal.

Cheers,

f


From jbaker at zyasoft.com  Fri Oct 22 17:53:35 2010
From: jbaker at zyasoft.com (Jim Baker)
Date: Fri, 22 Oct 2010 15:53:35 -0600
Subject: [IPython-dev] Jython support in ipython
Message-ID: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>

As of r7164 of Jython, which will be part of 2.5.2rc2 (and hopefully the
last release candidate for Jython 2.5.2!), we have what looks like decent
support for readline emulation in Jython, sufficient to run a minimally
modified version of ipython 0.10.1, including colorization (which, for
whatever reason, is not reproduced when copied here from my terminal):

Python 2.5.2rc1 (trunk:7162:7163M, Oct 21 2010, 20:58:50)
Type "copyright", "credits" or "license" for more information.

IPython 0.10.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object'. ?object also works, ?? prints more.

In [1]: import java

In [2]: java.awt.L

java.awt.Label                 java.awt.LayoutManager
java.awt.LayoutManager2
java.awt.LinearGradientPaint   java.awt.List

In [2]: java.awt.L


There are of course a number of outstanding issues. Most of these are
because I haven't yet tried to learn the ipython codebase:

   1. setup.py has to be modified to support os.name == 'java', otherwise we
   get an unsupported OS on install. Running directly with ipython.py is not an
   issue.
   2. Doing something like
   In [2]: print java.awt
   results in AttributeError: CommandCompiler instance has no attribute
   'compiler'. I assume ipython is using the compiler module here to help in
   its magic support. Ideally we would be using the ast/_ast modules, both of
   which Jython 2.5 supports. I haven't looked at the 0.11 work to see if that
   helps here, but this is probably the biggest issue for Jython support.
   3. OS integration relies on dispatches against os.name. In Jython, this
   will always be 'java', but for the underlying OS and its resources we need
   to know whether it's 'nt' or 'posix', which comes from os._name in Jython.
   I've previously played with replacing os.name everywhere with an os_name()
   function defined in platutils (see the sketch after this list). However,
   it's not a complete fix (colorization is impacted), so it needs to be done
   more carefully.
   4. Pretty minor - we only support emulating readline.parse_and_bind("tab:
   complete"); other binds are converted to (ignored) warnings. In the future,
   we should be enhancing our readline support through the underlying JLine
   console so that this is possible. Otherwise, I think the readline emulation
   is currently complete.
   5. It's not *Python* 2.5.2rc1 (trunk:7162:7163M, Oct 21 2010, 20:58:50),
   is it? ;)
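
(A sketch of the os_name() helper mentioned in point 3 -- this is just the
idea, not the actual platutils code:)

import os

def os_name():
    """Name of the underlying OS ('posix', 'nt', ...), even on Jython.

    On Jython os.name is always 'java'; the real platform name lives in
    the private os._name attribute, as described above.
    """
    if os.name == 'java':
        return getattr(os, '_name', os.name)
    return os.name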


My goal in this last phase of work was just to get sufficient *readline*
emulation to run ipython, so this part has been accomplished. But of course
we should also look at really supporting *ipython* too:

   1. Create a jython-ipython fork on github, then make it available from
   PyPI. My current plan is to target against the 0.10.1 tag, but if it makes
   more sense to go with 0.11 (especially AST support), then please tell me.
   More unit tests would be an especially good reason, particularly if they use
   pexpect. Jython does not directly support pexpect, because we lack access to
   pseudo ttys, but we can use such CPython-only modules transparently from
   Jython via execnet (
   http://codespeak.net/execnet/example/test_info.html#remote-exec-a-module-avoiding-inlined-source-part-ii
   ).
   2. Merge this fork back into ipython at some point in the future. Among
   other things, the ZMQ work should also be feasible to port to Jython when it
   makes sense.
   3. Support additional Java integration, such as Swing. Maybe this is a
   separate plugin?


- Jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/13a7c1a7/attachment.html>

From ondrej at certik.cz  Fri Oct 22 18:08:40 2010
From: ondrej at certik.cz (Ondrej Certik)
Date: Fri, 22 Oct 2010 15:08:40 -0700
Subject: [IPython-dev] remote interactive shell using JSON RPC
Message-ID: <AANLkTi=dbsO8pZFURroHG_gK_wWMiB-_zdbwZMh+ndJ-@mail.gmail.com>

Hi guys,

in case you wanted to play, try this:

$ git clone git://github.com/hpfem/femhub-online-lab.git
$ cd femhub-online-lab/
$ PYTHONPATH=. bin/ifemhub
Connecting to the online lab at http://lab.femhub.org/ ...
Initializing the engine...
FEMhub interactive remote console
>>>

And then type, for example:

>>> import femhub
>>> femhub.Mesh?
Represents a FE mesh.
[...]


and so on. Try TAB completion, ?, ??, ...

This is communicating with our online lab using JSON RPC, and the
Python engine is running within femhub, so all packages that are
installed in FEMhub are accessible (matplotlib, sympy, ...).

Ondrej


From fperez.net at gmail.com  Fri Oct 22 18:16:19 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 22 Oct 2010 15:16:19 -0700
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>
References: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>
Message-ID: <AANLkTi=apnCsJRO0bwG0=Dx0s=xPdmNk08M4kMhAcYAv@mail.gmail.com>

Hi Jim!

[ great to hear from you again :) ]


I have to run soon and will be offline for a few days, so I wanted to
at least drop a very quick reply to get you going, while I have a
chance for a more detailed discussion.

On Fri, Oct 22, 2010 at 2:53 PM, Jim Baker <jbaker at zyasoft.com> wrote:
> As of r7164 of Jython, which will be part of 2.5.2rc2 (and hopefully the
> last release candidate for Jython 2.5.2!), we have what looks decent support
> for readline emulation for Jython, sufficient to run a minimally modified
> version of ipython 0.10.1, including colorization (which is not reproduced
> when copied here for whatever reason from my terminal):

First, this is *great* news.  We'll do whatever we can from our side
to make the integration easier as we move forward.

> Create a jython-ipython fork on github, then make it available from PyPI. My
> current plan is to target against the 0.10.1 tag, but if it makes more sense
> to go with 0.11 (especially AST support), then please tell me. More unit

As to this point, yes, please base any new work you do on the 0.11
tree (master branch on github).  The 0.10 codebase is in minimal
maintenance-only mode, while the 0.11 one is in very active
development and has seen massive refactorings (for the better).  So
that's the place to build any new ideas on.

Will get back to you with more later...

Cheers,

f


From jbaker at zyasoft.com  Fri Oct 22 19:03:36 2010
From: jbaker at zyasoft.com (Jim Baker)
Date: Fri, 22 Oct 2010 17:03:36 -0600
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTi=apnCsJRO0bwG0=Dx0s=xPdmNk08M4kMhAcYAv@mail.gmail.com>
References: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>
	<AANLkTi=apnCsJRO0bwG0=Dx0s=xPdmNk08M4kMhAcYAv@mail.gmail.com>
Message-ID: <AANLkTinZa7kzcXjR9jfM2tFrwPVgbwjziGi3wSkK_ThT@mail.gmail.com>

Fernando,

On Fri, Oct 22, 2010 at 4:16 PM, Fernando Perez <fperez.net at gmail.com>wrote:

> Hi Jim!
>
> [ great to hear from you again :) ]
>
> Same here!

>
> I have to run soon and will be offline for a few days, so I wanted to
> at least drop a very quick reply to get you going, while I have a
> chance for a more detailed discussion.
>
> On Fri, Oct 22, 2010 at 2:53 PM, Jim Baker <jbaker at zyasoft.com> wrote:
> > As of r7164 of Jython, which will be part of 2.5.2rc2 (and hopefully the
> > last release candidate for Jython 2.5.2!), we have what looks decent
> support
> > for readline emulation for Jython, sufficient to run a minimally modified
> > version of ipython 0.10.1, including colorization (which is not
> reproduced
> > when copied here for whatever reason from my terminal):
>
> First, this is *great* news.  We'll do whatever we can from our side
> to make the integration easier as we move forward.
>

Thanks!

>
> > Create a jython-ipython fork on github, then make it available from PyPI.
> My
> > current plan is to target against the 0.10.1 tag, but if it makes more
> sense
> > to go with 0.11 (especially AST support), then please tell me. More unit
>
> As to this point, yes, please base any new work you do on the 0.11
> tree (master branch on github).  The 0.10 codebase is in minimal
> maintenance-only mode, while the 0.11 one is in very active
> development and has seen massive refactorings (for the better).  So
> that's the place to build any new ideas on.
>
>
I just tried a checkout of master, however I get the following:

ImportError: Python Version 2.6 or above is required for IPython.


Trying a little bit more by disabling that version check, I discovered that
at least 2.6's support of print as a function, instead of a statement, is
used. Now Jython 2.6 work kicked off just this week, so hopefully we don't
really need 2.6. Jython 2.5 does have two key 2.6 features that you might be
using: 1) full mutable ast support (through the ast module, used by sympy's
support); 2) class decorators. (We also have namedtuple.)

Maybe for Jython this is just a matter of isolating the print function? It
doesn't seem to be used in too many places.

- Jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/dc3590bc/attachment.html>

From takowl at gmail.com  Fri Oct 22 19:35:12 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sat, 23 Oct 2010 00:35:12 +0100
Subject: [IPython-dev] Jython support in ipython
Message-ID: <AANLkTikOgmNPa9UDYjYYAsksE29opWyvN=QV=9+1dw6L@mail.gmail.com>

On 23 October 2010 00:04, <ipython-dev-request at scipy.org> wrote:

> Trying a little bit more by disabling that version check, I discovered that
> at least 2.6's support of print as a function, instead of a statement, is
> used. Now Jython 2.6 work kicked off just this week, so hopefully we don't
> really need 2.6. Jython 2.5 does have two key 2.6 features that you might
> be
> using: 1) full mutable ast support (through the ast module, used by sympy's
> support); 2) class decorators. (We also have namedtuple.)
>
> Maybe for Jython this is just a matter of isolating the print function? It
> doesn't seem to be used in too many places.
>

Just to chime in here, we have been working on the principle that ipython
0.11 would depend on Python 2.6 or later, and I've been 'modernising' the
code base a bit, to fit in with my Python 3 branch of ipython. I'm sorry if
this makes your life harder. Off the top of my head, though, I think most if
not all of my changes should be compatible with 2.5.

Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101023/bd5124f6/attachment.html>

From jbaker at zyasoft.com  Fri Oct 22 21:30:46 2010
From: jbaker at zyasoft.com (Jim Baker)
Date: Fri, 22 Oct 2010 19:30:46 -0600
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTikOgmNPa9UDYjYYAsksE29opWyvN=QV=9+1dw6L@mail.gmail.com>
References: <AANLkTikOgmNPa9UDYjYYAsksE29opWyvN=QV=9+1dw6L@mail.gmail.com>
Message-ID: <AANLkTinirTzDmXjinHODMy=9iWq3fp=TFxACDJq2Pja+@mail.gmail.com>

On Fri, Oct 22, 2010 at 5:35 PM, Thomas Kluyver <takowl at gmail.com> wrote:

> On 23 October 2010 00:04, <ipython-dev-request at scipy.org> wrote:
>
>> Trying a little bit more by disabling that version check, I discovered
>> that
>> at least 2.6's support of print as a function, instead of a statement, is
>> used. Now Jython 2.6 work kicked off just this week, so hopefully we don't
>> really need 2.6. Jython 2.5 does have two key 2.6 features that you might
>> be
>> using: 1) full mutable ast support (through the ast module, used by
>> sympy's
>> support); 2) class decorators. (We also have namedtuple.)
>>
>> Maybe for Jython this is just a matter of isolating the print function? It
>> doesn't seem to be used in too many places.
>>
>
> Just to chime in here, we have been working on the principle that ipython
> 0.11 would depend on Python 2.6 or later, and I've been 'modernising' the
> code base a bit, to fit in with my Python 3 branch of ipython. I'm sorry if
> this makes your life harder. Off the top of my head, though, I think most if
> not all of my changes should be compatible with 2.5.
>

Sounds reasonable to have a small compatibility shim then for 2.5. The other
piece I've found so far is switching from compiler to _ast (assuming CPython
2.5 compliance of course); neither Python 3 nor Jython supports the
old-style syntax tree. Casually inspecting the usage of the compiler module
suggests that there's not much use. In core, kernel.core.interpreter and
core.inputsplitter seem to have significant code duplication in their use of
compiler to get line numbers. We do support codeop, so that should work as
soon as it's made valid for future imports.

Incidentally the problem I noticed earlier in parsing a statement like
"print 42" was with codeop.CommandCompiler, not the compiler module.
Apparently ipython expects CPython's behavior of
codeop.CommandCompiler().compiler.flags, which is an int, whereas Jython has
this as codeop.CommandCompiler()._cflags, which is a structure. Strange, I
thought this was being exposed as an int for compatibility in this fashion.
Maybe we should fix that part for 2.5.2rc2. I'll take a look at it.
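
(A small shim for that difference could look like this -- just a sketch
using the attribute names above; note that on Jython the returned _cflags
would be the flags structure rather than a plain int:)

import codeop

def compiler_flags(cc):
    # CPython: codeop.CommandCompiler().compiler.flags is an int;
    # Jython currently exposes the flags as the _cflags structure instead.
    try:
        return cc.compiler.flags
    except AttributeError:
        return cc._cflags

cc = codeop.CommandCompiler()
flags = compiler_flags(cc)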

- Jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/5c921383/attachment.html>

From benjaminrk at gmail.com  Fri Oct 22 22:10:34 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 22 Oct 2010 19:10:34 -0700
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTinirTzDmXjinHODMy=9iWq3fp=TFxACDJq2Pja+@mail.gmail.com>
References: <AANLkTikOgmNPa9UDYjYYAsksE29opWyvN=QV=9+1dw6L@mail.gmail.com>
	<AANLkTinirTzDmXjinHODMy=9iWq3fp=TFxACDJq2Pja+@mail.gmail.com>
Message-ID: <AANLkTimjf6Wp_a1H94OS7GJ0NKgeY5edjZR4G2Ytndzr@mail.gmail.com>

On Fri, Oct 22, 2010 at 18:30, Jim Baker <jbaker at zyasoft.com> wrote:

> On Fri, Oct 22, 2010 at 5:35 PM, Thomas Kluyver <takowl at gmail.com> wrote:
>
>> On 23 October 2010 00:04, <ipython-dev-request at scipy.org> wrote:
>>
>>> Trying a little bit more by disabling that version check, I discovered
>>> that
>>> at least 2.6's support of print as a function, instead of a statement, is
>>> used. Now Jython 2.6 work kicked off just this week, so hopefully we
>>> don't
>>> really need 2.6. Jython 2.5 does have two key 2.6 features that you might
>>> be
>>> using: 1) full mutable ast support (through the ast module, used by
>>> sympy's
>>> support); 2) class decorators. (We also have namedtuple.)
>>>
>>> Maybe for Jython this is just a matter of isolating the print function?
>>> It
>>> doesn't seem to be used in too many places.
>>>
>>
>> Just to chime in here, we have been working on the principle that ipython
>> 0.11 would depend on Python 2.6 or later, and I've been 'modernising' the
>> code base a bit, to fit in with my Python 3 branch of ipython. I'm sorry if
>> this makes your life harder. Off the top of my head, though, I think most if
>> not all of my changes should be compatible with 2.5.
>>
>
One 'modern' codestyle that is not compatible with 2.5:
try:
   stuff()
except Exception as e:
   handle(e)

That's invalid on 2.5, but 'except Exception, e:' is invalid on 3.x.

The only method that I have discovered that works on both is:

import sys

try:
  stuff()
except Exception:
  e = sys.exc_info()[1]

Obviously not as elegant as either one, but if you are supporting 2.5 and
3.1, it's the only way that works that I know of.
That's what I do in my 'nowarn' branch of pyzmq, which works (as of
yesterday) on everything from 2.5-3.1 with no code changes.

-MinRK



>
> Sounds reasonable to have a small compatibility shim then for 2.5. The
> other piece I've found so far is switching from compiler to _ast (assuming
> CPython 2.5 compliance of course); neither Python 3 nor Jython supports the
> old-style syntax tree. Casually inspecting the usage of the compiler module
> suggests that there's not much use. In core, kernel.core.interpreter and
> core.inputsplitter seem to have significant code duplication in their use of
> compiler to get line numbers. We do support codeop, so that should work as
> soon as it's made valid for future imports.
>
> Incidentally the problem I noticed earlier in parsing a statement like
> "print 42" was with codeop.CommandCompiler, not the compiler module.
> Apparently ipython expects CPython's behavior of
> codeop.CommandCompiler().compiler.flags, which is an int, whereas Jython has
> this as codeop.CommandCompiler()._cflags, which is a structure. Strange, I
> thought this was being exposed as an int for compatibility in this fashion.
> Maybe we should fix that part for 2.5.2rc2. I'll take a look at it.
>
> - Jim
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/11ae2833/attachment.html>

From takowl at gmail.com  Sat Oct 23 06:33:29 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sat, 23 Oct 2010 11:33:29 +0100
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTimjf6Wp_a1H94OS7GJ0NKgeY5edjZR4G2Ytndzr@mail.gmail.com>
References: <AANLkTikOgmNPa9UDYjYYAsksE29opWyvN=QV=9+1dw6L@mail.gmail.com>
	<AANLkTinirTzDmXjinHODMy=9iWq3fp=TFxACDJq2Pja+@mail.gmail.com>
	<AANLkTimjf6Wp_a1H94OS7GJ0NKgeY5edjZR4G2Ytndzr@mail.gmail.com>
Message-ID: <AANLkTin2k9_pAK3kwcTForxvE1Q3SPhyrb575ovXUDSd@mail.gmail.com>

On 23 October 2010 03:10, MinRK <benjaminrk at gmail.com> wrote:

> That's what I do in my 'nowarn' branch of pyzmq, which works (as of
> yesterday) on everything from 2.5-3.1 with no code changes.


Thanks for that. I had a couple of problems getting ipython-qtconsole
running in Python 3.1 (it's the eventloop module that was causing trouble).
I've made a few changes, and it works under 2.6 and 3.1. I've tried to leave
it so that it will work with 2.5, but I haven't tested.
http://github.com/takowl/pyzmq/tree/nowarn

Best wishes,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101023/0e1e3359/attachment.html>

From fperez.net at gmail.com  Mon Oct 25 17:12:53 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 25 Oct 2010 14:12:53 -0700
Subject: [IPython-dev] GTK regression in 0.10.1
Message-ID: <AANLkTi=oSa6DVYSv4f-BKNr+N_vmjht9DfJX_BPpZWhf@mail.gmail.com>

Hi all,

we've managed to ship a 0.10.1 that does not work AT ALL with --pylab
in GTK mode.  This is particularly problematic because on linux,
absent any user customization, matplotlib defaults to GTK.

I'll try to find the time for a fix soon, but if anyone beats me to
it, by all means go ahead, you'll be my hero for a day :)

Anyone willing to try and figure out what went wrong/propose a fix
should do so off the 0.10.2 branch:

http://github.com/ipython/ipython/tree/0.10.2

Cheers,

f


From mark.voorhies at ucsf.edu  Tue Oct 26 00:18:16 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Mon, 25 Oct 2010 21:18:16 -0700
Subject: [IPython-dev] [SPAM]   GTK regression in 0.10.1
In-Reply-To: <AANLkTi=oSa6DVYSv4f-BKNr+N_vmjht9DfJX_BPpZWhf@mail.gmail.com>
References: <AANLkTi=oSa6DVYSv4f-BKNr+N_vmjht9DfJX_BPpZWhf@mail.gmail.com>
Message-ID: <201010252118.16324.mark.voorhies@ucsf.edu>

On Monday, October 25, 2010 02:12:53 pm Fernando Perez wrote:
> Hi all,
> 
> we've managed to ship a 0.10.1 that does not work AT ALL with --pylab
> in GTK mode.  This is particularly problematic because on linux,
> absent any user customization, matplotlib defaults to GTK.
> 
> I'll try to find the time for a fix soon, but if anyone beats me to
> it, by all means go ahead, you'll be my hero for a day :)
> 
> Anyone willing to try and figure out what went wrong/propose a fix
> should do so off the 0.10.2 branch:
> 
> http://github.com/ipython/ipython/tree/0.10.2

I'm not sure I'm reproducing exactly the same problem (for me, 
ipython --pylab --gthread on 0.10.2 hangs immediately), but
reverting 3e84e9 "Fix problem with rc_override." appears to fix
it (presumably giving a regression for whatever 3e84e9 fixes).

Reverted branch is:
http://github.com/markvoorhies/ipython/tree/gtkfix

More details here:
http://github.com/ipython/ipython/issues/issue/185/

--Mark


From ellisonbg at gmail.com  Tue Oct 26 02:12:09 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 25 Oct 2010 23:12:09 -0700
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTinZa7kzcXjR9jfM2tFrwPVgbwjziGi3wSkK_ThT@mail.gmail.com>
References: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>
	<AANLkTi=apnCsJRO0bwG0=Dx0s=xPdmNk08M4kMhAcYAv@mail.gmail.com>
	<AANLkTinZa7kzcXjR9jfM2tFrwPVgbwjziGi3wSkK_ThT@mail.gmail.com>
Message-ID: <AANLkTinaBJRvDkPqfXWZYBMj3xSDjFzaF9Sq5y4W37eT@mail.gmail.com>

Jim,

> I just tried a checkout of master, however I get the following:
>
> ImportError: Python Version 2.6 or above is required for IPython.
>
> Trying a little bit more by disabling that version check, I discovered that
> at least 2.6's support of print as a function, instead of a statement, is
> used. Now Jython 2.6 work kicked off just this week, so hopefully we don't
> really need 2.6. Jython 2.5 does have two key 2.6 features that you might be
> using: 1) full mutable ast support (through the ast module, used by sympy's
> support); 2) class decorators. (We also have namedtuple.)
> Maybe for Jython this is just a matter of isolating the print function? It
> doesn't seem to be used in too many places.

We are using Python 2.6 pretty aggressively in the IPython codebase.
I can't remember all the things we are using at this point, but I know
we are using abc's and the improved Popen stuff as well as print as
you mention.  But, our usage of abc's is probably the most difficult
to get around at this point.

Cheers,

Brian



> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Tue Oct 26 02:17:00 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 25 Oct 2010 23:17:00 -0700
Subject: [IPython-dev] ZMQ Parallel IPython Performance preview
In-Reply-To: <AANLkTika8y1Neg13JDD1rEjrO2rLL7hqtEeCXr+YaeC4@mail.gmail.com>
References: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
	<AANLkTi=CrceUHbxfPxVui4WZ-BSBkZuYYi25ePH+NvTn@mail.gmail.com>
	<AANLkTinp-TzJEobf8xZf6npSVmt4O3f3TLickXmTcMpJ@mail.gmail.com>
	<AANLkTika8y1Neg13JDD1rEjrO2rLL7hqtEeCXr+YaeC4@mail.gmail.com>
Message-ID: <AANLkTimai2adYvy8DfTYVbLE8d6C=PeN8p3B8xxKkD-w@mail.gmail.com>

Min,

Thanks for running these benchmarks, comments below...


> Re-run for throughput with data:
>
> submit 16 tasks of a given size, plot against size.
> new-style:
> def echo(a):
>     return a
> old-style:
> task = StringTask("b=a", push=dict(a=a), pull=['b'])
>
>
I really like the style of the new API - echo is exactly what it does!


> The input chosen was random numpy arrays (64b float, so len(A)*8 ~= size in
> B).
>
> Notable points:
> * ZMQ submission remains flat, independent of size, due to non-copying
> sends
>

We hoped that this would be the case, but this is really non-trivial and
good to see.


> * size doesn't come into account until ~100kB, and clearly dominates both
> after 1MB
>     the turning point for Twisted is a little earlier than for ZMQ
> * at 4MB, Twisted is submitting < 2 tasks per sec, while ZMQ is submitting
> ~90
>

This is a fantastic point of comparison.  4 MB is a non-trivial amount of
data, and there is a huge difference between 0.5 second overhead (Twisted)
and 0.01 sec overhead (zmq).  It means that with zmq, users can get a
parallel speedup on calculations that involve far fewer CPU cycles per byte
of data sent.


> * roundtrip, ZMQ is fairly consistently ~40x faster.
>
> memory usage:
> * Peak memory for the engines is 20% higher with ZMQ, because more than one
> task can now be waiting in the queue on the engine at a time.
>

Right, but this is good news as it moves the data off the controller
faster.


> * Peak memory for the Controller including schedulers is 25% less than
> Twisted with pure ZMQ, and 20% less with the Python scheduler. Note that all
> results still reside in memory, since I haven't implemented the db backend
> yet.
>

I would think that is the biggest memory usage for the controller in the
long run.  But we know how to fix that.


> * Peak memory for the Python scheduler is approximately the same as the
> engines
>


> * Peak memory for the zmq scheduler is about half that.
>
>
All very good news.  I think these plots can definitely make it into a paper
on this.

Cheers,

Brian


> -MinRK
>
> On Fri, Oct 22, 2010 at 09:52, MinRK <benjaminrk at gmail.com> wrote:
>
>> I'll get on the new tests, I already have a bandwidth one written, so I'm
>> running it now.  As for Twisted's throughput performance, it's at least
>> partly our fault.  Since the receiving is in Python, every time we try to
>> send there are incoming results getting in the way.  If we wrote it such
>> that sending prevented the receipt of results, I'm sure the Twisted code
>> would be faster for large numbers of messages.  With ZMQ, though, we don't
>> have to be receiving in Python to get the results to the client process, so
>> they arrive in ZMQ and await simple memcpy/deserialization.
>>
>> -MinRK
>>
>>
>> On Fri, Oct 22, 2010 at 09:27, Brian Granger <ellisonbg at gmail.com> wrote:
>>
>>> Min,
>>>
>>> Also, can you get memory consumption numbers for the controller and
>>> queues.  I want to see how much worse Twisted is in that respect.
>>>
>>> Cheers,
>>>
>>> Brian
>>>
>>> On Thu, Oct 21, 2010 at 11:53 PM, MinRK <benjaminrk at gmail.com> wrote:
>>>
>>>> I have my first performance numbers for throughput with the new parallel
>>>> code riding on ZeroMQ, and results are fairly promising.  Roundtrip time for
>>>> ~512 tiny tasks submitted as fast as they can is ~100x faster than with
>>>> Twisted.
>>>>
>>>> As a throughput test, I submitted a flood of many very small tasks that
>>>> should take ~no time:
>>>> new-style:
>>>> def wait(t=0):
>>>>     import time
>>>>     time.sleep(t)
>>>> submit:
>>>> client.apply(wait, args=(t,))
>>>>
>>>> Twisted:
>>>> task = StringTask("import time; time.sleep(%f)"%t)
>>>> submit:
>>>> client.run(task)
>>>>
>>>> Flooding the queue with these tasks with t=0, and then waiting for the
>>>> results, I tracked two times:
>>>> Sent: the time from the first submit until the last submit returns
>>>> Roundtrip: the time from the first submit to getting the last result
>>>>
>>>> Plotting these times vs number of messages, we see some decent numbers:
>>>> * The pure ZMQ scheduler is fastest, 10-100 times faster than Twisted
>>>> roundtrip
>>>> * The Python scheduler is ~3x slower roundtrip than pure ZMQ, but no
>>>> penalty to the submission rate
>>>> * Twisted performance falls off very quickly as the number of tasks
>>>> grows
>>>> * ZMQ performance is quite flat
>>>>
>>>> Legend:
>>>> zmq: the pure ZMQ Device is used for routing tasks
>>>> lru/weighted: the simplest/most complicated routing schemes respectively
>>>> in the Python ZMQ Scheduler (which supports dependencies)
>>>> twisted: the old IPython.kernel
>>>>
>>>> [image: roundtrip.png]
>>>> [image: sent.png]
>>>> Test system:
>>>> Core-i7 930, 4x2 cores (ht), 4-engine cluster all over tcp/loopback,
>>>> Ubuntu 10.04, Python 2.6.5
>>>>
>>>> -MinRK
>>>> http://github.com/minrk
>>>>
>>>
>>>
>>>
>>> --
>>> Brian E. Granger, Ph.D.
>>> Assistant Professor of Physics
>>> Cal Poly State University, San Luis Obispo
>>> bgranger at calpoly.edu
>>> ellisonbg at gmail.com
>>>
>>
>>
>


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: roundtrip.png
Type: image/png
Size: 30731 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101025/c05fe347/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sent.png
Type: image/png
Size: 31114 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101025/c05fe347/attachment-0001.png>

From pivanov314 at gmail.com  Tue Oct 26 03:57:19 2010
From: pivanov314 at gmail.com (Paul Ivanov)
Date: Tue, 26 Oct 2010 00:57:19 -0700
Subject: [IPython-dev] GTK regression in 0.10.1
In-Reply-To: <201010252118.16324.mark.voorhies@ucsf.edu>
References: <AANLkTi=oSa6DVYSv4f-BKNr+N_vmjht9DfJX_BPpZWhf@mail.gmail.com>
	<201010252118.16324.mark.voorhies@ucsf.edu>
Message-ID: <20101026075719.GA12899@ykcyc>

Mark Voorhies, on 2010-10-25 21:18,  wrote:
> On Monday, October 25, 2010 02:12:53 pm Fernando Perez wrote:
> > Hi all,
> > 
> > we've managed to ship a 0.10.1 that does not work AT ALL with --pylab
> > in GTK mode.  This is particularly problematic because on linux,
> > absent any user customization, matplotlib defaults to GTK.
> > 
> > I'll try to find the time for a fix soon, but if anyone beats me to
> > it, by all means go ahead, you'll be my hero for a day :)
> > 
> > Anyone willing to try and figure out what went wrong/propose a fix
> > should do so off the 0.10.2 branch:
> > 
> > http://github.com/ipython/ipython/tree/0.10.2
> 
> I'm not sure I'm reproducing exactly the same problem (for me, 
> ipython --pylab --gthread on 0.10.2 hangs immediately), but
> reverting 3e84e9 "Fix problem with rc_override." appears to fix
> it (presumably giving a regression for whatever 3e84e9 fixes).
> 
> Reverted branch is:
> http://github.com/markvoorhies/ipython/tree/gtkfix
> 
> More details here:
> http://github.com/ipython/ipython/issues/issue/185/

Keeping simple things simple - I can confirm that:

  $ ipython -pylab -gthread
  In [1]: ax = subplot(111); ax.plot(rand(10)); show()

all works without blocking, with just a one-line change to the
current 0.10.2 branch: uncommenting gtk.set_interactive(False)
after gtk is imported on line 781 of Shell.py.

http://github.com/ivanov/ipython/tree/fix185
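For reference, a minimal sketch of that one-line change (assuming a PyGTK
version that provides gtk.set_interactive; the surrounding Shell.py code is
not shown):

  import gtk
  # Disable PyGTK's built-in interactive main-loop handling, which conflicts
  # with IPython's -gthread support and caused show() to block (GH-185).
  gtk.set_interactive(False)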

best,
-- 
Paul Ivanov
314 address only used for lists,  off-list direct email at:
http://pirsquared.org | GPG/PGP key id: 0x0F3E28F7 


From hans_meine at gmx.net  Tue Oct 26 05:30:13 2010
From: hans_meine at gmx.net (Hans Meine)
Date: Tue, 26 Oct 2010 11:30:13 +0200
Subject: [IPython-dev] Shutdown __del__ methods bug
In-Reply-To: <AANLkTiny2ndiYmpeUk0+9+6XTQtHny7Szy2E9sz=9Rhk@mail.gmail.com>
References: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
	<AANLkTimN5AMOZEs4HVW6QHBU0uZS4gBJ4AgxGp+txqpU@mail.gmail.com>
	<AANLkTiny2ndiYmpeUk0+9+6XTQtHny7Szy2E9sz=9Rhk@mail.gmail.com>
Message-ID: <201010261130.14011.hans_meine@gmx.net>

On Friday, 22 October 2010 at 19:59:18, Fernando Perez wrote:
> On Fri, Oct 22, 2010 at 2:49 AM, Thomas Kluyver <takowl at gmail.com> wrote:
> > If you're worried about possible regressions, I think the best thing is
> > for you and Min to go over the diff for that commit and work out which
> > changes were intentional.
> 
> Yes, I am worried.  Min, let's try to get together sometime next week
> and do a little forensics on this one, to make sure nothing else
> slipped by.

I don't know git very well, but with Mercurial I would simply repeat the 
merge, check whether it worked this time, and then diff the two results, in 
order to see what got reverted.

At least that should give you much less than the full diff.

HTH,
  Hans


From hans_meine at gmx.net  Tue Oct 26 09:31:56 2010
From: hans_meine at gmx.net (Hans Meine)
Date: Tue, 26 Oct 2010 15:31:56 +0200
Subject: [IPython-dev] filename completion - anyone working on it?
Message-ID: <201010261531.56428.hans_meine@gmx.net>

Hi everybody,

I am constantly bugged by filename completion in IPython.  To be specific, 
there are two cases that maybe need to be considered separately:
1) Completion of filenames in arguments to IPython magics, e.g. %run foo<tab>
2) Completion of filenames in strings, i.e. filename = "../subdir/bar<tab>"

In the second case, I don't want to have completions of other types (i.e. 
variable/function names), and in both cases I want filenames with spaces to be 
supported!

The failure w.r.t. the latter is a serious bug for me, so I'd like to ask if 
anybody is working on the relevant code ATM, and if not, what hints you 
can give me for fixing this.

Have a nice day,
  Hans


From fperez.net at gmail.com  Tue Oct 26 15:15:40 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 12:15:40 -0700
Subject: [IPython-dev] filename completion - anyone working on it?
In-Reply-To: <201010261531.56428.hans_meine@gmx.net>
References: <201010261531.56428.hans_meine@gmx.net>
Message-ID: <AANLkTi=B_zmNoVzeoJEQ9UeHLi76ndtvOw2ti921uuSZ@mail.gmail.com>

Hi Hans,

On Tue, Oct 26, 2010 at 6:31 AM, Hans Meine <hans_meine at gmx.net> wrote:
>
> I am constantly bugged by filename completion in IPython.  To be specific,
> there are two cases that maybe need to be considered separately:
> 1) Completion of filenames in arguments to IPython magics, e.g. %run foo<tab>
> 2) Completion of filenames in strings, i.e. filename = "../subdir/bar<tab>"
>
> In the second case, I don't want to have completions of other types (i.e.
> variable/function names), and in both cases I want filenames with spaces to be
> supported!
>
> The failure w.r.t. the latter is a serious bug for me, so I'd like to ask if
> anybody is working on the relevant code ATM, and if not, what hints you
> can give me for fixing this.

Could you pull from trunk and let me know if you find the situation
any better?  Invoking my time machine for your question, I rewound to
yesterday afternoon and put more time than I'd like to admit into this
very problem:

http://github.com/ipython/ipython/commit/02eecaf061408f26a3c6029886b8794f73581938

Things already work as you imagine, with a custom completer for magics
without quotes, and a generic file completer that should work in
strings.  This weekend John and I actually ran into the limitations of
the completions-in-strings, and that prompted my effort to fix this.

The code for this is surprisingly tricky, but I think things are
better now.  One caveat: readline itself (without any way for us
to control or prevent it, as best I can tell) automatically closes the
quote when there's a single completion, which very annoyingly means
that if you stop at a directory boundary

a = f('some<tab>

-> produces:

a = f('somedir/'

even if somedir/ is not empty.  I spent a lot of time trying to
prevent readline from doing this, but failed.  If anyone knows how to
do it, I'd love to hear it.  The problem doesn't appear in the Qt
console, as the quote-closing is done by readline itself, and in Qt we
use our own completion machinery that's totally independent of readline.

Cheers,

f


From jbaker at zyasoft.com  Tue Oct 26 15:28:56 2010
From: jbaker at zyasoft.com (Jim Baker)
Date: Tue, 26 Oct 2010 13:28:56 -0600
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTinaBJRvDkPqfXWZYBMj3xSDjFzaF9Sq5y4W37eT@mail.gmail.com>
References: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>
	<AANLkTi=apnCsJRO0bwG0=Dx0s=xPdmNk08M4kMhAcYAv@mail.gmail.com>
	<AANLkTinZa7kzcXjR9jfM2tFrwPVgbwjziGi3wSkK_ThT@mail.gmail.com>
	<AANLkTinaBJRvDkPqfXWZYBMj3xSDjFzaF9Sq5y4W37eT@mail.gmail.com>
Message-ID: <AANLkTinH3CieJU=MNzTysDkOKneO23qK9AyKMty==d_6@mail.gmail.com>

Brian,

On Tue, Oct 26, 2010 at 12:12 AM, Brian Granger <ellisonbg at gmail.com> wrote:

>
> We are using Python 2.6 pretty aggressively in the IPython codebase.
> I can't remember all the things we are using at this point, but I know
> we are using abc's and the improved Popen stuff as well as print as
> you mention.  But, our usage of abc's is probably the most difficult
> to get around at this point.
>

Jython 2.6+ will support ABCs, but that work has just begun. Given
that, we probably should adopt this plan:

   1. Create a jython-ipython fork of 0.10.1; publish this on PyPI
   2. Port relevant changes to ipython trunk so that stock ipython can be
   installed for Jython 2.6+

One of the first 2.6 features I would expect to be implemented is ABCs,
because a variety of new or rewritten components in the stdlib depend on ABC
support.

(Jython 2.6+ here means we will have at least 2.6 functionality with some 2.7
features, and possibly it will be released as 2.7 instead.)

- Jim

From fperez.net at gmail.com  Tue Oct 26 15:42:55 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 12:42:55 -0700
Subject: [IPython-dev] Fwd: [GitHub] show() blocks in pylab mode with
 ipython 0.10.1 [ipython/ipython GH-185]
In-Reply-To: <4cc68afbf2416_54ca3ff4f1a532fc136@fe6.rs.github.com.tmail>
References: <4cc68afbf2416_54ca3ff4f1a532fc136@fe6.rs.github.com.tmail>
Message-ID: <AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>

Hi all,

Paul Ivanov just posted this minimally invasive change to fix the GTK
pylab bug in 0.10.1.

The change is a trivial one-line fix:

http://github.com/ivanov/ipython/commit/8ed54e466932c326868a82c68adc92345325ca93

I'm posting here so affected users can just go and edit that one file
themselves for now, until we roll the fix out.

I think we should roll a bugfix 0.10.2 so that distributions can
update and push the fix to their users, sooner rather than later.
I'll leave this for feedback for a few days, and unless I hear
otherwise I'll push the 0.10.2 with this single fix over the weekend.

If anyone knows of a better/more solid fix, I'm all ears.  I don't use
pygtk myself *at all* so I'm the least qualified person to be doing
this (in case that wasn't already abundantly clear from the mess I
made...).

Many thanks to Mark Voorhies and Paul Ivanov for tracking this one down!

Regards,

f


---------- Forwarded message ----------
From: GitHub <noreply at github.com>
Date: Tue, Oct 26, 2010 at 1:02 AM
Subject: [GitHub] show() blocks in pylab mode with ipython 0.10.1
[ipython/ipython GH-185]
To: fperez.net at gmail.com


From: ivanov

Keeping simple things simple - I can confirm that:
  $ ipython -pylab -gthread
  In [1]: ax = subplot(111); ax.plot(rand(10)); show()
all works without blocking, with just a one-line change to the current
0.10.2 branch: uncommenting gtk.set_interactive(False) after gtk is
imported on line 781 of Shell.py.

http://github.com/ivanov/ipython/tree/fix185

View Issue: http://github.com/ipython/ipython/issues#issue/185/comment/493228


From fperez.net at gmail.com  Tue Oct 26 15:48:04 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 12:48:04 -0700
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTinH3CieJU=MNzTysDkOKneO23qK9AyKMty==d_6@mail.gmail.com>
References: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>
	<AANLkTi=apnCsJRO0bwG0=Dx0s=xPdmNk08M4kMhAcYAv@mail.gmail.com>
	<AANLkTinZa7kzcXjR9jfM2tFrwPVgbwjziGi3wSkK_ThT@mail.gmail.com>
	<AANLkTinaBJRvDkPqfXWZYBMj3xSDjFzaF9Sq5y4W37eT@mail.gmail.com>
	<AANLkTinH3CieJU=MNzTysDkOKneO23qK9AyKMty==d_6@mail.gmail.com>
Message-ID: <AANLkTi=VOPQthQ1dLEzFtwbE+iz9U0HxyYwSx8YoSGQ_@mail.gmail.com>

Hi Jim,

On Tue, Oct 26, 2010 at 12:28 PM, Jim Baker <jbaker at zyasoft.com> wrote:
> Jython 2.6+ will support ABCs, but that has work that has just begun. Given
> that, we probably should adopt this plan:
>
> Create a jython-ipython fork of 0.10.1; publish this on PyPI
> Port relevant changes to ipython trunk so that stock ipython can be
> installed for Jython 2.6+

That sounds like a good plan to me, modulo that I'd base your current
work on the 0.10.2 branch here:

http://github.com/ipython/ipython/tree/0.10.2

that has the most recent code that's still 2.5-compatible.

Sorry about the fact that we'd moved to 2.6 just as you guys showed up
:)  The real motivation for that was getting things ready for the 3.x
transition, and we figured that 2.6 being out for over 2 years was
enough of a window that we took the jump.  Hopefully before long your
2.6/7 work will be stable enough that this won't be a problem anymore.

One interesting possibility opened by the new ZeroMQ-based model will
be the ability to use any client (such as the nice Qt one or the new
web one) to talk to a Jython-based kernel.  You could thus benefit
from the client work done by others while talking to a kernel running
on the JVM and accessing your Java libs.

And conversely, you could write a pure Java client (in say Eclipse or
any other Java tool) that could talk to a CPython kernel, so that for
example people who like the Java tool could operate on
numpy/scipy/matplotlib backends from their familiar environments.

Cheers,

f


From fperez.net at gmail.com  Tue Oct 26 15:52:26 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 12:52:26 -0700
Subject: [IPython-dev] [GitHub] show() blocks in pylab mode with ipython
 0.10.1 [ipython/ipython GH-185]
In-Reply-To: <AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>
References: <4cc68afbf2416_54ca3ff4f1a532fc136@fe6.rs.github.com.tmail>
	<AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>
Message-ID: <AANLkTi=LnPJKV2YX1vMPS3n1c+nByGSPf8YaLkuAz3aE@mail.gmail.com>

On Tue, Oct 26, 2010 at 12:42 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Paul Ivanov just posted this minimally invasive change to fix the GTK
> pylab bug in 0.10.1.
>
> The change is a trivial one-line fix:
>
> http://github.com/ivanov/ipython/commit/8ed54e466932c326868a82c68adc92345325ca93
>

I should add that anyone who prefers to simply reinstall the fixed
version without manually fixing anything, can use github to download
(in zip or tar format) Paul's fixed branch from here:

http://github.com/ivanov/ipython/archives/fix185

That link will give you an automatically generated archive of Paul's
fixed branch without needing to use git itself for anything on your
side.  The joys of github...

Cheers,

f


From fperez.net at gmail.com  Tue Oct 26 16:06:49 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 13:06:49 -0700
Subject: [IPython-dev] Min: Cmd-P/Ctrl-P for new print code question..
Message-ID: <AANLkTimvAv_qBM9ura=PjBCLY5MMcbdp7Etyaa57jncm@mail.gmail.com>

Hey Min,

in console/console_widget.py:

e11b615e (MinRK          2010-10-18 16:34:07 -0700  170)         # Configure actions.
e11b615e (MinRK          2010-10-18 16:34:07 -0700  171)         action = QtGui.QAction('Print', None)
e11b615e (MinRK          2010-10-18 16:34:07 -0700  172)         action.setEnabled(True)
e11b615e (MinRK          2010-10-18 16:34:07 -0700  173)         action.setShortcut(QtGui.QKeySequence.Print)
e11b615e (MinRK          2010-10-18 16:34:07 -0700  174)         action.triggered.connect(self.print_)
e11b615e (MinRK          2010-10-18 16:34:07 -0700  175)         self.addAction(action)
e11b615e (MinRK          2010-10-18 16:34:07 -0700  176)         self._print_action = action

you added in line 173 the default print keybinding to the Print
action.  One problem that introduces is that it overwrites the
keybinding we had for Ctrl-P, which was equivalent to 'smart up arrow'
(history-aware).  Since that keybinding has been there for a long
time, is consistent with the terminal, and we want to preserve as much
similarity with terminal habits as is reasonable, we need to change
this.  I think a reasonable compromise is to make:

Ctrl-P -> 'smart up arrow'
Ctrl-Shift-P -> Print

But I don't want to make any changes yet, because I don't know whether
there is actually a problem on the Mac.  If Control-P and Cmd-P are
distinct there, then Ctrl-P can remain as before and only Cmd-P would
go to print.

If that's the case, then we should probably leave it as you did for
the Mac, and only change the keybinding for Linux/Windows (which don't
have a separate Cmd key).

Thoughts?

Cheers,

f


From tomspur at fedoraproject.org  Tue Oct 26 16:19:29 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Tue, 26 Oct 2010 22:19:29 +0200
Subject: [IPython-dev] Fwd: [GitHub] show() blocks in pylab mode with
 ipython 0.10.1 [ipython/ipython GH-185]
In-Reply-To: <AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>
References: <4cc68afbf2416_54ca3ff4f1a532fc136@fe6.rs.github.com.tmail>
	<AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>
Message-ID: <20101026221929.1c1d98c8@earth>

On Tue, 26 Oct 2010 12:42:55 -0700
Fernando Perez wrote:
> I think we should roll a bugfix 0.10.2 so that distributions can
> update and push the fix to their users, sooner rather than later.
> I'll leave this for feedback for a few days, and unless I hear
> otherwise I'll push the 0.10.2 with this single fix over the weekend.

Now that I see you talking about 0.10.2...
There are some issues that arose in Fedora, but I haven't had time to
look at them yet:
- When opening a shell, deleting that folder in another console, and
  starting ipython in the now-deleted folder, you get this crash:
  https://bugzilla.redhat.com/show_bug.cgi?id=593115
- Don't have a clue on this:
  https://bugzilla.redhat.com/show_bug.cgi?id=596075

- Might be the most important one:
  ipython requires gtk, but should not...
  https://bugzilla.redhat.com/show_bug.cgi?id=646079

-- 
Thomas


From benjaminrk at gmail.com  Tue Oct 26 16:35:25 2010
From: benjaminrk at gmail.com (MinRK)
Date: Tue, 26 Oct 2010 13:35:25 -0700
Subject: [IPython-dev] Min: Cmd-P/Ctrl-P for new print code question..
In-Reply-To: <AANLkTimvAv_qBM9ura=PjBCLY5MMcbdp7Etyaa57jncm@mail.gmail.com>
References: <AANLkTimvAv_qBM9ura=PjBCLY5MMcbdp7Etyaa57jncm@mail.gmail.com>
Message-ID: <AANLkTimLK4MukVFhfcY0+BGHNZrWmmG1__e6mNR3BZ2j@mail.gmail.com>

On Tue, Oct 26, 2010 at 13:06, Fernando Perez <fperez.net at gmail.com> wrote:

> Hey Min,
>
> in console/console_widget.py:
>
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  170)         # Configure actions.
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  171)         action = QtGui.QAction('Print', None)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  172)         action.setEnabled(True)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  173)         action.setShortcut(QtGui.QKeySequence.Print)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  174)         action.triggered.connect(self.print_)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  175)         self.addAction(action)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  176)         self._print_action = action
>
> you added in line 173 the default print keybinding to the Print
> action.  One problem that introduces is that it overwrites the
> keybinding we had for Ctrl-P, which was equivalent to 'smart up arrow'
> (history-aware).


Hm, that's annoying, sorry I didn't catch it.

One of my very favorite things about OSX is that since the standard meta key
is cmd instead of ctrl, all regular GUI bindings (entirely cmd-based) and
all emacs-style bindings (ctrl-based) have no degeneracy, so you don't have
to choose between having ctrl-a be home or select-all, or this cmd/ctrl-P
behavior, and basic emacs navigation works in *all* GUI apps by default.

Since that keybinding has been there for a long
> time, is consistent with the terminal and we want to preserve as much
> similarity as reasonable with habits from the terminal, we need to
> change this.  I think a reasonable compromise is to make:
>

> Ctrl-P -> 'smart up arrow'
> Ctrl-Shift-P -> Print
>
> But I don't want to make any changes, as I don't know if on the Mac,
> there is no problem.  I don't know if on the mac, Control-p and Cmd-p
> are different, so that Ctrl-p remains as before and only Cmd-p goes to
> print.
>
> If that's the case, then we should probably leave it as you did for
> the Mac, and only change the keybinding for Linux/windows (which don't
> have a separate Cmd key).
>

Yes, we should do a platform check, since it would be completely hideous to
be the only Mac app in the world that doesn't use cmd-P for print.

Questions:
Is it just Linux, or will there also be a conflict in Windows?
 Specifically: is OSX the only one without a problem, or is Linux the only
one with a problem?

Is there a similar conflict on ctrl-S?

-MinRK


>
> Thoughts?
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>

From jbaker at zyasoft.com  Tue Oct 26 16:42:56 2010
From: jbaker at zyasoft.com (Jim Baker)
Date: Tue, 26 Oct 2010 14:42:56 -0600
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTi=VOPQthQ1dLEzFtwbE+iz9U0HxyYwSx8YoSGQ_@mail.gmail.com>
References: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>
	<AANLkTi=apnCsJRO0bwG0=Dx0s=xPdmNk08M4kMhAcYAv@mail.gmail.com>
	<AANLkTinZa7kzcXjR9jfM2tFrwPVgbwjziGi3wSkK_ThT@mail.gmail.com>
	<AANLkTinaBJRvDkPqfXWZYBMj3xSDjFzaF9Sq5y4W37eT@mail.gmail.com>
	<AANLkTinH3CieJU=MNzTysDkOKneO23qK9AyKMty==d_6@mail.gmail.com>
	<AANLkTi=VOPQthQ1dLEzFtwbE+iz9U0HxyYwSx8YoSGQ_@mail.gmail.com>
Message-ID: <AANLkTi=0pUVdGL6D5ZCw8zSn42oVidKHBU2=AX_0xJz8@mail.gmail.com>

Fernando,

On Tue, Oct 26, 2010 at 1:48 PM, Fernando Perez <fperez.net at gmail.com>wrote:

>
> That sound like a good plan to me, modulo that I'd base your current
> work on the 0.10.2 branch here:
>
> http://github.com/ipython/ipython/tree/0.10.2
>
> that has the most recent code that's still 2.5-compatible.
>
Sounds good, I will do that.


> Sorry about the fact that we'd moved to 2.6 just as you guys showed up
> :)


No problem at all. It would have been nice if readline had been better
specified, with a test suite; then we might have done this last year. So
anything that ipython trunk can do to test via pexpect (or something like
that) will be very helpful. (Jython can transparently drive pexpect with
execnet, through a subprocess gateway, as sketched below.)
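A rough sketch of what that could look like (hedged: the interpreter spec and
the prompt pattern are illustrative assumptions, and this is untested):

import execnet

# From Jython, open an execnet gateway to a CPython subprocess and run a
# pexpect-based smoke test there (pexpect needs a POSIX CPython).
gw = execnet.makegateway("popen//python=python2.6")
channel = gw.remote_exec("""
    import pexpect
    child = pexpect.spawn('ipython')
    child.expect('In \[1\]:')       # wait for the first IPython prompt
    channel.send('prompt seen')
    child.close()
""")
print(channel.receive())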


>  The real motivation for that was getting things ready for the 3.x
> transition, and we figured that 2.6 being out for over 2 years was
> enough of a window that we took the jump.  Hopefully before long your
> 2.6/7 work will be stable enough that this won't be a problem anymore.
>

We're probably 6 months away from a usable early release (alpha), so it's
reasonable.

>
> One interesting possibility opened by the new ZeroMQ-based model will
> be the ability to use any client (such as the nice Qt one or the new
> web one) to talk to a Jython-based kernel.  You could thus benefit
> from the client work done by others while talking to a kernel running
> on the JVM and accessing your Java libs.
>

This is a real use case that frequently comes up.

>
> And conversely, you could write a pure Java client (in say Eclipse or
> any other Java tool) that could talk to a CPython kernel, so that for
> example people who like the Java tool could operate on
> numpy/scipy/matplotlib backends from their familiar environments.
>
I'm also looking forward to when users can mix, in a single process,
Java libraries *and* C/C++/Fortran-based libraries, including numpy/*, once
we support the Python C Extension API. It also looks feasible to support
memoryview via Java NIO buffers, so this could be done without impacting the
JIT and with zero-copy.

- Jim

From ellisonbg at gmail.com  Tue Oct 26 16:51:32 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 26 Oct 2010 13:51:32 -0700
Subject: [IPython-dev] Min: Cmd-P/Ctrl-P for new print code question..
In-Reply-To: <AANLkTimvAv_qBM9ura=PjBCLY5MMcbdp7Etyaa57jncm@mail.gmail.com>
References: <AANLkTimvAv_qBM9ura=PjBCLY5MMcbdp7Etyaa57jncm@mail.gmail.com>
Message-ID: <AANLkTinkOJpBsdY=0sx1aXvpUtVRO3r-CeNqBSrJsw0n@mail.gmail.com>

On Tue, Oct 26, 2010 at 1:06 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hey Min,
>
> in console/console_widget.py:
>
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  170)         # Configure actions.
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  171)         action = QtGui.QAction('Print', None)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  172)         action.setEnabled(True)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  173)         action.setShortcut(QtGui.QKeySequence.Print)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  174)         action.triggered.connect(self.print_)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  175)         self.addAction(action)
> e11b615e (MinRK          2010-10-18 16:34:07 -0700  176)         self._print_action = action
>
> you added in line 173 the default print keybinding to the Print
> action.  One problem that introduces is that it overwrites the
> keybinding we had for Ctrl-P, which was equivalent to 'smart up arrow'
> (history-aware).  Since that keybinding has been there for a long
> time, is consistent with the terminal and we want to preserve as much
> similarity as reasonable with habits from the terminal, we need to
> change this.  I think a reasonable compromise is to make:
>
> Ctrl-P -> 'smart up arrow'
> Ctrl-Shift-P -> Print
>
> But I don't want to make any changes, as I don't know if on the Mac,
> there is no problem.  I don't know if on the mac, Control-p and Cmd-p
> are different, so that Ctrl-p remains as before and only Cmd-p goes to
> print.

Yes, on the Mac I think this is the right approach.

> If that's the case, then we should probably leave it as you did for
> the Mac, and only change the keybinding for Linux/windows (which don't
> have a separate Cmd key).
>
> Thoughts?
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Tue Oct 26 17:19:46 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 14:19:46 -0700
Subject: [IPython-dev] Jython support in ipython
In-Reply-To: <AANLkTi=0pUVdGL6D5ZCw8zSn42oVidKHBU2=AX_0xJz8@mail.gmail.com>
References: <AANLkTikbJwt-WEPouLx0SJRMg9tJtFzWfXbr8iUVHWeH@mail.gmail.com>
	<AANLkTi=apnCsJRO0bwG0=Dx0s=xPdmNk08M4kMhAcYAv@mail.gmail.com>
	<AANLkTinZa7kzcXjR9jfM2tFrwPVgbwjziGi3wSkK_ThT@mail.gmail.com>
	<AANLkTinaBJRvDkPqfXWZYBMj3xSDjFzaF9Sq5y4W37eT@mail.gmail.com>
	<AANLkTinH3CieJU=MNzTysDkOKneO23qK9AyKMty==d_6@mail.gmail.com>
	<AANLkTi=VOPQthQ1dLEzFtwbE+iz9U0HxyYwSx8YoSGQ_@mail.gmail.com>
	<AANLkTi=0pUVdGL6D5ZCw8zSn42oVidKHBU2=AX_0xJz8@mail.gmail.com>
Message-ID: <AANLkTi=rzT00+hmuULDAXp=DNjev7SVypdC8TFUDLYUp@mail.gmail.com>

On Tue, Oct 26, 2010 at 1:42 PM, Jim Baker <jbaker at zyasoft.com> wrote:
> Fernando,
>
> On Tue, Oct 26, 2010 at 1:48 PM, Fernando Perez <fperez.net at gmail.com>
> wrote:
>>
>> That sound like a good plan to me, modulo that I'd base your current
>> work on the 0.10.2 branch here:
>>
>> http://github.com/ipython/ipython/tree/0.10.2
>>
>> that has the most recent code that's still 2.5-compatible.
>>
> Sounds good, I will do that.

[...]

Keep us posted, and we'll be happy to include upstream any changes
that you propose to make life easier java-side when they have no
detrimental impact on the cpython side of things.  Ideally once your
2.6/7 release is ready, we'd have a single ipython to run on both
jython and cpython.

Cheers,

f


From fperez.net at gmail.com  Tue Oct 26 17:24:02 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 14:24:02 -0700
Subject: [IPython-dev] Min: Cmd-P/Ctrl-P for new print code question..
In-Reply-To: <AANLkTimLK4MukVFhfcY0+BGHNZrWmmG1__e6mNR3BZ2j@mail.gmail.com>
References: <AANLkTimvAv_qBM9ura=PjBCLY5MMcbdp7Etyaa57jncm@mail.gmail.com>
	<AANLkTimLK4MukVFhfcY0+BGHNZrWmmG1__e6mNR3BZ2j@mail.gmail.com>
Message-ID: <AANLkTikQoC_mjBZDu2FZvCaSF8tFoF3h-Z1LYPQgG13s@mail.gmail.com>

Hey,

On Tue, Oct 26, 2010 at 1:35 PM, MinRK <benjaminrk at gmail.com> wrote:
> Hm, that's annoying, sorry I didn't catch it.

No worries, I didn't notice either until a while later.

> One of my very favorite things about OSX is that since the standard meta key
> is cmd instead of ctrl, all regular GUI bindings (entirely cmd-based) and
> all emacs-style bindings (ctrl-based) have no degeneracy, so you don't have
> to choose between having ctrl-a be home or select-all, or this cmd/ctrl-P
> behavior, and basic emacs navigation works in *all* GUI apps by default.

Indeed.  And if only Apple had the decency to include a right Ctrl-key
(in addition to Cmd and Alt/Option) on their laptop/small desktop
keyboards, I might actually own an Apple machine! :) [I can't stand
having to chord on only left-Ctrl key for everything on the smaller
Apple keyboards, despite loving them].

> Yes, we should do a platform check, since it would be completely hideous to
> be the only Mac app in the world that doesn't use cmd-P for print.
> Questions:
> Is it just Linux, or will there also be a conflict in Windows?
>  Specifically: is OSX the only one without a problem, or is Linux the only
> one with a problem?

OSX is the only one *without* a problem. So the fix is to leave your
current version for OSX, and switch out to Ctrl-Shift-P for all others
(*nix, Windows).

> Is there a similar conflict on ctrl-S?

In principle yes, except that we have no other keybinding for Ctrl-S
yet, so I'm not that worried about that one.

Cheers,

f


From benjaminrk at gmail.com  Tue Oct 26 17:29:57 2010
From: benjaminrk at gmail.com (MinRK)
Date: Tue, 26 Oct 2010 14:29:57 -0700
Subject: [IPython-dev] Min: Cmd-P/Ctrl-P for new print code question..
In-Reply-To: <AANLkTikQoC_mjBZDu2FZvCaSF8tFoF3h-Z1LYPQgG13s@mail.gmail.com>
References: <AANLkTimvAv_qBM9ura=PjBCLY5MMcbdp7Etyaa57jncm@mail.gmail.com>
	<AANLkTimLK4MukVFhfcY0+BGHNZrWmmG1__e6mNR3BZ2j@mail.gmail.com>
	<AANLkTikQoC_mjBZDu2FZvCaSF8tFoF3h-Z1LYPQgG13s@mail.gmail.com>
Message-ID: <AANLkTin-BoX42Y-wcNLQRRK0hF+NpmFsWs0M9V12NpeX@mail.gmail.com>

On Tue, Oct 26, 2010 at 14:24, Fernando Perez <fperez.net at gmail.com> wrote:

> Hey,
>
> On Tue, Oct 26, 2010 at 1:35 PM, MinRK <benjaminrk at gmail.com> wrote:
> > Hm, that's annoying, sorry I didn't catch it.
>
> No worries, I didn't notice either until a while later.
>
> > One of my very favorite things about OSX is that since the standard meta
> key
> > is cmd instead of ctrl, all regular GUI bindings (entirely cmd-based) and
> > all emacs-style bindings (ctrl-based) have no degeneracy, so you don't
> have
> > to choose between having ctrl-a be home or select-all, or this cmd/ctrl-P
> > behavior, and basic emacs navigation works in *all* GUI apps by default.
>
> Indeed.  And if only Apple had the decency to include a right Ctrl-key
> (in addition to Cmd and Alt/Option) on their laptop/small desktop
> keyboards, I might actually own an Apple machine! :) [I can't stand
> having to chord on only left-Ctrl key for everything on the smaller
> Apple keyboards, despite loving them].
>
> > Yes, we should do a platform check, since it would be completely hideous
> to
> > be the only Mac app in the world that doesn't use cmd-P for print.
> > Questions:
> > Is it just Linux, or will there also be a conflict in Windows?
> >  Specifically: is OSX the only one without a problem, or is Linux the
> only
> > one with a problem?
>
> OSX is the only one *without* a problem. So the fix is to leave your
> current version for OSX, and switch out to Ctrl-Shift-P for all others
> (*nix, Windows).
>

Done and done:
http://github.com/ipython/ipython/pull/187

It specifically checks if the print key is ctrl-P, and changes to
ctrl-shift-P if that's the case.
If, for some reason, the print key on the platform is something else, then
it won't change.
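For reference, the check can be done along these lines (a sketch of the
approach just described, not necessarily the exact code in the pull request):

from PyQt4 import QtGui

action = QtGui.QAction('Print', None)
printkey = QtGui.QKeySequence(QtGui.QKeySequence.Print)
# On platforms whose standard Print shortcut is Ctrl+P (Linux/Windows),
# move Print to Ctrl+Shift+P so Ctrl+P stays bound to 'smart up arrow'.
if printkey.matches(QtGui.QKeySequence('Ctrl+P')) == QtGui.QKeySequence.ExactMatch:
    printkey = QtGui.QKeySequence('Ctrl+Shift+P')
action.setShortcut(printkey)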


>
> > Is there a similar conflict on ctrl-S?
>
> In principle yes, except that we have no other keybinding for Ctrl-S
> yet, so I'm not that worried about that one.
>
> Cheers,
>
> f
>

From fperez.net at gmail.com  Tue Oct 26 17:34:12 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 14:34:12 -0700
Subject: [IPython-dev] Fwd: [GitHub] show() blocks in pylab mode with
 ipython 0.10.1 [ipython/ipython GH-185]
In-Reply-To: <20101026221929.1c1d98c8@earth>
References: <4cc68afbf2416_54ca3ff4f1a532fc136@fe6.rs.github.com.tmail>
	<AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>
	<20101026221929.1c1d98c8@earth>
Message-ID: <AANLkTimcGQ8qiQYc5Qe3j1yeRfx0Qkwxg=1ESDf7w2dP@mail.gmail.com>

On Tue, Oct 26, 2010 at 1:19 PM, Thomas Spura <tomspur at fedoraproject.org> wrote:
> - When opening a shell, deleting that folder in another console, and
>   starting ipython in the now-deleted folder, you get this crash:
>   https://bugzilla.redhat.com/show_bug.cgi?id=593115

I'm not terribly inclined to spend much time on this one: trying to
run anything in a directory that doesn't exist is a bad idea to begin
with.   We could improve the error message a little bit to something
like

"You are trying to walk over the edge of a cliff and haven't noticed
there is no ground below you.  You are about to experience gravity".

But that's about it, I think.  Pull request welcome.

> - Don't have a clue on this:
> ?https://bugzilla.redhat.com/show_bug.cgi?id=596075

No idea either.

> - Might be the most important one:
> ?ipython requires gtk, but should not...
> ?https://bugzilla.redhat.com/show_bug.cgi?id=646079

Ok, this one is bad, and fortunately easy to fix.  Let me know if this
is not sufficient:

http://github.com/ipython/ipython/commit/8161523536289eaed01ca42707f6785f59343cd7

Regards,

f


From fperez.net at gmail.com  Tue Oct 26 17:35:33 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 14:35:33 -0700
Subject: [IPython-dev] [GitHub] show() blocks in pylab mode with ipython
 0.10.1 [ipython/ipython GH-185]
In-Reply-To: <AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>
References: <4cc68afbf2416_54ca3ff4f1a532fc136@fe6.rs.github.com.tmail>
	<AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>
Message-ID: <AANLkTikn4gu1FgYRan_W_pRpT6OjLf4T=S6QjwaP+Yrg@mail.gmail.com>

On Tue, Oct 26, 2010 at 12:42 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Paul Ivanov just posted this minimally invasive change to fix the GTK
> pylab bug in 0.10.1.
>
> The change is a trivial one-line fix:
>
> http://github.com/ivanov/ipython/commit/8ed54e466932c326868a82c68adc92345325ca93
>

BTW, I've pulled this into 0.10.2 for now.  We may get an
improved/revised fix later, but at least now the official 0.10.2
doesn't have this very problematic regression (as well as having the
other gtk-related fix Tom Spura mentioned).

Thanks a lot to Paul and Mark for the help!

Cheers,

f


From fperez.net at gmail.com  Tue Oct 26 17:38:23 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 26 Oct 2010 14:38:23 -0700
Subject: [IPython-dev] Min: Cmd-P/Ctrl-P for new print code question..
In-Reply-To: <AANLkTin-BoX42Y-wcNLQRRK0hF+NpmFsWs0M9V12NpeX@mail.gmail.com>
References: <AANLkTimvAv_qBM9ura=PjBCLY5MMcbdp7Etyaa57jncm@mail.gmail.com>
	<AANLkTimLK4MukVFhfcY0+BGHNZrWmmG1__e6mNR3BZ2j@mail.gmail.com>
	<AANLkTikQoC_mjBZDu2FZvCaSF8tFoF3h-Z1LYPQgG13s@mail.gmail.com>
	<AANLkTin-BoX42Y-wcNLQRRK0hF+NpmFsWs0M9V12NpeX@mail.gmail.com>
Message-ID: <AANLkTimU0dTntw2SQfu2c2ij53obbn1xVw05YLGzfBJK@mail.gmail.com>

On Tue, Oct 26, 2010 at 2:29 PM, MinRK <benjaminrk at gmail.com> wrote:
>
> Done and done:
> http://github.com/ipython/ipython/pull/187
> It specifically checks if the print key is ctrl-P, and changes to
> ctrl-shift-P if that's the case.
> If, for some reason, the print key on the platform is something else, then
> it won't change.
>

Perfect, many thanks.  Merged.

Cheers,

f


From tomspur at fedoraproject.org  Tue Oct 26 18:18:49 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Wed, 27 Oct 2010 00:18:49 +0200
Subject: [IPython-dev] Fwd: [GitHub] show() blocks in pylab mode with
 ipython 0.10.1 [ipython/ipython GH-185]
In-Reply-To: <AANLkTimcGQ8qiQYc5Qe3j1yeRfx0Qkwxg=1ESDf7w2dP@mail.gmail.com>
References: <4cc68afbf2416_54ca3ff4f1a532fc136@fe6.rs.github.com.tmail>
	<AANLkTindTbKcrS83=K5f_kk0xtzDyQFkak+57Y-Ly1Vw@mail.gmail.com>
	<20101026221929.1c1d98c8@earth>
	<AANLkTimcGQ8qiQYc5Qe3j1yeRfx0Qkwxg=1ESDf7w2dP@mail.gmail.com>
Message-ID: <20101027001849.5d45bfa2@earth>

On Tue, 26 Oct 2010 14:34:12 -0700
Fernando Perez wrote:

> On Tue, Oct 26, 2010 at 1:19 PM, Thomas Spura
> <tomspur at fedoraproject.org> wrote:
> > - When opening a shell, deleting that folder in another console,
> > and starting ipython in the now-deleted folder, you get this crash:
> >   https://bugzilla.redhat.com/show_bug.cgi?id=593115
> 
> I'm not terribly inclined to spend much time on this one: trying to
> run anything in a directory that doesn't exist is a bad idea to begin
> with.   We could improve the error message a little bit to something
> like
> 
> "You are trying to walk over the edge of a cliff and haven't noticed
> there is no ground below you.  You are about to experience gravity".
> 
> But that's about it, I think.  Pull request welcome.

Will look at it in some spare time (not very soon).

> > - Don't have a clue on this:
> >  https://bugzilla.redhat.com/show_bug.cgi?id=596075
> 
> No idea either.
> 
> > - Might be the most important one:
> > ?ipython requires gtk, but should not...
> >  https://bugzilla.redhat.com/show_bug.cgi?id=646079
> 
> Ok, this one is bad, and fortunately easy to fix.  Let me know if this
> is not sufficient:
> 
> http://github.com/ipython/ipython/commit/8161523536289eaed01ca42707f6785f59343cd7

I'm sending this mail almost exclusively because of the last issue ;-)
So thanks for the easy fix...

I'll update ipython this week (or maybe next) and see what the reporter
says...


-- 
Thomas


From robert.kern at gmail.com  Wed Oct 27 11:37:51 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 Oct 2010 10:37:51 -0500
Subject: [IPython-dev] Extensible pretty-printing
Message-ID: <ia9h0f$v1j$1@dough.gmane.org>

In the ticket discussion around my patch to restore the result_display hook, 
Brian suggested that the real issue is what the extensibility API for this 
functionality should be. I would like to propose the pretty extension as that 
API. I propose that it should be integrated into the core of IPython as the 
pretty-printer instead of pprint. pretty allows one to specify pretty-print 
functions for individual types and have them used in nested contexts.

Incidentally, this would resolve this issue by allowing the user to specify a 
pretty-printer for floats:

   http://github.com/ipython/ipython/issues/issue/190

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Wed Oct 27 12:41:07 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 27 Oct 2010 09:41:07 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <ia9h0f$v1j$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
Message-ID: <AANLkTin-fDxnV7jqk6=cQGPQb8VpPEQdM31NnY8ONvY9@mail.gmail.com>

Hey Robert,

On Wed, Oct 27, 2010 at 8:37 AM, Robert Kern <robert.kern at gmail.com> wrote:
> In the ticket discussion around my patch to restore the result_display hook,
> Brian suggested that the real issue is what the extensibility API for this
> functionality should be. I would like to propose the pretty extension as that
> API. I propose that it should be integrated into the core of IPython as the
> pretty-printer instead of pprint. pretty allows one to specify pretty-print
> functions for individual types and have them used in nested contexts.
>

Can you remind me which ticket that was?  For all of github's great
things, their ticket search is broken beyond comprehension and is for
all intents and purposes useless.

Cheers,

f


From robert.kern at gmail.com  Wed Oct 27 12:42:48 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 Oct 2010 11:42:48 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTin-fDxnV7jqk6=cQGPQb8VpPEQdM31NnY8ONvY9@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTin-fDxnV7jqk6=cQGPQb8VpPEQdM31NnY8ONvY9@mail.gmail.com>
Message-ID: <ia9kq9$iv8$1@dough.gmane.org>

On 10/27/10 11:41 AM, Fernando Perez wrote:
> Hey Robert,
>
> On Wed, Oct 27, 2010 at 8:37 AM, Robert Kern<robert.kern at gmail.com>  wrote:
>> In the ticket discussion around my patch to restore the result_display hook,
>> Brian suggested that the real issue is what the extensibility API for this
>> functionality should be. I would like to propose the pretty extension as that
>> API. I propose that it should be integrated into the core of IPython as the
>> pretty-printer instead of pprint. pretty allows one to specify pretty-print
>> functions for individual types and have them used in nested contexts.
>
> Can you remind me which ticket that was?  For all of github's great
> things, their ticket search is broken beyond comprehension and is for
> all intents and purposes useless.

Pull request, actually:

http://github.com/ipython/ipython/pull/149

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From ellisonbg at gmail.com  Wed Oct 27 17:14:09 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 27 Oct 2010 14:14:09 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <ia9h0f$v1j$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
Message-ID: <AANLkTinLFynfVdsWDjtr2-GXm=DyUSMAjqBZn=-O=9_O@mail.gmail.com>

Robert,

> In the ticket discussion around my patch to restore the result_display hook,
> Brian suggested that the real issue is what the extensibility API for this
> functionality should be. I would like to propose the pretty extension as that
> API. I propose that it should be integrated into the core of IPython as the
> pretty-printer instead of pprint. pretty allows one to specify pretty-print
> functions for individual types and have them used in nested contexts.

This is at the top of my ipython tasks right now, but I have been
finishing some sympy-related stuff.  I agree with you that this is, to
first order, a great extension model for the display hook, and I think
we should support it.  There are some other issues, though, that we will
have to figure out how to merge with this idea:

* We also want to extend the display hook to allow other (non-str)
representations of objects.  For example, it would be fantastic to
allow html, png, svg representations that can be used by frontends
that support them.  Other frontends can fallback on the basic pretty
print approach.  The question is how to integrate these different
approaches.

* To get these other representations of an object back to the
frontend, we will have to move the payload API over to the PUB/SUB
channel.

I can imagine a few models for this...

1. We could extend the pretty-printing API to allow the function to
return different representations.  It could, for
example, return a dict:

{'str': the_str_repr,
 'html': the_html_repr,
 'svg': the_svg_repr}

In this model it would be up to the registered callable to construct
that dict in whatever way is appropriate.

2. We could look for special methods on objects and use those for the
printing.  This is how sage works.

For example, when the display hook gets an object, it would look for
methods with names like:

foo._html_repr
foo._svg_repr

And then call those to get the various representations.

The downside of this model is that the actual objects have to be
modified.  But in some cases this would be really nice, and it neatly
encapsulates the logic in a way that doesn't depend on IPython.  The
other benefit is that we could also introduce top-level functions like
repr_html, repr_png, etc. that users could call to get a particular
representation of an object displayed in the frontend.  This would be
nice in inner loops, where the display hook won't be called.
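To make the two models concrete, here is a minimal sketch of a helper
combining them (collect_representations, _html_repr and _svg_repr are
illustrative names, not an existing IPython API):

def collect_representations(obj):
    """Gather available representations of obj into a dict (the model-1 shape)."""
    reprs = {'str': repr(obj)}
    # Model 2: let the object itself offer richer representations via
    # special methods, if it defines them.
    for attr, key in [('_html_repr', 'html'), ('_svg_repr', 'svg')]:
        method = getattr(obj, attr, None)
        if callable(method):
            reprs[key] = method()
    return reprs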

Any thoughts on which of these models sounds better?  Any other good models?

Cheers,

Brian

[the thing that is motivating me on this is the desire to use
matplotlib's latex rendering to get png+latex representations of sympy
expressions displayed in the frontend.]

I think this type of capability would set off a wildfire of people
making really cool representations of their objects.




> Incidentally, this would resolve this issue by allowing the user to specify a
> pretty-printer for floats:
>
>    http://github.com/ipython/ipython/issues/issue/190
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
> ?that is made terrible by our own mad attempt to interpret it as though it had
> ?an underlying truth."
> ? -- Umberto Eco
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From robert.kern at gmail.com  Wed Oct 27 17:55:40 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 27 Oct 2010 16:55:40 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTinLFynfVdsWDjtr2-GXm=DyUSMAjqBZn=-O=9_O@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTinLFynfVdsWDjtr2-GXm=DyUSMAjqBZn=-O=9_O@mail.gmail.com>
Message-ID: <iaa74t$a28$1@dough.gmane.org>

[PS: I'm subscribed to the list. Please don't Cc me.]

On 10/27/10 4:14 PM, Brian Granger wrote:
> Robert,
>
>> In the ticket discussion around my patch to restore the result_display hook,
>> Brian suggested that the real issue is what the extensibility API for this
>> functionality should be. I would like to propose the pretty extension as that
>> API. I propose that it should be integrated into the core of IPython as the
>> pretty-printer instead of pprint. pretty allows one to specify pretty-print
>> functions for individual types and have them used in nested contexts.
>
> This is at the top of my ipython tasks right now, but I have been
> finishing some sympy related stuff.  I agree with you that this is, to
> first order a great extension model for the display hook and I think
> we should support it.  There are some other issues though that we will
> have to figure out how to merge with this idea:

As far as I can see, this is entirely orthogonal to the choice of pretty as the
API for configuring the str pretty-representation. pretty isn't really relevant
to the problems below, nor are they relevant to pretty. There is no need to hold
it up while you work out the rest.

> * We also want to extend the display hook to allow other (non-str)
> representations of objects.  For example, it would be fantastic to
> allow html, png, svg representations that can be used by frontends
> that support them.  Other frontends can fallback on the basic pretty
> print approach.  The question is how to integrate these different
> approaches.
>
> * To get these other representations of an object back to the
> frontend, we will have to move the payload API over to the PUB/SUB
> channel.
>
> I can imagine a few models for this...
>
> 1. We could extend the pretty printing API to allow the ability for
> the function to return different representations.  It could for
> example return a dict:
>
> {'str': the_str_repr,
>   'html; : the html repr,
>   'svg' : the svg repr}
>
> In this model it would be up to the registered callable to construct
> that dict in whatever way is appropriate.

Back when I did the ipwx refactor so many years ago, I had a DisplayTrap and a 
set of DisplayFormatters. The DisplayTrap had a list of formatters. Each 
formatter had an identifier. The DisplayTrap replaced the displayhook to record 
the object. Then, when the interpreter needed to reply to the frontend, it asked 
the DisplayTrap to add to the message. In turn, it asked each formatter to try 
to render the object and add that to the message like the dict above.

http://www.enthought.com/~rkern/cgi-bin/hgwebdir.cgi/ipwx/file/fe7abfcd0f69/ipwx/display_trap.py
http://www.enthought.com/~rkern/cgi-bin/hgwebdir.cgi/ipwx/file/fe7abfcd0f69/ipwx/display_formatter.py

This API only concerns itself with rendering the top-level object. It doesn't 
handle nested structures at all. I don't think it is reasonable to handle 
nesting in anything but text, but if someone wants to try that's fine. The 
responsibility for defining the API to handle that falls to the individual 
DisplayFormatters. I would have a PrettyDisplayFormatter that exposes the pretty 
API for adding functions for types. The rest of IPython shouldn't concern itself 
about the API beyond that.
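Schematically, that arrangement looks something like this (a paraphrased
sketch, not the actual ipwx code; see the links above for the real
implementation):

class DisplayFormatter(object):
    """One renderer with a stable identifier (e.g. 'str', 'html', 'svg')."""
    identifier = 'str'

    def format(self, obj):
        return repr(obj)


class DisplayTrap(object):
    """Installed as sys.displayhook; records the object, then renders on demand."""

    def __init__(self, formatters):
        self.formatters = formatters
        self.obj = None

    def __call__(self, obj):
        self.obj = obj

    def add_to_message(self, msg):
        for formatter in self.formatters:
            try:
                msg[formatter.identifier] = formatter.format(self.obj)
            except Exception:
                pass  # a formatter that cannot render this object stays silent
        return msg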

> 2. We could look for special methods on objects and use those for the
> printing.  This is how sage works.

Entirely inflexible if it's the only option. There are way too many objects that 
we want to pretty-print that aren't modifiable by us. If individual formatters 
want to look at special methods (pretty looks for __pretty__, a PNG formatter 
could look for _latex_), that's fine, but it can't be the only way to extend.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Wed Oct 27 20:32:28 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 27 Oct 2010 17:32:28 -0700
Subject: [IPython-dev] Shutdown __del__ methods bug
In-Reply-To: <201010261130.14011.hans_meine@gmx.net>
References: <AANLkTin+w1y3E=51EBY61CqWTX6TdNZe3wLA5s7iuzPs@mail.gmail.com>
	<AANLkTimN5AMOZEs4HVW6QHBU0uZS4gBJ4AgxGp+txqpU@mail.gmail.com>
	<AANLkTiny2ndiYmpeUk0+9+6XTQtHny7Szy2E9sz=9Rhk@mail.gmail.com>
	<201010261130.14011.hans_meine@gmx.net>
Message-ID: <AANLkTikZyBRnZY9+CK51Yn6gcHT_NVqBC2EgSUz04asU@mail.gmail.com>

On Tue, Oct 26, 2010 at 2:30 AM, Hans Meine <hans_meine at gmx.net> wrote:
> I don't know git very well, but with Mercurial I would simply repeat the
> merge, check whether it worked this time, and then diff the two results, in
> order to see what got reverted.
>
> At least that should give you much less than the full diff.

I just wanted to report on this issue.  We pretty much did what you
suggest above, and carefully looked at what happened.  There's still a
minor mystery lingering as to why the commit

http://github.com/ipython/ipython/commit/239d2ed6f44c3f6511ee1e9069a5a1aee9c20f9c

appears in the DAG as a single-parent commit without any trace of it
being actually a merge (which it was).  How that came to be, we have
no clue.

But what happened to the content, we do know: that merge caused two
conflicts in interactiveshell.py, in the reset() method:

http://github.com/ipython/ipython/commit/239d2ed6f44c3f6511ee1e9069a5a1aee9c20f9c#L0L967

and Min inadvertently resolved them incorrectly, throwing away some
valid code.  We manually verified that Thomas' 're-apply' commit had
completely reconstructed the lost code, and that current trunk has
100% of what was meant to go there, so we have no problem left.

And we now know that there are no other possibly hidden landmines:
the only issue was with the two conflicts; the error was in only one
of them, and Thomas already fixed it, so we're completely OK.

I'm not really worried now, because we understand what happened: a
manual mistake in resolving a merge conflict is something that can
happen to anyone, and not a big deal.  What I was worried about was
the possibility of other hidden changes we might not have noticed, but
that's not the case.

In any case, a special thanks to Thomas for pinpointing the source of
the problem and even providing the actual fix!

Cheers,

f


From fperez.net at gmail.com  Wed Oct 27 21:51:47 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 27 Oct 2010 18:51:47 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTinLFynfVdsWDjtr2-GXm=DyUSMAjqBZn=-O=9_O@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTinLFynfVdsWDjtr2-GXm=DyUSMAjqBZn=-O=9_O@mail.gmail.com>
Message-ID: <AANLkTimQnAbd5fZ4KEzr5ekhPyyBk7cfptMH2kPcyfTB@mail.gmail.com>

On Wed, Oct 27, 2010 at 2:14 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> * To get these other representations of an object back to the
> frontend, we will have to move the payload API over to the PUB/SUB
> channel.

I might take a stab at this one soon; it's really bugging me,
especially with James' beautiful web client now in the picture.  It's
a great way to 'broadcast' an existing session, but plots are not
available (wrong socket).

Since this is a fairly central change, the earlier we make it, the better...

Cheers,

f


From ellisonbg at gmail.com  Wed Oct 27 22:14:23 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 27 Oct 2010 19:14:23 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTimQnAbd5fZ4KEzr5ekhPyyBk7cfptMH2kPcyfTB@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTinLFynfVdsWDjtr2-GXm=DyUSMAjqBZn=-O=9_O@mail.gmail.com>
	<AANLkTimQnAbd5fZ4KEzr5ekhPyyBk7cfptMH2kPcyfTB@mail.gmail.com>
Message-ID: <AANLkTikGYh1433Tg40ZX0w0ba=UC8Q6fsDGs3S0xu4+f@mail.gmail.com>

On Wed, Oct 27, 2010 at 6:51 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Wed, Oct 27, 2010 at 2:14 PM, Brian Granger <ellisonbg at gmail.com> wrote:
>> * To get these other representations of an object back to the
>> frontend, we will have to move the payload API over to the PUB/SUB
>> channel.
>
> I might take a stab at this one soon, it's really bugging me
> especially with James' beautiful web client now in the picture.  It's
> a great way to 'broadcast' an existing session, but plots are not
> available (wrong socket).
>
> Since this is a fairly central change, the earlier we make it, the better...

Yes, definitely, and I don't think it will be too bad.  But, do you
think we should still have any payload go out through the REQ/REP
channel?  Or all payload through PUBSUB?

Cheers,

Brian

> Cheers,
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Wed Oct 27 22:16:30 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 27 Oct 2010 19:16:30 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTikGYh1433Tg40ZX0w0ba=UC8Q6fsDGs3S0xu4+f@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTinLFynfVdsWDjtr2-GXm=DyUSMAjqBZn=-O=9_O@mail.gmail.com>
	<AANLkTimQnAbd5fZ4KEzr5ekhPyyBk7cfptMH2kPcyfTB@mail.gmail.com>
	<AANLkTikGYh1433Tg40ZX0w0ba=UC8Q6fsDGs3S0xu4+f@mail.gmail.com>
Message-ID: <AANLkTik3YXvL-5WjrK56PR2id3N2srgyz+EQ2_NwfX34@mail.gmail.com>

On Wed, Oct 27, 2010 at 7:14 PM, Brian Granger <ellisonbg at gmail.com> wrote:
>
> Yes, definitely, and I don't think it will be too bad.  But, do you
> think we should still have any payload go out through the REQ/REP
> channel?  Or all payload through PUBSUB?
>

I've been pondering that... I was wondering if we might want to have
the API be PUB by default, but with a 'private=True' flag that would
allow sending payloads only on the REP socket, for communication
strictly back to the calling client.

But I don't know if that would just complicate things too much in the
long run... Thoughts?

f


From fperez.net at gmail.com  Wed Oct 27 22:52:55 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 27 Oct 2010 19:52:55 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <ia9h0f$v1j$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
Message-ID: <AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>

Howdy,

On Wed, Oct 27, 2010 at 8:37 AM, Robert Kern <robert.kern at gmail.com> wrote:
> In the ticket discussion around my patch to restore the result_display hook,
> Brian suggested that the real issue is what the extensibility API for this
> functionality should be. I would like to propose the pretty extension as that
> API. I propose that it should be integrated into the core of IPython as the
> pretty-printer instead of pprint. pretty allows one to specify pretty-print
> functions for individual types and have them used in nested contexts.

I've been looking carefully at the CommandChainDispatcher code, since
you raised the design points about it in the discussion on github.  I
agree that it's a very reasonable design pattern and we're likely to
end up re-implementing something quite similar to it if we discard it,
so there's no need to.  It just needs a few tests to pin down the
expected functionality for the long run, and I'd like to change the
notion that undispatched functions in the chain can modify the
arguments along the way.  That just seems to me like a source of
hard-to-find bugs and I fail to see the need for it.  But otherwise I
have no problems with it.
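For anyone following along who hasn't read that code, the pattern under
discussion is roughly this (a simplified sketch, not the actual IPython
implementation):

class TryNext(Exception):
    # Raised by a handler to pass control to the next one in the chain.
    pass

class ChainDispatcher(object):
    def __init__(self):
        self.chain = []  # list of (priority, callable) pairs

    def add(self, func, priority=0):
        self.chain.append((priority, func))
        self.chain.sort(key=lambda pair: pair[0])

    def __call__(self, *args, **kwargs):
        # Note: args are passed through unchanged, which is the behavior
        # argued for above; no handler rewrites them for the next one.
        for _, func in self.chain:
            try:
                return func(*args, **kwargs)
            except TryNext:
                continue
        raise TryNext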

As for the rest of the discussion and the points Brian brings up, it
seems to me that we can proceed in steps:

1. Bring pretty (as-written) back to life as a starting point.  We
never meant to nuke it for ill, and we seem to all agree that it's
fundamentally a good approach to start off.

2. We can then consider extending the model, from only returning the
'data' field we use today:
http://ipython.scipy.org/doc/nightly/html/development/messaging.html#python-outputs

to multiple fields as Brian mentioned.  We already effectively return
a dict, it's just that now we only have one 'data' field.  Extending
this to 'str', 'html', etc is fairly natural.
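In other words, the displayhook result would become something like the
following (the dict keys and the _repr_html_ hook name are placeholders
for the sake of the example, not a settled API):

def extended_output(obj):
    # Sketch: return several representations keyed by format name.
    out = {'str': repr(obj)}
    to_html = getattr(obj, '_repr_html_', None)  # hypothetical hook
    if to_html is not None:
        out['html'] = to_html()
    return out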

We can at that point discuss how these other fields would be filled
out, whether by registering formatters along the lines of your old
ipwx code or some other mechanism...

It seems to me that the best way forward would be for you to set up a
pull request that restores pretty's functionality, ensuring that it
works both on the terminal and the Qt console.  From there we can
polish things and think about the more extended model.

How does that sound to everyone?

Cheers,

f


From fperez.net at gmail.com  Thu Oct 28 03:28:09 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 00:28:09 -0700
Subject: [IPython-dev] IPython HTTP frontend
In-Reply-To: <AANLkTimYppgQB0cgo41216qz7sTUmvmuVOuYfi=00_LN@mail.gmail.com>
References: <AANLkTimYppgQB0cgo41216qz7sTUmvmuVOuYfi=00_LN@mail.gmail.com>
Message-ID: <AANLkTi=BywG2Qy=CYxNuM6Zti+61zsVE9e=BaRT763zn@mail.gmail.com>

Hi folks,

On Thu, Oct 21, 2010 at 5:49 PM, James Gao <james at jamesgao.com> wrote:
> I've been coding up an HTTP frontend for the new ipython zmq kernel. This
> gives a convenient interface to access the kernel directly from one web
> client, or even multiple web clients across the network. Please see my pull
> request, http://github.com/ipython/ipython/pull/179 and give me comments.
> Thanks!

It would be great if another pair of eyes, especially from someone
with experience with web apps, could give James feedback.  I know very
little about web architectures, so in my review I couldn't really
meaningfully comment on those parts.

But in my testing so far the tool works great and there's no doubt
that we want something like this.  We'll obviously continue polishing
it once it's merged, but since it's a substantial amount of new code,
a good review from others with this kind of experience would be very
welcome.

And thanks a lot to Mark Voorhies and Carlos Cordoba who have already
pitched in with comments!

Cheers,

f


From benjaminrk at gmail.com  Thu Oct 28 03:57:34 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 28 Oct 2010 00:57:34 -0700
Subject: [IPython-dev] DAG Dependencies
Message-ID: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>

Hello,

In order to test/demonstrate arbitrary DAG dependency support in the new ZMQ
Python scheduler, I wrote an example using NetworkX, as Fernando suggested.

It generates a random DAG with a given number of nodes and edges, runs a set
of empty jobs (one for each node) using the DAG as a dependency graph, where
each edge represents a job depending on another.
It then validates the results, ensuring that no job ran before its
dependencies, and draws the graph, with nodes arranged in X according to
time, which means that all arrows must point to the right if the
time-dependencies were met.
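The generation and validation steps are easy to reproduce; here is a
stripped-down sketch of the same idea (this is not the actual dagdeps.py,
and the job submission itself is omitted):

import random
import networkx as nx

def random_dag(nodes=32, edges=128):
    # Only add edges i -> j with i < j, so the graph cannot have cycles.
    # Assumes edges <= nodes*(nodes-1)/2.
    G = nx.DiGraph()
    G.add_nodes_from(range(nodes))
    while G.number_of_edges() < edges:
        i, j = random.sample(range(nodes), 2)
        if i > j:
            i, j = j, i
        G.add_edge(i, j)
    assert nx.is_directed_acyclic_graph(G)
    return G

def validate(G, started, finished):
    # No job may start before every job it depends on has finished.
    for parent, child in G.edges():
        assert started[child] >= finished[parent], (parent, child)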

It happily handles pretty elaborate (hundreds of edges) graphs.

Too bad I didn't have this done for today's Py4Science talk.

Script can be found here:
http://github.com/minrk/ipython/blob/newparallel/examples/demo/dagdeps.py

-MinRK
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101028/b0b21795/attachment.html>

From walter at livinglogic.de  Thu Oct 28 05:45:34 2010
From: walter at livinglogic.de (Walter Dörwald)
Date: Thu, 28 Oct 2010 11:45:34 +0200
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <ia9h0f$v1j$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
Message-ID: <4CC9463E.7010605@livinglogic.de>

On 27.10.10 17:37, Robert Kern wrote:

> In the ticket discussion around my patch to restore the result_display hook, 
> Brian suggested that the real issue is what the extensibility API for this 
> functionality should be. I would like to propose the pretty extension as that 
> API. I propose that it should be integrated into the core of IPython as the 
> pretty-printer instead of pprint. pretty allows one to specify pretty-print 
> functions for individual types and have them used in nested contexts.
> 
> Incidentally, this would resolve this issue by allowing the user to specify a 
> pretty-printer for floats:
> 
>    http://github.com/ipython/ipython/issues/issue/190

Are there plans to support something like pretty:

   http://pypi.python.org/pypi/pretty

which seems to be well thought out and extensible?

Servus,
   Walter


From walter at livinglogic.de  Thu Oct 28 05:47:29 2010
From: walter at livinglogic.de (Walter Dörwald)
Date: Thu, 28 Oct 2010 11:47:29 +0200
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <4CC9463E.7010605@livinglogic.de>
References: <ia9h0f$v1j$1@dough.gmane.org> <4CC9463E.7010605@livinglogic.de>
Message-ID: <4CC946B1.3090309@livinglogic.de>

On 28.10.10 11:45, Walter Dörwald wrote:

> On 27.10.10 17:37, Robert Kern wrote:
> 
>> In the ticket discussion around my patch to restore the result_display hook, 
>> Brian suggested that the real issue is what the extensibility API for this 
>> functionality should be. I would like to propose the pretty extension as that 
>> API. I propose that it should be integrated into the core of IPython as the 
>> pretty-printer instead of pprint. pretty allows one to specify pretty-print 
>> functions for individual types and have them used in nested contexts.
>>
>> Incidentally, this would resolve this issue by allowing the user to specify a 
>> pretty-printer for floats:
>>
>>    http://github.com/ipython/ipython/issues/issue/190
> 
> Are there plans to support something like pretty:
> 
>    http://pypi.python.org/pypi/pretty
> 
> which seems to be well thought out and extensible?

BTW, here's a direct link to the file with documentation:

   http://dev.pocoo.org/hg/sandbox/file/tip/pretty/pretty.py

Servus,
   Walter


From satra at mit.edu  Thu Oct 28 10:50:10 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Thu, 28 Oct 2010 10:50:10 -0400
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
Message-ID: <AANLkTikUHG7+ixJTuxyV1XLHcHcs9SjZQy+wu6zDt8MU@mail.gmail.com>

hi min,

this is great. a few things that might be useful to consider:

* optionally offload the dag directly to the underlying scheduler if it has
dependency support (i.e., SGE, Torque/PBS, LSF)
* something we currently do in nipype is that we provide a configurable
option to continue processing if a given node fails. we simply remove the
dependencies of the node from further execution and generate a report at the
end saying which nodes crashed.
* callback support for node: node_started_cb, node_finished_cb
* support for nodes themselves being DAGs
* the concept of stash and pop for DAG nodes. i.e. a node which is a dag can
stash itself while its internal nodes execute and should not take up any
engine.

also i was recently with some folks who have been using DRMAA (
http://en.wikipedia.org/wiki/DRMAA) as the underlying common layer for
communicating with PBS, SGE, LSF, Condor. it might be worthwhile taking a
look (if you haven't already) to see what sort of mechanisms might help you.
a python binding is available at:
http://code.google.com/p/drmaa-python/wiki/Tutorial
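for reference, a minimal submission through that python binding looks
roughly like this (untested sketch; details vary by scheduler):

import drmaa

def run_one_job():
    s = drmaa.Session()
    s.initialize()
    jt = s.createJobTemplate()
    jt.remoteCommand = '/bin/sleep'   # placeholder command
    jt.args = ['10']
    job_id = s.runJob(jt)
    info = s.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    s.deleteJobTemplate(jt)
    s.exit()
    return info.hasExited, info.exitStatus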

cheers,

satra


On Thu, Oct 28, 2010 at 3:57 AM, MinRK <benjaminrk at gmail.com> wrote:

> Hello,
>
> In order to test/demonstrate arbitrary DAG dependency support in the new
> ZMQ Python scheduler, I wrote an example using NetworkX, as Fernando
> suggested.
>
> It generates a random DAG with a given number of nodes and edges, runs a
> set of empty jobs (one for each node) using the DAG as a dependency graph,
> where each edge represents a job depending on another.
> It then validates the results, ensuring that no job ran before its
> dependencies, and draws the graph, with nodes arranged in X according to
> time, which means that all arrows must point to the right if the
> time-dependencies were met.
>
> It happily handles pretty elaborate (hundreds of edges) graphs.
>
> Too bad I didn't have this done for today's Py4Science talk.
>
>  Script can be found here:
> http://github.com/minrk/ipython/blob/newparallel/examples/demo/dagdeps.py
>
> -MinRK
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101028/ce6cab6f/attachment.html>

From robert.kern at gmail.com  Thu Oct 28 11:03:35 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 10:03:35 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <4CC9463E.7010605@livinglogic.de>
References: <ia9h0f$v1j$1@dough.gmane.org> <4CC9463E.7010605@livinglogic.de>
Message-ID: <iac3c7$ohe$1@dough.gmane.org>

On 10/28/10 4:45 AM, Walter Dörwald wrote:
> On 27.10.10 17:37, Robert Kern wrote:
>
>> In the ticket discussion around my patch to restore the result_display hook,
>> Brian suggested that the real issue is what the extensibility API for this
>> functionality should be. I would like to propose the pretty extension as that
>> API. I propose that it should be integrated into the core of IPython as the
>> pretty-printer instead of pprint. pretty allows one to specify pretty-print
>> functions for individual types and have them used in nested contexts.
>>
>> Incidentally, this would resolve this issue by allowing the user to specify a
>> pretty-printer for floats:
>>
>>     http://github.com/ipython/ipython/issues/issue/190
>
> Are there plans to support something like pretty:
>
>     http://pypi.python.org/pypi/pretty
>
> which seems to be well thought out and extensible?

That is exactly what the discussion is about. We already have pretty as an 
extension, but the recent refactoring broke the extension hook it used, pending a 
discussion about the general way to make extensible APIs for IPython components. 
I am suggesting that pretty should be the extensible API for the pretty-printing 
that IPython does.
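To make that concrete, the per-type hook is roughly this (the exact
registration call name is an assumption on my part, so check the module
for the real spelling):

import pretty

class Interval(object):
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

def interval_printer(obj, p, cycle):
    # p is the pretty printer; cycle is True for self-referencing objects.
    p.text('Interval(...)' if cycle else 'Interval(%r, %r)' % (obj.lo, obj.hi))

# Registration spelling is an assumption; the idea is one printer
# callback per type, used even when the object is nested inside others.
pretty.for_type(Interval, interval_printer)

print(pretty.pretty([Interval(0, 1), Interval(2, 3)]))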

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From robert.kern at gmail.com  Thu Oct 28 11:07:17 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 10:07:17 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
Message-ID: <iac3j6$rau$1@dough.gmane.org>

On 10/27/10 9:52 PM, Fernando Perez wrote:
> Howdy,
>
> On Wed, Oct 27, 2010 at 8:37 AM, Robert Kern<robert.kern at gmail.com>  wrote:
>> In the ticket discussion around my patch to restore the result_display hook,
>> Brian suggested that the real issue is what the extensibility API for this
>> functionality should be. I would like to propose the pretty extension as that
>> API. I propose that it should be integrated into the core of IPython as the
>> pretty-printer instead of pprint. pretty allows one to specify pretty-print
>> functions for individual types and have them used in nested contexts.
>
> I've been looking carefully at the CommandChainDispatcher code, since
> you raised the design points about it in the discussion on github.  I
> agree that it's a very reasonable design pattern and we're likely to
> end up re-implementing something quite similar to it if we discard it,
> so there's no need to.  It just needs a few tests to pin down the
> expected functionality for the long run, and I'd like to change the
> notion that undispatched functions in the chain can modify the
> arguments along the way.  That just seems to me like a source of
> hard-to-find bugs and I fail to see the need for it.  But otherwise I
> have no problems with it.

Well, although I said that before, I do think that pretty's API makes 
CommandChainDispatcher irrelevant for this use case, and I think we should just 
use it directly here.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From ellisonbg at gmail.com  Thu Oct 28 13:40:19 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 10:40:19 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTikUHG7+ixJTuxyV1XLHcHcs9SjZQy+wu6zDt8MU@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTikUHG7+ixJTuxyV1XLHcHcs9SjZQy+wu6zDt8MU@mail.gmail.com>
Message-ID: <AANLkTi=cAZNoVuRmk1=8D6HQJ855m+1Wtsx1Yehv713D@mail.gmail.com>

Satra,

> * optionally offload the dag directly to the underlying scheduler if it has
> dependency support (i.e., SGE, Torque/PBS, LSF)

While we could support this, I actually think it would be a step
backwards.  The benefit of using IPython is the extremely high
performance.  I don't know the exact performance numbers for the DAG
scheduling, but IPython has a task submission latency of about 1 ms.
This means that you can parallelize DAGs where each task is a small
fraction of a second.  The submission overhead for the batch systems,
even with an empty queue, is going to be orders of magnitude longer.
The other thing that impacts latency is that with a batch system you
have to:

* Serialize data to disk
* Move the data to the compute nodes or have it shared on a network file system.
* Start Python on each compute node *for each task*.
* Import all your Python modules *for each task*
* Set up global variables and data structures *for each task*
* Load data from the file system and deserialize it.

All of this means lots and lots of latency for each task in the DAG.
For tasks that have lots of data or lots of Python modules to import,
that will simply kill the parallel speedup you will get (ala Amdahl's
law).
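(For reference, Amdahl's law bounds the speedup on N engines at
1/(s + (1 - s)/N) for serial fraction s:

def amdahl_speedup(serial_fraction, n_engines):
    # Upper bound on speedup when a fraction of the work cannot be
    # parallelized (here, per-task submission/startup overhead).
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_engines)

so even 10% of unavoidable per-task overhead caps you at 10x, no matter
how many engines you throw at it.)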


> * something we currently do in nipype is that we provide a configurable
> option to continue processing if a given node fails. we simply remove the
> dependencies of the node from further execution and generate a report at the
> end saying which nodes crashed.

I guess I don't see how it was a true dependency then.  Is this like
an optional dependency?  What are the usage cases for this?

> * callback support for node: node_started_cb, node_finished_cb

I am not sure we could support this, because once you create the DAG
and send it to the scheduler, the tasks are out of your local Python
session.  IOW, there is really no place to call such callbacks.

> * support for nodes themselves being DAGs

Yes, that shouldn't be too difficult.

> * the concept of stash and pop for DAG nodes. i.e. a node which is a dag can
> stash itself while it's internal nodes execute and should not take up any
> engine.

I think for the node is a DAG case, we would just flatten that at
submission time.  IOW, apply the transformation:

A DAG of nodes, each of which may be a DAG => a DAG of plain nodes.

Would this work?

> also i was recently with some folks who have been using DRMAA
> (http://en.wikipedia.org/wiki/DRMAA) as the underlying common layer for
> communicating with PBS, SGE, LSF, Condor. it might be worthwhile taking a
> look (if you already haven't) to see what sort of mechanisms might help you.
> a python binding is available at:
> http://code.google.com/p/drmaa-python/wiki/Tutorial

Yes, it does make sense to support DRMAA in ipcluster.  Once Min's
stuff has been merged into master, we will begin to get it working
with the batch systems again.

Cheers,

Brian

> cheers,
>
> satra
>
>
> On Thu, Oct 28, 2010 at 3:57 AM, MinRK <benjaminrk at gmail.com> wrote:
>>
>> Hello,
>> In order to test/demonstrate arbitrary DAG dependency support in the new
>> ZMQ Python scheduler, I wrote an example using NetworkX, as Fernando
>> suggested.
>> It generates a random DAG with a given number of nodes and edges, runs a
>> set of empty jobs (one for each node) using the DAG as a dependency graph,
>> where each edge represents a job depending on another.
>> It then validates the results, ensuring that no job ran before its
>> dependencies, and draws the graph, with nodes arranged in X according to
>> time, which means that all arrows must point to the right if the
>> time-dependencies were met.
>> It happily handles pretty elaborate (hundreds of edges) graphs.
>> Too bad I didn't have this done for today's Py4Science talk.
>> Script can be found here:
>> http://github.com/minrk/ipython/blob/newparallel/examples/demo/dagdeps.py
>> -MinRK
>>
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Thu Oct 28 13:46:19 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 10:46:19 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
Message-ID: <AANLkTimOdoRzDFuFDkApRT1A3FmxZRc2m9hy_AfPvf_Z@mail.gmail.com>

Min,

On Thu, Oct 28, 2010 at 12:57 AM, MinRK <benjaminrk at gmail.com> wrote:
> Hello,
> In order to test/demonstrate arbitrary DAG dependency support in the new ZMQ
> Python scheduler, I wrote an example using NetworkX, as Fernando suggested.
> It generates a random DAG with a given number of nodes and edges, runs a set
> of empty jobs (one for each node) using the DAG as a dependency graph, where
> each edge represents a job depending on another.
> It then validates the results, ensuring that no job ran before its
> dependencies, and draws the graph, with nodes arranged in X according to
> time, which means that all arrows must point to the right if the
> time-dependencies were met.

Very impressive demo and test.  Here is a very significant benchmark
we could do with this...

1. Make each node do a time.sleep(rand_time) where rand_time is a
random time interval over some range of times.
2. For a DAG of such tasks, you can calculate the fastest possible
parallel execution time by finding the shortest path through the DAG,
where, by shortest path, I mean the path where the sum of rand_time's
on that path is the smallest.  Call that time T_best.  By analyzing
the DAG, you can also tell the number of engines required to achieve
that T_best.  We can also calculate things like the parallel and
serial fraction of the DAG to find the max speedup.
3. Run that same DAG on 1, 2, 4, 8, ... engines to see how close we
can get to T_best and the max_speedup.

This would be a very rigorous way of testing the system over a variety
of different types of loads.

> It happily handles pretty elaborate (hundreds of edges) graphs.

That is quite impressive, but what is the limitation?  It should be
able to do 1000s or more of edges right?

> Too bad I didn't have this done for today's Py4Science talk.

Yes, definitely, that would have been "epic" as my teenage son would say.

> Script can be found here:
> http://github.com/minrk/ipython/blob/newparallel/examples/demo/dagdeps.py

Cheers,

Brian


> -MinRK
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Thu Oct 28 13:49:55 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 10:49:55 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
Message-ID: <AANLkTi=NrLSFa1QjW=1gNs=8BNUmy2YSVZrCuRnjW8eQ@mail.gmail.com>

All,

On Wed, Oct 27, 2010 at 7:52 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Howdy,
>
> On Wed, Oct 27, 2010 at 8:37 AM, Robert Kern <robert.kern at gmail.com> wrote:
>> In the ticket discussion around my patch to restore the result_display hook,
>> Brian suggested that the real issue is what the extensibility API for this
>> functionality should be. I would like to propose the pretty extension as that
>> API. I propose that it should be integrated into the core of IPython as the
>> pretty-printer instead of pprint. pretty allows one to specify pretty-print
>> functions for individual types and have them used in nested contexts.
>
> I've been looking carefully at the CommandChainDispatcher code, since
> you raised the design points about it in the discussion on github.  I
> agree that it's a very reasonable design pattern and we're likely to
> end up re-implementing something quite similar to it if we discard it,
> so there's no need to.  It just needs a few tests to pin down the
> expected functionality for the long run, and I'd like to change the
> notion that undispatched functions in the chain can modify the
> arguments along the way.  That just seems to me like a source of
> hard-to-find bugs and I fail to see the need for it.  But otherwise I
> have no problems with it.

I agree with this.

> As for the rest of the discussion and the points Brian brings up, it
> seems to me that we can proceed in steps:
>
> 1. Bring pretty (as-written) back to life as a starting point.  We
> never meant to nuke it for ill, and we seem to all agree that it's
> fundamentally a good approach to start off.

Yep

> 2. We can then consider extending the model, from only returning the
> 'data' field we use today:
> http://ipython.scipy.org/doc/nightly/html/development/messaging.html#python-outputs
>
> to multiple fields as Brian mentioned.  We already effectively return
> a dict, it's just that now we only have one 'data' field.  Extending
> this to 'str', 'html', etc is fairly natural.

Yep

> We can at that point discuss how these other fields would be filled
> out, whether by registering formatters along the lines of your old
> ipwx code or some other mechanism...
>
> It seems to me that the best way forward would be for you to set up a
> pull request that restores pretty's functionality, ensuring that it
> works both on the terminal and the Qt console.  From there we can
> polish things and think about the more extended model.

> How does that sound to everyone?

I think this sounds good.  But, the important point is that the pretty
model will be *the* model for our basic str repr in displayhook, but
merely an extension.

Cheers,

Brian


> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From satra at mit.edu  Thu Oct 28 14:30:02 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Thu, 28 Oct 2010 14:30:02 -0400
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTi=cAZNoVuRmk1=8D6HQJ855m+1Wtsx1Yehv713D@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTikUHG7+ixJTuxyV1XLHcHcs9SjZQy+wu6zDt8MU@mail.gmail.com>
	<AANLkTi=cAZNoVuRmk1=8D6HQJ855m+1Wtsx1Yehv713D@mail.gmail.com>
Message-ID: <AANLkTik8mNN8j3O9S+TpvR7723J_6E6OK2XMT9hS7nSw@mail.gmail.com>

hi brian,

thanks for the responses. i'll touch on a few of them.

> * optionally offload the dag directly to the underlying scheduler if it
> has
> > dependency support (i.e., SGE, Torque/PBS, LSF)
>
> While we could support this, I actually think it would be a step
> backwards.
>
...
>
All of this means lots and lots of latency for each task in the DAG.
> For tasks that have lots of data or lots of Python modules to import,
> that will simply kill the parallel speedup you will get (ala Amdahl's
> law).
>

here is the scenario where this becomes a useful thing (and hence to
optionally have it). let's say under sge usage you have started 10
clients/ipengines. now at the time of creating the clients one machine with
10 allocations was free and sge routed all the 10 clients to that machine.
now this will be the machine that will be used for all ipcluster processing.
whereas if the node distribution and ipengine startup were to happen
simultaneously at the level of the sge scheduler, processes would get routed
to the best available slot at the time of execution.

i agree that in several other scenarios, the current mechanism works great.
but this is a common scenario that we have run into in a heavily used
cluster (limited nodes + lots of users).


> > * something we currently do in nipype is that we provide a configurable
> > option to continue processing if a given node fails. we simply remove the
> > dependencies of the node from further execution and generate a report at
> the
> > end saying which nodes crashed.
>
> I guess I don't see how it was a true dependency then.  Is this like
> an optional dependency?  What are the usage cases for this?
>

perhaps i misunderstood what happens in the current implementation. if you
have a DAG such as (A,B) (B,E) (A,C) (C,D) and let's say C fails, does the
current dag controller continue executing B,E? or does it crash at the first
failure. we have the option to go either way in nipype. if something
crashes, stop or if something crashes, process all things that are not
dependent on the crash.


> > * callback support for node: node_started_cb, node_finished_cb
>
> I am not sure we could support this, because once you create the DAG
> and send it to the scheduler, the tasks are out of your local Python
> session.  IOW, there is really no place to call such callbacks.
>

i'll have to think about this one a little more. one use case for this is
reporting where things stand within the  execution graph (perhaps the
scheduler can report this, although now, i'm back to polling instead of
being called back.)


> > * support for nodes themselves being DAGs
>
...
>
I think for the node is a DAG case, we would just flatten that at
> submission time.  IOW, apply the transformation:
>
> A DAG of nodes, each of which may be a DAG => a DAG of plain nodes.
>
> Would this work?
>

this would work, i think we have a slightly more complicated case of this
implemented in nipype, but perhaps i need to think about it again. our case
is like a maptask, where the same thing operates on a list of inputs and
then we collate the outputs back. but as a general purpose mechanism, you
should not worry about this use case now.


> Yes, it does make sense to support DRMAA in ipcluster.  Once Min's
> stuff has been merged into master, we will begin to get it working
> with the batch systems again.
>

great.

cheers,

satra
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101028/84e11991/attachment.html>

From benjaminrk at gmail.com  Thu Oct 28 14:55:18 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 28 Oct 2010 11:55:18 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTimOdoRzDFuFDkApRT1A3FmxZRc2m9hy_AfPvf_Z@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTimOdoRzDFuFDkApRT1A3FmxZRc2m9hy_AfPvf_Z@mail.gmail.com>
Message-ID: <AANLkTinKzdKw7B4DA_aJiEvZUqEYv0mN=9tkBH+=Ug5t@mail.gmail.com>

On Thu, Oct 28, 2010 at 10:46, Brian Granger <ellisonbg at gmail.com> wrote:

> Min,
>
> On Thu, Oct 28, 2010 at 12:57 AM, MinRK <benjaminrk at gmail.com> wrote:
> > Hello,
> > In order to test/demonstrate arbitrary DAG dependency support in the new
> ZMQ
> > Python scheduler, I wrote an example using NetworkX, as Fernando
> suggested.
> > It generates a random DAG with a given number of nodes and edges, runs a
> set
> > of empty jobs (one for each node) using the DAG as a dependency graph,
> where
> > each edge represents a job depending on another.
> > It then validates the results, ensuring that no job ran before its
> > dependencies, and draws the graph, with nodes arranged in X according to
> > time, which means that all arrows must point to the right if the
> > time-dependencies were met.
>
> Very impressive demo and test.  Here is a very significant benchmark
> we could do with this...
>
> 1. Make each node do a time.sleep(rand_time) where rand_time is a
> random time interval over some range of times.
> 2. For a DAG of such tasks, you can calculate the fastest possible
> parallel execution time by finding the shortest path through the DAG,
> where, by shortest path, I mean the path where the sum of rand_time's
> on that path is the smallest.


It's actually slightly more complicated than that, because T_best should
actually be the *longest* path from a root to any terminus. Remember that a
node depends on all its parents, so the longest path is the earliest start
time for a given node.  Since it's a DAG, there can't be any loops that
would mess up the length of your route. It would be shortest if I set
mode='any' on the dependencies, and even then, T_best would be the *longest*
path of the collection of shortest paths from each root to each node.

It's been a long time since the DAG unit of my Automata and Languages
course, but I'll put this together.
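In NetworkX terms it's just a pass in topological order, with the node
weights being the random sleep times; something like this sketch:

import networkx as nx

def t_best(G, durations):
    # Critical-path time of a DAG whose nodes carry run times: each node
    # can start only after all of its parents have finished.
    finish = {}
    for node in nx.topological_sort(G):
        preds = list(G.predecessors(node))
        start = max(finish[p] for p in preds) if preds else 0.0
        finish[node] = start + durations[node]
    return max(finish.values())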


> Call that time T_best.  By analyzing
> the DAG, you can also tell the number of engines required to achieve
> that T_best.  We can also calculate things like the parallel and
> serial fraction of the DAG to find the max speedup.
> 3. Run that same DAG on 1, 2, 4, 8, ... engines to see how close we
> can get to T_best and the max_speedup.
>
> This would be a very rigorous way of testing the system over a variety
> of different types of loads.
>

> > It happily handles pretty elaborate (hundreds of edges) graphs.
>
> That is quite impressive, but what is the limitation?  It should be
> able to do 1000s or more of edges right?
>

The limitation is that my tasks take O(1 sec) to run, and I didn't want to
wait for several minutes for the test to complete :).  There should be no
problem internally with millions of tasks and dependencies.  The performance
limitation will be the set methods used to check whether a dependency has
been met and the memory footprint of the sets themselves. Quickly checking
with %timeit, the set checks with 100k dependencies and 1M msg_ids to check
against still only take ~5ms on my laptop (200ns for 10 dependencies and 10M
msg_ids to check).  My interactive session starts running into memory issues
with 100M ids, so that's something to consider.
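The check itself is basically set containment; a rough way to reproduce
that kind of number (the msg_id format and the exact set method here are
guesses on my part):

import timeit
from uuid import uuid4

completed = set(str(uuid4()) for _ in range(1000000))  # 1M finished msg_ids
deps = set(list(completed)[:100000])                   # 100k dependencies

# A dependency is satisfied once all of its msg_ids are in the completed set.
t = timeit.timeit(lambda: deps.issubset(completed), number=100)
print('%.3f ms per check' % (1000 * t / 100))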


>
> > Too bad I didn't have this done for today's Py4Science talk.
>
> Yes, definitely, that would have been "epic" as my teenage son would say.
>
> > Script can be found here:
> >
> http://github.com/minrk/ipython/blob/newparallel/examples/demo/dagdeps.py
>
> Cheers,
>
> Brian
>
>
> > -MinRK
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101028/170ce9f9/attachment.html>

From fperez.net at gmail.com  Thu Oct 28 15:00:40 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 12:00:40 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTinKzdKw7B4DA_aJiEvZUqEYv0mN=9tkBH+=Ug5t@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTimOdoRzDFuFDkApRT1A3FmxZRc2m9hy_AfPvf_Z@mail.gmail.com>
	<AANLkTinKzdKw7B4DA_aJiEvZUqEYv0mN=9tkBH+=Ug5t@mail.gmail.com>
Message-ID: <AANLkTi=6QT++0Ug_aHBnwvKu3AzSH7O=ExWPhV6di6yY@mail.gmail.com>

On Thu, Oct 28, 2010 at 11:55 AM, MinRK <benjaminrk at gmail.com> wrote:
> My interactive session starts running into memory issues with 100M ids, so
> that's something to consider.

Well, let's just hope that anyone with 100M ids in real life (not a
synthetic benchmark) has the common sense of working on something with
100G of RAM or so :)

Thanks a lot for looking into this, Min.  Fantastic example to have,
and one that is very relevant to many use cases.

Cheers,

f


From benjaminrk at gmail.com  Thu Oct 28 15:16:42 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 28 Oct 2010 12:16:42 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTik8mNN8j3O9S+TpvR7723J_6E6OK2XMT9hS7nSw@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTikUHG7+ixJTuxyV1XLHcHcs9SjZQy+wu6zDt8MU@mail.gmail.com>
	<AANLkTi=cAZNoVuRmk1=8D6HQJ855m+1Wtsx1Yehv713D@mail.gmail.com>
	<AANLkTik8mNN8j3O9S+TpvR7723J_6E6OK2XMT9hS7nSw@mail.gmail.com>
Message-ID: <AANLkTinTJ8b0q0OsyTWXxzyJ-8TM-NPt3+TLAKj8tiFo@mail.gmail.com>

On Thu, Oct 28, 2010 at 11:30, Satrajit Ghosh <satra at mit.edu> wrote:

> hi brian,
>
> thanks for the responses. i'll touch on a few of them.
>
> > * optionally offload the dag directly to the underlying scheduler if it
>> has
>> > dependency support (i.e., SGE, Torque/PBS, LSF)
>>
>> While we could support this, I actually think it would be a step
>> backwards.
>>
> ...
>>
> All of this means lots and lots of latency for each task in the DAG.
>> For tasks that have lots of data or lots of Python modules to import,
>> that will simply kill the parallel speedup you will get (ala Amdahl's
>> law).
>>
>
> here is the scenario where this becomes a useful thing (and hence to
> optionally have it). let's say under sge usage you have started 10
> clients/ipengines. now at the time of creating the clients one machine with
> 10 allocations was free and sge routed all the 10 clients to that machine.
> now this will be the machine that will be used for all ipcluster processing.
> whereas if the node distribution and ipengine startup were to happen
> simultaneously at the level of the sge scheduler, processes would get routed
> to the best available slot at the time of execution.
>

You should get this for free if our ipcluster script on SGE/etc. can
grow/shrink to fit available resources.  Remember, jobs don't get submitted
to engines until their dependencies are met.  It handles new engines coming
just fine (still some work to do to handle engines disappearing gracefully).
Engines should correspond to actual available resources, and we should
probably have an ipcluster script that supports changing resources on a
grid, but I'm not so sure about the scheduler itself.


> i agree that in several other scenarios, the current mechanism works great.
> but this is a common scenario that we have run into in a heavily used
> cluster (limited nodes + lots of users).
>
>
>> > * something we currently do in nipype is that we provide a configurable
>> > option to continue processing if a given node fails. we simply remove
>> the
>> > dependencies of the node from further execution and generate a report at
>> the
>> > end saying which nodes crashed.
>>
>> I guess I don't see how it was a true dependency then.  Is this like
>> an optional dependency?  What are the usage cases for this?
>>
>
> perhaps i misunderstood what happens in the current implementation. if you
> have a DAG such as (A,B) (B,E) (A,C) (C,D) and let's say C fails, does the
> current dag controller continue executing B,E? or does it crash at the first
> failure. we have the option to go either way in nipype. if something
> crashes, stop or if something crashes, process all things that are not
> dependent on the crash.
>

The Twisted Scheduler considers a task dependency unmet if the task raised
an error, and currently the ZMQ scheduler has no sense of error/success, so
it works the other way, but can easily be changed if I add the ok/error
status to the msg header (I should probably do this).  I think this is a
fairly reasonable use case to have a switch on failure/success, and it would
actually get you the callbacks you mention:

msg_id = client.apply(job1)
client.apply(cleanup, follow_failure=msg_id)
client.apply(job2, after_success=msg_id)
client.apply(job3, after_all=msg_id)

With this:
    iff job1 fails: cleanup will be run *in the same place* as job1
    iff job1 succeeds: job2 will be run somewhere
    when job1 finishes: job3 will be run somewhere, regardless of job1's
status

Satrajit: Does that sound adequate?
Brian: Does that sound too complex?


>
>
>> > * callback support for node: node_started_cb, node_finished_cb
>>
>> I am not sure we could support this, because once you create the DAG
>> and send it to the scheduler, the tasks are out of your local Python
>> session.  IOW, there is really no place to call such callbacks.
>>
>
> i'll have to think about this one a little more. one use case for this is
> reporting where things stand within the  execution graph (perhaps the
> scheduler can report this, although now, i'm back to polling instead of
> being called back.)
>
>
>> > * support for nodes themselves being DAGs
>>
> ...
>>
> I think for the node is a DAG case, we would just flatten that at
>> submission time.  IOW, apply the transformation:
>>
>> A DAG of nodes, each of which may be a DAG => a DAG of plain nodes.
>>
>> Would this work?
>>
>
> this would work, i think we have a slightly more complicated case of this
> implemented in nipype, but perhaps i need to think about it again. our case
> is like a maptask, where the same thing operates on a list of inputs and
> then we collate the outputs back. but as a general purpose mechanism, you
> should not worry about this use case now.
>

A DAG-node is really the same as having the root(s) of a sub-DAG have the
dependencies of the DAG-node, and anything that would depend on the DAG-node
actually depends on any|all of the termini of the sub-DAG, no?


>
>
>> Yes, it does make sense to support DRMAA in ipcluster.  Once Min's
>> stuff has been merged into master, we will begin to get it working
>> with the batch systems again.
>>
>
> great.
>
> cheers,
>
> satra
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101028/cbe26e67/attachment.html>

From ellisonbg at gmail.com  Thu Oct 28 15:33:00 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 12:33:00 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTinKzdKw7B4DA_aJiEvZUqEYv0mN=9tkBH+=Ug5t@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTimOdoRzDFuFDkApRT1A3FmxZRc2m9hy_AfPvf_Z@mail.gmail.com>
	<AANLkTinKzdKw7B4DA_aJiEvZUqEYv0mN=9tkBH+=Ug5t@mail.gmail.com>
Message-ID: <AANLkTinyLr6Xvi2-38Rxk6v6q2k_dXY9Od9XW4svPv-g@mail.gmail.com>

On Thu, Oct 28, 2010 at 11:55 AM, MinRK <benjaminrk at gmail.com> wrote:
>
>
> On Thu, Oct 28, 2010 at 10:46, Brian Granger <ellisonbg at gmail.com> wrote:
>>
>> Min,
>>
>> On Thu, Oct 28, 2010 at 12:57 AM, MinRK <benjaminrk at gmail.com> wrote:
>> > Hello,
>> > In order to test/demonstrate arbitrary DAG dependency support in the new
>> > ZMQ
>> > Python scheduler, I wrote an example using NetworkX, as Fernando
>> > suggested.
>> > It generates a random DAG with a given number of nodes and edges, runs a
>> > set
>> > of empty jobs (one for each node) using the DAG as a dependency graph,
>> > where
>> > each edge represents a job depending on another.
>> > It then validates the results, ensuring that no job ran before its
>> > dependencies, and draws the graph, with nodes arranged in X according to
>> > time, which means that all arrows must point to the right if the
>> > time-dependencies were met.
>>
>> Very impressive demo and test.  Here is a very significant benchmark
>> we could do with this...
>>
>> 1. Make each node do a time.sleep(rand_time) where rand_time is a
>> random time interval over some range of times.
>> 2. For a DAG of such tasks, you can calculate the fastest possible
>> parallel execution time by finding the shortest path through the DAG,
>> where, by shortest path, I mean the path where the sum of rand_time's
>> on that path is the smallest.
>
> It's actually slightly more complicated than that, because T_best should
> actually be the *longest* path from a root to any terminus. Remember that a
> node depends on all its parents, so the longest path is the earliest start
> time for a given node.  Since it's a DAG, there can't be any loops that
> would mess up the length of your route. It would be shortest if I set
> mode='any' on the dependencies, and even then, T_best would be the *longest*
> path of the collection of shortest paths from each root to each node.
> It's been a long time since the DAG unit of my Automata and Languages
> course, but I'll put this together.

Absolutely, my brain was thinking longest, but it came out shortest.

>>
>> Call that time T_best.  By analyzing
>> the DAG, you can also tell the number of engines required to achieve
>> that T_best.  We can also calculate things like the parallel and
>> serial fraction of the DAG to find the max speedup.
>> 3. Run that same DAG on 1, 2, 4, 8, ... engines to see how close we
>> can get to T_best and the max_speedup.
>>
>> This would be a very rigorous way of testing the system over a variety
>> of different types of loads.
>>
>> > It happily handles pretty elaborate (hundreds of edges) graphs.
>>
>> That is quite impressive, but what is the limitation?  It should be
>> able to do 1000s or more of edges right?
>
> The limitation is that my tasks take O(1 sec) to run, and I didn't want to
> wait for several minutes for the test to complete :).  There should be no
> problem internally with millions of tasks and dependencies.  The performance
> limitation will be the set methods used to check whether a dependency has
> been met and the memory footprint of the sets themselves. Quickly checking
> with %timeit, the set checks with 100k dependencies and 1M msg_ids to check
> against still only take ~5ms on my laptop (200ns for 10 dependencies and 10M
> msg_ids to check). ?My interactive session starts running into memory issues
> with 100M ids, so that's something to consider.

OK, this makes sense.

>>
>> > Too bad I didn't have this done for today's Py4Science talk.
>>
>> Yes, definitely, that would have been "epic" as my teenage son would say.
>>
>> > Script can be found here:
>> >
>> > http://github.com/minrk/ipython/blob/newparallel/examples/demo/dagdeps.py
>>
>> Cheers,
>>
>> Brian
>>
>>
>> > -MinRK
>> >
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Thu Oct 28 15:34:45 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 12:34:45 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTi=6QT++0Ug_aHBnwvKu3AzSH7O=ExWPhV6di6yY@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTimOdoRzDFuFDkApRT1A3FmxZRc2m9hy_AfPvf_Z@mail.gmail.com>
	<AANLkTinKzdKw7B4DA_aJiEvZUqEYv0mN=9tkBH+=Ug5t@mail.gmail.com>
	<AANLkTi=6QT++0Ug_aHBnwvKu3AzSH7O=ExWPhV6di6yY@mail.gmail.com>
Message-ID: <AANLkTikkbLdn7Qy-6KaenU4MvOQUbTR19PsR-woYM1r6@mail.gmail.com>

On Thu, Oct 28, 2010 at 12:00 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Thu, Oct 28, 2010 at 11:55 AM, MinRK <benjaminrk at gmail.com> wrote:
>> My interactive session starts running into memory issues with 100M ids, so
>> that's something to consider.
>
> Well, let's just hope that anyone with 100M ids in real life (not a
> synthetic benchmark) has the common sense of working on something with
> 100G of RAM or so :)

But, with the performance we are getting, some people might want to
simply run this 24/7 (instead of using a batch system).  Then, over a
long period of time, having that many tasks is not too crazy.  We
definitely need to think about this aspect of things.

> Thanks a lot for looking into this, Min.  Fantastic example to have,
> and one that is very relevant to many use cases.

Definitely,

Cheers,

Brian

> Cheers,
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From robert.kern at gmail.com  Thu Oct 28 15:42:48 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 14:42:48 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTi=NrLSFa1QjW=1gNs=8BNUmy2YSVZrCuRnjW8eQ@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<AANLkTi=NrLSFa1QjW=1gNs=8BNUmy2YSVZrCuRnjW8eQ@mail.gmail.com>
Message-ID: <iacjno$j9t$1@dough.gmane.org>

On 10/28/10 12:49 PM, Brian Granger wrote:
> All,
>
> On Wed, Oct 27, 2010 at 7:52 PM, Fernando Perez<fperez.net at gmail.com>  wrote:

>> It seems to me that the best way forward would be for you to set up a
>> pull request that restores pretty's functionality, ensuring that it
>> works both on the terminal and the Qt console.  From there we can
>> polish things and think about the more extended model.
>
>> How does that sound to everyone?
>
> I think this sounds good.  But, the important point is that the pretty
> model will be *the* model for our basic str repr in displayhook, but
> merely an extension.

Are you missing a "not" in there?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From benjaminrk at gmail.com  Thu Oct 28 15:44:04 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 28 Oct 2010 12:44:04 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTikkbLdn7Qy-6KaenU4MvOQUbTR19PsR-woYM1r6@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTimOdoRzDFuFDkApRT1A3FmxZRc2m9hy_AfPvf_Z@mail.gmail.com>
	<AANLkTinKzdKw7B4DA_aJiEvZUqEYv0mN=9tkBH+=Ug5t@mail.gmail.com>
	<AANLkTi=6QT++0Ug_aHBnwvKu3AzSH7O=ExWPhV6di6yY@mail.gmail.com>
	<AANLkTikkbLdn7Qy-6KaenU4MvOQUbTR19PsR-woYM1r6@mail.gmail.com>
Message-ID: <AANLkTi=_guwod=O_TUz6KZqRoyT4i8gFujeuR+L1sD63@mail.gmail.com>

On Thu, Oct 28, 2010 at 12:34, Brian Granger <ellisonbg at gmail.com> wrote:

> On Thu, Oct 28, 2010 at 12:00 PM, Fernando Perez <fperez.net at gmail.com>
> wrote:
> > On Thu, Oct 28, 2010 at 11:55 AM, MinRK <benjaminrk at gmail.com> wrote:
> >> My interactive session starts running into memory issues with 100M ids,
> so
> >> that's something to consider.
> >
> > Well, let's just hope that anyone with 100M ids in real life (not a
> > synthetic benchmark) has the common sense of working on something with
> > 100G of RAM or so :)
>
> But, with the performance we are getting, some people might want to
> simply run this 24/7 (instead of using a batch system).  Then, over a
> long period of time, having that many tasks is not too crazy.  We
> definitely need to think about this aspect of things.
>

Until I put in the DB backend, the memory footprint of the Controller after
100M tasks is going to be huge, so we can worry about that a little later.
 Possibly add a sense of msg_ids going stale?  Only keep the most recent 10M
for checking?

-MinRK


>
> > Thanks a lot for looking into this, Min.  Fantastic example to have,
> > and one that is very relevant to many use cases.
>
> Definitely,
>
> Cheers,
>
> Brian
>
> > Cheers,
> >
> > f
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101028/33b74df6/attachment.html>

From ellisonbg at gmail.com  Thu Oct 28 15:46:39 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 12:46:39 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTinTJ8b0q0OsyTWXxzyJ-8TM-NPt3+TLAKj8tiFo@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTikUHG7+ixJTuxyV1XLHcHcs9SjZQy+wu6zDt8MU@mail.gmail.com>
	<AANLkTi=cAZNoVuRmk1=8D6HQJ855m+1Wtsx1Yehv713D@mail.gmail.com>
	<AANLkTik8mNN8j3O9S+TpvR7723J_6E6OK2XMT9hS7nSw@mail.gmail.com>
	<AANLkTinTJ8b0q0OsyTWXxzyJ-8TM-NPt3+TLAKj8tiFo@mail.gmail.com>
Message-ID: <AANLkTincqQ2kBW+Z-zC2xVRWFvszeK4ORe-aWQMEbQ8-@mail.gmail.com>

>> here is the scenario where this becomes a useful thing (and hence to
>> optionally have it). let's say under sge usage you have started 10
>> clients/ipengines. now at the time of creating the clients one machine with
>> 10 allocations was free and sge routed all the 10 clients to that machine.
>> now this will be the machine that will be used for all ipcluster processing.
>> whereas if the node distribution and ipengine startup were to happen
>> simultaneously at the level of the sge scheduler, processes would get routed
>> to the best available slot at the time of execution.
>
> You should get this for free if our ipcluster script on SGE/etc. can
> grow/shrink to fit available resources.  Remember, jobs don't get submitted
> to engines until their dependencies are met.  It handles new engines coming
> just fine (still some work to do to handle engines disappearing gracefully).
> Engines should correspond to actual available resources, and we should
> probably have an ipcluster script that supports changing resources on a
> grid, but I'm not so sure about the scheduler itself.

And I do think that having ipcluster be able to grow/shrink the
cluster is an important feature.

>> perhaps i misunderstood what happens in the current implementation. if you
>> have a DAG such as (A,B) (B,E) (A,C) (C,D) and let's say C fails, does the
>> current dag controller continue executing B,E? or does it crash at the first
>> failure. we have the option to go either way in nipype. if something
>> crashes, stop or if something crashes, process all things that are not
>> dependent on the crash.
>
> The Twisted Scheduler considers a task dependency unmet if the task raised
> an error, and currently the ZMQ scheduler has no sense of error/success, so
> it works the other way, but can easily be changed if I add the ok/error
> status to the msg header (I should probably do this).  I think this is a
> fairly reasonable use case to have a switch on failure/success, and it would
> actually get you the callbacks you mention:
> msg_id = client.apply(job1)
> client.apply(cleanup, follow_failure=msg_id)
> client.apply(job2, after_success=msg_id)
> client.apply(job3, after_all=msg_id)
> With this:
>     iff job1 fails: cleanup will be run *in the same place* as job1
>     iff job1 succeeds: job2 will be run somewhere
>     when job1 finishes: job3 will be run somewhere, regardless of job1's
> status
> Satrajit: Does that sound adequate?
> Brian: Does that sound too complex?

I see a couple of different options if a task in a DAG fails:

* It is fatal.  The task failed and is absolutely required by
subsequent tasks, and nothing can be done.  In this case, I think the
tasks that depend on the failed one should be aborted.

* It is not fatal.  In this case, something can be done to remedy the
situation.  Maybe a task couldn't find the data it needs, but it
could find it elsewhere.  Or maybe the failed task has to close some
resource.  But, in all of these situations, I would say that all of
this logic should be built into the task itself (using regular Python
exception handling).  IOW, tasks should handle non-fatal errors
themselves.

But... this analysis doesn't make sense if there are task failure
modes that can't be handled by the task itself by catching
exceptions, etc.

My main concern with the above approach is that it adds complexity
to the public APIs and, more importantly, to the scheduler itself.






> A DAG-node is really the same as having the root(s) of a sub-DAG have the
> dependencies of the DAG-node, and anything that would depend on the DAG-node
> actually depends on any|all of the termini of the sub-DAG, no?

That is my thinking.

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Thu Oct 28 15:47:42 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 12:47:42 -0700
Subject: [IPython-dev] DAG Dependencies
In-Reply-To: <AANLkTi=_guwod=O_TUz6KZqRoyT4i8gFujeuR+L1sD63@mail.gmail.com>
References: <AANLkTikYKshVT7iVdK8gxaCzzgA3PWnZstUg2Ts3U_LZ@mail.gmail.com>
	<AANLkTimOdoRzDFuFDkApRT1A3FmxZRc2m9hy_AfPvf_Z@mail.gmail.com>
	<AANLkTinKzdKw7B4DA_aJiEvZUqEYv0mN=9tkBH+=Ug5t@mail.gmail.com>
	<AANLkTi=6QT++0Ug_aHBnwvKu3AzSH7O=ExWPhV6di6yY@mail.gmail.com>
	<AANLkTikkbLdn7Qy-6KaenU4MvOQUbTR19PsR-woYM1r6@mail.gmail.com>
	<AANLkTi=_guwod=O_TUz6KZqRoyT4i8gFujeuR+L1sD63@mail.gmail.com>
Message-ID: <AANLkTimZ93uyNtuFMQiVWmp1EcqqEO8Jq+U=kUg6s6+d@mail.gmail.com>

> Until I put in the DB backend, the memory footprint of the Controller after
> 100M tasks is going to be huge, so we can worry about that a little later.
>  Possibly add a sense of msg_ids going stale?  Only keep the most recent 10M
> for checking?

Yes, unless we want to store everything in a DB, we will need to
introduce something like this, either time or number based.

Brian


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Thu Oct 28 15:49:01 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 12:49:01 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <iacjno$j9t$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<AANLkTi=NrLSFa1QjW=1gNs=8BNUmy2YSVZrCuRnjW8eQ@mail.gmail.com>
	<iacjno$j9t$1@dough.gmane.org>
Message-ID: <AANLkTikoCvwj-1=pys32+ZHyQo+yjQ4PB8zyivZvNCSC@mail.gmail.com>

Robert,

>> I think this sounds good.  But, the important point is that the pretty
>> model will be *the* model for our basic str repr in displayhook, but
>> merely an extension.
>
> Are you missing a "not" in there?

Yep, try this...

But, the important point is that the pretty model will be *the* model
for our basic str repr in displayhook, NOT merely an extension.

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From robert.kern at gmail.com  Thu Oct 28 15:53:09 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 14:53:09 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTikoCvwj-1=pys32+ZHyQo+yjQ4PB8zyivZvNCSC@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>	<AANLkTi=NrLSFa1QjW=1gNs=8BNUmy2YSVZrCuRnjW8eQ@mail.gmail.com>	<iacjno$j9t$1@dough.gmane.org>
	<AANLkTikoCvwj-1=pys32+ZHyQo+yjQ4PB8zyivZvNCSC@mail.gmail.com>
Message-ID: <iackb5$m7h$1@dough.gmane.org>

On 10/28/10 2:49 PM, Brian Granger wrote:
> Robert,
>
>>> I think this sounds good.  But, the important point is that the pretty
>>> model will be *the* model for our basic str repr in displayhook, but
>>> merely an extension.
>>
>> Are you missing a "not" in there?
>
> Yep, try this...
>
> But, the important point is that the pretty model will be *the* model
> for our basic str repr in displayhook, NOT merely an extension.

Much better. :-)

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Thu Oct 28 16:46:43 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 13:46:43 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <iac3j6$rau$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<iac3j6$rau$1@dough.gmane.org>
Message-ID: <AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>

On Thu, Oct 28, 2010 at 8:07 AM, Robert Kern <robert.kern at gmail.com> wrote:
> Well, although I said that before, I do think that pretty's API makes
> CommandChainDispatcher irrelevant for this use case, and I think we should just
> use it directly here.

Ah, there was a point of confusion then: your extensions/pretty uses
CommandChainDispatcher (by raising TryNext), while external/pretty is
obviously independent.  I had the former in mind, it seems you had the
latter.  Is that correct?

Cheers,

f


From robert.kern at gmail.com  Thu Oct 28 17:16:18 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 16:16:18 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>	<iac3j6$rau$1@dough.gmane.org>
	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
Message-ID: <iacp72$e0d$1@dough.gmane.org>

On 10/28/10 3:46 PM, Fernando Perez wrote:
> On Thu, Oct 28, 2010 at 8:07 AM, Robert Kern<robert.kern at gmail.com>  wrote:
>> Well, although I said that before, I do think that pretty's API makes
>> CommandChainDispatcher irrelevant for this use case, and I think we should just
>> use it directly here.
>
> Ah, there was a point of confusion then: your extensions/pretty uses
> CommandChainDispatcher (by raising TryNext), while external/pretty is
> obviously independant.  I had the former in mind, it seems you had the
> latter.  Is that correct?

Yes. The original pull request just tried to restore the status quo 
anterefactor. It happened to use the CommandChainDispatcher because that was the 
extensible API at the time. Since the larger issue of what the extensible API 
*should* be was raised, I am now proposing that we should use pretty.py and 
expose it as the API for people to extend the string representation used in the 
displayhook.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Thu Oct 28 17:55:53 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 14:55:53 -0700
Subject: [IPython-dev] remote interactive shell using JSON RPC
In-Reply-To: <AANLkTi=dbsO8pZFURroHG_gK_wWMiB-_zdbwZMh+ndJ-@mail.gmail.com>
References: <AANLkTi=dbsO8pZFURroHG_gK_wWMiB-_zdbwZMh+ndJ-@mail.gmail.com>
Message-ID: <AANLkTinV0ECUj7xw7c__JNLWZvYi4AsppKZDzyb9LBeY@mail.gmail.com>

On Fri, Oct 22, 2010 at 3:08 PM, Ondrej Certik <ondrej at certik.cz> wrote:
>
>
> This is communicating with our online lab using JSON RPC, and the
> Python engine is running within femhub, so all packages that are
> installed in FEMhub are accesible (matplotlib, sympy, ...).
>

Very nice, and so is the online version.  I see you've had fun with
extjs :)  Great job!

Have you guys specced out your json protocol?  We've tried to keep our
protocol spec up always:

http://ipython.scipy.org/doc/nightly/html/development/messaging.html

hoping that it would help cross-project communication...

Cheers,

f


From ondrej at certik.cz  Thu Oct 28 18:06:36 2010
From: ondrej at certik.cz (Ondrej Certik)
Date: Thu, 28 Oct 2010 15:06:36 -0700
Subject: [IPython-dev] remote interactive shell using JSON RPC
In-Reply-To: <AANLkTinV0ECUj7xw7c__JNLWZvYi4AsppKZDzyb9LBeY@mail.gmail.com>
References: <AANLkTi=dbsO8pZFURroHG_gK_wWMiB-_zdbwZMh+ndJ-@mail.gmail.com>
	<AANLkTinV0ECUj7xw7c__JNLWZvYi4AsppKZDzyb9LBeY@mail.gmail.com>
Message-ID: <AANLkTin1w3833sta6CzdWmxaDccnoF-Q695SubFsO-5E@mail.gmail.com>

On Thu, Oct 28, 2010 at 2:55 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Fri, Oct 22, 2010 at 3:08 PM, Ondrej Certik <ondrej at certik.cz> wrote:
>>
>>
>> This is communicating with our online lab using JSON RPC, and the
>> Python engine is running within femhub, so all packages that are
>> installed in FEMhub are accesible (matplotlib, sympy, ...).
>>
>
> Very nice, and so is the online version. ?I see you've had fun with
> extjs :) ?Great job!
>
> Have you guys specced out your json protocol? ?We've tried to keep our
> protocol spec up always:
>
> http://ipython.scipy.org/doc/nightly/html/development/messaging.html
>
> hoping that it would help cross-project communication...

At the moment, since we were all too busy, our only specification is
the implementation -- but we use JSON RPC, so the specification of
that is online (http://json-rpc.org/) and we just need to specify the
methods + parameters, and those are currently in our source code.
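
For anyone not familiar with it, a JSON RPC request is just a small JSON
object; a hypothetical "execute" method (the name and params here are
invented for illustration, not our actual API) would go over the wire as
something like:

    import json

    request = {
        "jsonrpc": "2.0",           # protocol version
        "method": "execute",         # hypothetical method name
        "params": {"code": "2+2"},   # hypothetical parameters
        "id": 1,                     # echoed back in the response
    }
    print(json.dumps(request))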

Well, since I wrote my email above, I also wrote another
implementation of the API, that runs on the google app engine:

http://engine.sympy.org/

and there you can see that I am using the exact same script
(ifemhub); only this time it communicates with a different server...


Ondrej


From fperez.net at gmail.com  Thu Oct 28 18:24:17 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 15:24:17 -0700
Subject: [IPython-dev] remote interactive shell using JSON RPC
In-Reply-To: <AANLkTin1w3833sta6CzdWmxaDccnoF-Q695SubFsO-5E@mail.gmail.com>
References: <AANLkTi=dbsO8pZFURroHG_gK_wWMiB-_zdbwZMh+ndJ-@mail.gmail.com>
	<AANLkTinV0ECUj7xw7c__JNLWZvYi4AsppKZDzyb9LBeY@mail.gmail.com>
	<AANLkTin1w3833sta6CzdWmxaDccnoF-Q695SubFsO-5E@mail.gmail.com>
Message-ID: <AANLkTimGGsTUvu7upot=MViBYTEQLif9rUMCPgmhGZut@mail.gmail.com>

On Thu, Oct 28, 2010 at 3:06 PM, Ondrej Certik <ondrej at certik.cz> wrote:
> At the moment, since we were all too busy, our only specification is
> the implementation -- but we use JSON RPC, so the specification of
> that is online (http://json-rpc.org/) and we just need to specify the
> methods + parameters, and those are currently in our sourcecodes.
>
> Well, since I wrote my email above, I also wrote another
> implementation of the API, that runs on the google app engine:
>
> http://engine.sympy.org/
>
> and there you can see, that I am using the exact same script
> (ifemhub), only this time it communicates with a different server...

Cool!

One reason we tried to avoid the direct RPC route was to establish a
strong decoupling between frontends and implementation.  Our messaging
spec is 100% of the api between clients and kernels.  There's a direct
way to get raw kernel attributes for special cases, but by not making
it the  top-level mechanism, we enforce a fairly strong separation
between the two.  We think in the long run that's a good thing.

But there are certainly advantages to the direct rpc approach when
doing rapid development, that's for sure.

Cheers,

f


From fperez.net at gmail.com  Thu Oct 28 18:41:27 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 15:41:27 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <iacp72$e0d$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<iac3j6$rau$1@dough.gmane.org>
	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
	<iacp72$e0d$1@dough.gmane.org>
Message-ID: <AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>

On Thu, Oct 28, 2010 at 2:16 PM, Robert Kern <robert.kern at gmail.com> wrote:
> Yes. The original pull request just tried to restore the status quo
> anterefactor. It happened to use the CommandChainDispatcher because that was the
> extensible API at the time. Since the larger issue of what the extensible API
> *should* be was raised, I am now proposing that we should use pretty.py and
> expose it as the API for people to extend the string representation used in the
> displayhook.

OK, it all makes sense now.  Having looked at extensions/pretty in
more detail now, I'm happy following through with your suggestion,
modulo perhaps updating to the most current pretty if we have a stale
one (I  didn't check yet).

One last question: we don't want anything actually *printing*, instead
we want an interface that *returns* strings which we'll  stuff on the
pyout channel (an in-process version can simply take these values and
print them, of course).  Right now we only have a single
representation stored in the 'data' field.  How do you think we should
go about the multi-field option, within the  context of pretty?

Cheers,

f


From ellisonbg at gmail.com  Thu Oct 28 18:56:12 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 15:56:12 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<iac3j6$rau$1@dough.gmane.org>
	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
	<iacp72$e0d$1@dough.gmane.org>
	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
Message-ID: <AANLkTimNUELMc39BEhYZfMMBR86Syuw2KM_reztqnNVH@mail.gmail.com>

On Thu, Oct 28, 2010 at 3:41 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Thu, Oct 28, 2010 at 2:16 PM, Robert Kern <robert.kern at gmail.com> wrote:
>> Yes. The original pull request just tried to restore the status quo
>> anterefactor. It happened to use the CommandChainDispatcher because that was the
>> extensible API at the time. Since the larger issue of what the extensible API
>> *should* be was raised, I am now proposing that we should use pretty.py and
>> expose it as the API for people to extend the string representation used in the
>> displayhook.
>
> OK, it all makes sense now. ?Having looked at extensions/pretty in
> more detail now, I'm happy following through with your suggestion,
> modulo perhaps updating to the most current pretty if we have a stale
> one (I ?didn't check yet).
>
> One last question: we don't want anything actually *printing*, instead
> we want an interface that *returns* strings which we'll ?stuff on the
> pyout channel (an in-process version can simply take these values and
> print them, of course). ?Right now we only have a single
> representation stored in the 'data' field. ?How do you think we should
> go about the multi-field option, within the ?context of pretty?

I should mention that this issue of the display hook actually doing
the printing itself was part of why I disabled the pretty extension in
the first place.

Brian

> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From robert.kern at gmail.com  Thu Oct 28 19:00:40 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 18:00:40 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>	<iac3j6$rau$1@dough.gmane.org>	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>	<iacp72$e0d$1@dough.gmane.org>
	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
Message-ID: <iacvap$6dn$1@dough.gmane.org>

On 10/28/10 5:41 PM, Fernando Perez wrote:
> On Thu, Oct 28, 2010 at 2:16 PM, Robert Kern<robert.kern at gmail.com>  wrote:
>> Yes. The original pull request just tried to restore the status quo
>> anterefactor. It happened to use the CommandChainDispatcher because that was the
>> extensible API at the time. Since the larger issue of what the extensible API
>> *should* be was raised, I am now proposing that we should use pretty.py and
>> expose it as the API for people to extend the string representation used in the
>> displayhook.
>
> OK, it all makes sense now.  Having looked at extensions/pretty in
> more detail now, I'm happy following through with your suggestion,
> modulo perhaps updating to the most current pretty if we have a stale
> one (I  didn't check yet).

It's fresh. Also note that we have local modifications not in the upstream to 
support the registration of prettyprinters by the name of the type to avoid imports.

> One last question: we don't want anything actually *printing*, instead
> we want an interface that *returns* strings which we'll  stuff on the
> pyout channel (an in-process version can simply take these values and
> print them, of course).

pretty has a pformat()-equivalent. The original pull request had already made 
that change.

> Right now we only have a single
> representation stored in the 'data' field.  How do you think we should
> go about the multi-field option, within the  context of pretty?

pretty does not solve that problem.

I recommend exactly what I did in ipwx. The DisplayTrap is configured with a 
list of DisplayFormatters. Each DisplayFormatter gets a chance to decorate the 
return message with an additional entry, keyed by the type of the 
DisplayFormatter (probably something like 'string', 'html', 'image', etc. but 
also perhaps 'repr', 'pretty', 'mathtext'; needs some more thought). pretty 
would just be the implementation of the default string DisplayFormatter.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Thu Oct 28 19:11:18 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 16:11:18 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <iacvap$6dn$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<iac3j6$rau$1@dough.gmane.org>
	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
	<iacp72$e0d$1@dough.gmane.org>
	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
	<iacvap$6dn$1@dough.gmane.org>
Message-ID: <AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>

On Thu, Oct 28, 2010 at 4:00 PM, Robert Kern <robert.kern at gmail.com> wrote:
> It's fresh. Also note that we have local modifications not in the upstream to
> support the registration of prettyprinters by the name of the type to avoid imports.

OK.  Probably would be a good idea to make a little note in the file
indicating this.

>> One last question: we don't want anything actually *printing*, instead
>> we want an interface that *returns* strings which we'll ?stuff on the
>> pyout channel (an in-process version can simply take these values and
>> print them, of course).
>
> pretty has a pformat()-equivalent. The original pull request had already made
> that change.

OK.

>> Right now we only have a single
>> representation stored in the 'data' field. ?How do you think we should
>> go about the multi-field option, within the ?context of pretty?
>
> pretty does not solve that problem.
>
> I recommend exactly what I did in ipwx. The DisplayTrap is configured with a
> list of DisplayFormatters. Each DisplayFormatter gets a chance to decorate the
> return messaged with an additional entry, keyed by the type of the
> DisplayFormatter (probably something like 'string', 'html', 'image', etc. but
> also perhaps 'repr', 'pretty', 'mathtext'; needs some more thought). pretty
> would just be the implementation of the default string DisplayFormatter.

OK, so how do you want to proceed: do you want to reopen your pull
request (possibly rebasing it if necessary) as it was, or do you want
to go ahead and implement the above approach right away?

If the latter, I'm not sure I like the approach of passing a dict
through and letting each formatter modify it.  State that mutates
as-it-goes tends to produce harder to understand code, at least in my
experience.  Instead, we can call all the formatters in sequence and
get from each a pair of key, value.  We can then insert the keys into
a dict as they come on our side (so if the storage structure ever
changes from a dict to anything else, likely the formatters can stay
unmodified).  Does that sound reasonable to you?
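
Roughly (names made up, just to show the shape I mean):

    def format_output(obj, formatters):
        # Call each formatter independently and collect (key, value)
        # pairs into the dict on our side, so no formatter ever mutates
        # shared state.
        reprs = {}
        for fmt in formatters:
            try:
                key, value = fmt(obj)
            except Exception:
                # A failing formatter shouldn't take the others down.
                continue
            reprs[key] = value
        return reprs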

cheers,

f


From mark.voorhies at ucsf.edu  Thu Oct 28 19:54:06 2010
From: mark.voorhies at ucsf.edu (Mark Voorhies)
Date: Thu, 28 Oct 2010 16:54:06 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org> <iacvap$6dn$1@dough.gmane.org>
	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
Message-ID: <201010281654.07105.mark.voorhies@ucsf.edu>

On Thursday, October 28, 2010 04:11:18 pm Fernando Perez wrote:
> On Thu, Oct 28, 2010 at 4:00 PM, Robert Kern <robert.kern at gmail.com> wrote:
> > I recommend exactly what I did in ipwx. The DisplayTrap is configured with a
> > list of DisplayFormatters. Each DisplayFormatter gets a chance to decorate the
> > return messaged with an additional entry, keyed by the type of the
> > DisplayFormatter (probably something like 'string', 'html', 'image', etc. but
> > also perhaps 'repr', 'pretty', 'mathtext'; needs some more thought).

Would it make sense to use DisplayFormatter class names as keys?  That would
avoid name collisions.  Clients wanting more abstract/semantic formatting names
could use an auxiliary hierarchy of general->specific->formatter strings (e.g.,
provided by the pretty module) to find the best match to a target formatting
in the message (e.g., like resolving fonts in CSS).

> If the latter, I'm not sure I like the approach of passing a dict
> through and letting each formatter modify it.

Given that the outputs should be independent (i.e., shouldn't be modifying each
other), it seems like the main advantage of chaining the formatters would be to
avoid duplicating work (e.g., the html formatter could work off of the string result).
This could also be done by linking the formatters directly (e.g., passing a result-caching
string formatter to the html formatter's constructor) as long as we know the order that
they will be called in.
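
E.g., roughly (class names invented):

    class CachingTextFormatter(object):
        def __init__(self):
            self.last = None

        def __call__(self, obj):
            self.last = repr(obj)   # cache the plain-text result
            return self.last

    class HTMLFormatter(object):
        def __init__(self, text_formatter):
            # Reuse the cached text result instead of recomputing it,
            # assuming the text formatter was called first.
            self.text_formatter = text_formatter

        def __call__(self, obj):
            text = self.text_formatter.last or self.text_formatter(obj)
            return '<pre>%s</pre>' % text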

--Mark


From fperez.net at gmail.com  Thu Oct 28 20:01:11 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 17:01:11 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
Message-ID: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>

Hi all,

I know we've said several times that we should release 0.11 'soon', so
I forgive anyone for laughing at this email.  But don't ignore it,
this time we mean it :)

We now have a massive amount of  new code in the pipeline, and it's
really high time we start getting this out in the hands of users
beyond those willing to run from a git HEAD.  0.11 will be a 'tech
preview' release, especially because the situation with regards to the
parallel code is a bit messy right now.  But we shouldn't wait for too
much longer.

Brian and I tried to compile a list of the main things that need work
before we can make a release, and this is  our best estimate right
now:

- Unicode: this is totally broken and totally unacceptable.  But I'm
pretty sure with a few clean hours I can get it done.  It's not super
hard, just detail-oriented work that I need a quiet block of time to
tackle.

- Updating top-level entry points to use the new config system,
especially the Qt console.  Brian said he could tackle this one.

- Final checks on the state of the GUI/event loop support.  Things are
looking fairly good from usage, but we have concerns that there may
still be problems lurking just beneath the surface.

- Continue/finish the displayhook discussion: we're well on our way on
this, we just need to finish it up.  We mark it here because it's an
important part of the api and a good test case for how we want to
expose this kind of functionality.

- Move all payloads to pub channel.  This is also a big api item that
affects all clients, so we might as well get it right from the start.
I can try to work on this.

- James' web frontend: I'd really like to get that code in for early
battle-testing, even though it's clear it's early functionality
subject still to much evolution.

That's all I have in my list.  Anything else you can all think of?

As for non-blockers, we have:

- the parallel code is not in a good situation right now: we have a
few regressions re. the Twisted 0.10.1 code (e.g. the SGE code isn't
ported yet), the Twisted winhpc scheduler is only in 0.11, and while
the new zmq tools are looking great, they are NOT production-ready
quite yet.  In summary, we'll have to warn in bright, blinking pink
letters 1995-style, everyone who uses the parallel code in production
systems to stick with the 0.10 series for a little longer.  Annoying,
yes, but unfortunately such is life.

- our docs have unfortunately gone fairly stale in a few places.  We
have no docs for the new Qt console and a lot of information is partly
or completely stale.  This is an area where volunteers could make a
huge difference: any help here has a big impact in letting the project
better serve users, and doc pull requests are likely to be reviewed
very quickly.  Additionally, you don't need to know too much about the
code's intimate details to help with documenting the user-facing
functionality.

Anything else?

Plan: I'd love to get 0.11 out in the first week of December.  John
Hunter, Stefan van der Walt and I (all three contributors) will be at
Scipy India in Hyderabad Dec 11-18, and there will be sprint time
there.  Ideally, we'd have a stable release out for potential sprint
participants who want to hack on IPython to work from.  It would also
be a good way to wrap up a great year of development and de-stagnation
of IPython, leaving us with a nice fresh ball of warm code to play
with over the winter holidays.

Cheers,

f


From robert.kern at gmail.com  Thu Oct 28 20:13:25 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 19:13:25 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>	<iac3j6$rau$1@dough.gmane.org>	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>	<iacp72$e0d$1@dough.gmane.org>	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>	<iacvap$6dn$1@dough.gmane.org>
	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
Message-ID: <iad3j6$l3u$1@dough.gmane.org>

On 10/28/10 6:11 PM, Fernando Perez wrote:
> On Thu, Oct 28, 2010 at 4:00 PM, Robert Kern<robert.kern at gmail.com>  wrote:
>> It's fresh. Also note that we have local modifications not in the upstream to
>> support the registration of prettyprinters by the name of the type to avoid imports.
>
> OK.  Probably would be a good idea to make a little note in the file
> indicating this.
>
>>> One last question: we don't want anything actually *printing*, instead
>>> we want an interface that *returns* strings which we'll  stuff on the
>>> pyout channel (an in-process version can simply take these values and
>>> print them, of course).
>>
>> pretty has a pformat()-equivalent. The original pull request had already made
>> that change.
>
> OK.
>
>>> Right now we only have a single
>>> representation stored in the 'data' field.  How do you think we should
>>> go about the multi-field option, within the  context of pretty?
>>
>> pretty does not solve that problem.
>>
>> I recommend exactly what I did in ipwx. The DisplayTrap is configured with a
>> list of DisplayFormatters. Each DisplayFormatter gets a chance to decorate the
>> return messaged with an additional entry, keyed by the type of the
>> DisplayFormatter (probably something like 'string', 'html', 'image', etc. but
>> also perhaps 'repr', 'pretty', 'mathtext'; needs some more thought). pretty
>> would just be the implementation of the default string DisplayFormatter.
>
> OK, so how do you want to proceed: do you want to reopen your pull
> request (possibly rebasing it if necessary) as it was, or do you want
> to go ahead and implement the above approach right away?

I'd rather implement this approach right away. We just need to decide what the 
keys should be and what they should mean. I originally used the ID of the 
DisplayFormatter. This would allow both a "normal" representation and an 
enhanced one both of the same type (plain text, HTML, PNG image) to coexist. 
Then the frontend could pick which one to display and let the user flip back and 
forth as desired even for old Out[] entries without reexecuting code. This may 
be a case of YAGNI.

However, that means that the frontend needs to know about the IDs of the 
DisplayFormatters. It needs to know that 'my-tweaked-html' formatter is HTML. I 
might propose this as the fully-general solution:

Each DisplayFormatter has a unique ID and a non-unique type. The type string 
determines how a frontend would actually interpret the data for display. If a 
frontend can display a particular type, it can display it for any 
DisplayFormatter of that type. There will be a few predefined type strings with 
meanings, but implementors can define new ones as long as they pick new names.

   text -- monospaced plain text (unicode)
   html -- snippet of HTML (anything one can slap inside of a <div>)
   image -- bytes of an image file (anything loadable by PIL, so no need to have 
different PNG and JPEG type strings)
   mathtext -- just the TeX-lite text (the frontend can render it itself)

When given an object for display, the DisplayHook will give it to each of the 
DisplayFormatters in turn. If the formatter can handle the object, it will 
return some JSONable object[1]. The DisplayHook will append a 3-tuple

   (formatter.id, formatter.type, data)

to a list. The DisplayHook will give this to whatever is forming the response 
message.

Most likely, there won't be too many of these formatters for the same type 
active at any time and there should always be the (id='default', type='text') 
formatter. A simple frontend can just look for that. A more complicated GUI 
frontend may prefer a type='html' response and only fall back to a type='text' 
format. It may have an ordered list of formatter IDs that it will try to display 
before falling back in order. It might allow the user to flip through the 
different representations for each cell. For example, if I have a 
type='mathtext' formatter showing sympy expressions, I might wish to go back to 
a simple repr so I know what to type to reproduce the expression.

I'm certain this is overengineered, but I think we have use cases for all of the 
features in it. I think most of the complexity is optional. The basic in-process 
terminal frontend doesn't even bother with most of this and just uses the 
default formatter to get the text and prints it.

[1] Why a general JSONable object instead of just bytes? It would be nice to be 
able to define a formatter that could give some structured information about the 
object. For example, we could define an ArrayMetadataFormatter that gives a dict 
with shape, dtype, etc. A GUI frontend could display this information nicely 
formatted along with one of the other representations.
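
A rough sketch of the shape I have in mind (class and attribute names are
illustrative only, not a final API):

    class DisplayFormatter(object):
        id = 'default'    # unique ID
        type = 'text'     # non-unique type a frontend knows how to render

        def format(self, obj):
            # Return a JSONable object, or None if we can't handle `obj`.
            return repr(obj)

    class DisplayHook(object):
        def __init__(self, formatters):
            self.formatters = formatters

        def __call__(self, obj):
            entries = []
            for fmt in self.formatters:
                data = fmt.format(obj)
                if data is not None:
                    entries.append((fmt.id, fmt.type, data))
            # Handed off to whatever is forming the response message.
            return entries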

> If the latter, I'm not sure I like the approach of passing a dict
> through and letting each formatter modify it.  Sate that mutates
> as-it-goes tends to produce harder to understand code, at least in my
> experience.  Instead, we can call all the formatters in sequence and
> get from each a pair of key, value.  We can then insert the keys into
> a dict as they come on our side (so if the storage structure ever
> changes from a dict to anything else, likely the formatters can stay
> unmodified).  Does that sound reasonable to you?

That's actually how I would have implemented it [my original ipwx code 
notwithstanding ;-)].

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Thu Oct 28 21:10:21 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 18:10:21 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <201010281654.07105.mark.voorhies@ucsf.edu>
References: <ia9h0f$v1j$1@dough.gmane.org> <iacvap$6dn$1@dough.gmane.org>
	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
	<201010281654.07105.mark.voorhies@ucsf.edu>
Message-ID: <AANLkTi=TzKxXHrWt8htNHoBFUOXUJnt-ybq7C+GB_0hW@mail.gmail.com>

On Thu, Oct 28, 2010 at 4:54 PM, Mark Voorhies <mark.voorhies at ucsf.edu> wrote:
> On Thursday, October 28, 2010 04:11:18 pm Fernando Perez wrote:
>> On Thu, Oct 28, 2010 at 4:00 PM, Robert Kern <robert.kern at gmail.com> wrote:
>> > I recommend exactly what I did in ipwx. The DisplayTrap is configured with a
>> > list of DisplayFormatters. Each DisplayFormatter gets a chance to decorate the
>> > return messaged with an additional entry, keyed by the type of the
>> > DisplayFormatter (probably something like 'string', 'html', 'image', etc. but
>> > also perhaps 'repr', 'pretty', 'mathtext'; needs some more thought).
>
> Would it make sense to use DisplayFormatter class names as keys? ?That would
> avoid name collisions. ?Clients wanting more abstract/semantic formatting names
> could use an auxiliary hierarchy of general->specific->formatter strings (e.g.,
> provided by the pretty module) to find the best match to a target formatting
> in the message (e.g., like resolving fonts in CSS).

I think Robert picked up this theme as well, so I'll reply in his
message to this idea...

>> If the latter, I'm not sure I like the approach of passing a dict
>> through and letting each formatter modify it.
>
> Given that the outputs should be independent (i.e., shouldn't be modifying each
> other), it seems like the main advantage of chaining the formatters would be to
> avoid duplicating work (e.g., the html formatter could work off of the string result).
> This could also be done by linking the formatters directly (e.g., passing a result-caching
> string formatter to the html formatter's constructor) as long as we know the order that
> they will be called in.

What I don't like about this is that it introduces a fair amount of
coupling between formatters: order dependencies and mutation of state.
 In my mind, these guys are just an example of the observer pattern,
and I think in that context the most robust implementations are those
that have minimal/zero coupling between observers.  Each observer gets
notified of the relevant event (output ready for display) and both
order of execution and failures of some shouldn't impact the others.

I'm not sure I see the real-world benefit of the tighter coupling and
I do see a cost...

Cheers,

f


From fperez.net at gmail.com  Thu Oct 28 21:17:55 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 18:17:55 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <iad3j6$l3u$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<iac3j6$rau$1@dough.gmane.org>
	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
	<iacp72$e0d$1@dough.gmane.org>
	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
	<iacvap$6dn$1@dough.gmane.org>
	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
	<iad3j6$l3u$1@dough.gmane.org>
Message-ID: <AANLkTi=uBq7YkknhUAW3xWxAT7sF0dJPQoWQWYqLOc7z@mail.gmail.com>

On Thu, Oct 28, 2010 at 5:13 PM, Robert Kern <robert.kern at gmail.com> wrote:

>> OK, so how do you want to proceed: do you want to reopen your pull
>> request (possibly rebasing it if necessary) as it was, or do you want
>> to go ahead and implement the above approach right away?
>
> I'd rather implement this approach right away. We just need to decide what the
> keys should be and what they should mean. I originally used the ID of the
> DisplayFormatter. This would allow both a "normal" representation and an
> enhanced one both of the same type (plain text, HTML, PNG image) to coexist.
> Then the frontend could pick which one to display and let the user flip back and
> forth as desired even for old Out[] entries without reexecuting code. This may
> be a case of YAGNI.

Actually I don't think it's YAGNI, and I have a specific use case in
mind, with a practical example.  Lyx shows displayed equations, but if
you copy one, it's nice enough to actually feed the clipboard with the
raw Latex for the equation.  This is very convenient, and I often use
it to edit complex formulas in lyx that I then paste into reST docs.

We could similarly have pretty display of e.g. sympy output, but where
one could copy the raw latex for the output cell.  The ui could
expose this via a context menu that offers 'copy image, copy latex,
copy string' for example.

So this does strike me like genuinely useful and valuable functionality.

> However, that means that the frontend needs to know about the IDs of the
> DisplayFormatters. It needs to know that 'my-tweaked-html' formatter is HTML. I
> might propose this as the fully-general solution:
>
> Each DisplayFormatter has a unique ID and a non-unique type. The type string
> determines how a frontend would actually interpret the data for display. If a
> frontend can display a particular type, it can display it for any
> DisplayFormatter of that type. There will be a few predefined type strings with
> meanings, but implementors can define new ones as long as they pick new names.
>
> ? text -- monospaced plain text (unicode)
> ? html -- snippet of HTML (anything one can slap inside of a <div>)
> ? image -- bytes of an image file (anything loadable by PIL, so no need to have
> different PNG and JPEG type strings)
> ? mathtext -- just the TeX-lite text (the frontend can render it itself)
>
> When given an object for display, the DisplayHook will give it to each of the
> DisplayFormatters in turn. If the formatter can handle the object, it will
> return some JSONable object[1]. The DisplayHook will append a 3-tuple
>
> ? (formatter.id, formatter.type, data)
>
> to a list. The DisplayHook will give this to whatever is forming the response
> message.
>
> Most likely, there won't be too many of these formatters for the same type
> active at any time and there should always be the (id='default', type='text')
> formatter. A simple frontend can just look for that. A more complicated GUI
> frontend may prefer a type='html' response and only fall back to a type='text'
> format. It may have an ordered list of formatter IDs that it will try to display
> before falling back in order. It might allow the user to flip through the
> different representations for each cell. For example, if I have a
> type='mathtext' formatter showing sympy expressions, I might wish to go back to
> a simple repr so I know what to type to reproduce the expression.
>
> I'm certain this is overengineered, but I think we have use cases for all of the
> features in it. I think most of the complexity is optional. The basic in-process
> terminal frontend doesn't even bother with most of this and just uses the
> default formatter to get the text and prints it.
>
> [1] Why a general JSONable object instead of just bytes? It would be nice to be
> able to define a formatter that could give some structured information about the
> object. For example, we could define an ArrayMetadataFormatter that gives a dict
> with shape, dtype, etc. A GUI frontend could display this information nicely
> formatted along with one of the other representations.

Most of this I agree with.  Just one question: why not use real mime
types for the type info?  I keep thinking that for our payloads and
perhaps also for this, we might as well encode type metadata as
mimetypes: they're reasonably standardized, python has a mime library,
and browsers are wired to do something sensible with mime-tagged data
already.  Am I missing something?

>> If the latter, I'm not sure I like the approach of passing a dict
>> through and letting each formatter modify it. ?Sate that mutates
>> as-it-goes tends to produce harder to understand code, at least in my
>> experience. ?Instead, we can call all the formatters in sequence and
>> get from each a pair of key, value. ?We can then insert the keys into
>> a dict as they come on our side (so if the storage structure ever
>> changes from a dict to anything else, likely the formatters can stay
>> unmodified). ?Does that sound reasonable to you?
>
> That's actually how I would have implemented it [my original ipwx code
> notwithstanding ;-)].

OK.  It seems we're converging design wise to the point where code can
continue the conversation :)

Thanks!

Cheers,

f


From ellisonbg at gmail.com  Thu Oct 28 21:31:15 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 28 Oct 2010 18:31:15 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTi=uBq7YkknhUAW3xWxAT7sF0dJPQoWQWYqLOc7z@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<iac3j6$rau$1@dough.gmane.org>
	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
	<iacp72$e0d$1@dough.gmane.org>
	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
	<iacvap$6dn$1@dough.gmane.org>
	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
	<iad3j6$l3u$1@dough.gmane.org>
	<AANLkTi=uBq7YkknhUAW3xWxAT7sF0dJPQoWQWYqLOc7z@mail.gmail.com>
Message-ID: <AANLkTin23qKKEZHdcZYLZsO0Nef9yjZPJgNALSrOktK3@mail.gmail.com>

I am in the middle of lab (they are taking a quiz), so I don't have
time to dig into the full thread ATM, but I do have a few comments:

* The main thing that I am concerned about is how we answer the
question: "how do I (a developer of foo) make my class Foo print
nice HTML/SVG?"  IOW, what does the public API for all of this look
like?

* In the current IPython, displayhook is only triggered 1x per block.
Thus, you can't use displayhook to get the str/html/svg/png
representation of an object inside a block or loop.  This is a
serious limitation, but one that Fernando and I feel is a good thing
in the end.  But this also means that we will need top-level
functions that users can put in their code to trigger all of this
logic independent of displayhook.

Like this:

for t in times:
    a = compute_thing(t)
    print_html(a)  # This should use the APIs that we are designing
and the payload system to deliver the html to the frontend.

We should also have functions like print_png, print_svg, print_latex
that we inject into builtins.

What this means is that we need to design an implementation that is
independent from displayhook and that is cleanly integrated with the
payload system.
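
Something along these lines, maybe (the function, the shell attributes and
the payload fields are only illustrative here, not a settled API):

    def print_html(obj, shell):
        # Build an HTML representation via the formatter machinery and
        # ship it to the frontend as a payload, independent of displayhook.
        html = shell.formatters['html'].format(obj)
        shell.payload_manager.write_payload({
            'source': 'print_html',
            'format': 'html',
            'data': html,
        })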

Cheers,

Brian

On Thu, Oct 28, 2010 at 6:17 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Thu, Oct 28, 2010 at 5:13 PM, Robert Kern <robert.kern at gmail.com> wrote:
>
>>> OK, so how do you want to proceed: do you want to reopen your pull
>>> request (possibly rebasing it if necessary) as it was, or do you want
>>> to go ahead and implement the above approach right away?
>>
>> I'd rather implement this approach right away. We just need to decide what the
>> keys should be and what they should mean. I originally used the ID of the
>> DisplayFormatter. This would allow both a "normal" representation and an
>> enhanced one both of the same type (plain text, HTML, PNG image) to coexist.
>> Then the frontend could pick which one to display and let the user flip back and
>> forth as desired even for old Out[] entries without reexecuting code. This may
>> be a case of YAGNI.
>
> Actually I don't think it's YAGNI, and I have a specific use case in
> mind, with a practical example. ?Lyx shows displayed equations, but if
> you copy one, it's nice enough to actually feed the clipboard with the
> raw Latex for the equation. ?This is very convenient, and I often use
> it to edit complex formulas in lyx that I then paste into reST docs.
>
> We could similarly have pretty display of e.g. sympy output, but where
> one could copy the raw latex fort the output cell. ?The ui could
> expose this via a context menu that offers 'copy image, copy latex,
> copy string' for example.
>
> So this does strike me like genuinely useful and valuable functionality.
>
>> However, that means that the frontend needs to know about the IDs of the
>> DisplayFormatters. It needs to know that 'my-tweaked-html' formatter is HTML. I
>> might propose this as the fully-general solution:
>>
>> Each DisplayFormatter has a unique ID and a non-unique type. The type string
>> determines how a frontend would actually interpret the data for display. If a
>> frontend can display a particular type, it can display it for any
>> DisplayFormatter of that type. There will be a few predefined type strings with
>> meanings, but implementors can define new ones as long as they pick new names.
>>
>> ? text -- monospaced plain text (unicode)
>> ? html -- snippet of HTML (anything one can slap inside of a <div>)
>> ? image -- bytes of an image file (anything loadable by PIL, so no need to have
>> different PNG and JPEG type strings)
>> ? mathtext -- just the TeX-lite text (the frontend can render it itself)
>>
>> When given an object for display, the DisplayHook will give it to each of the
>> DisplayFormatters in turn. If the formatter can handle the object, it will
>> return some JSONable object[1]. The DisplayHook will append a 3-tuple
>>
>> ? (formatter.id, formatter.type, data)
>>
>> to a list. The DisplayHook will give this to whatever is forming the response
>> message.
>>
>> Most likely, there won't be too many of these formatters for the same type
>> active at any time and there should always be the (id='default', type='text')
>> formatter. A simple frontend can just look for that. A more complicated GUI
>> frontend may prefer a type='html' response and only fall back to a type='text'
>> format. It may have an ordered list of formatter IDs that it will try to display
>> before falling back in order. It might allow the user to flip through the
>> different representations for each cell. For example, if I have a
>> type='mathtext' formatter showing sympy expressions, I might wish to go back to
>> a simple repr so I know what to type to reproduce the expression.
>>
>> I'm certain this is overengineered, but I think we have use cases for all of the
>> features in it. I think most of the complexity is optional. The basic in-process
>> terminal frontend doesn't even bother with most of this and just uses the
>> default formatter to get the text and prints it.
>>
>> [1] Why a general JSONable object instead of just bytes? It would be nice to be
>> able to define a formatter that could give some structured information about the
>> object. For example, we could define an ArrayMetadataFormatter that gives a dict
>> with shape, dtype, etc. A GUI frontend could display this information nicely
>> formatted along with one of the other representations.
>
> Most of this I agree with. ?Just one question: why not use real mime
> types for the type info? ?I keep thinking that for our payloads and
> perhaps also for this, we might as well encode type metadata as
> mimetypes: they're reasonably standardized, python has a mime library,
> and browsers are wired to do something sensible with mime-tagged data
> already. ?Am I missing something?
>
>>> If the latter, I'm not sure I like the approach of passing a dict
>>> through and letting each formatter modify it. ?Sate that mutates
>>> as-it-goes tends to produce harder to understand code, at least in my
>>> experience. ?Instead, we can call all the formatters in sequence and
>>> get from each a pair of key, value. ?We can then insert the keys into
>>> a dict as they come on our side (so if the storage structure ever
>>> changes from a dict to anything else, likely the formatters can stay
>>> unmodified). ?Does that sound reasonable to you?
>>
>> That's actually how I would have implemented it [my original ipwx code
>> notwithstanding ;-)].
>
> OK. ?It seems we're converging design wise to the point where code can
> continue the conversation :)
>
> Thanks!
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From robert.kern at gmail.com  Thu Oct 28 22:38:55 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 21:38:55 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTi=uBq7YkknhUAW3xWxAT7sF0dJPQoWQWYqLOc7z@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>	<iac3j6$rau$1@dough.gmane.org>	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>	<iacp72$e0d$1@dough.gmane.org>	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>	<iacvap$6dn$1@dough.gmane.org>	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>	<iad3j6$l3u$1@dough.gmane.org>
	<AANLkTi=uBq7YkknhUAW3xWxAT7sF0dJPQoWQWYqLOc7z@mail.gmail.com>
Message-ID: <iadc40$fjd$1@dough.gmane.org>

On 2010-10-28 20:17 , Fernando Perez wrote:
> On Thu, Oct 28, 2010 at 5:13 PM, Robert Kern<robert.kern at gmail.com>  wrote:

>> [1] Why a general JSONable object instead of just bytes? It would be nice to be
>> able to define a formatter that could give some structured information about the
>> object. For example, we could define an ArrayMetadataFormatter that gives a dict
>> with shape, dtype, etc. A GUI frontend could display this information nicely
>> formatted along with one of the other representations.
>
> Most of this I agree with.  Just one question: why not use real mime
> types for the type info?  I keep thinking that for our payloads and
> perhaps also for this, we might as well encode type metadata as
> mimetypes: they're reasonably standardized, python has a mime library,
> and browsers are wired to do something sensible with mime-tagged data
> already.  Am I missing something?

I guess you could use MIME types for the ones that make sense to do so. However, 
it would be nice to have an 'image' type that didn't really care about whether 
it was PNG or JPEG. There's no reason a frontend should have to look for every 
image/<format> MIME type that PIL can handle. We'd never actually *use* the MIME 
library anywhere. We'd treat the MIME types just like our own identifiers anyways.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Thu Oct 28 22:45:02 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 19:45:02 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTin23qKKEZHdcZYLZsO0Nef9yjZPJgNALSrOktK3@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<iac3j6$rau$1@dough.gmane.org>
	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
	<iacp72$e0d$1@dough.gmane.org>
	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
	<iacvap$6dn$1@dough.gmane.org>
	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
	<iad3j6$l3u$1@dough.gmane.org>
	<AANLkTi=uBq7YkknhUAW3xWxAT7sF0dJPQoWQWYqLOc7z@mail.gmail.com>
	<AANLkTin23qKKEZHdcZYLZsO0Nef9yjZPJgNALSrOktK3@mail.gmail.com>
Message-ID: <AANLkTimEci57N3Gj=8WTaXVPbTP_cXW-1ttss+bXUwNJ@mail.gmail.com>

On Thu, Oct 28, 2010 at 6:31 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> I am in the middle of lab (they are taking a quiz), so I don't have
> time to dig into the full thread ATM, but I do have a few comments:
>
> * The main thing that I am concerned about is how we answer the
> question i) "how do I (a developer of foo) make my class Foo, print
> nice HTML/SVG. ?IOW, what is does the public API for all of this look
> like?

I would propose having a baseline protocol with the usual python
approach of __methods__.  We can define a few basic ones that are
auto-recognized:  __repr_html__, __repr_latex__, __repr_image__
(bitmap images), __repr_svg__, __repr_pdf__?  The system could by
default call any of these that are found, and stuff the output in the
repr output structure.

For the 'default' field of the repr dict, we'd use either __pretty__
if defined and pretty-printing is on, or __repr__ if pretty-printing
is off (I think that's how pretty.py works).

This lets classes define their own rich representations for common
formats.  And then, the complementary approach from pretty allows
anyone to define a formatter that pretty-prints any object that hadn't
been customized.
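
E.g. (method names as proposed above; the collection helper is purely a
sketch):

    class Polynomial(object):
        def __init__(self, coeffs):
            self.coeffs = coeffs

        def __repr__(self):
            return 'Polynomial(%r)' % (self.coeffs,)

        def __repr_html__(self):
            return ' + '.join('%g x<sup>%d</sup>' % (c, i)
                              for i, c in enumerate(self.coeffs))

    def rich_reprs(obj):
        # Collect whichever rich representations the object defines.
        out = {'default': repr(obj)}
        for name in ('__repr_html__', '__repr_latex__', '__repr_svg__'):
            meth = getattr(obj, name, None)
            if meth is not None:
                out[name.strip('_').replace('repr_', '')] = meth()
        return out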

Though the cost of computing all of these every time does become a
concern...  There needs to be an easy way to toggle them on/off for
the user, so that you're not waiting on a bunch of expensive
representations you never actually look at.

> * In the current IPython, displayhook is only triggered 1x per block.
> Thus, you can't use displayhook to get the str/html/svg/png
> representation of an object inside a block block or loop. ?This is a
> serious limitation, that Fernando and I feel is a good thing in the
> end. ?But, this means that we will also need top-level functions that
> users can put in their code to trigger all of this logic independent
> of displayhook.
>
> Like this:
>
> for t in times:
>     a = compute_thing(t)
>     print_html(a)  # This should use the APIs that we are designing
> and the payload system to deliver the html to the frontend.
>
> We should also have functions like print_png, print_svg, print_latex
> that we inject into builtins.
>
> What this means is that we need to design an implementation that is
> independent from displayhook and that is cleanly integrated with the
> payload system.

That seems reasonable, and it can reuse the same machinery from
displayhook.  Basically the displayhook does it when called and passed
an object; these functions do it by explicit user request...

Cheers,

f


From fperez.net at gmail.com  Thu Oct 28 22:46:25 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 28 Oct 2010 19:46:25 -0700
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <iadc40$fjd$1@dough.gmane.org>
References: <ia9h0f$v1j$1@dough.gmane.org>
	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>
	<iac3j6$rau$1@dough.gmane.org>
	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>
	<iacp72$e0d$1@dough.gmane.org>
	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>
	<iacvap$6dn$1@dough.gmane.org>
	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>
	<iad3j6$l3u$1@dough.gmane.org>
	<AANLkTi=uBq7YkknhUAW3xWxAT7sF0dJPQoWQWYqLOc7z@mail.gmail.com>
	<iadc40$fjd$1@dough.gmane.org>
Message-ID: <AANLkTim3u9s3Mh5u2kOg6+BRDGgw6StA+VQeWZEyFX7+@mail.gmail.com>

On Thu, Oct 28, 2010 at 7:38 PM, Robert Kern <robert.kern at gmail.com> wrote:
> I guess you could use MIME types for the ones that make sense to do so. However,
> it would be nice to have an 'image' type that didn't really care about whether
> it was PNG or JPEG. There's no reason a frontend should have to look for every
> image/<format> MIME type that PIL can handle. We'd never actually *use* the MIME
> library anywhere. We'd treat the MIME types just like our own identifiers anyways.

Agreed.  I just think that if we tag things with real MIME types, the
web frontend has less logic to worry about. Other frontends are free
to treat all image/<foo> identically and just pass them to PIL, for
example.

Cheers,

f


From robert.kern at gmail.com  Thu Oct 28 23:04:31 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 28 Oct 2010 22:04:31 -0500
Subject: [IPython-dev] Extensible pretty-printing
In-Reply-To: <AANLkTin23qKKEZHdcZYLZsO0Nef9yjZPJgNALSrOktK3@mail.gmail.com>
References: <ia9h0f$v1j$1@dough.gmane.org>	<AANLkTikO+Uiwt+4c_vDa=rWRh0UuciskpeEwTBggnwxt@mail.gmail.com>	<iac3j6$rau$1@dough.gmane.org>	<AANLkTi=igmWkZPP-uvyzbZ7Y=6tZvE_2f=BZW1_TntV7@mail.gmail.com>	<iacp72$e0d$1@dough.gmane.org>	<AANLkTikWjczRKVcYXQ1boHUeSw5f_KCA7GdvD-g+cobC@mail.gmail.com>	<iacvap$6dn$1@dough.gmane.org>	<AANLkTikKjBNWJZKwZ2KY5YQbY=idAmo4EE7uUVtL+vTK@mail.gmail.com>	<iad3j6$l3u$1@dough.gmane.org>	<AANLkTi=uBq7YkknhUAW3xWxAT7sF0dJPQoWQWYqLOc7z@mail.gmail.com>
	<AANLkTin23qKKEZHdcZYLZsO0Nef9yjZPJgNALSrOktK3@mail.gmail.com>
Message-ID: <iaddjv$js4$1@dough.gmane.org>

On 2010-10-28 20:31 , Brian Granger wrote:
> I am in the middle of lab (they are taking a quiz), so I don't have
> time to dig into the full thread ATM, but I do have a few comments:
>
> * The main thing that I am concerned about is how we answer the
> question i) "how do I (a developer of foo) make my class Foo, print
> nice HTML/SVG.  IOW, what is does the public API for all of this look
> like?

Depends on how you write the HTMLFormatter and SVGFormatter. We can probably 
write a general base class that handles looking for _html_() methods and 
allows registration of functions for types that cannot be modified. I've 
broken out the deferred type-dispatching I use in our pretty.py before; I'll 
try to find it.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From takowl at gmail.com  Fri Oct 29 06:30:29 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Fri, 29 Oct 2010 11:30:29 +0100
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
Message-ID: <AANLkTikkLoeRxtq-aQgiX_SXiJqZDUhAUoRiWRtRKxWC@mail.gmail.com>

On 29 October 2010 02:10, <ipython-dev-request at scipy.org> wrote:

> We now have a massive amount of  new code in the pipeline, and it's
> really high time we start getting this out in the hands of users
> beyond those willing to run from a git HEAD.  0.11 will be a 'tech
> preview' release, especially because the situation with regards to the
> parallel code is a bit messy right now.  But we shouldn't wait for too
> much longer.
>

I don't know what sort of QA process it would need to go through, but could
we look at doing a parallel release of IPython on Python 3? As it stands, it
would have to just be the plain terminal version, but I think that's still
valuable. I'm fairly happy that that side is working, and it passes all the
automated tests.

The Qt frontend is tantalisingly close to working, largely thanks to MinRK's
work with pyzmq, and I hope I can clear the remaining roadblocks soon, but I
don't know if it will be ready in a month. And if the new HTML frontend can
work with it too, that's just icing on the cake.

Thanks,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101029/b35ba426/attachment.html>

From ellisonbg at gmail.com  Fri Oct 29 13:44:22 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 29 Oct 2010 10:44:22 -0700
Subject: [IPython-dev] Code review of html frontend
Message-ID: <AANLkTiktooBMCnQ20CJ+YUV8NSSYHyZNCngeH5HQGaow@mail.gmail.com>

James,

Here is a longer code review of the html frontend.  Overall, this is
really nice.  Great job!

====================
HTML Notebook Review
====================

General
=======

* Add IPython standard copyright to each file and put an
  author section with yourself in the module level docstrings.
* Use a space after the "#" in comments.
* We should refactor the main KernelManager to consist of two
  classes, one that handles the starting of the kernel process
  and a second that handles the channels. In this case, there
  is no reason to have all the channel stuff, as I think
  we can simply use comet or websockets to pass the raw
  ZMQ JSON messages between the client and the kernel. The
  only reason we might not want to do this is to allow us
  to validate the messages in the web server, but really we
  should be doing that in the kernel anyways. This would
  make the webserver stuff even thinner.
* Let's document clearly the URL structure we are using. It
  will be much easier to do this if we move to tornado.
* Let's make sure we develop the html/js stuff in a way that
  will be independent of the webserver side of things; that
  way, in the future, we can easily swap out either component
  if we want.
* Please add brief docstrings to the important methods.
* We will probably have multiple html frontends, so we should
  probably put your stuff in something like html/basic or
  html/basicnb. We did this for the qt frontend (qt/console).
* For now I would mostly focus on the Javascript and UI side of
  things, as I think we probably want to move to Tornado
  even for simple servers like this.

UI
==

* The results of ? and ?? only go to one client.

ipythonhttp.py
==============

* We should discuss the possibility of using Tornado as our
  web server. It will be a much better match for working
  with zmq and many things will be much easier. We are
  already shipping a good part of Tornado with pyzmq and
  could possibly ship all of it. Using the stdlib for now
  is fine though. Tornado also has websocket support that
  would work extremely well with pyzmq (a minimal handler
  sketch follows after this list).
* Remove command line options that you don't support yet.
* Move defer to the top level and rename it to something like
  "start_browser". Move import threading to top level as well.


kernelmanager.py
================

* In do_GET and do_POST, document what each elif clause is
  handling.
* I see that you are using time.time for the client id. Would
  it be better to use a real uuid? If you want to also have
  the time you could still pass that to manager.register as
  well (a tiny sketch follows after this list).
* When a kernel dies, the client should note that and there
  should be an option to restart the kernel.
* We should probably have top level URLs for the different
  ZMQ sockets like /XREQ, /SUB, etc. so that the GET and POST
  traffic has different URLs.
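
For instance, roughly (the exact manager.register signature here is a
guess, just to show the idea):

import time
import uuid

client_id = uuid.uuid4().hex    # globally unique, unlike time.time()
registered_at = time.time()     # keep the timestamp separately if wanted
# hypothetically: manager.register(client_id, registered_at)
print(client_id)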

CometManager.py
================

* Change filename to cometmanager.py.


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Fri Oct 29 13:45:20 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 29 Oct 2010 10:45:20 -0700
Subject: [IPython-dev] Multiline bug screenshot
Message-ID: <AANLkTimZcM2j-b04vJsbwfBiwAjuOQ+dRxakhcTZbjW+@mail.gmail.com>

James,

Here is a screenshot of what the multiline input bug looks like in Chrome 6.

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: multiline_bug.jpg
Type: image/jpeg
Size: 7308 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101029/504759b8/attachment.jpg>

From ellisonbg at gmail.com  Fri Oct 29 14:02:16 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 29 Oct 2010 11:02:16 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
Message-ID: <AANLkTikO3SRy95DrEryw0yop+pu0JWX-0xxPa6nHsjvX@mail.gmail.com>

Fernando,

Thanks for summarizing this.  I think this sounds like a good plan.

Cheers,

Brian

On Thu, Oct 28, 2010 at 5:01 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hi all,
>
> I know we've said several times that we should release 0.11 'soon', so
> I forgive anyone for laughing at this email. ?But don't ignore it,
> this time we mean it :)
>
> We now have a massive amount of ?new code in the pipeline, and it's
> really high time we start getting this out in the hands of users
> beyond those willing to run from a git HEAD. ?0.11 will be a 'tech
> preview' release, especially because the situation with regards to the
> parallel code is a bit messy right now. ?But we shouldn't wait for too
> much longer.
>
> Brian and I tried to compile a list of the main things that need work
> before we can make a release, and this is ?our best estimate right
> now:
>
> - Unicode: this is totally broken and totally unacceptable. ?But I'm
> pretty sure with a few clean hours I can get it done. ?It's not super
> hard, just detail-oriented work that I need a quiet block of time to
> tackle.
>
> - Updating top-level entry points to use the new config system,
> especially the Qt console. ?Brian said he could tackle this one.
>
> - Final checks on the state of the GUI/event loop support. ?Things are
> looking fairly good from usage, but we have concerns that there may
> still be problems lurking just beneath the surface.
>
> - Continue/finish the displayhook discussion: we're well on our way on
> this, we just need to finish it up. ?We mark it here because it's an
> important part of the api and a good test case for how we want to
> expose this kind of functionality.
>
> - Move all payloads to pub channel. ?This is also a big api item that
> affects all clients, so we might as well get it right from the start.
> I can try to work on this.
>
> - James' web frontend: I'd really like to get that code in for early
> battle-testing, even though it's clear it's early functionality
> subject still to much evolution.
>
> That's all I have in my list. ?Anything else you can all think of?
>
> As for non-blockers, we have:
>
> - the parallel code is not in a good situation right now: we have a
> few regressions re. the Twisted 0.10.1 code (e.g. the SGE code isn't
> ported yet), the Twisted winhpc scheduler is only in 0.11, and while
> the new zmq tools are looking great, they are NOT production-ready
> quite yet. ?In summary, we'll have to warn in bright, blinking pink
> letters 1995-style, everyone who uses the parallel code in production
> systems to stick with the 0.10 series for a little longer. ?Annoying,
> yes, but unfortunately such is life.
>
> - our docs have unfortunately gone fairly stale in a few places. ?We
> have no docs for the new Qt console and a lot of information is partly
> or completely stale. ?This is an area where volunteers could make a
> huge difference: any help here has a big impact in letting the project
> better serve users, and doc pull requests are likely to be reviewed
> very quickly. ?Additionally, you don't need to know too much about the
> code's intimate details to help with documenting the user-facing
> functionality.
>
> Anything else?
>
> Plan: I'd love to get 0.11 out in the first week of December. ?John
> Hunter, Stefan van der Walt and I (all three contributors) will be at
> Scipy India in Hyderabad Dec 11-18, and there will be sprint time
> there. ?Ideally, we'd have a stable release out for potential sprint
> participants who want to hack on IPython to work from. ?It would also
> be a good way to wrap up a great year of development and de-stagnation
> of IPython, leaving us with a nice fresh ball of warm code to play
> with over the winter holidays.
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Fri Oct 29 14:15:45 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 29 Oct 2010 11:15:45 -0700
Subject: [IPython-dev] Error in test_unicode for InputSplitter
Message-ID: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>

I am seeing the following on Python 2.6, Mac OS X 10.5:

======================================================================
ERROR: test_unicode
(IPython.core.tests.test_inputsplitter.InputSplitterTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/bgranger/Documents/Computation/IPython/code/ipython/IPython/core/tests/test_inputsplitter.py",
line 353, in test_unicode
    self.isp.push("u'\xc3\xa9'")
  File "/Users/bgranger/Documents/Computation/IPython/code/ipython/IPython/core/inputsplitter.py",
line 374, in push
    self._store(lines)
  File "/Users/bgranger/Documents/Computation/IPython/code/ipython/IPython/core/inputsplitter.py",
line 607, in _store
    setattr(self, store, self._set_source(buffer))
  File "/Users/bgranger/Documents/Computation/IPython/code/ipython/IPython/core/inputsplitter.py",
line 610, in _set_source
    return ''.join(buffer).encode(self.encoding)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
2: ordinal not in range(128)

----------------------------------------------------------------------
Ran 270 tests in 1.974s

Is this a regression or a known issue?

Cheers,

Brian


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Fri Oct 29 14:23:22 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 29 Oct 2010 11:23:22 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
Message-ID: <AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>

> As for non-blockers, we have:
>
> - the parallel code is not in a good situation right now: we have a
> few regressions re. the Twisted 0.10.1 code (e.g. the SGE code isn't
> ported yet), the Twisted winhpc scheduler is only in 0.11, and while
> the new zmq tools are looking great, they are NOT production-ready
> quite yet. ?In summary, we'll have to warn in bright, blinking pink
> letters 1995-style, everyone who uses the parallel code in production
> systems to stick with the 0.10 series for a little longer. ?Annoying,
> yes, but unfortunately such is life.

I want to at least propose the following solution to this:

Remove all of the twisted stuff from 0.11 and put the new zmq stuff in
place as a prototype.

Here is my logic:

* The Twisted parallel stuff is *already* broken in 0.11 and if anyone
has stable code running on it, they should be using 0.10.
* If someone is happy to run non-production ready code, there is no
reason they should be using the Twisted stuff, they should use the
pyzmq stuff.
* Twisted is a *massive* burden on our code base:
  - For package managers, it brings in Twisted, Foolscap and zope.interface.
  - It makes our test suite unstable and fragile because we have to
run tests in subprocesses and use trial sometimes and nose other
times.
  - It is a huge # of LOC.
  - It means that most of our codebase is not Python 3 ready.

There are lots of cons to this proposal:

* That is really quick to drop support for the Twisted stuff.
* We may piss some people off.
* It possibly means maintaining the 0.10 series longer than we imagined.
* We don't have a security story for the pyzmq parallel stuff yet.

I am not convinced this is the right thing to do, but the benefits are
significant.

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Fri Oct 29 19:34:11 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 29 Oct 2010 16:34:11 -0700
Subject: [IPython-dev] Error in test_unicode for InputSplitter
In-Reply-To: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>
References: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>
Message-ID: <AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>

Hey,

On Fri, Oct 29, 2010 at 11:15 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>
> ? ?return ''.join(buffer).encode(self.encoding)
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
> 2: ordinal not in range(128)

No, I don't see it here on trunk:

Ran 9 test groups in 45.840s

Status:
OK

Is this on your unmodified copy of trunk?  If so that's bad news,
because it means we have a unicode problem that's platform-dependent
:(

But let's not worry too much about it yet, since we know we have a
full complement of problems with unicode anyways.  Once I've had a go
at that code, we'll make sure this is gone.

Thanks for pointing it out though.  I've made a note about it in our
main unicode ticket just to make sure we don't forget:

http://github.com/ipython/ipython/issues/#issue/25/comment/503524

Cheers,

f


From benjaminrk at gmail.com  Fri Oct 29 19:48:56 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 29 Oct 2010 16:48:56 -0700
Subject: [IPython-dev] Error in test_unicode for InputSplitter
In-Reply-To: <AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>
References: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>
	<AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>
Message-ID: <AANLkTimM9Jf-1sa_tDssBCXcq5fB+nnG2x5Cg-VOSvME@mail.gmail.com>

Check the default encoding.  It could be that your Python's default encoding
is ascii, or some other such thing, causing a problem.

sys.getdefaultencoding()

Sometimes this is utf8, sometimes it's ascii.

-MinRK

On Fri, Oct 29, 2010 at 16:34, Fernando Perez <fperez.net at gmail.com> wrote:

> Hey,
>
> On Fri, Oct 29, 2010 at 11:15 AM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >
> >    return ''.join(buffer).encode(self.encoding)
> > UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
> > 2: ordinal not in range(128)
>
> No, I don't see it here on trunk:
>
> Ran 9 test groups in 45.840s
>
> Status:
> OK
>
> Is this on your unmodified copy of trunk?  If so that's bad news,
> because it means we have a unicode problem that's platform-dependent
> :(
>
> But let's not worry too much about it yet, since we know we have a
> full complement of problems with unicode anyways.  Once I've had a go
> at that code, we'll make sure this is gone.
>
> Thanks for pointing it out though.  I've made a note about it in our
> main unicode ticket just to make sure we don't forget:
>
> http://github.com/ipython/ipython/issues/#issue/25/comment/503524
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101029/07a10602/attachment.html>

From robert.kern at gmail.com  Fri Oct 29 20:26:58 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 29 Oct 2010 19:26:58 -0500
Subject: [IPython-dev] Error in test_unicode for InputSplitter
In-Reply-To: <AANLkTimM9Jf-1sa_tDssBCXcq5fB+nnG2x5Cg-VOSvME@mail.gmail.com>
References: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>	<AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>
	<AANLkTimM9Jf-1sa_tDssBCXcq5fB+nnG2x5Cg-VOSvME@mail.gmail.com>
Message-ID: <iafooi$ge7$1@dough.gmane.org>

On 2010-10-29 18:48 , MinRK wrote:
> Check the default encoding.  It could be that your Python's default encoding is
> ascii, or some other such thing, causing a problem.
>
> sys.getdefaultencoding()
>
> Somethings this is utf8, sometimes it's ascii.

It should never, ever be anything but ascii. If you have it set to something 
else, you have a broken Python.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From robert.kern at gmail.com  Fri Oct 29 20:35:52 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 29 Oct 2010 19:35:52 -0500
Subject: [IPython-dev] Error in test_unicode for InputSplitter
In-Reply-To: <AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>
References: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>
	<AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>
Message-ID: <iafp99$inf$1@dough.gmane.org>

On 2010-10-29 18:34 , Fernando Perez wrote:
> Hey,
>
> On Fri, Oct 29, 2010 at 11:15 AM, Brian Granger<ellisonbg at gmail.com>  wrote:
>>
>>     return ''.join(buffer).encode(self.encoding)
>> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
>> 2: ordinal not in range(128)
>
> No, I don't see it here on trunk:
>
> Ran 9 test groups in 45.840s
>
> Status:
> OK
>
> Is this on your unmodified copy of trunk?  If so that's bad news,
> because it means we have a unicode problem that's platform-dependent
> :(

I can verify this test failure on OS X with an unmodified trunk. The code is 
just wrong (at least on Python 2) since it calls .encode() on a byte string, not 
a unicode string. You've never decoded it.
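
For illustration, a small Python 2 sketch of the failure mode (generic
names, not the inputsplitter code):

# Python 2: calling .encode() on a *byte* string first decodes it
# implicitly with the default codec (ascii), and that implicit decode
# is what blows up on non-ASCII bytes.
source = "u'\xc3\xa9'"        # a str (bytes) holding UTF-8 encoded data

try:
    source.encode('utf-8')    # implicit ascii decode happens first -> boom
except UnicodeDecodeError as e:
    print(e)

# Decoding explicitly (or keeping everything unicode) avoids the problem:
print(source.decode('utf-8').encode('utf-8') == source)   # True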

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From benjaminrk at gmail.com  Fri Oct 29 20:39:03 2010
From: benjaminrk at gmail.com (Min RK)
Date: Fri, 29 Oct 2010 17:39:03 -0700
Subject: [IPython-dev] Error in test_unicode for InputSplitter
In-Reply-To: <iafooi$ge7$1@dough.gmane.org>
References: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>
	<AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>
	<AANLkTimM9Jf-1sa_tDssBCXcq5fB+nnG2x5Cg-VOSvME@mail.gmail.com>
	<iafooi$ge7$1@dough.gmane.org>
Message-ID: <1334DB12-86AA-4139-9EC2-C15E34863C6C@gmail.com>

My mistake.  I do remember there being an earlier problem related to someone's default encoding.

What would be broken if Python's default encoding were utf8?  Why is this a configurable option, if everything but ascii is broken?

-MinRK

On Oct 29, 2010, at 17:26, Robert Kern <robert.kern at gmail.com> wrote:

> On 2010-10-29 18:48 , MinRK wrote:
>> Check the default encoding.  It could be that your Python's default encoding is
>> ascii, or some other such thing, causing a problem.
>> 
>> sys.getdefaultencoding()
>> 
>> Somethings this is utf8, sometimes it's ascii.
> 
> It should never, ever be anything but ascii. If you have it set to something 
> else, you have a broken Python.
> 
> -- 
> Robert Kern
> 
> "I have come to believe that the whole world is an enigma, a harmless enigma
>  that is made terrible by our own mad attempt to interpret it as though it had
>  an underlying truth."
>   -- Umberto Eco
> 
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev


From robert.kern at gmail.com  Fri Oct 29 20:42:31 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 29 Oct 2010 19:42:31 -0500
Subject: [IPython-dev] Error in test_unicode for InputSplitter
In-Reply-To: <1334DB12-86AA-4139-9EC2-C15E34863C6C@gmail.com>
References: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>	<AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>	<AANLkTimM9Jf-1sa_tDssBCXcq5fB+nnG2x5Cg-VOSvME@mail.gmail.com>	<iafooi$ge7$1@dough.gmane.org>
	<1334DB12-86AA-4139-9EC2-C15E34863C6C@gmail.com>
Message-ID: <iafpln$j9e$2@dough.gmane.org>

On 2010-10-29 19:39 , Min RK wrote:
> My mistake.  I do remember there being an earlier problem related to someone's default encoding.
>
> What would be broken if Python's default encoding were utf8?

Things like this would work differently on different people's machines. 
Internally, the __hash__/__eq__ relationship between unicode and str objects 
would fail to hold.
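
Roughly, this is the invariant that breaks (a Python 2 sketch; the
reload/setdefaultencoding hack is used only to demonstrate the point
and should never appear in real code):

import sys
reload(sys)                     # demonstration hack only
sys.setdefaultencoding('utf-8')

u = u'\xe9'       # unicode e-acute
b = '\xc3\xa9'    # the same character as UTF-8 bytes

print(u == b)                   # True under a utf-8 default encoding...
print(hash(u) == hash(b))       # ...but the hashes (usually) still differ,
d = {b: 'value'}
print(u in d)                   # so dict lookup silently misses the key.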

> Why is this a configurable option, if everything but ascii is broken?

It's not a configurable option.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From robert.kern at gmail.com  Fri Oct 29 20:39:55 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 29 Oct 2010 19:39:55 -0500
Subject: [IPython-dev] Error in test_unicode for InputSplitter
In-Reply-To: <iafooi$ge7$1@dough.gmane.org>
References: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>	<AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>	<AANLkTimM9Jf-1sa_tDssBCXcq5fB+nnG2x5Cg-VOSvME@mail.gmail.com>
	<iafooi$ge7$1@dough.gmane.org>
Message-ID: <iafpgr$j9e$1@dough.gmane.org>

On 2010-10-29 19:26 , Robert Kern wrote:
> On 2010-10-29 18:48 , MinRK wrote:
>> Check the default encoding.  It could be that your Python's default encoding is
>> ascii, or some other such thing, causing a problem.
>>
>> sys.getdefaultencoding()
>>
>> Somethings this is utf8, sometimes it's ascii.
>
> It should never, ever be anything but ascii. If you have it set to something
> else, you have a broken Python.

Or rather, on Python 2, it must always be ascii. On Python 3, it must always be 
utf-8.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth."
   -- Umberto Eco



From fperez.net at gmail.com  Fri Oct 29 21:01:21 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 29 Oct 2010 18:01:21 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTikkLoeRxtq-aQgiX_SXiJqZDUhAUoRiWRtRKxWC@mail.gmail.com>
References: <AANLkTikkLoeRxtq-aQgiX_SXiJqZDUhAUoRiWRtRKxWC@mail.gmail.com>
Message-ID: <AANLkTinhRowBGoWNCjnZkLk2VTeVTSQPO7xkE7smOPa_@mail.gmail.com>

Hi Thomas,

On Fri, Oct 29, 2010 at 3:30 AM, Thomas Kluyver <takowl at gmail.com> wrote:
> I don't know what sort of QA process it would need to go through, but could
> we look at doing a parallel release of IPython on Python 3? As it stands, it
> would have to just be the plain terminal version, but I think that's still
> valuable. I'm fairly happy that that side is working, and it passes all the
> automated tests.
>
> The Qt frontend is tantalisingly close to working, largely thanks to MinRK's
> work with pyzmq, and I hope I can clear the remaining roadblocks soon, but I
> don't know if it will be ready in a month. And if the new HTML frontend can
> work with it too, that's just icing on the cake.
>

This would be fantastic.  Given we have exactly *zero* official
IPython on python3, pretty much anything that works reasonably well at
the terminal would be great to have.  As long as the test suite and
basic interactive use are there, it will already be a great starting
point for anyone wanting to use python3.  With numpy already on
python3, we really want this moving.

To get things going, I've made a new Python3 team for the IPython organization:

https://github.com/organizations/ipython/teams

For now I only added you and me to it because I don't want to
forcefully volunteer anyone else :)  But anyone who wants to help out
with this, just write back on list and I'll add you right away.

Thomas, I think the easiest way forward will be to move your py3 repo
to be owned by this team, as that will make it easier to get
collaboration from others, pull requests reviewed, etc.  Doing that on
a personally-held repo is awkward.   You need to ask github manually
to do it:

http://support.github.com/discussions/organization-issues/123-request-to-move-a-personal-repo-to-an-organization

Once they've moved it, we'll announce it on the -user list as well as
the numpy one, and hopefully others will begin using this as well,
testing it, etc.

Many thanks for taking the initiative on this!!!

Regards,

f


From ellisonbg at gmail.com  Fri Oct 29 21:35:53 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 29 Oct 2010 18:35:53 -0700
Subject: [IPython-dev] Error in test_unicode for InputSplitter
In-Reply-To: <iafpgr$j9e$1@dough.gmane.org>
References: <AANLkTik4Ma1j-F7mHudGfG0-o+egC1F8uxy+mGxrBGFN@mail.gmail.com>
	<AANLkTi=g9qdvVCij7KaNU4YuPY54BJbozFhYHR_k8kre@mail.gmail.com>
	<AANLkTimM9Jf-1sa_tDssBCXcq5fB+nnG2x5Cg-VOSvME@mail.gmail.com>
	<iafooi$ge7$1@dough.gmane.org> <iafpgr$j9e$1@dough.gmane.org>
Message-ID: <AANLkTi=MmbG0yv8e51FCzuf3ewSJKpo-aqdgnptGKQF6@mail.gmail.com>

Python 2.6.5 |CUSTOM| (r265:79063, May 28 2010, 15:13:03)
Type "copyright", "credits" or "license" for more information.

IPython 0.11.alpha1.git -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import sys

In [2]: sys.getdefaultencoding()
Out[2]: 'ascii'

This is EPD on the Mac...

Brian

On Fri, Oct 29, 2010 at 5:39 PM, Robert Kern <robert.kern at gmail.com> wrote:
> On 2010-10-29 19:26 , Robert Kern wrote:
>> On 2010-10-29 18:48 , MinRK wrote:
>>> Check the default encoding. ?It could be that your Python's default encoding is
>>> ascii, or some other such thing, causing a problem.
>>>
>>> sys.getdefaultencoding()
>>>
>>> Somethings this is utf8, sometimes it's ascii.
>>
>> It should never, ever be anything but ascii. If you have it set to something
>> else, you have a broken Python.
>
> Or rather, on Python 2, it must always be ascii. On Python 3, it must always be
> utf-8.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless enigma
> ?that is made terrible by our own mad attempt to interpret it as though it had
> ?an underlying truth."
> ? -- Umberto Eco
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Fri Oct 29 23:28:19 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 29 Oct 2010 20:28:19 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
Message-ID: <AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>

On Fri, Oct 29, 2010 at 11:23 AM, Brian Granger <ellisonbg at gmail.com> wrote:
> Remove all of the twisted stuff from 0.11 and put the new zmq stuff in
> place as a prototype.
>
> Here is my logic:
>
> * The Twisted parallel stuff is *already* broken in 0.11 and if anyone
> has stable code running on it, they should be using 0.10.
> * If someone is happy to run non-production ready code, there is no
> reason they should be using the Twisted stuff, they should use the
> pyzmq stuff.
> * Twisted is a *massive* burden on our code base:
> ?- For package managers, it brings in Twisted, Foolscap and zope.interface.
> ?- It makes our test suite unstable and fragile because we have to
> run tests in subprocesses and use trial sometimes and nose other
> times.
> ?- It is a huge # of LOC.
> ?- It means that most of our codebase is Python 3 ready.
>
> There are lots of cons to this proposal:
>
> * That is really quick to drop support for the Twisted stuff.
> * We may piss some people off.
> * It possibly means maintaining the 0.10 series longer than we imagined.
> * We don't have a security story for the pyzmq parallel stuff yet.

I have to say that I simply didn't have Brian's boldness to propose
this, but I think it's the right thing to do, ultimately.  It *is*
painful in the short term, but it's also the honest approach.  I keep
forgetting but Brian reminded me that even the Twisted-based code in
0.11 has serious regressions re. the 0.10.x series, since in the big
refactoring for 0.11 not quite everything made it through.

The 0.10 maintenance doesn't worry me a whole lot: as long as we limit
it to small changes, by now merging them as self-contained pull
requests is really easy (as I just did recently with the ones Paul and
Tom sent).  And rolling out a new release when the total delta is
small is actually not that much work.

So I'm totally +1 on this radical, but I think ultimately beneficial,
approach.  It's important to keep in mind that doing this will lift a
big load off our shoulders, and we're a small enough team that this
benefit is significant.  It will let us concentrate on moving the new
machinery forward quickly without having to worry about the large
Twisted code.  It will also help Thomas with his py3 efforts, as it's
one less thing he has to keep getting out of his way.

Concrete plan:

- Wait a week or two for feedback.
- If we decide to move ahead, make a shared branch on the main repo
where we can do this work and review it, with all having the chance to
contribute while it happens.
- Move all twisted-using code (IPython/kernel and some code in
IPython/testing) into IPython/deathrow.  This will let anyone who
really wants it find it easily, without having to dig through version
control history.  Note that deathrow does *not* make it into official
release tarballs.

Cheers,

f


From benjaminrk at gmail.com  Sat Oct 30 01:55:32 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 29 Oct 2010 22:55:32 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
Message-ID: <AANLkTi=san27Ld=C_2R26Q8=dXb01_QttZEEemtF3tFY@mail.gmail.com>

This is more aggressive than I expected; I'll have to get the new parallel
stuff in gear.

The main roadblock for me is merging work into the Kernels.  I plan to spend
tomorrow working on getting the new parallel code ready for review, and
identifying what needs to happen with code in master in order for this to go
into 0.11.  The only work that needs merge rather than drop-in is in Kernels
and Session.  I expect that just using the new Session will be fine
after a review, but getting the existing Kernels to provide what is
necessary for the parallel code will be some work, and I'll try to identify
exactly what that will look like.

The main things I know already:

* Names should change
(GH-178<http://github.com/ipython/ipython/issues/#issue/178>).
It's really a coincidence that we had just one action per socket type, and
the parallel code has several sockets of the same type, and some actions
that can be on different socket types, depending on the scheduler.
* Use IOLoop/ZMQStream - this isn't necessarily critical, and I can probably
do it with a subclass if we don't want it in the main kernels.
* apply_request. This should be all new code, and shouldn't collide with
anything.


Let me know what I can do to help things along.

-MinRK

On Fri, Oct 29, 2010 at 20:28, Fernando Perez <fperez.net at gmail.com> wrote:

> On Fri, Oct 29, 2010 at 11:23 AM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> > Remove all of the twisted stuff from 0.11 and put the new zmq stuff in
> > place as a prototype.
> >
> > Here is my logic:
> >
> > * The Twisted parallel stuff is *already* broken in 0.11 and if anyone
> > has stable code running on it, they should be using 0.10.
> > * If someone is happy to run non-production ready code, there is no
> > reason they should be using the Twisted stuff, they should use the
> > pyzmq stuff.
> > * Twisted is a *massive* burden on our code base:
> >  - For package managers, it brings in Twisted, Foolscap and
> zope.interface.
> >  - It makes our test suite unstable and fragile because we have to
> > run tests in subprocesses and use trial sometimes and nose other
> > times.
> >  - It is a huge # of LOC.
> >  - It means that most of our codebase is Python 3 ready.
> >
> > There are lots of cons to this proposal:
> >
> > * That is really quick to drop support for the Twisted stuff.
> > * We may piss some people off.
> > * It possibly means maintaining the 0.10 series longer than we imagined.
> > * We don't have a security story for the pyzmq parallel stuff yet.
>
> I have to say that I simply didn't have Brian's boldness to propose
> this, but I think it's the right thing to do, ultimately.  It *is*
> painful in the short term, but it's also the honest approach.  I keep
> forgetting but Brian reminded me that even the Twisted-based code in
> 0.11 has serious regressions re. the 0.10.x series, since in the big
> refactoring for 0.11 not quite everything made it through.
>
> The 0.10 maintenance doesn't worry me a whole lot: as long as we limit
> it to small changes, by now merging them as self-contained pull
> requests is really easy (as I just did recently with the ones Paul and
> Tom sent).  And rolling out a new release when the total delta is
> small is actually not that much work.
>
> So I'm totally +1 on this radical, but I think ultimately beneficial,
> approach.  It's important to keep in mind that doing this will lift a
> big load off our shoulders, and we're a small enough team that this
> benefit is significant.  It will let us concentrate on moving the new
> machinery forward quickly without having to worry about the large
> Twisted code.  It will also help Thomas with his py3 efforts, as it's
> one less thing he has to keep getting out of his way.
>
> Concrete plan:
>
> - Wait a week or two for feedback.
> - If we decide to move ahead, make a shared branch on the main repo
> where we can do this work and review it, with all having the chance to
> contribute while it happens.
> - Move all twisted-using code (IPython/kernel and some code in
> IPython/testing) into IPython/deathrow.  This will let anyone who
> reall wants it find it easily, without having to dig through version
> control history.  Note that deathrow does *not* make it into official
> release tarballs.
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101029/f04154a8/attachment.html>

From ellisonbg at gmail.com  Sat Oct 30 02:25:18 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 29 Oct 2010 23:25:18 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTi=san27Ld=C_2R26Q8=dXb01_QttZEEemtF3tFY@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<AANLkTi=san27Ld=C_2R26Q8=dXb01_QttZEEemtF3tFY@mail.gmail.com>
Message-ID: <AANLkTin1aGHDNYMA_B5RRRB=ruu_bOk1GmrEbB-6tYqs@mail.gmail.com>

Min,

On Fri, Oct 29, 2010 at 10:55 PM, MinRK <benjaminrk at gmail.com> wrote:
> This is more agressive than I expected, I'll have to get the new parallel
> stuff in gear.

If you stopped writing great code, we wouldn't be tempted to do crazy
things like this ;-)

> The main roadblock for me is merging work into the Kernels. ?I plan to spend
> tomorrow working on getting the new parallel code ready for review, and
> identifying what needs to happen with code in master in order for this to go
> into 0.11. ?The only work that needs merge rather than drop-in is in Kernels
> and Session. ?I expect that just using the new Session will just be fine
> after a rewview, but getting the existing Kernels to provide what is
> necessary for the parallel code will be some work, and I'll try to identify
> exactly what that will look like.

Are you thinking of having only one top-level kernel script that
handles both the parallel computing stuff and the interactive IPython?
 I think the idea of that is fantastic, but I am not sure we need to
have all of that working to merge your stuff.  I am not opposed to
attempting this before/during the merge, but I don't view it as
absolutely needed.  Also, it may make sense to review your code
standalone first and then discuss merging the kernel and session stuff
with what we already have.

> The main things I know already:
> * Names should change (GH-178). It's really a coincidence that we had just
> one action per socket type, and the parallel code has several sockets of the
> same type, and some actions that can be on different socket types, depending
> on the scheduler.

Yep.

> * Use IOLoop/ZMQStream - this isn't necessarily critical, and I can probably
> do it with a subclass if we don't want it in the main kernels.

At this point I think that zmqstream has stabilized enough that we
*should* be using it in the kernel and kernel manager code anyways.  I
am completely fine with this.
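
For reference, the ZMQStream style looks roughly like this (socket type
and address invented for the example):

import zmq
from zmq.eventloop import ioloop, zmqstream

ctx = zmq.Context()
sock = ctx.socket(zmq.XREP)     # e.g. the kernel's request socket
sock.bind('tcp://127.0.0.1:5555')

loop = ioloop.IOLoop.instance()
stream = zmqstream.ZMQStream(sock, loop)

def on_request(msg_parts):
    # msg_parts is the list of raw message frames; just echo them back.
    stream.send_multipart(msg_parts)

stream.on_recv(on_request)      # callback-driven instead of blocking recv
loop.start()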

> * apply_request. This should be all new code, and shouldn't collide with
> anything.

Ok.

One other point that Fernando and I talked about is actually shipping
the rest of tornado with pyzmq.  I have been thinking more about the
architecture of the html notebook that James has been working on and
it is an absolutely perfect fit for implementing the server using our
zmq enabled Tornado event loop with tornado's regular http handling.
It would also give us ssl support, authentication and lots of other
web server goodies like websockets.  If we did this, I think it would
be possible to have a decent prototype of James' html notebook in
0.11.  What do you think about this, Min?  We are already shipping a
good portion of tornado with pyzmq and the rest is just a
dozen or so .py files (there is one .c file that we don't need for
python 2.6 and up).
Eventually I would like to contribute our ioloop.py and zmqstream to
tornado itself, but I don't think we have to worry about that yet.
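
For illustration, the kind of websocket handler this would enable looks
roughly like this (handler name and URL are made up, and it just echoes
instead of talking to a kernel):

import tornado.ioloop
import tornado.web
import tornado.websocket

class KernelSocket(tornado.websocket.WebSocketHandler):
    # Sketch: relay JSON messages between the browser and a kernel's
    # ZMQ channels; here we only echo what the browser sends.
    def open(self):
        print("browser connected")

    def on_message(self, message):
        self.write_message(message)   # would instead go to the kernel

    def on_close(self):
        print("browser disconnected")

app = tornado.web.Application([(r"/kernel/ws", KernelSocket)])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.instance().start()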

Also, moving tornado into pyzmq would allow us to do secure https
connections for the parallel computing client - controller connection.

Cheers,

Brian

> Let me know what I can do to help things along.
> -MinRK
>
> On Fri, Oct 29, 2010 at 20:28, Fernando Perez <fperez.net at gmail.com> wrote:
>>
>> On Fri, Oct 29, 2010 at 11:23 AM, Brian Granger <ellisonbg at gmail.com>
>> wrote:
>> > Remove all of the twisted stuff from 0.11 and put the new zmq stuff in
>> > place as a prototype.
>> >
>> > Here is my logic:
>> >
>> > * The Twisted parallel stuff is *already* broken in 0.11 and if anyone
>> > has stable code running on it, they should be using 0.10.
>> > * If someone is happy to run non-production ready code, there is no
>> > reason they should be using the Twisted stuff, they should use the
>> > pyzmq stuff.
>> > * Twisted is a *massive* burden on our code base:
>> > ?- For package managers, it brings in Twisted, Foolscap and
>> > zope.interface.
>> > ?- It makes our test suite unstable and fragile because we have to
>> > run tests in subprocesses and use trial sometimes and nose other
>> > times.
>> > ?- It is a huge # of LOC.
>> > ?- It means that most of our codebase is Python 3 ready.
>> >
>> > There are lots of cons to this proposal:
>> >
>> > * That is really quick to drop support for the Twisted stuff.
>> > * We may piss some people off.
>> > * It possibly means maintaining the 0.10 series longer than we imagined.
>> > * We don't have a security story for the pyzmq parallel stuff yet.
>>
>> I have to say that I simply didn't have Brian's boldness to propose
>> this, but I think it's the right thing to do, ultimately. ?It *is*
>> painful in the short term, but it's also the honest approach. ?I keep
>> forgetting but Brian reminded me that even the Twisted-based code in
>> 0.11 has serious regressions re. the 0.10.x series, since in the big
>> refactoring for 0.11 not quite everything made it through.
>>
>> The 0.10 maintenance doesn't worry me a whole lot: as long as we limit
>> it to small changes, by now merging them as self-contained pull
>> requests is really easy (as I just did recently with the ones Paul and
>> Tom sent). ?And rolling out a new release when the total delta is
>> small is actually not that much work.
>>
>> So I'm totally +1 on this radical, but I think ultimately beneficial,
>> approach. ?It's important to keep in mind that doing this will lift a
>> big load off our shoulders, and we're a small enough team that this
>> benefit is significant. ?It will let us concentrate on moving the new
>> machinery forward quickly without having to worry about the large
>> Twisted code. ?It will also help Thomas with his py3 efforts, as it's
>> one less thing he has to keep getting out of his way.
>>
>> Concrete plan:
>>
>> - Wait a week or two for feedback.
>> - If we decide to move ahead, make a shared branch on the main repo
>> where we can do this work and review it, with all having the chance to
>> contribute while it happens.
>> - Move all twisted-using code (IPython/kernel and some code in
>> IPython/testing) into IPython/deathrow. ?This will let anyone who
>> reall wants it find it easily, without having to dig through version
>> control history. ?Note that deathrow does *not* make it into official
>> release tarballs.
>>
>> Cheers,
>>
>> f
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From benjaminrk at gmail.com  Sat Oct 30 03:10:44 2010
From: benjaminrk at gmail.com (MinRK)
Date: Sat, 30 Oct 2010 00:10:44 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTin1aGHDNYMA_B5RRRB=ruu_bOk1GmrEbB-6tYqs@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<AANLkTi=san27Ld=C_2R26Q8=dXb01_QttZEEemtF3tFY@mail.gmail.com>
	<AANLkTin1aGHDNYMA_B5RRRB=ruu_bOk1GmrEbB-6tYqs@mail.gmail.com>
Message-ID: <AANLkTim3gTGO=DXw2Dosf4kbESDQD92Z8bT3Saq6885d@mail.gmail.com>

On Fri, Oct 29, 2010 at 23:25, Brian Granger <ellisonbg at gmail.com> wrote:

> Min,
>
> On Fri, Oct 29, 2010 at 10:55 PM, MinRK <benjaminrk at gmail.com> wrote:
> > This is more agressive than I expected, I'll have to get the new parallel
> > stuff in gear.
>
> If you stopped writing great code, we wouldn't be tempted to do crazy
> things like this ;-)
>
> > The main roadblock for me is merging work into the Kernels.  I plan to
> spend
> > tomorrow working on getting the new parallel code ready for review, and
> > identifying what needs to happen with code in master in order for this to
> go
> > into 0.11.  The only work that needs merge rather than drop-in is in
> Kernels
> > and Session.  I expect that just using the new Session will just be fine
> > after a rewview, but getting the existing Kernels to provide what is
> > necessary for the parallel code will be some work, and I'll try to
> identify
> > exactly what that will look like.
>
> Are you thinking of having only one top-level kernel script that
> handles both the parallel computing stuff and the interactive IPython?
>  I think the idea of that is fantastic, but I am not sure we need to
> have all of that working to merge your stuff.  I am not opposed to
> attempting this before/during the merge, but I don't view it as
> absolutely needed.  Also, it may make sense to review your code
> standalone first and then discuss merging the kernel and session stuff
> with what we already have.
>

I was thinking that we already have a remote execution object, and the only
difference between the two is the connection patterns. New features/bugfixes
will likely want to be shared by both.  My StreamKernel was derived from the
original pykernel, but I kept working on it while you were developing on it,
so they diverged.  I think they can be merged, as long as we do a few
things, mostly to do with abstracting the connections:

     * allow Kernels to connect, not just bind (see the sketch after this list)
     * use action-based, not socket-type names
     * allow execution requests to come from a *list* of connections, not
just one
     * use sessions/ioloop instead of direct send/recv_json
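
A minimal illustration of the bind/connect difference (socket types and
addresses invented for the example):

import zmq

ctx = zmq.Context()

# Single-frontend case: the kernel binds and the frontend connects to it.
kernel_sock = ctx.socket(zmq.XREP)
kernel_sock.bind('tcp://127.0.0.1:5555')

# Parallel case: many engine kernels connect *out* to a scheduler that
# does the binding, so the kernel code has to support both patterns.
engine_sock = ctx.socket(zmq.XREP)
engine_sock.connect('tcp://127.0.0.1:5600')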

I also think using a KernelManager would be good, because it gets nice
process management (restart the kernel, etc.), and I can't really do that
without a Kernel, but I could subclass.

Related question:

Why is ipkernel not a subclass of pykernel?  There's lots of identical code
there.


>
> > The main things I know already:
> > * Names should change (GH-178). It's really a coincidence that we had
> just
> > one action per socket type, and the parallel code has several sockets of
> the
> > same type, and some actions that can be on different socket types,
> depending
> > on the scheduler.
>
> Yep.
>
> > * Use IOLoop/ZMQStream - this isn't necessarily critical, and I can
> probably
> > do it with a subclass if we don't want it in the main kernels.
>
> At this point I think that zmqstream has stablized enough that we
> *should* be using it in the kernel and kernel manager code anyways.  I
> am completely fine with this.
>
> > * apply_request. This should be all new code, and shouldn't collide with
> > anything.
>
> Ok.
>
> One other point that Fernando and I talked about is actually shipping
> the rest of tornado with pyzmq.  I have been thinking more about the
> architecture of the html notebook that James has been working on and
> it is an absolutely perfect fit for implementing the server using our
> zmq enabled Tornado event loop with tornado's regular http handling.
> It would also give us ssl support, authentication and lots of other
> web server goodies like websockets.  If we did this, I think it would
> be possible to have a decent prototype of James' html notebook in
> 0.11.  What do you think about this Min?  We are already shipping a
> good portion of tornado already with pyzmq and the rest is just a
> dozen or so .py files (there is one .c file that we don't need for
> python 2.6 and up).
> Eventually I would like to contribute our ioloop.py and zmqstream to
> tornado itself, but I don't think we have to worry about that yet.
>

I'm not very familiar with Tornado other than our use in pyzmq.  If we can
use it for authentication
without significant performance penalty, then that's a pretty big deal, and
well worth it.

It sounds like it would definitely provide a good toolkit for web backends,
so using it is probably a good idea.

I'm not sure that it should be *shipped* with pyzmq, though.  I think it
would be fine to ship with IPython
if we use it there, but I don't see a need to include it inside pyzmq.  If
we depend on it, then depend on it in PyPI,
but if it's only for some extended functionality, I don't see any problem
with asking people to install it, since it is
easy_installable (and apt-installable on Ubuntu).  PyZMQ is a pretty
low-level library - I don't think shipping someone else's
project inside it is a good idea unless there are significant benefits.


>
> Also, moving tornado into pyzmq would allow us to so secure https
> connections for the parallel computing client - controller connection.
>

Secure connections would be *great* if the performance is good enough.


>
> Cheers,
>
> Brian
>
> > Let me know what I can do to help things along.
> > -MinRK
> >
> > On Fri, Oct 29, 2010 at 20:28, Fernando Perez <fperez.net at gmail.com>
> wrote:
> >>
> >> On Fri, Oct 29, 2010 at 11:23 AM, Brian Granger <ellisonbg at gmail.com>
> >> wrote:
> >> > Remove all of the twisted stuff from 0.11 and put the new zmq stuff in
> >> > place as a prototype.
> >> >
> >> > Here is my logic:
> >> >
> >> > * The Twisted parallel stuff is *already* broken in 0.11 and if anyone
> >> > has stable code running on it, they should be using 0.10.
> >> > * If someone is happy to run non-production ready code, there is no
> >> > reason they should be using the Twisted stuff, they should use the
> >> > pyzmq stuff.
> >> > * Twisted is a *massive* burden on our code base:
> >> >  - For package managers, it brings in Twisted, Foolscap and
> >> > zope.interface.
> >> >  - It makes our test suite unstable and fragile because we have to
> >> > run tests in subprocesses and use trial sometimes and nose other
> >> > times.
> >> >  - It is a huge # of LOC.
> >> >  - It means that most of our codebase is Python 3 ready.
> >> >
> >> > There are lots of cons to this proposal:
> >> >
> >> > * That is really quick to drop support for the Twisted stuff.
> >> > * We may piss some people off.
> >> > * It possibly means maintaining the 0.10 series longer than we
> imagined.
> >> > * We don't have a security story for the pyzmq parallel stuff yet.
> >>
> >> I have to say that I simply didn't have Brian's boldness to propose
> >> this, but I think it's the right thing to do, ultimately.  It *is*
> >> painful in the short term, but it's also the honest approach.  I keep
> >> forgetting but Brian reminded me that even the Twisted-based code in
> >> 0.11 has serious regressions re. the 0.10.x series, since in the big
> >> refactoring for 0.11 not quite everything made it through.
> >>
> >> The 0.10 maintenance doesn't worry me a whole lot: as long as we limit
> >> it to small changes, by now merging them as self-contained pull
> >> requests is really easy (as I just did recently with the ones Paul and
> >> Tom sent).  And rolling out a new release when the total delta is
> >> small is actually not that much work.
> >>
> >> So I'm totally +1 on this radical, but I think ultimately beneficial,
> >> approach.  It's important to keep in mind that doing this will lift a
> >> big load off our shoulders, and we're a small enough team that this
> >> benefit is significant.  It will let us concentrate on moving the new
> >> machinery forward quickly without having to worry about the large
> >> Twisted code.  It will also help Thomas with his py3 efforts, as it's
> >> one less thing he has to keep getting out of his way.
> >>
> >> Concrete plan:
> >>
> >> - Wait a week or two for feedback.
> >> - If we decide to move ahead, make a shared branch on the main repo
> >> where we can do this work and review it, with all having the chance to
> >> contribute while it happens.
> >> - Move all twisted-using code (IPython/kernel and some code in
> >> IPython/testing) into IPython/deathrow.  This will let anyone who
> >> reall wants it find it easily, without having to dig through version
> >> control history.  Note that deathrow does *not* make it into official
> >> release tarballs.
> >>
> >> Cheers,
> >>
> >> f
> >> _______________________________________________
> >> IPython-dev mailing list
> >> IPython-dev at scipy.org
> >> http://mail.scipy.org/mailman/listinfo/ipython-dev
> >
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101030/9fb2e499/attachment.html>

From jorgen.stenarson at bostream.nu  Sat Oct 30 03:12:13 2010
From: jorgen.stenarson at bostream.nu (Jörgen Stenarson)
Date: Sat, 30 Oct 2010 09:12:13 +0200
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTin1aGHDNYMA_B5RRRB=ruu_bOk1GmrEbB-6tYqs@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>	<AANLkTi=san27Ld=C_2R26Q8=dXb01_QttZEEemtF3tFY@mail.gmail.com>
	<AANLkTin1aGHDNYMA_B5RRRB=ruu_bOk1GmrEbB-6tYqs@mail.gmail.com>
Message-ID: <4CCBC54D.9030807@bostream.nu>

Hi,

Brian Granger skrev 2010-10-30 08:25:
> Eventually I would like to contribute our ioloop.py and zmqstream to
> tornado itself, but I don't think we have to worry about that yet.
>
> Also, moving tornado into pyzmq would allow us to so secure https
> connections for the parallel computing client - controller connection.

I did a quick web search for tornado and found tornadoweb.org; is that
what you are talking about? Looking at their webpage I can't see
anything about Windows. What would going that route mean for supporting
the Windows platform?

/Jörgen


From benjaminrk at gmail.com  Sat Oct 30 03:17:12 2010
From: benjaminrk at gmail.com (MinRK)
Date: Sat, 30 Oct 2010 00:17:12 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <4CCBC54D.9030807@bostream.nu>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<AANLkTi=san27Ld=C_2R26Q8=dXb01_QttZEEemtF3tFY@mail.gmail.com>
	<AANLkTin1aGHDNYMA_B5RRRB=ruu_bOk1GmrEbB-6tYqs@mail.gmail.com>
	<4CCBC54D.9030807@bostream.nu>
Message-ID: <AANLkTimD8UOFXELV=_pbRbe7KwYK5yJ7h109qoRNBC7z@mail.gmail.com>

On Sat, Oct 30, 2010 at 00:12, Jörgen Stenarson <
jorgen.stenarson at bostream.nu> wrote:

> Hi,
>
> Brian Granger skrev 2010-10-30 08:25:
> > Eventually I would like to contribute our ioloop.py and zmqstream to
> > tornado itself, but I don't think we have to worry about that yet.
> >
> > Also, moving tornado into pyzmq would allow us to so secure https
> > connections for the parallel computing client - controller connection.
>
> I did a quick websearch for tornado and found tornadoweb.org is that
> what you are talking about? Looking at their webpage I can't see
> anything about windows. What would going that route mean for supporting
> the windows platform?
>

Yes, that's the right tornado.  Tornado is (almost) pure Python, and at
least the parts we use work just fine on Windows.


>
> /Jörgen
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>

From gael.varoquaux at normalesup.org  Sat Oct 30 03:36:03 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 30 Oct 2010 09:36:03 +0200
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
Message-ID: <20101030073603.GA1308@phare.normalesup.org>

Hi there,

On Fri, Oct 29, 2010 at 08:28:19PM -0700, Fernando Perez wrote:
> I have to say that I simply didn't have Brian's boldness to propose
> this, but I think it's the right thing to do, ultimately.  It *is*
> painful in the short term, but it's also the honest approach.  I keep
> forgetting but Brian reminded me that even the Twisted-based code in
> 0.11 has serious regressions re. the 0.10.x series, since in the big
> refactoring for 0.11 not quite everything made it through.

I haven't been contributing to the discussion, because I don't have any
time to contribute. Here is my gut feeling as an end user:

Progress made in 0.11 looks awesome. However, I am not sure what the net
gain is for a user. It seems to me that the core architecture is there,
but the end-user aspects are not finished. The twisted code is gone, but
I hear that the pyzmq code to replace it is not stable yet. So, here is
the question: why release now, and not in 6 months? If you release now,
distributions will almost automatically package 0.11, so it will land on
users' boxes.

This is a naive question; I may very well have missed an important aspect of
the discussion.

Gaël


From shr066 at gmail.com  Sat Oct 30 10:33:55 2010
From: shr066 at gmail.com (Steve Rogers)
Date: Sat, 30 Oct 2010 08:33:55 -0600
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <20101030073603.GA1308@phare.normalesup.org>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<20101030073603.GA1308@phare.normalesup.org>
Message-ID: <AANLkTim1PLsUNrAPmMTt83QWt-JnO3bP3kjjwo7R5oky@mail.gmail.com>

+1 on transitioning from Twisted to pyzmq for the 0.11 release.

I'm investigating pyzmq for a project that requires some security and I may
be able to help in that area.

-- 
Steve Rogers
http://www.linkedin.com/in/shrogers
"Do what you can, with what you have, where you are." -- Theodore Roosevelt

From benjaminrk at gmail.com  Sat Oct 30 16:27:22 2010
From: benjaminrk at gmail.com (MinRK)
Date: Sat, 30 Oct 2010 13:27:22 -0700
Subject: [IPython-dev] Parallel IPython with ZMQ slides
Message-ID: <AANLkTi=d6HdwOJO2XmnM=6yQ6E5Jo-ESmvcFO_bTBgib@mail.gmail.com>

As requested, the slides from my presentation of the new parallel code to
Py4Science last week:
http://ptsg.berkeley.edu/~minrk/ipzmq/ipzmq.p4s.pdf

And code for some of the demos (+ NetworkX DAG dependencies):
http://ptsg.berkeley.edu/~minrk/ipzmq/demo.zip

-MinRK

From benjaminrk at gmail.com  Sat Oct 30 17:42:53 2010
From: benjaminrk at gmail.com (MinRK)
Date: Sat, 30 Oct 2010 14:42:53 -0700
Subject: [IPython-dev] ipqt doc started
Message-ID: <AANLkTinNOgcoTaVimOU8nOJPSuK08tNpV3Lp14=pEZRS@mail.gmail.com>

Fernando,

I realized that I totally forgot about the skeleton doc for the qt console
last week. Sorry!

I'm putting it together in my ipqt-docs branch, but I will do it directly in
master, if you want.

You can see it in progress:
http://ptsg.berkeley.edu/~minrk/ipdocs/interactive/qtconsole.html

-MinRK

From fperez.net at gmail.com  Sat Oct 30 18:24:28 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 30 Oct 2010 15:24:28 -0700
Subject: [IPython-dev] ipqt doc started
In-Reply-To: <AANLkTinNOgcoTaVimOU8nOJPSuK08tNpV3Lp14=pEZRS@mail.gmail.com>
References: <AANLkTinNOgcoTaVimOU8nOJPSuK08tNpV3Lp14=pEZRS@mail.gmail.com>
Message-ID: <AANLkTind=7dh9YfdvO3XTu779WaRx-8AG9utED6a_ka0@mail.gmail.com>

Hey,

On Sat, Oct 30, 2010 at 2:42 PM, MinRK <benjaminrk at gmail.com> wrote:
> Fernando,
> I realized that I totally forgot about the skeleton doc for the qt console
> last week. Sorry!
> I'm putting it together in my ipqt-docs branch, but I will do it directly in
> master, if you want.
> You can see it in progress:
> http://ptsg.berkeley.edu/~minrk/ipdocs/interactive/qtconsole.html
> -MinRK

This is great, thanks!  Just push straight to trunk: for pure doc
work, especially of this kind (where anything is an improvement, since
we have zero), there's no need to add a layer of friction; we trust
you :)

Cheers,

f


From fperez.net at gmail.com  Sat Oct 30 18:32:22 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 30 Oct 2010 15:32:22 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTi=san27Ld=C_2R26Q8=dXb01_QttZEEemtF3tFY@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<AANLkTi=san27Ld=C_2R26Q8=dXb01_QttZEEemtF3tFY@mail.gmail.com>
Message-ID: <AANLkTikkpL7c=9oBYRiVWxFcpdN2XAWhbpwbVHxyderi@mail.gmail.com>

On Fri, Oct 29, 2010 at 10:55 PM, MinRK <benjaminrk at gmail.com> wrote:
> This is more aggressive than I expected; I'll have to get the new parallel
> stuff in gear.

Sorry if it sounded like we were putting you on the spot, that was
most definitely *not* our intention.  As Brian said, if you hadn't
done such a crazy good job, we wouldn't even have dreamed of going in
this direction :)

But I think that Brian's point, with which I agree, is basically that
0.11 should go with the pyzmq code *even if it's very raw*, because
the 0.11 Twisted code is broken enough that it would be a disservice
to our users to lure them into that code.

0.11 will be released as a "tech preview" release, prominently and
clearly labeled as not for production systems, with a note to
distributors that they probably should *not* package it by default as
a replacement for 0.10 yet (that may take another two or three
releases before it's a sensible thing to do).  But the Twisted code in
0.10 is just not in really usable shape, and this is a way of being
upfront and honest about it.

So don't worry too much on your side, we're *not* putting all the
burden on your shoulders.

Cheers,

f


From fperez.net at gmail.com  Sat Oct 30 18:32:47 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 30 Oct 2010 15:32:47 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTim1PLsUNrAPmMTt83QWt-JnO3bP3kjjwo7R5oky@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<20101030073603.GA1308@phare.normalesup.org>
	<AANLkTim1PLsUNrAPmMTt83QWt-JnO3bP3kjjwo7R5oky@mail.gmail.com>
Message-ID: <AANLkTimuQcSwg6Zk50V3BjeGagaS5g+J_28i_ALgBTiu@mail.gmail.com>

On Sat, Oct 30, 2010 at 7:33 AM, Steve Rogers <shr066 at gmail.com> wrote:
>
> I'm investigating pyzmq for a project that requires some security and I may
> be able to help in that area

That would be fantastic, by all means pitch in.

Regards,

f


From fperez.net at gmail.com  Sat Oct 30 18:47:29 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 30 Oct 2010 15:47:29 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <20101030073603.GA1308@phare.normalesup.org>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<20101030073603.GA1308@phare.normalesup.org>
Message-ID: <AANLkTinskQzCeAEs-VC6jCfvUF+2kPw8S0CcnoRgHDEy@mail.gmail.com>

On Sat, Oct 30, 2010 at 12:36 AM, Gael Varoquaux
<gael.varoquaux at normalesup.org> wrote:
> I haven't been contributing to the discussion, because I don't have any
> time to contribute. Here is my gut feeling as an end user:
>
> Progress made in 0.11 looks awesome. However, I am not sure what the net
> gain is for a user. It seems to me that the core architecture is there,
> but the end-user aspects are not finished. The twisted code is gone, but
> I hear that the pyzmq code to replace it is not stable yet. So, here is
> the question: why release now, and not in 6 months? If you release now,
> distributions will almost automatically package 0.11, so it will land on
> users' boxes.
>
> This is a naive question; I may very well have missed an important aspect of
> the discussion.

Yes, you missed the whole point, I'm afraid.

0.11 has major new *features* for end-users that are already very
usable.  The Qt console isn't perfect, but it's very, very, very nice
as an everyday working environment, and it has a massive amount of
improvements for many use cases over the terminal-only one.  James'
html client also offers remote collaboration and zero-install
capabilities that we don't have anywhere else, and even as the
architecture there is refined, that's already a big win for end users.
 The new configuration system is also much cleaner and nicer, and the
layout of the actual source code is far more rational and navigable
than before, a big win for potential new contributors.

Most importantly, now there's a proper *architecture* for how the
entire ipython machinery works across all modalities (one-process,
multi-process interactive, parallel) that is well thought out *and*
documented.  A data point that indicates that we probably got more
things right than wrong was that I spent probably less than one hour
explaining to James the messaging design, and without *ever* having
done any ipython coding, in a couple of days he came back with a fully
working html notebook.  Doing something like that with the 0.10-era
code would have been unthinkable (as someone who wrestled that code
into the ground for embedding as a client, you know full well the
nightmare it was, and how much nasty work it took).

We've recently had Evan Patterson, Mark Voorhies and James Gao land
into the IPython codebase and immediately make real contributions:
this is an indicator that for the first time ever, we have a codebase
that can actually accept new participants without dragging them into
the nightmare-inducing maze of object-oriented spaghetti we had
before.  Having seen this, the best way to gain contributors is to
have people use the new features, find something they like and then
find something they don't like but are willing to fix/improve.

Furthermore, many many apis have changed in backwards-incompatible
ways.  It's important that people who had developed other projects
using ipython as a library have a chance to adapt their projects, give
us feedback so we can fine-tune things to make their job easier, make
suggestions of what could be done better from our side, etc.  So we
want those projects to have a clear starting point where their
developers can test things out.  If we wait until everything has
settled 6 months from now, it may be harder to adjust to their
feedback.

In summary, both from the perspective of end users and that of
developers building atop ipython, we have very good reasons to not
delay a release much further (4-8 weeks from now is OK, 6 months is
most definitely not).  We'll label it clearly so that nobody gets 0.11
when they don't expect it, but we do want these new features out for
people to both enjoy and to hammer on while they're still warm enough
that we can mold them.

Cheers,

f


From takowl at gmail.com  Sat Oct 30 19:18:41 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 31 Oct 2010 00:18:41 +0100
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTinhRowBGoWNCjnZkLk2VTeVTSQPO7xkE7smOPa_@mail.gmail.com>
References: <AANLkTikkLoeRxtq-aQgiX_SXiJqZDUhAUoRiWRtRKxWC@mail.gmail.com>
	<AANLkTinhRowBGoWNCjnZkLk2VTeVTSQPO7xkE7smOPa_@mail.gmail.com>
Message-ID: <AANLkTi=rkgR=LPqkM8MhuFOmu4P+HAGfF57HDh957YaU@mail.gmail.com>

On 30 October 2010 02:01, Fernando Perez <fperez.net at gmail.com> wrote:

> Thomas, I think the easiest way forward will be to move your py3 repo
> to be owned by this team, as that will make it easier to get
> collaboration from others, pull requests reviewed, etc.  Doing that on
> a personally-held repo is awkward.   You need to ask github manually
> to do it:
>

I've asked, but apparently they can't do it, because it's a fork of the repo
the IPython organisation already owns:
http://support.github.com/discussions/organization-issues/315-request-to-transfer-my-personal-repository-to-an-organisation

I guess we could either push my branch to the organisation's existing repo
(without merging it into trunk), or set up a new ipython-py3k repository,
and push my branch as master there.

Thomas

From erik.tollerud at gmail.com  Sat Oct 30 20:55:33 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Sat, 30 Oct 2010 17:55:33 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTim1PLsUNrAPmMTt83QWt-JnO3bP3kjjwo7R5oky@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<20101030073603.GA1308@phare.normalesup.org>
	<AANLkTim1PLsUNrAPmMTt83QWt-JnO3bP3kjjwo7R5oky@mail.gmail.com>
Message-ID: <AANLkTikLQmN==X8YxFuAerVk-axriv0xGCRQvFDkdFmn@mail.gmail.com>

+1 on removing Twisted (mostly from a user perspective) - I've had a
number of different strange dependency problems with Twisted while
trying to get IPython working on a cluster, so ditching it entirely
would make such things easier for experimenting with .11

With that in mind, I'm in the early stages of developing a small
application intended for use on the aforementioned cluster.  I'd like
to write it for use with .11 instead of .10 if possible (thinking of
it partly as a test case to help with debugging .11).  Because of the
network setup and the possible need for remote collaborator input, the
ideal situation would be to use the html frontend to get at the
cluster.  But that's a non-starter if there's no authentication - is
it at all realistic to try for the html frontend with authentication
in .11 (presumably w/ Tornado, based on the above discussion)?


-- 
Erik Tollerud


From fperez.net at gmail.com  Sat Oct 30 21:45:55 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 30 Oct 2010 18:45:55 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTi=rkgR=LPqkM8MhuFOmu4P+HAGfF57HDh957YaU@mail.gmail.com>
References: <AANLkTikkLoeRxtq-aQgiX_SXiJqZDUhAUoRiWRtRKxWC@mail.gmail.com>
	<AANLkTinhRowBGoWNCjnZkLk2VTeVTSQPO7xkE7smOPa_@mail.gmail.com>
	<AANLkTi=rkgR=LPqkM8MhuFOmu4P+HAGfF57HDh957YaU@mail.gmail.com>
Message-ID: <AANLkTi=8JyY67iSLsO4phCiFVSacZNGV6XmJmJBPLnsn@mail.gmail.com>

On Sat, Oct 30, 2010 at 4:18 PM, Thomas Kluyver <takowl at gmail.com> wrote:
> I've asked, but apparently they can't do it, because it's a fork of the repo
> the IPython organisation already owns:
> http://support.github.com/discussions/organization-issues/315-request-to-transfer-my-personal-repository-to-an-organisation

Of course, I should have thought of that from the start, sorry for
sending you on a wild goose chase.

> I guess we could either push my branch to the organisation's existing repo
> (without merging it into trunk), or set up a new ipython-py3k repository,
> and push my branch as master there.

Your wish is my command :)  I pushed from your master as it was just now.

http://github.com/ipython/ipython-py3k

I followed this route because I can imagine work on this repo being a
little erratic, and possibly doing large rebases/cleanups before a
final merge in the ipython trunk.  So having it isolated in a separate
repo seems like a cleaner alternative.

Cheers,

f


From fperez.net at gmail.com  Sat Oct 30 21:49:32 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 30 Oct 2010 18:49:32 -0700
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTikLQmN==X8YxFuAerVk-axriv0xGCRQvFDkdFmn@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<20101030073603.GA1308@phare.normalesup.org>
	<AANLkTim1PLsUNrAPmMTt83QWt-JnO3bP3kjjwo7R5oky@mail.gmail.com>
	<AANLkTikLQmN==X8YxFuAerVk-axriv0xGCRQvFDkdFmn@mail.gmail.com>
Message-ID: <AANLkTi=QNBCWM4wjXj5PK5vSGxU+tx8wOm35gWpk9TM=@mail.gmail.com>

On Sat, Oct 30, 2010 at 5:55 PM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
> +1 on removing Twisted (mostly from a user perspective) - I've had a
> number of different strange dependency problems with Twisted while
> trying to get IPython working on a cluster, so ditching it entirely
> would make such things easier for experimenting with .11

Thanks for the feedback.

> With that in mind, I'm in the early stages of developing a small
> application intended for use on the aforementioned cluster.  I'd like
> to write it for use with .11 instead of .10 if possible (thinking of
> it partly as a test case to help with debugging .11).  Because of the
> network setup and the possible need for remote collaborator input, the
> ideal situation would be to use the html frontend to get at the
> cluster.  But that's a non-starter if there's no authentication - is
> it at all realistic to try for the html frontend with authentication
> in .11 (presumably w/ Tornado, based on the above discussion)?

That's our hope, but obviously I can't promise anything :)  But yes,
that's one big reason to think of Tornado, so that we can at least
have basic auth/ssl support for the web entry point.  The kernels by
default only listen on localhost, so it's not that bad: you have a
problem with being open to other local users, but not to the whole
internet.  But since the webnb effectively forwards the
obscure/multiport/local zmq connections over nice, clean,
everybody-understands-it http, it all of a sudden turns an improbable
problem into a likely catastrophe.
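
To make that concrete, here is a rough sketch of the kind of cookie-auth plus
SSL wrapper that Tornado would let us put in front of the web entry point;
the handler names, cookie secret and certificate paths are purely
illustrative assumptions, not anything that exists in IPython today:

import tornado.httpserver
import tornado.ioloop
import tornado.web

class LoginHandler(tornado.web.RequestHandler):
    def post(self):
        # a real implementation would verify a password before setting the cookie
        self.set_secure_cookie("user", self.get_argument("name"))
        self.redirect("/")

class NotebookHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        return self.get_secure_cookie("user")

    @tornado.web.authenticated
    def get(self):
        self.write("the notebook frontend would be served here")

application = tornado.web.Application(
    [(r"/login", LoginHandler), (r"/", NotebookHandler)],
    cookie_secret="change-me", login_url="/login")

server = tornado.httpserver.HTTPServer(
    application,
    ssl_options={"certfile": "server.crt", "keyfile": "server.key"})
server.listen(8888)
tornado.ioloop.IOLoop.instance().start()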

Let's see how the next few weeks go, and all help on this front will
be very welcome.

Cheers,

f


From gael.varoquaux at normalesup.org  Sun Oct 31 05:00:24 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 31 Oct 2010 10:00:24 +0100
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
In-Reply-To: <AANLkTinskQzCeAEs-VC6jCfvUF+2kPw8S0CcnoRgHDEy@mail.gmail.com>
References: <AANLkTi=HDuEWgX931RCKZLhdFvcGiP2X3sCx7-D5-nV8@mail.gmail.com>
	<AANLkTimGQL4oT6dVgKCHMuyZWM-+iPrX6_WKhhFXq7Nv@mail.gmail.com>
	<AANLkTik=JHAT-0wni8-2Kb8Ce+XTuhOnWMLd0WsCVMDk@mail.gmail.com>
	<20101030073603.GA1308@phare.normalesup.org>
	<AANLkTinskQzCeAEs-VC6jCfvUF+2kPw8S0CcnoRgHDEy@mail.gmail.com>
Message-ID: <20101031090024.GA937@phare.normalesup.org>

On Sat, Oct 30, 2010 at 03:47:29PM -0700, Fernando Perez wrote:
> > Progress made in 0.11 looks awesome. However, I am not sure what the net
> > gain is for a user. It seems to me that the core architecture is there,
> > but the end-user aspects are not finished.
> > [...]

> Yes, you missed the whole point, I'm afraid.

Fair enough, I should have thought about the Qt-based stuff :),
especially since I have been following it quite closely.

> 0.11 has major new *features* for end-users that are already very
> usable.  The Qt console isn't perfect, but it's very, very, very nice
> as an everyday working environment, and it has a massive amount of
> improvements for many use cases over the terminal-only one.  James'
> html client also offers remote collaboration and zero-install
> capabilities that we don't have anywhere else, and even as the
> architecture there is refined, that's already a big win for end users.
> [...]

My question just boils down to: is it worth giving this win to end users
while removing other things they already have? Should a '0.X' release
ever temporarily remove features that will come back later? As an
outsider to a project, I expect the feature set always to be
increasing, or for features simply to be dropped.

> Most importantly, now there's a proper *architecture* for how the
> entire ipython machinery works across all modalities (one-process,
> multi-process interactive, parallel) that is well thought out *and*
> documented.  A data point that indicates that we probably got more
> things right than wrong was that I spent probably less than one hour
> explaining to James the messaging design, and without *ever* having
> done any ipython coding, in a couple of days he came back with a fully
> working html notebook. 

I agree with the progress. I can see the gain. It's great, but it seems
to be more of a developer point of view than an end-user point of view.

> We've recently had Evan Patterson, Mark Voorhies and James Gao land
> into the IPython codebase and immediately make real contributions:
> this  is an indicator that for the first time ever, we have a codebase
> that can actually accept new participants without dragging them into
> the nightmare-inducing maze of object-oriented spaghetti we had
> before. [...]

Absolutely, I have been witnessing this and I can tangibly feel this
improvement. There has clearly been a shift in the project's dynamics
that is awesome. 

Anyhow, it won't impact me much, because at work we are stuck with really
old stuff (.8, I believe), and I myself already use the latest code from
git updated every once in a while. I am just wondering if, from the point
of view of the end user, it is not worth waiting another 6 months to have
things stabilize a bit more. Releasing an 'alpha' or 'technology preview'
might be a better message to give.

Now I am just going to shut up, and watch the fantastic progress that is
being made.

Gael


From takowl at gmail.com  Sun Oct 31 14:19:57 2010
From: takowl at gmail.com (Thomas Kluyver)
Date: Sun, 31 Oct 2010 18:19:57 +0000
Subject: [IPython-dev] Starting to plan for 0.11 (this time for real)
Message-ID: <AANLkTimsSs1dU4AunermiAm3GHcBk9bBk-uhLM5AiBmX@mail.gmail.com>

On 31 October 2010 17:00, <ipython-dev-request at scipy.org> wrote:

> I am just wondering if, from the point
> of view of the end user, it is not worth waiting another 6 months to have
> things stabilize a bit more. Releasing an 'alpha' or 'technology preview'
> might be a better message to give.
>

I'll just chime in on this: if we make an official 'release', that is, we put out
0.11 and continue development towards 0.12, it is possible that
distributions will package it as an upgrade, no matter how clearly we say
"not ready for production." This happened quite prominently with KDE 4.0:
once it reached 'final release', it got out to users, then suffered a
backlash because it wasn't really ready. Obviously IPython isn't quite as
high profile as KDE, so it may be easier to avoid that sort of debacle. But
there is something to be said for calling things alphas/betas if that's what
they are.

I'm not saying that I think we should or shouldn't make a release. But I can
see the argument against it.

Thomas

From fperez.net at gmail.com  Sun Oct 31 15:08:46 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 31 Oct 2010 12:08:46 -0700
Subject: [IPython-dev] IPython on python3, for the adventurous...
Message-ID: <AANLkTikO4XbNHE2dLCB_+4hJUbg=NZXm0FKXMum-4VT9@mail.gmail.com>

Hi all,

thanks to the great work done by Thomas Kluyver, for those of you
starting to use Numpy (or other projects) on Python 3 and who wanted
to have IPython as your working environment, things are getting off
the ground:

(py3k)amirbar[py3k]> python3 ipython.py
Python 3.1.2 (release31-maint, Sep 17 2010, 20:34:23)
Type "copyright", "credits" or "license" for more information.

IPython 0.11.alpha1.git -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import io

In [2]: import sys

In [3]: sys.ver
sys.version       sys.version_info

In [3]: sys.version
Out[3]: '3.1.2 (release31-maint, Sep 17 2010, 20:34:23) \n[GCC 4.4.5]'

etc...

This is still very much a work in progress, and we haven't made any
official release.  But there's a github repo up and running with a
proper bug tracker, so by all means let us know what works and what
doesn't (and even better, join in with help!):

http://github.com/ipython/ipython-py3k

Our hope is to eventually merge this work into ipython proper at some
point in the future, but since the history of this branch is likely to
be messy (with many merges from trunk, auto-generated code and
possible rebases down the road) we're keeping it in a separate repo
for now.

Cheers,

f


From Fernando.Perez at berkeley.edu  Sun Oct 31 15:21:00 2010
From: Fernando.Perez at berkeley.edu (Fernando Perez)
Date: Sun, 31 Oct 2010 12:21:00 -0700
Subject: [IPython-dev] Work on new Viz.Engine Fos
In-Reply-To: <1288257563.1734.20.camel@dragonfly>
References: <1288257563.1734.20.camel@dragonfly>
Message-ID: <AANLkTinqkmneFS9Wry4rwYmT-F+7J7bagMu9DW3wSDiT@mail.gmail.com>

Hi Guys,

these questions are best asked on the ipython-dev list, so that others
can both benefit from the discussion and contribute to it (often
others will be more knowledgeable than myself on specific topics).
I'm cc'ing the list here with the original question for full context.

There are two (well, three if you count the old 0.10.x threads-based
approach, but I'm ignoring that as we move forward and so should you,
if you value your sanity) models to consider in the 0.11.x development
tree:

- single-process, in-terminal ipython ("classic" ipython, if you
will): the gui handling is done by tying the gui event loop to calls
into PyOS_InputHook.  Relevant files:

http://github.com/ipython/ipython/blob/master/IPython/lib/inputhook.py
 # General support
http://github.com/ipython/ipython/blob/master/IPython/lib/inputhookgtk.py
# gtk specifics
http://github.com/ipython/ipython/blob/master/IPython/lib/inputhookwx.py
 # wx specifics

The details are toolkit-specific, and you'll need to find what the
right steps are for pyglet, but the basic idea is always the same: tie
the toolkit's own event loop to sync with Python's PyOS_InputHook,
which fires every time the terminal is ready to read user input.
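
As a rough sketch of what that might look like for pyglet, modeled on the
ctypes trick inputhook.py uses for the other toolkits; the per-tick pyglet
calls below are my guesses at the minimal work needed, so treat them as
assumptions to be checked against the pyglet docs:

import ctypes

HOOKFUNC = ctypes.PYFUNCTYPE(ctypes.c_int)

def inputhook_pyglet():
    # pump the pyglet event loop once each time the terminal goes idle
    import pyglet
    pyglet.clock.tick()
    for window in list(pyglet.app.windows):
        window.switch_to()
        window.dispatch_events()
        window.dispatch_event('on_draw')
        window.flip()
    return 0

_hook = HOOKFUNC(inputhook_pyglet)    # keep a reference so it isn't collected

def enable_pyglet():
    # point the C-level PyOS_InputHook slot at our callback
    ptr = ctypes.c_void_p.in_dll(ctypes.pythonapi, "PyOS_InputHook")
    ptr.value = ctypes.cast(_hook, ctypes.c_void_p).value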

- multi-process, zeromq-based ipython: in this case there's no
PyOS_InputHook to fire, because there's no terminal waiting to read.
All inputs to the kernel arrive over the network from a different
process.  In this case, a different approach is needed, based on the
same general idea: the toolkit's event loop is wired to operate on an
idle timer of some kind, the specifics being again toolkit-specific.
The details are in this file (see top-level docstring):

http://github.com/ipython/ipython/blob/master/IPython/lib/guisupport.py

and for GTK this file has the particular class:

http://github.com/ipython/ipython/blob/master/IPython/zmq/gui/gtkembed.py

In summary, you'll want to figure out how to sync the pyglet event
loop in both scenarios: at the terminal with PyOS_InputHook and at the
multiprocess ipython with some kind of idle timer.
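
And a correspondingly rough sketch for the two-process case: there is no
terminal to hook, so the toolkit's own timer drives the kernel.  The
kernel.do_one_iteration() name mirrors the pattern the zmq kernel uses
internally, but treat the exact API as an assumption rather than gospel:

import pyglet

def start_kernel_with_pyglet(kernel, poll_interval=0.05):
    # let pyglet own the main loop and service kernel requests between frames
    def poll_kernel(dt):
        kernel.do_one_iteration()    # handle any pending requests from the frontend

    pyglet.clock.schedule_interval(poll_kernel, poll_interval)
    pyglet.app.run()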

Let us know how it goes, and we'll be happy to include the generic
support in ipython proper so that we support pyglet in addition to the
major GUI toolkits.

Cheers,

f

On Thu, Oct 28, 2010 at 02:19, Stephan Gerhard <connectome at unidesign.ch> wrote:
> Hi Fernando,
>
> Hope you are doing fine!
>
> Eleftherios and I (you might remember me from HBM, the ConnectomeViewer
> guy ;) are currently working together on Fos, the new 3d scientific
> visualization engine for neuroimaging based on pyglet.
>
> We came across an issue that might be related to new developments of
> IPython and we wanted to make sure that we are going in the right
> direction. It is also about event loops, where you are more expert than
> both of us.
>
> We would like to be able to interact with the Fos viz window (derived
> from pyglet.window.Window) from IPython without Fos blocking IPython. We
> looked into Sympy which seems to have solved this issue and we are
> trying to understand what they did.
>
> How could we benefit from the two-process model that IPython will
> implement? I read in an email that the event handling of IPython is
> going to change. How would this impact the solution that Sympy came up
> with for this problem?
>
> At some point, we might want to have a fos viz server running, and
> communicate with it over sockets to send geometry and commands. But I am
> not sure how this would fit into this picture. ZeroMQ would probably be
> helpful here. Do you know of any good tutorial?
>
> Is there anything you can think of we should take care of?
>
> Well, many questions, but I hope this is fine for you :)
>
> Cheers,
> Stephan & Eleftherios


From benjaminrk at gmail.com  Fri Oct 22 16:04:24 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 22 Oct 2010 20:04:24 -0000
Subject: [IPython-dev] ZMQ Parallel IPython Performance preview
In-Reply-To: <AANLkTinp-TzJEobf8xZf6npSVmt4O3f3TLickXmTcMpJ@mail.gmail.com>
References: <AANLkTik6Vp77aPwbp42P2UxTeGe4G0oi8V6Ti32zjmrp@mail.gmail.com>
	<AANLkTi=CrceUHbxfPxVui4WZ-BSBkZuYYi25ePH+NvTn@mail.gmail.com>
	<AANLkTinp-TzJEobf8xZf6npSVmt4O3f3TLickXmTcMpJ@mail.gmail.com>
Message-ID: <AANLkTika8y1Neg13JDD1rEjrO2rLL7hqtEeCXr+YaeC4@mail.gmail.com>

Re-run for throughput with data:

submit 16 tasks of a given size, plot against size.
new-style:
def echo(a):
    return a
old-style:
task = StringTask("b=a", push=dict(a=a), pull=['b'])

The input chosen was random numpy arrays (64-bit floats, so len(A) ~= size in
bytes / 8).

Notable points:
* ZMQ submission remains flat, independent of size, due to non-copying sends (see the sketch below)
* size doesn't come into account until ~100kB, and clearly dominates both
after 1MB
    the turning point for Twisted is a little earlier than for ZMQ
* at 4MB, Twisted is submitting < 2 tasks per sec, while ZMQ is submitting
~90
* roundtrip, ZMQ is fairly consistently ~40x faster.

memory usage:
* Peak memory for the engines is 20% higher with ZMQ, because more than one
task can now be waiting in the queue on the engine at a time.
* Peak memory for the Controller including schedulers is 25% less than
Twisted with pure ZMQ, and 20% less with the Python scheduler. Note that all
results still reside in memory, since I haven't implemented the db backend
yet.
* Peak memory for the Python scheduler is approximately the same as the
engines
* Peak memory for the zmq scheduler is about half that.
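
For anyone curious about the non-copying sends mentioned above, a tiny
standalone sketch; the PUSH/PULL pair over inproc is only for illustration,
the interesting bit is copy=False, which hands the array buffer to libzmq
without an intermediate copy:

import numpy as np
import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
pull = ctx.socket(zmq.PULL)
push.bind("inproc://bench")          # inproc: bind must happen before connect
pull.connect("inproc://bench")

a = np.random.random(1000000)        # ~8MB of float64
push.send(a, copy=False)             # zero-copy: libzmq reads the array buffer directly

frame = pull.recv(copy=False)        # zero-copy receive; a zmq frame, not a bytes copy
b = np.frombuffer(frame.buffer, dtype=np.float64)
assert (a == b).all()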

-MinRK

On Fri, Oct 22, 2010 at 09:52, MinRK <benjaminrk at gmail.com> wrote:

> I'll get on the new tests, I already have a bandwidth one written, so I'm
> running it now.  As for Twisted's throughput performance, it's at least
> partly our fault.  Since the receiving is in Python, every time we try to
> send there are incoming results getting in the way.  If we wrote it such
> that sending prevented the receipt of results, I'm sure the Twisted code
> would be faster for large numbers of messages.  With ZMQ, though, we don't
> have to be receiving in Python to get the results to the client process, so
> they arrive in ZMQ and await simple memcpy/deserialization.
>
> -MinRK
>
>
> On Fri, Oct 22, 2010 at 09:27, Brian Granger <ellisonbg at gmail.com> wrote:
>
>> Min,
>>
>> Also, can you get memory consumption numbers for the controller and
>> queues.  I want to see how much worse Twisted is in that respect.
>>
>> Cheers,
>>
>> Brian
>>
>> On Thu, Oct 21, 2010 at 11:53 PM, MinRK <benjaminrk at gmail.com> wrote:
>>
>>> I have my first performance numbers for throughput with the new parallel
>>> code riding on ZeroMQ, and results are fairly promising.  Roundtrip time for
>>> ~512 tiny tasks submitted as fast as they can is ~100x faster than with
>>> Twisted.
>>>
>>> As a throughput test, I submitted a flood of many very small tasks that
>>> should take ~no time:
>>> new-style:
>>> def wait(t=0):
>>>     import time
>>>     time.sleep(t)
>>> submit:
>>> client.apply(wait, args=(t,))
>>>
>>> Twisted:
>>> task = StringTask("import time; time.sleep(%f)"%t)
>>> submit:
>>> client.run(task)
>>>
>>> Flooding the queue with these tasks with t=0, and then waiting for the
>>> results, I tracked two times:
>>> Sent: the time from the first submit until the last submit returns
>>> Roundtrip: the time from the first submit to getting the last result
>>>
>>> Plotting these times vs number of messages, we see some decent numbers:
>>> * The pure ZMQ scheduler is fastest, 10-100 times faster than Twisted
>>> roundtrip
>>> * The Python scheduler is ~3x slower roundtrip than pure ZMQ, but no
>>> penalty to the submission rate
>>> * Twisted performance falls off very quickly as the number of tasks grows
>>> * ZMQ performance is quite flat
>>>
>>> Legend:
>>> zmq: the pure ZMQ Device is used for routing tasks
>>> lru/weighted: the simplest/most complicated routing schemes respectively
>>> in the Python ZMQ Scheduler (which supports dependencies)
>>> twisted: the old IPython.kernel
>>>
>>> [image: roundtrip.png]
>>> [image: sent.png]
>>> Test system:
>>> Core-i7 930, 4x2 cores (ht), 4-engine cluster all over tcp/loopback,
>>> Ubuntu 10.04, Python 2.6.5
>>>
>>> -MinRK
>>> http://github.com/minrk
>>>
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>>
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: roundtrip.png
Type: image/png
Size: 30731 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/2aacc16f/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sent.png
Type: image/png
Size: 31114 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/2aacc16f/attachment-0001.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: echo.png
Type: image/png
Size: 49607 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20101022/2aacc16f/attachment-0002.png>
