From muzgash.lists at gmail.com  Fri Jul  2 09:19:25 2010
From: muzgash.lists at gmail.com (Gerardo Gutierrez)
Date: Fri, 2 Jul 2010 08:19:25 -0500
Subject: [IPython-dev] IPythonQt/ZMQ raw_input support issue.
Message-ID: <AANLkTilaFPSmWwbvqwxpsjqdVvpahFI4vmML5xa45kFz@mail.gmail.com>

Hi everyone.

I've been trying to implement, from the examples in
pyzmq <http://github.com/ellisonbg/pyzmq>, support for raw_input
calls, to be used in the IPythonZMQ and IPythonQt projects.
Nothing good has come from these attempts but a better understanding of what
I don't understand. Here's a guide through my latest thoughts so you can
help me:

First I wanted to add a new type of request to this line (154) in the kernel:

for msg_type in ['execute_request', 'complete_request', 'raw_input_request']:
    self.handlers[msg_type] = getattr(self, msg_type)

Also I need to add a function:

def raw_input_request(self, ident, parent):
    print >>sys.__stdout__, "entered"

just to check whether the message thread gets there (which it doesn't).
The class RawInput is as it always was.
And obviously the overriding of the raw_input function in main():

rawinput = RawInput(session, pub_socket)
__builtin__.raw_input = rawinput

Now for the frontend part, I also need a msg_type:

for msg_type in ['pyin', 'pyout', 'pyerr', 'stream', 'raw_input']:
    self.handlers[msg_type] = getattr(self, 'handle_%s' % msg_type)

And a handler for raw_input-type messages:

def handle_raw_input(self, omsg):
    stdin_msg = sys.stdin.readline()
    src = stdin_msg
    self.session.send(self.request_socket, 'raw_input_request',
                      dict(code=src))

As you can see this is just to send the raw_input request with the line
written by the user.
The error is this:

<class 'zmq._zmq.ZMQError'> : Operation not supported
Traceback (most recent call last):
File "./kernel.py", line 194, in execute_request
exec comp_code in self.user_ns, self.user_ns
File "<zmq-kernel>", line 1, in <module>
File "./kernel.py", line 126, in __call__
reply=self.socket.recv_json()
File "_zmq.pyx", line 906, in zmq._zmq.Socket.recv_json (zmq/_zmq.c:6862)
File "_zmq.pyx", line 751, in zmq._zmq.Socket.recv (zmq/_zmq.c:5316)
File "_zmq.pyx", line 781, in zmq._zmq.Socket._recv_copy (zmq/_zmq.c:5690)
ZMQError: Operation not supported

It says that the error is in "exec comp_code in self.user_ns, self.user_ns",
which is in the execute_request function:

{u'content': {u'code': u'raw_input()'},
u'header': {u'username': u'muzgash', u'msg_id': 0, u'session':
u'264d21d4-7e00-4b7e-b051-3c0ba7b221f6'},
u'msg_type': u'execute_request',
u'parent_header': {}}

{'content': {u'status': u'error', u'etype': u"<class 'zmq._zmq.ZMQError'>",
u'evalue': u'Operation n..........

so I can say that the error occurs on the first message sent by the frontend
to the kernel, but the raw_input function is called (RawInput.__call__()) and
a message is also sent to the frontend through:

msg = self.session.msg(u'raw_input')
self.socket.send_json(msg)

and the function handle_raw_input is called, which sends a new message to
the kernel:

{u'content': {u'code': u'input-->\n'},
u'header': {u'username': u'muzgash', u'msg_id': 1, u'session':
u'264d21d4-7e00-4b7e-b051-3c0ba7b221f6'},
u'msg_type': u'raw_input_request',
u'parent_header': {}}

so this line in the class RawInput should receive it:

while True:
      try:
          reply = self.socket.recv_json(zmq.NOBLOCK)

But it doesn't.
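
(A guess at the cause, with a minimal sketch: RawInput above is constructed
with the pub_socket, and 0MQ PUB sockets are send-only, so any recv on one
raises exactly this "Operation not supported" ZMQError:

import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5557")
try:
    pub.recv()  # PUB sockets cannot receive
except zmq.ZMQError, e:
    print e     # Operation not supported

If that is what is happening here, RawInput would need a socket type that
can receive in order to get its reply.)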

That's one thing.
Another one is that for this to work well in the Qt frontend I need to fix
pyout (keeping, of course, multiline input), which I have no clue how to do.
I could write a pretty crude fix with Qt and without requesting the kernel
twice, but no _NN call will work and it will have to be rewritten when these
problems are solved.
So I think for now I'll move on to the next point in the schedule 'till some
ideas pop up.


thanks in advance.



Best regards.
--
Gerardo Gutiérrez Gutiérrez <http://he1.udea.edu.co/gweb>
Physics student
Universidad de Antioquia
Computational physics and astrophysics group
(FACom<http://urania.udea.edu.co/sites/sites.php>
)
Computational science and development
branch(FACom-dev<http://urania.udea.edu.co/sites/facom-dev/>
)
Linux user #492295

From ben.v.root at gmail.com  Fri Jul  2 11:41:14 2010
From: ben.v.root at gmail.com (Benjamin Root)
Date: Fri, 2 Jul 2010 10:41:14 -0500
Subject: [IPython-dev] cProfile and iPython
Message-ID: <AANLkTikjh6SLTwugql9Dkb2RrMNjjaweuZ319l4yKBYP@mail.gmail.com>

Hello,

I have found an odd bug when I used cProfile in an iPython shell.  It seems
to not load the same environment as the shell.  The following is a very
simple example:

import cProfile
import math

x = 25
cProfile.run("y = math.sqrt(x)")

This throws an exception "NameError: name 'math' is not defined"

Similar problems occur if I define a function "foo" and call that in run.  I
should also note that using "run -p" works just fine for a cProfile-less
version of the above script.  I am using a stock install of python 2.6.5 and
ipython 0.10 on Ubuntu 10.04.
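
A possible workaround (a sketch, assuming the namespace lookup is the
culprit) is cProfile.runctx, which takes explicit globals/locals instead of
executing in __main__'s namespace:

import cProfile
import math

x = 25
# hand cProfile the calling namespaces rather than letting it exec the
# statement in __main__
cProfile.runctx("y = math.sqrt(x)", globals(), locals())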

Ben Root

From fperez.net at gmail.com  Fri Jul  2 12:09:25 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 2 Jul 2010 11:09:25 -0500
Subject: [IPython-dev] cProfile and iPython
In-Reply-To: <AANLkTikjh6SLTwugql9Dkb2RrMNjjaweuZ319l4yKBYP@mail.gmail.com>
References: <AANLkTikjh6SLTwugql9Dkb2RrMNjjaweuZ319l4yKBYP@mail.gmail.com>
Message-ID: <AANLkTinzUcUsRPN52wJOjlwrIQyFAqlGAnS1cDaAQrRI@mail.gmail.com>

Hi Benjamin,

On Fri, Jul 2, 2010 at 10:41 AM, Benjamin Root <ben.v.root at gmail.com> wrote:
> I have found an odd bug when I used cProfile in an iPython shell.  It seems
> to not load the same environment as the shell.  The following is a very
> simple example:

Thanks for the report! I just made a ticket for it:

http://github.com/ipython/ipython/issues/issue/131

Your example completely reproduces the problem, many thanks.  I hope
we can get a fix for it soon, though if you find any solutions by all
means send them our way.

Cheers,

f


From fperez.net at gmail.com  Sun Jul  4 01:17:04 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 3 Jul 2010 22:17:04 -0700
Subject: [IPython-dev] IPythonQt/ZMQ raw_input support issue.
In-Reply-To: <AANLkTilaFPSmWwbvqwxpsjqdVvpahFI4vmML5xa45kFz@mail.gmail.com>
References: <AANLkTilaFPSmWwbvqwxpsjqdVvpahFI4vmML5xa45kFz@mail.gmail.com>
Message-ID: <AANLkTikQlr_yY-K1LB8-wO1wL3jGhI4VwRjyq6NvAuzD@mail.gmail.com>

Hi all,

On Fri, Jul 2, 2010 at 6:19 AM, Gerardo Gutierrez
<muzgash.lists at gmail.com> wrote:
>
> I've been trying to implement, from the examples in pyzmq, support for
> raw_input calls, to be used in the IPythonZMQ and IPythonQt projects.
> Nothing good has come from these attempts but a better understanding of what
> I don't understand. Here's a guide through my latest thoughts so you can
> help me:

Just to let you know that I phoned Gerardo today and we went over the
details of this, so the (otherwise fairly urgent) question is answered
for now.

I'll be offline for 3 days, and will then have some more time to get
back on track with the list (the last few days were ipython-intensive,
but at the scipy sprints; I'm trying to write up a summary report of
our activities there before I crash tonight).

Cheers,

f


From fperez.net at gmail.com  Sun Jul  4 01:36:47 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 3 Jul 2010 22:36:47 -0700
Subject: [IPython-dev] ipython + pydev
In-Reply-To: <AANLkTikRYUZPmM9z7oWU7qdsnazDTzUgf953NWL8HyPZ@mail.gmail.com>
References: <AANLkTikRYUZPmM9z7oWU7qdsnazDTzUgf953NWL8HyPZ@mail.gmail.com>
Message-ID: <AANLkTinfCJ_UaGpErtBSULTlBdWzWOPkTtNSC2tp-hei@mail.gmail.com>

Hey Satra,

On Tue, Jun 29, 2010 at 10:18 PM, Satrajit Ghosh <satra at mit.edu> wrote:
> hi,
>
> just wanted to check if anybody knows a way of getting an ipython console to
> work with pydev.

No clue, sorry.  I don't use Eclipse so I have no idea...

Cheers,

f


From fperez.net at gmail.com  Sun Jul  4 13:14:32 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 4 Jul 2010 10:14:32 -0700
Subject: [IPython-dev] IPython sprint summary (not)
Message-ID: <AANLkTinhceRY0_UuFrH3JmLEaPy026LgdKKkeuk140gM@mail.gmail.com>

Hi all,

our sprinting work at scipy turned out to be a lot bigger and more
productive than I had imagined.  I wanted to write up a good summary
of the design changes and contributions but ran out of time last night,
and I'm headed offline for 3 days now.  If anyone who was present
could write something up, it would be fantastic and would help
those following trunk and the recent commit activity understand what's
going on.  Otherwise I'll do it around Thursday when I'm back.

Cheers,

f


From dwf at cs.toronto.edu  Thu Jul  8 19:52:24 2010
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 8 Jul 2010 19:52:24 -0400
Subject: [IPython-dev] debugger.py refactoring
Message-ID: <BFCF3C40-5935-44BA-ABE1-2B2F20E99D87@cs.toronto.edu>

Hey folks,

I was just wondering (I didn't see a roadmap anywhere but then again I didn't look very hard) if a refactoring was planned for IPython/core/debugger.py, in particular to make it more extensible to third party tools. I just hacked in support for Andreas Kloeckner's pudb ( http://pypi.python.org/pypi/pudb ) but it wasn't pretty in the least. I guess some sort of 'debugger registry' would make sense, that a user could call into from their ipy_user_conf.py in order to hook up their favourite debugger's post-mortem mode?
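
To make the registry idea concrete, usage from ipy_user_conf.py might look
something like this (every name below is hypothetical; no such API exists in
IPython today):

import sys

# hypothetical registry API, only to sketch the hook being proposed
from IPython import debugger_registry

def pudb_post_mortem():
    import pudb
    pudb.post_mortem((sys.last_type, sys.last_value, sys.last_traceback))

debugger_registry.register('pudb', pudb_post_mortem)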

This is all just fanciful thinking aloud, but if no one's planning on doing anything to debugger.py in the near future I might give it a try when I get back into town next week.

David

From dwf at cs.toronto.edu  Thu Jul  8 20:27:04 2010
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 8 Jul 2010 20:27:04 -0400
Subject: [IPython-dev] debugger.py refactoring
In-Reply-To: <BFCF3C40-5935-44BA-ABE1-2B2F20E99D87@cs.toronto.edu>
References: <BFCF3C40-5935-44BA-ABE1-2B2F20E99D87@cs.toronto.edu>
Message-ID: <65810004-4086-409F-9782-8B8C071F0882@cs.toronto.edu>

On 2010-07-08, at 7:52 PM, David Warde-Farley wrote:

> Hey folks,
> 
> I was just wondering (I didn't see a roadmap anywhere but then again I didn't look very hard) if a refactoring was planned for IPython/core/debugger.py, in particular to make it more extensible to third party tools.

Oops, debugger.py certainly isn't the place for this, and it isn't where I put it either; it went into iplib.py instead.

For the interested, here's my monkey-patch job:

        # use pydb if available
        if Debugger.has_pydb:
            from pydb import pm
        else:
            # try and use pudb
            try:
                import pudb
                pudb.post_mortem((sys.last_type,
                                  sys.last_value,
                                  sys.last_traceback))
                return
            except ImportError:
                pass
            # fallback to our internal debugger
            pm = lambda : self.InteractiveTB.debugger(force=True)
        self.history_saving_wrapper(pm)()

David

From benjaminrk at gmail.com  Fri Jul  9 18:35:27 2010
From: benjaminrk at gmail.com (MinRK)
Date: Fri, 9 Jul 2010 15:35:27 -0700
Subject: [IPython-dev] Heartbeat Device
Message-ID: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com>

Brian,

Have you worked on the Heartbeat Device? Does that need to go in 0MQ itself,
or can it be part of pyzmq?

I'm trying to work out how to really tell that an engine is down.

Is the heartbeat to be in a separate process?

Are we guaranteed that a zmq thread is responsive no matter what an engine
process is doing? If that's the case, is a moderate timeout on recv adequate
to determine engine failure?

If zmq threads are guaranteed to be responsive, it seems like a simple pair
socket might be good enough, rather than needing a new device. Or even
through the registration XREP socket.

Can we formalize exactly what the heartbeat needs to be?

-MinRK

From songofacandy at gmail.com  Sat Jul 10 05:18:16 2010
From: songofacandy at gmail.com (INADA Naoki)
Date: Sat, 10 Jul 2010 18:18:16 +0900
Subject: [IPython-dev] Porting to Python3
Message-ID: <AANLkTikTcvgW8HlfcjkUqwvp79S4-EQaDoA88A6LyyFW@mail.gmail.com>

Hi, all.

Today a Python hack-a-thon was held in Japan.
I've ported IPython to Python 3 there.
Some features work now.

http://github.com/methane/ipython

-- 
INADA Naoki <songofacandy at gmail.com>


From ellisonbg at gmail.com  Mon Jul 12 12:15:01 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 12 Jul 2010 09:15:01 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com>
Message-ID: <AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com>

On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com> wrote:
> Brian,
> Have you worked on the Heartbeat Device? Does that need to go in 0MQ itself,

I have not.  Ideally it could go into 0MQ itself.  But, in principle,
we could do it in pyzmq.  We just have to write a nogil pure C
function that uses the low-level C API to do the heartbeat.  Then we
can just run that function in a thread with a "with nogil" block.
Shouldn't be too bad, given how simple the heartbeat logic is.  The
main thing we will have to think about is how to start/stop the
heartbeat in a clean way.

> or can it be part of pyzmq?
> I'm trying to work out how to really tell that an engine is down.
> Is the heartbeat to be in a separate process?

No, just a separate C/C++ thread that doesn't hold the GIL.

> Are we guaranteed that a zmq thread is responsive no matter what an engine
> process is doing? If that's the case, is a moderate timeout on recv adequate
> to determine engine failure?

Yes, I think we can assume this.  The only thing that would take the
0mq thread down is something semi-fatal like a signal that doesn't get
handled.  But as long as the 0MQ thread doesn't have any bugs, it
should simply keep running no matter what the other thread does (OK,
other than segfaulting)

> If zmq threads are guaranteed to be responsive, it seems like a simple pair
> socket might be good enough, rather than needing a new device. Or even
> through the registration XREP socket.

That (registration XREP socket) won't work unless we want to write all
that logic in C.
I don't know about a PAIR socket because of the need for multiple clients?

> Can we formalize exactly what the heartbeat needs to be?

OK, let's think.  The engine needs to connect, the controller bind.
It would be nice if the controller didn't need a separate heartbeat
socket for each engine, but I guess we need the ability to track which
specific engine is heartbeating.  Also, there is the question of whether
we want to do a request/reply or pub/sub style heartbeat.  What do you
think?

Brian


> -MinRK



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From benjaminrk at gmail.com  Mon Jul 12 15:51:47 2010
From: benjaminrk at gmail.com (MinRK)
Date: Mon, 12 Jul 2010 12:51:47 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com> 
	<AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com>
Message-ID: <AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com>

On Mon, Jul 12, 2010 at 09:15, Brian Granger <ellisonbg at gmail.com> wrote:

> On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com> wrote:
> > Brian,
> > Have you worked on the Heartbeat Device? Does that need to go in 0MQ
> itself,
>
> I have not.  Ideally it could go into 0MQ itself.  But, in principle,
> we could do it in pyzmq.  We just have to write a nogil pure C
> function that uses the low-level C API to do the heartbeat.  Then we
> can just run that function in a thread with a "with nogil" block.
> Shouldn't be too bad, given how simple the heartbeat logic is.  The
> main thing we will have to think about is how to start/stop the
> heartbeat in a clean way.
>
> > or can it be part of pyzmq?
> > I'm trying to work out how to really tell that an engine is down.
> > Is the heartbeat to be in a separate process?
>
> No, just a separate C/C++ thread that doesn't hold the GIL.
>
> > Are we guaranteed that a zmq thread is responsive no matter what an
> engine
> > process is doing? If that's the case, is a moderate timeout on recv
> adequate
> > to determine engine failure?
>
> Yes, I think we can assume this.  The only thing that would take the
> 0mq thread down is something semi-fatal like a signal that doesn't get
> handled.  But as long as the 0MQ thread doesn't have any bugs, it
> should simply keep running no matter what the other thread does (OK,
> other than segfaulting)
>
> > If zmq threads are guaranteed to be responsive, it seems like a simple
> pair
> > socket might be good enough, rather than needing a new device. Or even
> > through the registration XREP socket.
>
> That (registration XREP socket) won't work unless we want to write all
> that logic in C.
> I don't know about a PAIR socket because of the need for multiple clients?
>
I wasn't thinking of a single PAIR socket, but rather a pair for each
engine. We already have a pair for each engine for the queue, but I am not
quite seeing the need for a special device beyond a PAIR socket in the
heartbeat.


>
> > Can we formalize exactly what the heartbeat needs to be?
>
> OK, let's think.  The engine needs to connect, the controller bind.
> It would be nice if the controller didn't need a separate heartbeat
> socket for each engine, but I guess we need the ability to track which
> specific engine is heartbeating.  Also, there is the question of whether
> we want to do a request/reply or pub/sub style heartbeat.  What do you
> think?
>
The way we talked about it, the heartbeat needs to issue commands both ways.
While it is used for checking whether an engine remains alive, it is also
the avenue for aborting jobs.  If we do have a strict heartbeat, then I
think PUB/SUB is a good choice.

However, if heartbeat is all it does, then we need a _third_ connection to
each engine for control commands. Since messages cannot jump the queue, the
engine queue PAIR socket cannot be used for commands, and a PUB/SUB model
for heartbeat can _either_ receive commands _or_ have results.

control commands:
beat (check alive)
abort (remove a task from the queue)
signal (SIGINT, etc.)
exit (engine.kill)
reset (clear queue, namespace)

more?

It's possible that we could implement these with a PUB on the controller and
a SUB on each engine, only interpreting results received via the queue's
PAIR socket. But then every command would be sent to every engine, even
though many would only be meant for one (too inefficient/costly?). It would
however make the actual heartbeat command very simple as a single send.
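
For illustration, that broadcast pattern might look like the following
(addresses and the topic scheme are made up): the controller PUBs each
command prefixed with a target, each engine's SUB prefix subscriptions do
the filtering, and a heartbeat really is a single send:

import zmq

ctx = zmq.Context()

# controller side
cmd_pub = ctx.socket(zmq.PUB)
cmd_pub.bind("tcp://127.0.0.1:5558")
cmd_pub.send("engine.3 abort")  # only engine 3's subscription matches
cmd_pub.send("all beat")        # one send reaches every engine

# engine 3 side
cmd_sub = ctx.socket(zmq.SUB)
cmd_sub.connect("tcp://127.0.0.1:5558")
cmd_sub.setsockopt(zmq.SUBSCRIBE, "engine.3")
cmd_sub.setsockopt(zmq.SUBSCRIBE, "all")
msg = cmd_sub.recv()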

It does not allow for the engine to initiate queries of the controller, for
instance a work stealing implementation. Again, it is possible that this
could be implemented via the job queue PAIR socket, but that would only
allow for stealing when completely starved for work, since the job queue and
communication queue would be the same.

There's also the issue of task dependency.

If we are to implement dependency checking as we discussed (depend on
taskIDs, and only execute once the task has been completed), the engine
needs to be able to query the controller about the tasks depended upon. This
makes the controller being the PUB side unworkable.

This says to me that we need two-way connections between the engines and the
controller. That can either be implemented as multiple connections (PUB/SUB
+ PAIR or REQ/REP), or simply a PAIR socket for each engine could provide
the whole heartbeat/command channel.

-MinRK


>
> Brian
>
>
> > -MinRK
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>

From benjaminrk at gmail.com  Mon Jul 12 19:10:55 2010
From: benjaminrk at gmail.com (MinRK)
Date: Mon, 12 Jul 2010 16:10:55 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com> 
	<AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com> 
	<AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com>
Message-ID: <AANLkTilcPFcNjJHDo_o-0Ui9o_iduyKFvLg1SdNBtgWI@mail.gmail.com>

I've been thinking about this, and it seems like we can't have a responsive
rich control connection unless it is in another process, like the old
IPython daemon.  Pure heartbeat is easy with a C device, and we may not even
need a new one. For instance, I added support for the builtin devices of
zeromq to pyzmq with a few lines, and you can have simple is_alive style
heartbeat with a FORWARDER device.

I pushed a basic example of this (examples/heartbeat) to my pyzmq fork.

Running a ~3 second numpy.dot action, the heartbeat pings remain responsive
at <1ms.
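
In shape it is something like the following (a rough sketch, not the actual
examples/heartbeat code; the addresses are placeholders):

import threading
import zmq

def heart(ctx, ping_url, pong_url):
    # the SUB socket receives every ping the controller PUBlishes
    insock = ctx.socket(zmq.SUB)
    insock.connect(ping_url)
    insock.setsockopt(zmq.SUBSCRIBE, "")
    # the PAIR socket carries the same bytes straight back as the pong
    outsock = ctx.socket(zmq.PAIR)
    outsock.connect(pong_url)
    # FORWARDER just shovels messages from insock to outsock
    zmq.device(zmq.FORWARDER, insock, outsock)

ctx = zmq.Context()
t = threading.Thread(target=heart,
                     args=(ctx, "tcp://127.0.0.1:5555", "tcp://127.0.0.1:5556"))
t.setDaemon(True)
t.start()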

-MinRK


On Mon, Jul 12, 2010 at 12:51, MinRK <benjaminrk at gmail.com> wrote:

>
>
> On Mon, Jul 12, 2010 at 09:15, Brian Granger <ellisonbg at gmail.com> wrote:
>
>> On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com> wrote:
>> > Brian,
>> > Have you worked on the Heartbeat Device? Does that need to go in 0MQ
>> itself,
>>
>> I have not.  Ideally it could go into 0MQ itself.  But, in principle,
>> we could do it in pyzmq.  We just have to write a nogil pure C
>> function that uses the low-level C API to do the heartbeat.  Then we
>> can just run that function in a thread with a "with nogil" block.
>> Shouldn't be too bad, given how simple the heartbeat logic is.  The
>> main thing we will have to think about is how to start/stop the
>> heartbeat in a clean way.
>>
>> > or can it be part of pyzmq?
>> > I'm trying to work out how to really tell that an engine is down.
>> > Is the heartbeat to be in a separate process?
>>
>> No, just a separate C/C++ thread that doesn't hold the GIL.
>>
>> > Are we guaranteed that a zmq thread is responsive no matter what an
>> engine
>> > process is doing? If that's the case, is a moderate timeout on recv
>> adequate
>> > to determine engine failure?
>>
>> Yes, I think we can assume this.  The only thing that would take the
>> 0mq thread down is something semi-fatal like a signal that doesn't get
>> handled.  But as long as the 0MQ thread doesn't have any bugs, it
>> should simply keep running no matter what the other thread does (OK,
>> other than segfaulting)
>>
>> > If zmq threads are guaranteed to be responsive, it seems like a simple
>> pair
>> > socket might be good enough, rather than needing a new device. Or even
>> > through the registration XREP socket.
>>
>> That (registration XREP socket) won't work unless we want to write all
>> that logic in C.
>> I don't know about a PAIR socket because of the need for multiple clients?
>>
> I wasn't thinking of a single PAIR socket, but rather a pair for each
> engine. We already have a pair for each engine for the queue, but I am not
> quite seeing the need for a special device beyond a PAIR socket in the
> heartbeat.
>
>
>>
>> > Can we formalize exactly what the heartbeat needs to be?
>>
>> OK, let's think.  The engine needs to connect, the controller bind.
>> It would be nice if the controller didn't need a separate heartbeat
>> socket for each engine, but I guess we need the ability to track which
>> specific engine is heartbeating.  Also, there is the question of whether
>> we want to do a request/reply or pub/sub style heartbeat.  What do you
>> think?
>>
> The way we talked about it, the heartbeat needs to issue commands both
> ways. While it is used for checking whether an engine remains alive, it is
> also the avenue for aborting jobs.  If we do have a strict heartbeat, then I
> think PUB/SUB is a good choice.
>
> However, if heartbeat is all it does, then we need a _third_ connection to
> each engine for control commands. Since messages cannot jump the queue, the
> engine queue PAIR socket cannot be used for commands, and a PUB/SUB model
> for heartbeat can _either_ receive commands _or_ have results.
>
> control commands:
> beat (check alive)
> abort (remove a task from the queue)
> signal (SIGINT, etc.)
> exit (engine.kill)
> reset (clear queue, namespace)
>
> more?
>
> It's possible that we could implement these with a PUB on the controller
> and a SUB on each engine, only interpreting results received via the queue's
> PAIR socket. But then every command would be sent to every engine, even
> though many would only be meant for one (too inefficient/costly?). It would
> however make the actual heartbeat command very simple as a single send.
>
> It does not allow for the engine to initiate queries of the controller, for
> instance a work stealing implementation. Again, it is possible that this
> could be implemented via the job queue PAIR socket, but that would only
> allow for stealing when completely starved for work, since the job queue and
> communication queue would be the same.
>
> There's also the issue of task dependency.
>
> If we are to implement dependency checking as we discussed (depend on
> taskIDs, and only execute once the task has been completed), the engine
> needs to be able to query the controller about the tasks depended upon. This
> makes the controller being the PUB side unworkable.
>
> This says to me that we need two-way connections between the engines and
> the controller. That can either be implemented as multiple connections
> (PUB/SUB + PAIR or REQ/REP), or simply a PAIR socket for each engine could
> provide the whole heartbeat/command channel.
>
> -MinRK
>
>
>>
>> Brian
>>
>>
>> > -MinRK
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>>
>
>

From vano at mail.mipt.ru  Mon Jul 12 21:26:12 2010
From: vano at mail.mipt.ru (vano)
Date: Tue, 13 Jul 2010 05:26:12 +0400
Subject: [IPython-dev] %run -d is broken in Python 2.7
Message-ID: <1213230248.20100713052612@mail.mipt.ru>

As the subject says: on attempting to run a script under the (colored :-))
debugger, the following appears (excerpt below; the full message is attached):

In [1]: %run -e -d setup.py build
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)

C:\Ivan\Cython-0.12.1\<ipython console> in <module>()
D:\py\lib\site-packages\ipython-0.10-py2.7.egg\IPython\iplib.pyc in ipmagic(self, arg_s)
-> 1182             return fn(magic_args)

D:\py\lib\site-packages\ipython-0.10-py2.7.egg\IPython\Magic.pyc in magic_run(self, parameter_s, runner, file_finder)
-> 1633                     checkline = deb.checkline(filename,bp)

D:\py\lib\pdb.py in checkline(self, filename, lineno)
--> 470         line = linecache.getline(filename, lineno, self.curframe.f_globals)

AttributeError: Pdb instance has no attribute 'curframe'
> d:\py\lib\pdb.py(470)checkline()
--> 470         line = linecache.getline(filename, lineno, self.curframe.f_globals)


---------------------------------------------------------------------------
After thorough investigation, it turned out to be a pdb issue (details are
at the link), so I filed a bug there (http://bugs.python.org/issue9230) as
well as a bugfix.

If any of you have write access to the Python source, you can help me get
it fixed quickly.
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: backtrace.txt
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100713/d0fbc776/attachment.txt>

From ellisonbg at gmail.com  Mon Jul 12 23:26:04 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 12 Jul 2010 20:26:04 -0700
Subject: [IPython-dev] Fwd: [zeromq-dev] Authentication on "topic"
In-Reply-To: <AANLkTilLxuJLxLGthH3Lk3nBL8e_23lGQPByTKGqUS2J@mail.gmail.com>
References: <3127C7C2-4A7D-4DF7-8A62-42BFE6F12E0C@quant-edge.com>
	<AANLkTilLxuJLxLGthH3Lk3nBL8e_23lGQPByTKGqUS2J@mail.gmail.com>
Message-ID: <AANLkTimqnk307_cLtouSnXymAEOMNQFAq8ZvU9pnlh-o@mail.gmail.com>

Just saw this on the 0MQ list about authentication and 0MQ.

Cheers,

Brian


---------- Forwarded message ----------
From: Pieter Hintjens <ph at imatix.com>
Date: Mon, Jul 12, 2010 at 11:37 AM
Subject: Re: [zeromq-dev] Authentication on "topic"
To: 0MQ development list <zeromq-dev at lists.zeromq.org>


Hi Viet,

There is no plan to add authentication to ZeroMQ core.  However we are
developing a data plant layer above ZeroMQ, which will do secure
distribution over multicast as well as TCP.  It will use
request-response to do key distribution, and then clients will use
those keys to unlock streams of data.

The data plant layer will provide a stream-based pubsub fabric with
tools such as fork, clone, arbitrate, failover, delay, log, etc.  It
will eventually connect to feed handlers to provide a ticket plant.

This new product will be open source but we're developing it off-line
initially, i.e. with a closed community of participants.  If you are
interested in getting access to it early, drop me a line.

Regards
-
Pieter Hintjens
iMatix


On Mon, Jul 12, 2010 at 7:41 PM, Viet Hoang, Quant Edge
<viet.hoang at quant-edge.com> wrote:
> Hi,
> We are evaluating ZeroMQ to replace our existing client/server architecture.
> Our requirements are:
> 1. Clients login into the servers farm
> 2. Each client will have its own topic
> 3. Many traders/risk managers will subscribe to client topics to monitor
> trading activities
> 3. Client sends an order to Order gateway, responses & status will be
> published back to the clients and trader/risk manager screen
> The initial feedback is excellent, with its load balancing and
> publish/subscribe features, ZeroMQ simply fits our requirements.  We need
> some sort of authentication mode for the publish/subscribe feature, so that
> unauthorized people cannot siphon on "topics", but I could not find it
> anywhere in the code? Do you guys have any plan to add the feature soon?
> Cheers,
> Viet
>
>
> _______________________________________________
> zeromq-dev mailing list
> zeromq-dev at lists.zeromq.org
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
_______________________________________________
zeromq-dev mailing list
zeromq-dev at lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Jul 12 23:43:29 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 12 Jul 2010 20:43:29 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTilcPFcNjJHDo_o-0Ui9o_iduyKFvLg1SdNBtgWI@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com>
	<AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com>
	<AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com>
	<AANLkTilcPFcNjJHDo_o-0Ui9o_iduyKFvLg1SdNBtgWI@mail.gmail.com>
Message-ID: <AANLkTikWHsr0Srz0F8kVDQx8nCaslKI6SV7IWOfEPRGQ@mail.gmail.com>

Min,

On Mon, Jul 12, 2010 at 4:10 PM, MinRK <benjaminrk at gmail.com> wrote:
> I've been thinking about this, and it seems like we can't have a responsive
> rich control connection unless it is in another process, like the old
> IPython daemon.

I am not quite sure I follow what you mean by this.  Can you elaborate?

> Pure heartbeat is easy with a C device, and we may not even
> need a new one. For instance, I added support for the builtin devices of
> zeromq to pyzmq with a few lines, and you can have simple is_alive style
> heartbeat with a FORWARDER device.

I looked at this and it looks very nice.  I think for basic is_alive
type heartbeats this will work fine.  The only thing to be careful of
is that 0MQ sockets are not thread safe.  Thus, it would be best to
actually create the socket in the thread as well.  But we do want the
flexibility to be able to pass in sockets to the device.  We will have
to think about that issue.

> I pushed a basic example of this (examples/heartbeat) to my pyzmq fork.
> Running a ~3 second numpy.dot action, the heartbeat pings remain responsive
> at <1ms.

This is great!

Cheers,

Brian
> -MinRK
>
> On Mon, Jul 12, 2010 at 12:51, MinRK <benjaminrk at gmail.com> wrote:
>>
>>
>> On Mon, Jul 12, 2010 at 09:15, Brian Granger <ellisonbg at gmail.com> wrote:
>>>
>>> On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com> wrote:
>>> > Brian,
>>> > Have you worked on the Heartbeat Device? Does that need to go in 0MQ
>>> > itself,
>>>
>>> I have not.  Ideally it could go into 0MQ itself.  But, in principle,
>>> we could do it in pyzmq.  We just have to write a nogil pure C
>>> function that uses the low-level C API to do the heartbeat.  Then we
>>> can just run that function in a thread with a "with nogil" block.
>>> Shouldn't be too bad, given how simple the heartbeat logic is.  The
>>> main thing we will have to think about is how to start/stop the
>>> heartbeat in a clean way.
>>>
>>> > or can it be part of pyzmq?
>>> > I'm trying to work out how to really tell that an engine is down.
>>> > Is the heartbeat to be in a separate process?
>>>
>>> No, just a separate C/C++ thread that doesn't hold the GIL.
>>>
>>> > Are we guaranteed that a zmq thread is responsive no matter what an
>>> > engine
>>> > process is doing? If that's the case, is a moderate timeout on recv
>>> > adequate
>>> > to determine engine failure?
>>>
>>> Yes, I think we can assume this.  The only thing that would take the
>>> 0mq thread down is something semi-fatal like a signal that doesn't get
>>> handled.  But as long as the 0MQ thread doesn't have any bugs, it
>>> should simply keep running no matter what the other thread does (OK,
>>> other than segfaulting)
>>>
>>> > If zmq threads are guaranteed to be responsive, it seems like a simple
>>> > pair
>>> > socket might be good enough, rather than needing a new device. Or even
>>> > through the registration XREP socket.
>>>
>>> That (registration XREP socket) won't work unless we want to write all
>>> that logic in C.
>>> I don't know about a PAIR socket because of the need for multiple
>>> clients?
>>
>> I wasn't thinking of a single PAIR socket, but rather a pair for each
>> engine. We already have a pair for each engine for the queue, but I am not
>> quite seeing the need for a special device beyond a PAIR socket in the
>> heartbeat.
>>
>>>
>>> > Can we formalize exactly what the heartbeat needs to be?
>>>
>>> OK, let's think.  The engine needs to connect, the controller bind.
>>> It would be nice if the controller didn't need a separate heartbeat
>>> socket for each engine, but I guess we need the ability to track which
>>> specific engine is heartbeating.  Also, there is the question of whether
>>> we want to do a request/reply or pub/sub style heartbeat.  What do you
>>> think?
>>
>> The way we talked about it, the heartbeat needs to issue commands both
>> ways. While it is used for checking whether an engine remains alive, it is
>> also the avenue for aborting jobs.  If we do have a strict heartbeat, then I
>> think PUB/SUB is a good choice.
>> However, if heartbeat is all it does, then we need a _third_ connection to
>> each engine for control commands. Since messages cannot jump the queue, the
>> engine queue PAIR socket cannot be used for commands, and a PUB/SUB model
>> for heartbeat can _either_ receive commands _or_ have results.
>> control commands:
>> beat (check alive)
>> abort (remove a task from the queue)
>> signal (SIGINT, etc.)
>> exit (engine.kill)
>> reset (clear queue, namespace)
>> more?
>> It's possible that we could implement these with a PUB on the controller
>> and a SUB on each engine, only interpreting results received via the queue's
>> PAIR socket. But then every command would be sent to every engine, even
>> though many would only be meant for one (too inefficient/costly?). It would
>> however make the actual heartbeat command very simple as a single send.
>> It does not allow for the engine to initiate queries of the controller,
>> for instance a work stealing implementation. Again, it is possible that this
>> could be implemented via the job queue PAIR socket, but that would only
>> allow for stealing when completely starved for work, since the job queue and
>> communication queue would be the same.
>> There's also the issue of task dependency.
>> If we are to implement dependency checking as we discussed (depend on
>> taskIDs, and only execute once the task has been completed), the engine
>> needs to be able to query the controller about the tasks depended upon. This
>> makes the controller being the PUB side unworkable.
>> This says to me that we need two-way connections between the engines and
>> the controller. That can either be implemented as multiple connections
>> (PUB/SUB + PAIR or REQ/REP), or simply a PAIR socket for each engine could
>> provide the whole heartbeat/command channel.
>> -MinRK
>>
>>>
>>> Brian
>>>
>>>
>>> > -MinRK
>>>
>>>
>>>
>>> --
>>> Brian E. Granger, Ph.D.
>>> Assistant Professor of Physics
>>> Cal Poly State University, San Luis Obispo
>>> bgranger at calpoly.edu
>>> ellisonbg at gmail.com
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From benjaminrk at gmail.com  Tue Jul 13 00:49:01 2010
From: benjaminrk at gmail.com (MinRK)
Date: Mon, 12 Jul 2010 21:49:01 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTikWHsr0Srz0F8kVDQx8nCaslKI6SV7IWOfEPRGQ@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com> 
	<AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com> 
	<AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com> 
	<AANLkTilcPFcNjJHDo_o-0Ui9o_iduyKFvLg1SdNBtgWI@mail.gmail.com> 
	<AANLkTikWHsr0Srz0F8kVDQx8nCaslKI6SV7IWOfEPRGQ@mail.gmail.com>
Message-ID: <AANLkTiktMbKJc11xepDhssl54-uvuwwUF4GkrdDxxXNb@mail.gmail.com>

On Mon, Jul 12, 2010 at 20:43, Brian Granger <ellisonbg at gmail.com> wrote:

> Min,
>
> On Mon, Jul 12, 2010 at 4:10 PM, MinRK <benjaminrk at gmail.com> wrote:
> > I've been thinking about this, and it seems like we can't have a
> responsive
> > rich control connection unless it is in another process, like the old
> > IPython daemon.
>
> I am not quite sure I follow what you mean by this.  Can you elaborate?
>

The main advantage that we were to gain from the out-of-process ipdaemon was
the ability to abort/kill (signal) blocking jobs. With 0MQ threads, the only
logic we can have in a control/heartbeat thread must be implemented in
GIL-free C/C++. That limits what we can do in terms of interacting with the
main work thread, as I understand it.


>
> > Pure heartbeat is easy with a C device, and we may not even
> > need a new one. For instance, I added support for the builtin devices of
> > zeromq to pyzmq with a few lines, and you can have simple is_alive style
> > heartbeat with a FORWARDER device.
>
> I looked at this and it looks very nice.  I think for basic is_alive
> type heartbeats this will work fine.  The only thing to be careful of
> is that 0MQ sockets are not thread safe.  Thus, it would be best to
> actually create the socket in the thread as well.  But we do want the
> flexibility to be able to pass in sockets to the device.  We will have
> to think about that issue.
>

I wrote/pushed a basic ThreadsafeDevice, which creates/binds/connects inside
the thread's run method.
It adds bind_in/out, connect_in/out, and setsockopt_in/out methods which
just queue up arguments to be called at the head of the run method. I added
a tspong.py in the heartbeat example using it.
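
In outline the pattern is something like this (a sketch of the idea, not the
pushed code):

import threading
import zmq

class ThreadsafeForwarder(threading.Thread):
    # record socket-setup calls; create and use the sockets only in run()
    def __init__(self, in_type, out_type):
        threading.Thread.__init__(self)
        self.setDaemon(True)
        self._types = {'in': in_type, 'out': out_type}
        self._setup = {'in': [], 'out': []}

    def bind_in(self, addr):
        self._setup['in'].append(('bind', (addr,)))

    def connect_in(self, addr):
        self._setup['in'].append(('connect', (addr,)))

    def setsockopt_in(self, opt, val):
        self._setup['in'].append(('setsockopt', (opt, val)))

    # bind_out/connect_out/setsockopt_out queue onto self._setup['out']
    # in exactly the same way (omitted for brevity)

    def run(self):
        # sockets are created here, so only this thread ever touches them
        ctx = zmq.Context()
        socks = {}
        for side in ('in', 'out'):
            sock = ctx.socket(self._types[side])
            for meth, args in self._setup[side]:
                getattr(sock, meth)(*args)
            socks[side] = sock
        zmq.device(zmq.FORWARDER, socks['in'], socks['out'])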


>
> > I pushed a basic example of this (examples/heartbeat) to my pyzmq fork.
> > Running a ~3 second numpy.dot action, the heartbeat pings remain
> responsive
> > at <1ms.
>
> This is great!
>
> Cheers,
>
> Brian
> > -MinRK
> >
> > On Mon, Jul 12, 2010 at 12:51, MinRK <benjaminrk at gmail.com> wrote:
> >>
> >>
> >> On Mon, Jul 12, 2010 at 09:15, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >>>
> >>> On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com> wrote:
> >>> > Brian,
> >>> > Have you worked on the Heartbeat Device? Does that need to go in 0MQ
> >>> > itself,
> >>>
> >>> I have not.  Ideally it could go into 0MQ itself.  But, in principle,
> >>> we could do it in pyzmq.  We just have to write a nogil pure C
> >>> function that uses the low-level C API to do the heartbeat.  Then we
> >>> can just run that function in a thread with a "with nogil" block.
> >>> Shouldn't be too bad, given how simple the heartbeat logic is.  The
> >>> main thing we will have to think about is how to start/stop the
> >>> heartbeat in a clean way.
> >>>
> >>> > or can it be part of pyzmq?
> >>> > I'm trying to work out how to really tell that an engine is down.
> >>> > Is the heartbeat to be in a separate process?
> >>>
> >>> No, just a separate C/C++ thread that doesn't hold the GIL.
> >>>
> >>> > Are we guaranteed that a zmq thread is responsive no matter what an
> >>> > engine
> >>> > process is doing? If that's the case, is a moderate timeout on recv
> >>> > adequate
> >>> > to determine engine failure?
> >>>
> >>> Yes, I think we can assume this.  The only thing that would take the
> >>> 0mq thread down is something semi-fatal like a signal that doesn't get
> >>> handled.  But as long as the 0MQ thread doesn't have any bugs, it
> >>> should simply keep running no matter what the other thread does (OK,
> >>> other than segfaulting)
> >>>
> >>> > If zmq threads are guaranteed to be responsive, it seems like a
> simple
> >>> > pair
> >>> > socket might be good enough, rather than needing a new device. Or
> even
> >>> > through the registration XREP socket.
> >>>
> >>> That (registration XREP socket) won't work unless we want to write all
> >>> that logic in C.
> >>> I don't know about a PAIR socket because of the need for multiple
> >>> clients?
> >>
> >> I wasn't thinking of a single PAIR socket, but rather a pair for each
> >> engine. We already have a pair for each engine for the queue, but I am
> not
> >> quite seeing the need for a special device beyond a PAIR socket in the
> >> heartbeat.
> >>
> >>>
> >>> > Can we formalize exactly what the heartbeat needs to be?
> >>>
> >>> OK, let's think.  The engine needs to connect, the controller bind.
> >>> It would be nice if the controller didn't need a separate heartbeat
> >>> socket for each engine, but I guess we need the ability to track which
> >>> specific engine is heartbeating.  Also, there is the question of whether
> >>> we want to do a request/reply or pub/sub style heartbeat.  What do you
> >>> think?
> >>
> >> The way we talked about it, the heartbeat needs to issue commands both
> >> ways. While it is used for checking whether an engine remains alive, it
> is
> >> also the avenue for aborting jobs.  If we do have a strict heartbeat,
> then I
> >> think PUB/SUB is a good choice.
> >> However, if heartbeat is all it does, then we need a _third_ connection
> to
> >> each engine for control commands. Since messages cannot jump the queue,
> the
> >> engine queue PAIR socket cannot be used for commands, and a PUB/SUB
> model
> >> for heartbeat can _either_ receive commands _or_ have results.
> >> control commands:
> >> beat (check alive)
> >> abort (remove a task from the queue)
> >> signal (SIGINT, etc.)
> >> exit (engine.kill)
> >> reset (clear queue, namespace)
> >> more?
> >> It's possible that we could implement these with a PUB on the controller
> >> and a SUB on each engine, only interpreting results received via the
> queue's
> >> PAIR socket. But then every command would be sent to every engine, even
> >> though many would only be meant for one (too inefficient/costly?). It
> would
> >> however make the actual heartbeat command very simple as a single send.
> >> It does not allow for the engine to initiate queries of the controller,
> >> for instance a work stealing implementation. Again, it is possible that
> this
> >> could be implemented via the job queue PAIR socket, but that would only
> >> allow for stealing when completely starved for work, since the job queue
> and
> >> communication queue would be the same.
> >> There's also the issue of task dependency.
> >> If we are to implement dependency checking as we discussed (depend on
> >> taskIDs, and only execute once the task has been completed), the engine
> >> needs to be able to query the controller about the tasks depended upon.
> This
> >> makes the controller being the PUB side unworkable.
> >> This says to me that we need two-way connections between the engines and
> >> the controller. That can either be implemented as multiple connections
> >> (PUB/SUB + PAIR or REQ/REP), or simply a PAIR socket for each engine
> could
> >> provide the whole heartbeat/command channel.
> >> -MinRK
> >>
> >>>
> >>> Brian
> >>>
> >>>
> >>> > -MinRK
> >>>
> >>>
> >>>
> >>> --
> >>> Brian E. Granger, Ph.D.
> >>> Assistant Professor of Physics
> >>> Cal Poly State University, San Luis Obispo
> >>> bgranger at calpoly.edu
> >>> ellisonbg at gmail.com
> >>
> >
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>

From ellisonbg at gmail.com  Tue Jul 13 01:04:24 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 12 Jul 2010 22:04:24 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTiktMbKJc11xepDhssl54-uvuwwUF4GkrdDxxXNb@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com>
	<AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com>
	<AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com>
	<AANLkTilcPFcNjJHDo_o-0Ui9o_iduyKFvLg1SdNBtgWI@mail.gmail.com>
	<AANLkTikWHsr0Srz0F8kVDQx8nCaslKI6SV7IWOfEPRGQ@mail.gmail.com>
	<AANLkTiktMbKJc11xepDhssl54-uvuwwUF4GkrdDxxXNb@mail.gmail.com>
Message-ID: <AANLkTinIpwIYVSDu9z4Z1glBqoFjYI1hzCAmOhLb85Mq@mail.gmail.com>

On Mon, Jul 12, 2010 at 9:49 PM, MinRK <benjaminrk at gmail.com> wrote:
>
>
> On Mon, Jul 12, 2010 at 20:43, Brian Granger <ellisonbg at gmail.com> wrote:
>>
>> Min,
>>
>> On Mon, Jul 12, 2010 at 4:10 PM, MinRK <benjaminrk at gmail.com> wrote:
>> > I've been thinking about this, and it seems like we can't have a
>> > responsive
>> > rich control connection unless it is in another process, like the old
>> > IPython daemon.
>>
>> I am not quite sure I follow what you mean by this.  Can you elaborate?
>
> The main advantage that we were to gain from the out-of-process ipdaemon was
> the ability to abort/kill (signal) blocking jobs. With 0MQ threads, the only
> logic we can have in a control/heartbeat thread must be implemented in
> GIL-free C/C++. That limits what we can do in terms of interacting with the
> main work thread, as I understand it.

Yes, but I think it might be possible to spawn an external process to
send a signal back to the process.  But I am not sure about this.
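
Something along these lines, perhaps (an untested sketch of that idea):

import os
import subprocess
import sys

# a throwaway child sends SIGINT back to us; the child isn't blocked by
# whatever our main thread is doing, so the signal still arrives
subprocess.Popen([sys.executable, '-c',
                  'import os, signal; os.kill(%d, signal.SIGINT)'
                  % os.getpid()])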

>>
>> > Pure heartbeat is easy with a C device, and we may not even
>> > need a new one. For instance, I added support for the builtin devices of
>> > zeromq to pyzmq with a few lines, and you can have simple is_alive style
>> > heartbeat with a FORWARDER device.
>>
>> I looked at this and it looks very nice.  I think for basic is_alive
>> type heartbeats this will work fine.  The only thing to be careful of
>> is that 0MQ sockets are not thread safe.  Thus, it would be best to
>> actually create the socket in the thread as well.  But we do want the
>> flexibility to be able to pass in sockets to the device.  We will have
>> to think about that issue.
>
>
> I wrote/pushed a basic ThreadsafeDevice, which creates/binds/connects inside
> the thread's run method.
> It adds bind_in/out, connect_in/out, and setsockopt_in/out methods which
> just queue up arguments to be called at the head of the run method. I added
> a tspong.py in the heartbeat example using it.

Cool, I will review this and merge it into master.

Cheers,

Brian

>>
>> > I pushed a basic example of this (examples/heartbeat) to my pyzmq fork.
>> > Running a ~3 second numpy.dot action, the heartbeat pings remain
>> > responsive
>> > at <1ms.
>>
>> This is great!
>>
>> Cheers,
>>
>> Brian
>> > -MinRK
>> >
>> > On Mon, Jul 12, 2010 at 12:51, MinRK <benjaminrk at gmail.com> wrote:
>> >>
>> >>
>> >> On Mon, Jul 12, 2010 at 09:15, Brian Granger <ellisonbg at gmail.com>
>> >> wrote:
>> >>>
>> >>> On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com> wrote:
>> >>> > Brian,
>> >>> > Have you worked on the Heartbeat Device? Does that need to go in 0MQ
>> >>> > itself,
>> >>>
>> >>> I have not.  Ideally it could go into 0MQ itself.  But, in principle,
>> >>> we could do it in pyzmq.  We just have to write a nogil pure C
>> >>> function that uses the low-level C API to do the heartbeat.  Then we
>> >>> can just run that function in a thread with a "with nogil" block.
>> >>> Shouldn't be too bad, given how simple the heartbeat logic is.  The
>> >>> main thing we will have to think about is how to start/stop the
>> >>> heartbeat in a clean way.
>> >>>
>> >>> > or can it be part of pyzmq?
>> >>> > I'm trying to work out how to really tell that an engine is down.
>> >>> > Is the heartbeat to be in a separate process?
>> >>>
>> >>> No, just a separate C/C++ thread that doesn't hold the GIL.
>> >>>
>> >>> > Are we guaranteed that a zmq thread is responsive no matter what an
>> >>> > engine
>> >>> > process is doing? If that's the case, is a moderate timeout on recv
>> >>> > adequate
>> >>> > to determine engine failure?
>> >>>
>> >>> Yes, I think we can assume this.  The only thing that would take the
>> >>> 0mq thread down is something semi-fatal like a signal that doesn't get
>> >>> handled.  But as long as the 0MQ thread doesn't have any bugs, it
>> >>> should simply keep running no matter what the other thread does (OK,
>> >>> other than segfaulting)
>> >>>
>> >>> > If zmq threads are guaranteed to be responsive, it seems like a
>> >>> > simple
>> >>> > pair
>> >>> > socket might be good enough, rather than needing a new device. Or
>> >>> > even
>> >>> > through the registration XREP socket.
>> >>>
>> >>> That (registration XREP socket) won't work unless we want to write all
>> >>> that logic in C.
>> >>> I don't know about a PAIR socket because of the need for multiple
>> >>> clients?
>> >>
>> >> I wasn't thinking of a single PAIR socket, but rather a pair for each
>> >> engine. We already have a pair for each engine for the queue, but I am
>> >> not
>> >> quite seeing the need for a special device beyond a PAIR socket in the
>> >> heartbeat.
>> >>
>> >>>
>> >>> > Can we formalize exactly what the heartbeat needs to be?
>> >>>
>> >>> OK, let's think.  The engine needs to connect, the controller bind.
>> >>> It would be nice if the controller didn't need a separate heartbeat
>> >>> socket for each engine, but I guess we need the ability to track which
>> >>> specific engine is heartbeating.  Also, there is the question of whether
>> >>> we want to do a request/reply or pub/sub style heartbeat.  What do you
>> >>> think?
>> >>
>> >> The way we talked about it, the heartbeat needs to issue commands both
>> >> ways. While it is used for checking whether an engine remains alive, it
>> >> is
>> >> also the avenue for aborting jobs.  If we do have a strict heartbeat,
>> >> then I
>> >> think PUB/SUB is a good choice.
>> >> However, if heartbeat is all it does, then we need a _third_ connection
>> >> to
>> >> each engine for control commands. Since messages cannot jump the queue,
>> >> the
>> >> engine queue PAIR socket cannot be used for commands, and a PUB/SUB
>> >> model
>> >> for heartbeat can _either_ receive commands _or_ have results.
>> >> control commands:
>> >> beat (check alive)
>> >> abort (remove a task from the queue)
>> >> signal (SIGINT, etc.)
>> >> exit (engine.kill)
>> >> reset (clear queue, namespace)
>> >> more?
>> >> It's possible that we could implement these with a PUB on the
>> >> controller
>> >> and a SUB on each engine, only interpreting results received via the
>> >> queue's
>> >> PAIR socket. But then every command would be sent to every engine, even
>> >> though many would only be meant for one (too inefficient/costly?). It
>> >> would
>> >> however make the actual heartbeat command very simple as a single send.
>> >> It does not allow for the engine to initiate queries of the controller,
>> >> for instance a work stealing implementation. Again, it is possible that
>> >> this
>> >> could be implemented via the job queue PAIR socket, but that would only
>> >> allow for stealing when completely starved for work, since the job
>> >> queue and
>> >> communication queue would be the same.
>> >> There's also the issue of task dependency.
>> >> If we are to implement dependency checking as we discussed (depend on
>> >> taskIDs, and only execute once the task has been completed), the engine
>> >> needs to be able to query the controller about the tasks depended upon.
>> >> This
>> >> makes the controller being the PUB side unworkable.
>> >> This says to me that we need two-way connections between the engines
>> >> and
>> >> the controller. That can either be implemented as multiple connections
>> >> (PUB/SUB + PAIR or REQ/REP), or simply a PAIR socket for each engine
>> >> could
>> >> provide the whole heartbeat/command channel.
>> >> -MinRK
>> >>
>> >>>
>> >>> Brian
>> >>>
>> >>>
>> >>> > -MinRK
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Brian E. Granger, Ph.D.
>> >>> Assistant Professor of Physics
>> >>> Cal Poly State University, San Luis Obispo
>> >>> bgranger at calpoly.edu
>> >>> ellisonbg at gmail.com
>> >>
>> >
>> >
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From benjaminrk at gmail.com  Tue Jul 13 01:10:01 2010
From: benjaminrk at gmail.com (MinRK)
Date: Mon, 12 Jul 2010 22:10:01 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTinIpwIYVSDu9z4Z1glBqoFjYI1hzCAmOhLb85Mq@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com> 
	<AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com> 
	<AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com> 
	<AANLkTilcPFcNjJHDo_o-0Ui9o_iduyKFvLg1SdNBtgWI@mail.gmail.com> 
	<AANLkTikWHsr0Srz0F8kVDQx8nCaslKI6SV7IWOfEPRGQ@mail.gmail.com> 
	<AANLkTiktMbKJc11xepDhssl54-uvuwwUF4GkrdDxxXNb@mail.gmail.com> 
	<AANLkTinIpwIYVSDu9z4Z1glBqoFjYI1hzCAmOhLb85Mq@mail.gmail.com>
Message-ID: <AANLkTikFfiKM9CIy4IBOI1HlJWbMXl_QHdLatCVwrP0Y@mail.gmail.com>

On Mon, Jul 12, 2010 at 22:04, Brian Granger <ellisonbg at gmail.com> wrote:

> On Mon, Jul 12, 2010 at 9:49 PM, MinRK <benjaminrk at gmail.com> wrote:
> >
> >
> > On Mon, Jul 12, 2010 at 20:43, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >>
> >> Min,
> >>
> >> On Mon, Jul 12, 2010 at 4:10 PM, MinRK <benjaminrk at gmail.com> wrote:
> >> > I've been thinking about this, and it seems like we can't have a
> >> > responsive
> >> > rich control connection unless it is in another process, like the old
> >> > IPython daemon.
> >>
> >> I am not quite sure I follow what you mean by this.  Can you elaborate?
> >
> > The main advantage that we were to gain from the out-of-process ipdaemon
> was
> > the ability to abort/kill (signal) blocking jobs. With 0MQ threads, the
> only
> > logic we can have in a control/heartbeat thread must be implemented in
> > GIL-free C/C++. That limits what we can do in terms of interacting with
> the
> > main work thread, as I understand it.
>
> Yes, but I think it might be possible to spawn an external process to
> send a signal back to the process.  But I am not sure about this.
>
> >>
> >> > Pure heartbeat is easy with a C device, and we may not even
> >> > need a new one. For instance, I added support for the builtin devices
> of
> >> > zeromq to pyzmq with a few lines, and you can have simple is_alive
> style
> >> > heartbeat with a FORWARDER device.
> >>
> >> I looked at this and it looks very nice.  I think for basic is_alive
> >> type heartbeats this will work fine.  The only thing to be careful of
> >> is that 0MQ sockets are not thread safe.  Thus, it would be best to
> >> actually create the socket in the thread as well.  But we do want the
> >> flexibility to be able to pass in sockets to the device.  We will have
> >> to think about that issue.
> >
> >
> > I wrote/pushed a basic ThreadsafeDevice, which creates/binds/connects
> inside
> > the thread's run method.
> > It adds bind_in/out, connect_in/out, and setsockopt_in/out methods which
> > just queue up arguments to be called at the head of the run method. I
> added
> > a tspong.py in the heartbeat example using it.
>
> Cool, I will review this and merge it into master.
>
>
I'd say it's not ready for master in one particular respect: The Device
thread doesn't respond to signals, so I have to kill it to stop it. I
haven't yet figured out why this is happening; it might be quite simple.

I'll push up some unit tests tomorrow.



> Cheers,
>
> Brian
>
> >>
> >> > I pushed a basic example of this (examples/heartbeat) to my pyzmq
> fork.
> >> > Running a ~3 second numpy.dot action, the heartbeat pings remain
> >> > responsive
> >> > at <1ms.
> >>
> >> This is great!
> >>
> >> Cheers,
> >>
> >> Brian
> >> > -MinRK
> >> >
> >> > On Mon, Jul 12, 2010 at 12:51, MinRK <benjaminrk at gmail.com> wrote:
> >> >>
> >> >>
> >> >> On Mon, Jul 12, 2010 at 09:15, Brian Granger <ellisonbg at gmail.com>
> >> >> wrote:
> >> >>>
> >> >>> On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com> wrote:
> >> >>> > Brian,
> >> >>> > Have you worked on the Heartbeat Device? Does that need to go in
> 0MQ
> >> >>> > itself,
> >> >>>
> >> >>> I have not.  Ideally it could go into 0MQ itself.  But, in
> principle,
> >> >>> we could do it in pyzmq.  We just have to write a nogil pure C
> >> >>> function that uses the low-level C API to do the heartbeat.  Then we
> >> >>> can just run that function in a thread with a "with nogil" block.
> >> >>> Shouldn't be too bad, given how simple the heartbeat logic is.  The
> >> >>> main thing we will have to think about is how to start/stop the
> >> >>> heartbeat in a clean way.
> >> >>>
> >> >>> > or can it be part of pyzmq?
> >> >>> > I'm trying to work out how to really tell that an engine is down.
> >> >>> > Is the heartbeat to be in a separate process?
> >> >>>
> >> >>> No, just a separate C/C++ thread that doesn't hold the GIL.
> >> >>>
> >> >>> > Are we guaranteed that a zmq thread is responsive no matter what
> an
> >> >>> > engine
> >> >>> > process is doing? If that's the case, is a moderate timeout on
> recv
> >> >>> > adequate
> >> >>> > to determine engine failure?
> >> >>>
> >> >>> Yes, I think we can assume this.  The only thing that would take the
> >> >>> 0mq thread down is something semi-fatal like a signal that doesn't
> get
> >> >>> handled.  But as long as the 0MQ thread doesn't have any bugs, it
> >> >>> should simply keep running no matter what the other thread does (OK,
> >> >>> other than segfaulting)
> >> >>>
> >> >>> > If zmq threads are guaranteed to be responsive, it seems like a
> >> >>> > simple
> >> >>> > pair
> >> >>> > socket might be good enough, rather than needing a new device. Or
> >> >>> > even
> >> >>> > through the registration XREP socket.
> >> >>>
> >> >>> That (registration XREP socket) won't work unless we want to write
> all
> >> >>> that logic in C.
> >> >>> I don't know about a PAIR socket because of the need for multiple
> >> >>> clients?
> >> >>
> >> >> I wasn't thinking of a single PAIR socket, but rather a pair for each
> >> >> engine. We already have a pair for each engine for the queue, but I
> am
> >> >> not
> >> >> quite seeing the need for a special device beyond a PAIR socket in
> the
> >> >> heartbeat.
> >> >>
> >> >>>
> >> >>> > Can we formalize exactly what the heartbeat needs to be?
> >> >>>
> >> >>> OK, let's think.  The engine needs to connect, the controller bind.
> >> >>> It would be nice if the controller didn't need a separate heartbeat
> >> >>> socket for each engine, but I guess we need the ability to track
> which
> >> >>> specific engine is heartbeating.   Also, there is the question of
> >> >>> whether we want to do a request/reply or pub/sub style heartbeat.  What do
> you
> >> >>> think?
> >> >>
> >> >> The way we talked about it, the heartbeat needs to issue commands
> both
> >> >> ways. While it is used for checking whether an engine remains alive,
> it
> >> >> is
> >> >> also the avenue for aborting jobs.  If we do have a strict heartbeat,
> >> >> then I
> >> >> think PUB/SUB is a good choice.
> >> >> However, if heartbeat is all it does, then we need a _third_
> connection
> >> >> to
> >> >> each engine for control commands. Since messages cannot jump the
> queue,
> >> >> the
> >> >> engine queue PAIR socket cannot be used for commands, and a PUB/SUB
> >> >> model
> >> >> for heartbeat can _either_ receive commands _or_ have results.
> >> >> control commands:
> >> >> beat (check alive)
> >> >> abort (remove a task from the queue)
> >> >> signal (SIGINT, etc.)
> >> >> exit (engine.kill)
> >> >> reset (clear queue, namespace)
> >> >> more?
> >> >> It's possible that we could implement these with a PUB on the
> >> >> controller
> >> >> and a SUB on each engine, only interpreting results received via the
> >> >> queue's
> >> >> PAIR socket. But then every command would be sent to every engine,
> even
> >> >> though many would only be meant for one (too inefficient/costly?). It
> >> >> would
> >> >> however make the actual heartbeat command very simple as a single
> send.
> >> >> It does not allow for the engine to initiate queries of the
> controller,
> >> >> for instance a work stealing implementation. Again, it is possible
> that
> >> >> this
> >> >> could be implemented via the job queue PAIR socket, but that would
> only
> >> >> allow for stealing when completely starved for work, since the job
> >> >> queue and
> >> >> communication queue would be the same.
> >> >> There's also the issue of task dependency.
> >> >> If we are to implement dependency checking as we discussed (depend on
> >> >> taskIDs, and only execute once the task has been completed), the
> engine
> >> >> needs to be able to query the controller about the tasks depended
> upon.
> >> >> This
> >> >> makes the controller being the PUB side unworkable.
> >> >> This says to me that we need two-way connections between the engines
> >> >> and
> >> >> the controller. That can either be implemented as multiple
> connections
> >> >> (PUB/SUB + PAIR or REQ/REP), or simply a PAIR socket for each engine
> >> >> could
> >> >> provide the whole heartbeat/command channel.
> >> >> -MinRK
> >> >>
> >> >>>
> >> >>> Brian
> >> >>>
> >> >>>
> >> >>> > -MinRK
> >> >>>
> >> >>>
> >> >>>
> >> >>> --
> >> >>> Brian E. Granger, Ph.D.
> >> >>> Assistant Professor of Physics
> >> >>> Cal Poly State University, San Luis Obispo
> >> >>> bgranger at calpoly.edu
> >> >>> ellisonbg at gmail.com
> >> >>
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Brian E. Granger, Ph.D.
> >> Assistant Professor of Physics
> >> Cal Poly State University, San Luis Obispo
> >> bgranger at calpoly.edu
> >> ellisonbg at gmail.com
> >
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100712/0d3a3cac/attachment.html>

From benjaminrk at gmail.com  Tue Jul 13 15:51:39 2010
From: benjaminrk at gmail.com (MinRK)
Date: Tue, 13 Jul 2010 12:51:39 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTikFfiKM9CIy4IBOI1HlJWbMXl_QHdLatCVwrP0Y@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com> 
	<AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com> 
	<AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com> 
	<AANLkTilcPFcNjJHDo_o-0Ui9o_iduyKFvLg1SdNBtgWI@mail.gmail.com> 
	<AANLkTikWHsr0Srz0F8kVDQx8nCaslKI6SV7IWOfEPRGQ@mail.gmail.com> 
	<AANLkTiktMbKJc11xepDhssl54-uvuwwUF4GkrdDxxXNb@mail.gmail.com> 
	<AANLkTinIpwIYVSDu9z4Z1glBqoFjYI1hzCAmOhLb85Mq@mail.gmail.com> 
	<AANLkTikFfiKM9CIy4IBOI1HlJWbMXl_QHdLatCVwrP0Y@mail.gmail.com>
Message-ID: <AANLkTinYm6Ef1LDyCFRMPXoipEgWHsfjcrxq4E1iJSUR@mail.gmail.com>

Re: not exiting without killing:
I just needed to add thread.setDaemon(True), so the device threads do exit
properly now.
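
For reference, using it ends up looking roughly like this (a sketch
against the interface described in the quoted thread below; the endpoints
are made up, and the daemon flag spelling may differ in the final pyzmq
API):

    import zmq
    from zmq.devices import ThreadDevice

    # A FORWARDER device acting as a simple heartbeat echo: whatever the
    # controller publishes on one endpoint comes back out on the other.
    # The sockets are created inside the device thread, per the design.
    dev = ThreadDevice(zmq.FORWARDER, zmq.SUB, zmq.PUB)
    dev.connect_in("tcp://127.0.0.1:5555")   # controller's ping PUB
    dev.setsockopt_in(zmq.SUBSCRIBE, "")     # hear every ping
    dev.connect_out("tcp://127.0.0.1:5556")  # echo back to the controller
    dev.daemon = True   # the setDaemon(True) fix: don't block exit
    dev.start()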

committed/pushed to git.

-MinRK

On Mon, Jul 12, 2010 at 22:10, MinRK <benjaminrk at gmail.com> wrote:

>
>
> On Mon, Jul 12, 2010 at 22:04, Brian Granger <ellisonbg at gmail.com> wrote:
>
>> On Mon, Jul 12, 2010 at 9:49 PM, MinRK <benjaminrk at gmail.com> wrote:
>> >
>> >
>> > On Mon, Jul 12, 2010 at 20:43, Brian Granger <ellisonbg at gmail.com>
>> wrote:
>> >>
>> >> Min,
>> >>
>> >> On Mon, Jul 12, 2010 at 4:10 PM, MinRK <benjaminrk at gmail.com> wrote:
>> >> > I've been thinking about this, and it seems like we can't have a
>> >> > responsive
>> >> > rich control connection unless it is in another process, like the old
>> >> > IPython daemon.
>> >>
>> >> I am not quite sure I follow what you mean by this.  Can you elaborate?
>> >
>> > The main advantage that we were to gain from the out-of-process ipdaemon
>> was
>> > the ability to abort/kill (signal) blocking jobs. With 0MQ threads, the
>> only
>> > logic we can have in a control/heartbeat thread must be implemented in
>> > GIL-free C/C++. That limits what we can do in terms of interacting with
>> the
>> > main work thread, as I understand it.
>>
>> Yes, but I think it might be possible to spawn an external process to
>> send a signal back to the process.  But I am not sure about this.
>>
>> >>
>> >> > Pure heartbeat is easy with a C device, and we may not even
>> >> > need a new one. For instance, I added support for the builtin devices
>> of
>> >> > zeromq to pyzmq with a few lines, and you can have simple is_alive
>> style
>> >> > heartbeat with a FORWARDER device.
>> >>
>> >> I looked at this and it looks very nice.  I think for basic is_alive
>> >> type heartbeats this will work fine.  The only thing to be careful of
>> >> is that 0MQ sockets are not thread safe.  Thus, it would be best to
>> >> actually create the socket in the thread as well.  But we do want the
>> >> flexibility to be able to pass in sockets to the device.  We will have
>> >> to think about that issue.
>> >
>> >
>> > I wrote/pushed a basic ThreadsafeDevice, which creates/binds/connects
>> inside
>> > the thread's run method.
>> > It adds bind_in/out, connect_in/out, and setsockopt_in/out methods which
>> > just queue up arguments to be called at the head of the run method. I
>> added
>> > a tspong.py in the heartbeat example using it.
>>
>> Cool, I will review this and merge it into master.
>>
>>
> I'd say it's not ready for master in one particular respect: The Device
> thread doesn't respond to signals, so I have to kill it to stop it. I
> haven't yet figured out why this is happening; it might be quite simple.
>
> I'll push up some unit tests tomorrow
>
>
>
>> Cheers,
>>
>> Brian
>>
>> >>
>> >> > I pushed a basic example of this (examples/heartbeat) to my pyzmq
>> fork.
>> >> > Running a ~3 second numpy.dot action, the heartbeat pings remain
>> >> > responsive
>> >> > at <1ms.
>> >>
>> >> This is great!
>> >>
>> >> Cheers,
>> >>
>> >> Brian
>> >> > -MinRK
>> >> >
>> >> > On Mon, Jul 12, 2010 at 12:51, MinRK <benjaminrk at gmail.com> wrote:
>> >> >>
>> >> >>
>> >> >> On Mon, Jul 12, 2010 at 09:15, Brian Granger <ellisonbg at gmail.com>
>> >> >> wrote:
>> >> >>>
>> >> >>> On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com>
>> wrote:
>> >> >>> > Brian,
>> >> >>> > Have you worked on the Heartbeat Device? Does that need to go in
>> 0MQ
>> >> >>> > itself,
>> >> >>>
>> >> >>> I have not.  Ideally it could go into 0MQ itself.  But, in
>> principle,
>> >> >>> we could do it in pyzmq.  We just have to write a nogil pure C
>> >> >>> function that uses the low-level C API to do the heartbeat.  Then
>> we
>> >> >>> can just run that function in a thread with a "with nogil" block.
>> >> >>> Shouldn't be too bad, given how simple the heartbeat logic is.  The
>> >> >>> main thing we will have to think about is how to start/stop the
>> >> >>> heartbeat in a clean way.
>> >> >>>
>> >> >>> > or can it be part of pyzmq?
>> >> >>> > I'm trying to work out how to really tell that an engine is down.
>> >> >>> > Is the heartbeat to be in a separate process?
>> >> >>>
>> >> >>> No, just a separate C/C++ thread that doesn't hold the GIL.
>> >> >>>
>> >> >>> > Are we guaranteed that a zmq thread is responsive no matter what
>> an
>> >> >>> > engine
>> >> >>> > process is doing? If that's the case, is a moderate timeout on
>> recv
>> >> >>> > adequate
>> >> >>> > to determine engine failure?
>> >> >>>
>> >> >>> Yes, I think we can assume this.  The only thing that would take
>> the
>> >> >>> 0mq thread down is something semi-fatal like a signal that doesn't
>> get
>> >> >>> handled.  But as long as the 0MQ thread doesn't have any bugs, it
>> >> >>> should simply keep running no matter what the other thread does
>> (OK,
>> >> >>> other than segfaulting)
>> >> >>>
>> >> >>> > If zmq threads are guaranteed to be responsive, it seems like a
>> >> >>> > simple
>> >> >>> > pair
>> >> >>> > socket might be good enough, rather than needing a new device. Or
>> >> >>> > even
>> >> >>> > through the registration XREP socket.
>> >> >>>
>> >> >>> That (registration XREP socket) won't work unless we want to write
>> all
>> >> >>> that logic in C.
>> >> >>> I don't know about a PAIR socket because of the need for multiple
>> >> >>> clients?
>> >> >>
>> >> >> I wasn't thinking of a single PAIR socket, but rather a pair for
>> each
>> >> >> engine. We already have a pair for each engine for the queue, but I
>> am
>> >> >> not
>> >> >> quite seeing the need for a special device beyond a PAIR socket in
>> the
>> >> >> heartbeat.
>> >> >>
>> >> >>>
>> >> >>> > Can we formalize exactly what the heartbeat needs to be?
>> >> >>>
>> >> >>> OK, let's think.  The engine needs to connect, the controller bind.
>> >> >>> It would be nice if the controller didn't need a separate heartbeat
>> >> >>> socket for each engine, but I guess we need the ability to track
>> which
>> >> >>> specific engine is heartbeating.   Also, there is the question of
>> to
>> >> >>> do want to do a reqest/reply or pub/sub style heartbeat.  What do
>> you
>> >> >>> think?
>> >> >>
>> >> >> The way we talked about it, the heartbeat needs to issue commands
>> both
>> >> >> ways. While it is used for checking whether an engine remains alive,
>> it
>> >> >> is
>> >> >> also the avenue for aborting jobs.  If we do have a strict
>> heartbeat,
>> >> >> then I
>> >> >> think PUB/SUB is a good choice.
>> >> >> However, if heartbeat is all it does, then we need a _third_
>> connection
>> >> >> to
>> >> >> each engine for control commands. Since messages cannot jump the
>> queue,
>> >> >> the
>> >> >> engine queue PAIR socket cannot be used for commands, and a PUB/SUB
>> >> >> model
>> >> >> for heartbeat can _either_ receive commands _or_ have results.
>> >> >> control commands:
>> >> >> beat (check alive)
>> >> >> abort (remove a task from the queue)
>> >> >> signal (SIGINT, etc.)
>> >> >> exit (engine.kill)
>> >> >> reset (clear queue, namespace)
>> >> >> more?
>> >> >> It's possible that we could implement these with a PUB on the
>> >> >> controller
>> >> >> and a SUB on each engine, only interpreting results received via the
>> >> >> queue's
>> >> >> PAIR socket. But then every command would be sent to every engine,
>> even
>> >> >> though many would only be meant for one (too inefficient/costly?).
>> It
>> >> >> would
>> >> >> however make the actual heartbeat command very simple as a single
>> send.
>> >> >> It does not allow for the engine to initiate queries of the
>> controller,
>> >> >> for instance a work stealing implementation. Again, it is possible
>> that
>> >> >> this
>> >> >> could be implemented via the job queue PAIR socket, but that would
>> only
>> >> >> allow for stealing when completely starved for work, since the job
>> >> >> queue and
>> >> >> communication queue would be the same.
>> >> >> There's also the issue of task dependency.
>> >> >> If we are to implement dependency checking as we discussed (depend
>> on
>> >> >> taskIDs, and only execute once the task has been completed), the
>> engine
>> >> >> needs to be able to query the controller about the tasks depended
>> upon.
>> >> >> This
>> >> >> makes the controller being the PUB side unworkable.
>> >> >> This says to me that we need two-way connections between the engines
>> >> >> and
>> >> >> the controller. That can either be implemented as multiple
>> connections
>> >> >> (PUB/SUB + PAIR or REQ/REP), or simply a PAIR socket for each engine
>> >> >> could
>> >> >> provide the whole heartbeat/command channel.
>> >> >> -MinRK
>> >> >>
>> >> >>>
>> >> >>> Brian
>> >> >>>
>> >> >>>
>> >> >>> > -MinRK
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> --
>> >> >>> Brian E. Granger, Ph.D.
>> >> >>> Assistant Professor of Physics
>> >> >>> Cal Poly State University, San Luis Obispo
>> >> >>> bgranger at calpoly.edu
>> >> >>> ellisonbg at gmail.com
>> >> >>
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Brian E. Granger, Ph.D.
>> >> Assistant Professor of Physics
>> >> Cal Poly State University, San Luis Obispo
>> >> bgranger at calpoly.edu
>> >> ellisonbg at gmail.com
>> >
>> >
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100713/9b3e80bd/attachment.html>

From fperez.net at gmail.com  Tue Jul 13 23:08:03 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 13 Jul 2010 20:08:03 -0700
Subject: [IPython-dev] Fwd: [zeromq-dev] Authentication on "topic"
In-Reply-To: <AANLkTimqnk307_cLtouSnXymAEOMNQFAq8ZvU9pnlh-o@mail.gmail.com>
References: <3127C7C2-4A7D-4DF7-8A62-42BFE6F12E0C@quant-edge.com> 
	<AANLkTilLxuJLxLGthH3Lk3nBL8e_23lGQPByTKGqUS2J@mail.gmail.com> 
	<AANLkTimqnk307_cLtouSnXymAEOMNQFAq8ZvU9pnlh-o@mail.gmail.com>
Message-ID: <AANLkTikkmtsS06bTUXMq8Z9uTybFkaNjTTzObw0hjPGc@mail.gmail.com>

On Mon, Jul 12, 2010 at 8:26 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> Just saw this on the 0MQ list about authentication and 0MQ.

Interesting...

>
> Cheers,
>
> Brian
>
>
> ---------- Forwarded message ----------
> From: Pieter Hintjens <ph at imatix.com>
> Date: Mon, Jul 12, 2010 at 11:37 AM
> Subject: Re: [zeromq-dev] Authentication on "topic"
> To: 0MQ development list <zeromq-dev at lists.zeromq.org>
>
>
> Hi Viet,
>
> There is no plan to add authentication to ZeroMQ core.  However we are
> developing a data plane layer above ZeroMQ, which will do secure
> distribution over multicast as well as TCP.  It will use
> request-response to do key distribution, and then clients will use
> those keys to unlock streams of data.
>

I wonder if their model for authentication would be enough for us.
Not quite sure yet...  But at least it's good to see 0mq moving in
this direction, even if it's just to see patterns of security to take
ideas from.

Cheers,

f


From fperez.net at gmail.com  Tue Jul 13 23:11:32 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 13 Jul 2010 20:11:32 -0700
Subject: [IPython-dev] Porting to Python3
In-Reply-To: <AANLkTikTcvgW8HlfcjkUqwvp79S4-EQaDoA88A6LyyFW@mail.gmail.com>
References: <AANLkTikTcvgW8HlfcjkUqwvp79S4-EQaDoA88A6LyyFW@mail.gmail.com>
Message-ID: <AANLkTinquEkVmINENEFnwcNWyYlpgkyW2EEhhfCmd625@mail.gmail.com>

Hi Naoki,

On Sat, Jul 10, 2010 at 2:18 AM, INADA Naoki <songofacandy at gmail.com> wrote:
> Today, Python hack-a-thon is held in Japan.
> I've ported IPython to Python3 in there.
> Some feature works now.
>
> http://github.com/methane/ipython

Fantastic!  This is great.  Can you run the test suite?  It should
naturally skip the Twisted parts but at least the other pieces should
run.

Can your fork run with 2.x as well? It would be good to keep
compatibility with both, if possible, for a while... I know that's not
what Python originally recommended, but in practice I think it's a
more realistic goal.

Cheers,

f


From ellisonbg at gmail.com  Tue Jul 13 23:57:31 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 13 Jul 2010 20:57:31 -0700
Subject: [IPython-dev] Heartbeat Device
In-Reply-To: <AANLkTinYm6Ef1LDyCFRMPXoipEgWHsfjcrxq4E1iJSUR@mail.gmail.com>
References: <AANLkTinCKk4AJZsIlvGbvo2PAQoI2zuZC7BqAG1peQEJ@mail.gmail.com>
	<AANLkTinsdGXodFBWzCZj1Qi81g5K4N3cKwr4ykjIDEIX@mail.gmail.com>
	<AANLkTilKwIPpv9b3ocyznSTBAVQqF4WNFS81Rnb5Xi2X@mail.gmail.com>
	<AANLkTilcPFcNjJHDo_o-0Ui9o_iduyKFvLg1SdNBtgWI@mail.gmail.com>
	<AANLkTikWHsr0Srz0F8kVDQx8nCaslKI6SV7IWOfEPRGQ@mail.gmail.com>
	<AANLkTiktMbKJc11xepDhssl54-uvuwwUF4GkrdDxxXNb@mail.gmail.com>
	<AANLkTinIpwIYVSDu9z4Z1glBqoFjYI1hzCAmOhLb85Mq@mail.gmail.com>
	<AANLkTikFfiKM9CIy4IBOI1HlJWbMXl_QHdLatCVwrP0Y@mail.gmail.com>
	<AANLkTinYm6Ef1LDyCFRMPXoipEgWHsfjcrxq4E1iJSUR@mail.gmail.com>
Message-ID: <AANLkTinqusMkwR-U6fgu1bm4H388pUXpWudOfb4WX2R8@mail.gmail.com>

Nice to know that works, but I don't think it will work for devices
that use blocking recv calls.  But it may.
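
A toy version of the simple case does exit, since daemon threads are just
abandoned at interpreter shutdown with no cleanup; whether a thread parked
inside a C-level blocking recv is abandoned as gracefully is the open
question:

    import threading
    import time

    def worker():
        while True:
            time.sleep(60)   # stands in for a blocking recv()

    t = threading.Thread(target=worker)
    t.setDaemon(True)        # daemon threads don't keep the process alive
    t.start()
    # Falling off the end of the main thread ends the process even though
    # worker() is still blocked; the daemon thread is abandoned without
    # any cleanup (no socket close, no context termination).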

Brian

On Tue, Jul 13, 2010 at 12:51 PM, MinRK <benjaminrk at gmail.com> wrote:
> Re: not exiting without killing:
> I just needed to add thread.setDaemon(True), so the device threads do exit
> properly now.
> committed/pushed to git.
>
> -MinRK
> On Mon, Jul 12, 2010 at 22:10, MinRK <benjaminrk at gmail.com> wrote:
>>
>>
>> On Mon, Jul 12, 2010 at 22:04, Brian Granger <ellisonbg at gmail.com> wrote:
>>>
>>> On Mon, Jul 12, 2010 at 9:49 PM, MinRK <benjaminrk at gmail.com> wrote:
>>> >
>>> >
>>> > On Mon, Jul 12, 2010 at 20:43, Brian Granger <ellisonbg at gmail.com>
>>> > wrote:
>>> >>
>>> >> Min,
>>> >>
>>> >> On Mon, Jul 12, 2010 at 4:10 PM, MinRK <benjaminrk at gmail.com> wrote:
>>> >> > I've been thinking about this, and it seems like we can't have a
>>> >> > responsive
>>> >> > rich control connection unless it is in another process, like the
>>> >> > old
>>> >> > IPython daemon.
>>> >>
>>> >> I am not quite sure I follow what you mean by this.  Can you
>>> >> elaborate?
>>> >
>>> > The main advantage that we were to gain from the out-of-process
>>> > ipdaemon was
>>> > the ability to abort/kill (signal) blocking jobs. With 0MQ threads, the
>>> > only
>>> > logic we can have in a control/heartbeat thread must be implemented in
>>> > GIL-free C/C++. That limits what we can do in terms of interacting with
>>> > the
>>> > main work thread, as I understand it.
>>>
>>> Yes, but I think it might be possible to spawn an external process to
>>> send a signal back to the process.  But I am not sure about this.
>>>
>>> >>
>>> >> > Pure heartbeat is easy with a C device, and we may not even
>>> >> > need a new one. For instance, I added support for the builtin
>>> >> > devices of
>>> >> > zeromq to pyzmq with a few lines, and you can have simple is_alive
>>> >> > style
>>> >> > heartbeat with a FORWARDER device.
>>> >>
>>> >> I looked at this and it looks very nice.  I think for basic is_alive
>>> >> type heartbeats this will work fine.  The only thing to be careful of
>>> >> is that 0MQ sockets are not thread safe.  Thus, it would be best to
>>> >> actually create the socket in the thread as well.  But we do want the
>>> >> flexibility to be able to pass in sockets to the device.  We will have
>>> >> to think about that issue.
>>> >
>>> >
>>> > I wrote/pushed a basic ThreadsafeDevice, which creates/binds/connects
>>> > inside
>>> > the thread's run method.
>>> > It adds bind_in/out, connect_in/out, and setsockopt_in/out methods
>>> > which
>>> > just queue up arguments to be called at the head of the run method. I
>>> > added
>>> > a tspong.py in the heartbeat example using it.
>>>
>>> Cool, I will review this and merge it into master.
>>>
>>
>> I'd say it's not ready for master in one particular respect: The Device
>> thread doesn't respond to signals, so I have to kill it to stop it. I
>> haven't yet figured out why this is happening; it might be quite simple.
>> I'll push up some unit tests tomorrow
>>
>>>
>>> Cheers,
>>>
>>> Brian
>>>
>>> >>
>>> >> > I pushed a basic example of this (examples/heartbeat) to my pyzmq
>>> >> > fork.
>>> >> > Running a ~3 second numpy.dot action, the heartbeat pings remain
>>> >> > responsive
>>> >> > at <1ms.
>>> >>
>>> >> This is great!
>>> >>
>>> >> Cheers,
>>> >>
>>> >> Brian
>>> >> > -MinRK
>>> >> >
>>> >> > On Mon, Jul 12, 2010 at 12:51, MinRK <benjaminrk at gmail.com> wrote:
>>> >> >>
>>> >> >>
>>> >> >> On Mon, Jul 12, 2010 at 09:15, Brian Granger <ellisonbg at gmail.com>
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> On Fri, Jul 9, 2010 at 3:35 PM, MinRK <benjaminrk at gmail.com>
>>> >> >>> wrote:
>>> >> >>> > Brian,
>>> >> >>> > Have you worked on the Heartbeat Device? Does that need to go in
>>> >> >>> > 0MQ
>>> >> >>> > itself,
>>> >> >>>
>>> >> >>> I have not.  Ideally it could go into 0MQ itself.  But, in
>>> >> >>> principle,
>>> >> >>> we could do it in pyzmq.  We just have to write a nogil pure C
>>> >> >>> function that uses the low-level C API to do the heartbeat.  Then
>>> >> >>> we
>>> >> >>> can just run that function in a thread with a "with nogil" block.
>>> >> >>> Shouldn't be too bad, given how simple the heartbeat logic is.
>>> >> >>> The
>>> >> >>> main thing we will have to think about is how to start/stop the
>>> >> >>> heartbeat in a clean way.
>>> >> >>>
>>> >> >>> > or can it be part of pyzmq?
>>> >> >>> > I'm trying to work out how to really tell that an engine is
>>> >> >>> > down.
>>> >> >>> > Is the heartbeat to be in a separate process?
>>> >> >>>
>>> >> >>> No, just a separate C/C++ thread that doesn't hold the GIL.
>>> >> >>>
>>> >> >>> > Are we guaranteed that a zmq thread is responsive no matter what
>>> >> >>> > an
>>> >> >>> > engine
>>> >> >>> > process is doing? If that's the case, is a moderate timeout on
>>> >> >>> > recv
>>> >> >>> > adequate
>>> >> >>> > to determine engine failure?
>>> >> >>>
>>> >> >>> Yes, I think we can assume this.  The only thing that would take
>>> >> >>> the
>>> >> >>> 0mq thread down is something semi-fatal like a signal that doesn't
>>> >> >>> get
>>> >> >>> handled.  But as long as the 0MQ thread doesn't have any bugs, it
>>> >> >>> should simply keep running no matter what the other thread does
>>> >> >>> (OK,
>>> >> >>> other than segfaulting)
>>> >> >>>
>>> >> >>> > If zmq threads are guaranteed to be responsive, it seems like a
>>> >> >>> > simple
>>> >> >>> > pair
>>> >> >>> > socket might be good enough, rather than needing a new device.
>>> >> >>> > Or
>>> >> >>> > even
>>> >> >>> > through the registration XREP socket.
>>> >> >>>
>>> >> >>> That (registration XREP socket) won't work unless we want to write
>>> >> >>> all
>>> >> >>> that logic in C.
>>> >> >>> I don't know about a PAIR socket because of the need for multiple
>>> >> >>> clients?
>>> >> >>
>>> >> >> I wasn't thinking of a single PAIR socket, but rather a pair for
>>> >> >> each
>>> >> >> engine. We already have a pair for each engine for the queue, but I
>>> >> >> am
>>> >> >> not
>>> >> >> quite seeing the need for a special device beyond a PAIR socket in
>>> >> >> the
>>> >> >> heartbeat.
>>> >> >>
>>> >> >>>
>>> >> >>> > Can we formalize exactly what the heartbeat needs to be?
>>> >> >>>
>>> >> >>> OK, let's think.  The engine needs to connect, the controller
>>> >> >>> bind.
>>> >> >>> It would be nice if the controller didn't need a separate
>>> >> >>> heartbeat
>>> >> >>> socket for each engine, but I guess we need the ability to track
>>> >> >>> which
>>> >> >>> specific engine is heartbeating.  Also, there is the question of
>>> >> >>> whether
>>> >> >>> we want to do a request/reply or pub/sub style heartbeat.  What do
>>> >> >>> you
>>> >> >>> think?
>>> >> >>
>>> >> >> The way we talked about it, the heartbeat needs to issue commands
>>> >> >> both
>>> >> >> ways. While it is used for checking whether an engine remains
>>> >> >> alive, it
>>> >> >> is
>>> >> >> also the avenue for aborting jobs.  If we do have a strict
>>> >> >> heartbeat,
>>> >> >> then I
>>> >> >> think PUB/SUB is a good choice.
>>> >> >> However, if heartbeat is all it does, then we need a _third_
>>> >> >> connection
>>> >> >> to
>>> >> >> each engine for control commands. Since messages cannot jump the
>>> >> >> queue,
>>> >> >> the
>>> >> >> engine queue PAIR socket cannot be used for commands, and a PUB/SUB
>>> >> >> model
>>> >> >> for heartbeat can _either_ receive commands _or_ have results.
>>> >> >> control commands:
>>> >> >> beat (check alive)
>>> >> >> abort (remove a task from the queue)
>>> >> >> signal (SIGINT, etc.)
>>> >> >> exit (engine.kill)
>>> >> >> reset (clear queue, namespace)
>>> >> >> more?
>>> >> >> It's possible that we could implement these with a PUB on the
>>> >> >> controller
>>> >> >> and a SUB on each engine, only interpreting results received via
>>> >> >> the
>>> >> >> queue's
>>> >> >> PAIR socket. But then every command would be sent to every engine,
>>> >> >> even
>>> >> >> though many would only be meant for one (too inefficient/costly?).
>>> >> >> It
>>> >> >> would
>>> >> >> however make the actual heartbeat command very simple as a single
>>> >> >> send.
>>> >> >> It does not allow for the engine to initiate queries of the
>>> >> >> controller,
>>> >> >> for instance a work stealing implementation. Again, it is possible
>>> >> >> that
>>> >> >> this
>>> >> >> could be implemented via the job queue PAIR socket, but that would
>>> >> >> only
>>> >> >> allow for stealing when completely starved for work, since the job
>>> >> >> queue and
>>> >> >> communication queue would be the same.
>>> >> >> There's also the issue of task dependency.
>>> >> >> If we are to implement dependency checking as we discussed (depend
>>> >> >> on
>>> >> >> taskIDs, and only execute once the task has been completed), the
>>> >> >> engine
>>> >> >> needs to be able to query the controller about the tasks depended
>>> >> >> upon.
>>> >> >> This
>>> >> >> makes the controller being the PUB side unworkable.
>>> >> >> This says to me that we need two-way connections between the
>>> >> >> engines
>>> >> >> and
>>> >> >> the controller. That can either be implemented as multiple
>>> >> >> connections
>>> >> >> (PUB/SUB + PAIR or REQ/REP), or simply a PAIR socket for each
>>> >> >> engine
>>> >> >> could
>>> >> >> provide the whole heartbeat/command channel.
>>> >> >> -MinRK
>>> >> >>
>>> >> >>>
>>> >> >>> Brian
>>> >> >>>
>>> >> >>>
>>> >> >>> > -MinRK
>>> >> >>>
>>> >> >>>
>>> >> >>>
>>> >> >>> --
>>> >> >>> Brian E. Granger, Ph.D.
>>> >> >>> Assistant Professor of Physics
>>> >> >>> Cal Poly State University, San Luis Obispo
>>> >> >>> bgranger at calpoly.edu
>>> >> >>> ellisonbg at gmail.com
>>> >> >>
>>> >> >
>>> >> >
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Brian E. Granger, Ph.D.
>>> >> Assistant Professor of Physics
>>> >> Cal Poly State University, San Luis Obispo
>>> >> bgranger at calpoly.edu
>>> >> ellisonbg at gmail.com
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Brian E. Granger, Ph.D.
>>> Assistant Professor of Physics
>>> Cal Poly State University, San Luis Obispo
>>> bgranger at calpoly.edu
>>> ellisonbg at gmail.com
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Wed Jul 14 01:05:57 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 13 Jul 2010 22:05:57 -0700
Subject: [IPython-dev] SciPy Sprint summary
Message-ID: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>

Hello all,

We wanted to update everyone on the IPython sprint at SciPy 2010.  We
had lots of people sprinting on IPython (something like 7-10), which
was fantastic.  Here are some highlights:

* A number of us worked on PyZMQ itself, to prepare for IPython's
relying on it more and more.
* Justin Riley added a nice example to PyZMQ that uses 0MQ sockets to
talk to a MongoDB based key-value store.
* Min Ragan-Kelley and Brian Granger added a Tornado compatible event
loop to PyZMQ.  This event loop will help us refactor the parts of
IPython that use Twisted to use PyZMQ instead.
* Min created a nice PyZMQ based log handler for the logging module.
This makes it easy to build distributed logging systems using the
publish/subscribe sockets of 0MQ (a minimal usage sketch follows this
list).  We will be using this throughout IPython.
* We spent a considerable amount of time discussing how to port the
IPython parallel computing platform from Twisted to PyZMQ.  Min
started coding a prototype task controller using PyZMQ.
* Justin Riley created a very nice diagram illustrating the design of
the new PyZMQ based kernel/frontend architecture for IPython.
* Everyone helped code a new interface that will allow various IPython
frontends to interact with the IPython kernel using PyZMQ.
* Fernando Perez and Jonathan March worked on the git workflow and on
getting Jonathan set up for patch management.
* Fernando Perez and Robert Kern worked on the message specification
for the JSON message format we are starting to use.
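
For the curious, using the new log handler looks roughly like this (a
minimal sketch; the module path follows the current pyzmq tree, and the
endpoint is illustrative):

    import logging
    import zmq
    from zmq.log.handlers import PUBHandler

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.connect("tcp://127.0.0.1:5557")   # some central log collector

    logger = logging.getLogger()
    logger.addHandler(PUBHandler(pub))    # records now go out over 0MQ
    logger.setLevel(logging.INFO)
    logger.info("engine 3: task finished")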

Much of this work will be hitting master over the summer.  Thanks to
everyone for helping out and I apologize if I forgot anyone or
anything.

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Wed Jul 14 01:08:35 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 13 Jul 2010 22:08:35 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
Message-ID: <AANLkTik3ITnAn7wXIaLubfIgrhZXfYbLcbwyQHJ5fuii@mail.gmail.com>

Here is a link to the nice diagram of the kernel/frontend design that
Justin did:

http://github.com/ipython/ipython/commit/e21b32e89a634cb1393fd54c1a5657f63f40b1ff

Thanks Justin!

Cheers,

Brian

On Tue, Jul 13, 2010 at 10:05 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> Hello all,
>
> We wanted to update everyone on the IPython sprint at SciPy 2010.  We
> had lots of people sprinting on IPython (something like 7-10), which
> was fantastic.  Here are some highlights:
>
> * A number of us worked on PyZMQ itself, to prepare for IPython's
> relying on it more and more.
> * Justin Riley added a nice example to PyZMQ that uses 0MQ sockets to
> talk to a MongoDB based key-value store.
> * Min Ragan-Kelley and Brian Granger added a Tornado compatible event
> loop to PyZMQ.  This event loop will help us refactor the parts of
> IPython that use Twisted to use PyZMQ instead.
> * Min created a nice PyZMQ based log handler for the logging module.
> This makes it easy to build distributed logging systems using the
> publish/subscribe sockets of 0MQ.  We will be using this throughout
> IPython.
> * We spent a considerable amount of time discussing how to port the
> IPython parallel computing platform from Twisted to PyZMQ.  Min
> started coding a prototype task controller using PyZMQ.
> * Justin Riley created a very nice diagram illustrating the design of
> the new PyZMQ based kernel/frontend architecture for IPython.
> * Everyone helped code a new interface that will allow various IPython
> frontends to interact with the IPython kernel using PyZMQ.
> * Fernando Perez and Jonathan March worked on the git workflow and on
> getting Jonathan set up for patch management.
> * Fernando Perez and Robert Kern worked on the message specification
> for the JSON message format we are starting to use.
>
> Much of this work will be hitting master over the summer.  Thanks to
> everyone for helping out and I apologize if I forgot anyone or
> anything.
>
> Cheers,
>
> Brian
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Wed Jul 14 03:38:07 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 14 Jul 2010 00:38:07 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
Message-ID: <AANLkTinySOhN6OMACQDTAllRdwzD4htdq10pUCz5Wh8i@mail.gmail.com>

On Tue, Jul 13, 2010 at 10:05 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> Hello all,
>
> We wanted to update everyone on the IPython sprint at SciPy 2010.  We
> had lots of people sprinting on IPython (something like 7-10), which
> was fantastic.  Here are some highlights:

[...]

Sorry, we forgot to include:

* Omar Zapata, one of the two IPython Google Summer of Code students
who was present at the sprints, made progress on the terminal-based
zmq interactive frontend, and implemented the extra socket design to
support calls to raw_input() in the kernel.
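
Roughly the shape of that design, as I understand it (a sketch only; the
socket wiring and message fields here are illustrative, not the actual
protocol):

    # Kernel side: raw_input() gets replaced by a function that sends the
    # prompt to the frontend over a dedicated stdin socket and blocks
    # until a line comes back, instead of reading the kernel's own stdin.
    def make_raw_input(stdin_socket):
        def zmq_raw_input(prompt=''):
            stdin_socket.send_json({'prompt': prompt})
            return stdin_socket.recv_json()['line']
        return zmq_raw_input

    # __builtin__.raw_input = make_raw_input(stdin_socket)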

Cheers,

f


From fperez.net at gmail.com  Wed Jul 14 03:43:14 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 14 Jul 2010 00:43:14 -0700
Subject: [IPython-dev] %run -d is broken in Python 2.7
In-Reply-To: <1213230248.20100713052612@mail.mipt.ru>
References: <1213230248.20100713052612@mail.mipt.ru>
Message-ID: <AANLkTinIVMq8qokXEPSzdj6zcCUK4xIJwNY6gYLShMcP@mail.gmail.com>

2010/7/12 vano <vano at mail.mipt.ru>:
> After thorough investigation, it turned out to be a pdb issue (details are
> on the link), so I filed a bug there (http://bugs.python.org/issue9230) as
> well as a bugfix.
>
> If any of you have write access to the Python source, you can help me to get
> it fixed quickly.

Ouch, thanks for finding this and providing the pdb patch.
Unfortunately I don't have write access to Python itself (I have
2-year-old patches lingering in the Python tracker, I'm afraid).

If you can make a (most likely ugly) monkeypatch at runtime to fix
this from the IPython side, we'll include that.  There's a good chance
this will take forever to fix in Python itself, so carrying our own
version-checked ugly fix is better than having broken functionality
for 2.7 users.

I imagine that grabbing the pdb instance and injecting a frame object
into it will do the trick, from looking at your traceback.
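
Something like this version-checked wrapper is what I have in mind (a
sketch of the pattern only; 'reset' is just a stand-in patch point, since
the real one depends on your pdb fix):

    import sys

    def apply_pdb_workaround():
        # Only touch the known-broken version; drop this once
        # http://bugs.python.org/issue9230 is fixed upstream.
        if sys.version_info[:2] != (2, 7):
            return
        import pdb
        orig_reset = pdb.Pdb.reset
        def reset(self):
            orig_reset(self)
            # ...repair/inject the frame state here, per the pdb patch...
        pdb.Pdb.reset = reset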

If you make such a fix, just post a pull request for us or a patch,
as you prefer:

http://ipython.scipy.org/doc/nightly/html/development/gitwash/index.html

and we'll be happy to include it.

Cheers,

f


From fperez.net at gmail.com  Wed Jul 14 03:50:21 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 14 Jul 2010 00:50:21 -0700
Subject: [IPython-dev] debugger.py refactoring
In-Reply-To: <BFCF3C40-5935-44BA-ABE1-2B2F20E99D87@cs.toronto.edu>
References: <BFCF3C40-5935-44BA-ABE1-2B2F20E99D87@cs.toronto.edu>
Message-ID: <AANLkTimt4UEkkpJnG9HKppKDJajE4GVE3ClF9hzHpyzL@mail.gmail.com>

Hi David,

On Thu, Jul 8, 2010 at 4:52 PM, David Warde-Farley <dwf at cs.toronto.edu> wrote:
>
> I was just wondering (I didn't see a roadmap anywhere but then again I didn't look very hard) if a refactoring was planned for IPython/core/debugger.py, in particular to make it more extensible to third party tools. I just hacked in support for Andreas Kloeckner's pudb ( http://pypi.python.org/pypi/pudb ) but it wasn't pretty in the least. I guess some sort of 'debugger registry' would make sense, that a user could call into from their ipy_user_conf.py in order to hook up their favourite debugger's post-mortem mode?
>
> This is all just fanciful thinking aloud, but if no one's planning on doing anything to debugger.py in the near future I might give it a try when I get back into town next week.

Certainly, this kind of improvement for integration with other tools is
always welcome.

Could you post your fixes as either an attached patch or a github pull
request, whatever you find most convenient? Some directions:

http://ipython.scipy.org/doc/nightly/html/development/gitwash/index.html

Cheers,

f


From P.Schellart at astro.ru.nl  Wed Jul 14 04:16:51 2010
From: P.Schellart at astro.ru.nl (Pim Schellart)
Date: Wed, 14 Jul 2010 10:16:51 +0200
Subject: [IPython-dev] Error when running ipcluster
Message-ID: <AANLkTinwDeklmtMIWWLouYpi2UpOFs8bRALvviKWo8sK@mail.gmail.com>

Dear IPython developers,

I would like to use IPython to do some basic parallelization.
However when I execute ipcluster to setup a controller and some
engines I get the following error:

~ $ ipcluster
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.6/bin/ipcluster",
line 16, in <module>
    from IPython.kernel.ipclusterapp import launch_new_instance
ImportError: No module named ipclusterapp

I have installed all dependencies listed as required for the parallel
computing tasks.
When building IPython (0.10), they are found as follows:

BUILDING IPYTHON
                python: 2.6.5 (r265:79063, Jun  6 2010, 11:37:41)  [GCC
                        4.2.1 (Apple Inc. build 5659)]
              platform: darwin

OPTIONAL DEPENDENCIES
        Zope.Interface: yes
               Twisted: 10.1.0
              Foolscap: 0.5.1
               OpenSSL: 0.6
                sphinx: 1.0b2
              pygments: 1.3.1
                  nose: Not found (required for running the test suite)
               pexpect: no (required for running standalone doctests)

Any idea what is going wrong here?

Kind regards,

Pim Schellart


From fperez.net at gmail.com  Wed Jul 14 14:24:27 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 14 Jul 2010 11:24:27 -0700
Subject: [IPython-dev] Error when running ipcluster
In-Reply-To: <AANLkTinwDeklmtMIWWLouYpi2UpOFs8bRALvviKWo8sK@mail.gmail.com>
References: <AANLkTinwDeklmtMIWWLouYpi2UpOFs8bRALvviKWo8sK@mail.gmail.com>
Message-ID: <AANLkTilk4OAuxKzD2Eq92-Zg-ffT2ab0Fmlpl9D5JCjf@mail.gmail.com>

Hi Pim,

On Wed, Jul 14, 2010 at 1:16 AM, Pim Schellart <P.Schellart at astro.ru.nl> wrote:
>
> I would like to use IPython to do some basic parallelization.
> However when I execute ipcluster to setup a controller and some
> engines I get the following error:
>
> ~ $ ipcluster
> Traceback (most recent call last):
>   File "/Library/Frameworks/Python.framework/Versions/2.6/bin/ipcluster",
> line 16, in <module>
>     from IPython.kernel.ipclusterapp import launch_new_instance
> ImportError: No module named ipclusterapp
[...]
> Any idea what is going wrong here?

That script is a 0.11 series startup script, while you mention you are
running 0.10.  It seems you've somehow mixed up the installation of
the 0.10 and the 0.11 versions of IPython...  It's possible you have
in your $PATH a 0.11 startup ipcluster script, but the version of the
IPython package on your $PYTHONPATH is the 0.10 version...
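
A quick way to check which IPython your interpreter actually picks up:

    import IPython
    print IPython.__version__   # should say 0.10 for your install
    print IPython.__file__      # shows which copy on disk gets imported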

I'd suggest cleaning up the combination and reinstalling, somehow you
have a weird hybrid...

Cheers,

f


From Fernando.Perez at berkeley.edu  Wed Jul 14 15:17:43 2010
From: Fernando.Perez at berkeley.edu (Fernando Perez)
Date: Wed, 14 Jul 2010 12:17:43 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com> 
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
Message-ID: <AANLkTikY-j2kNhLR_2egEh2qrI4cMqqDp8BHcTCpkODo@mail.gmail.com>

Hi Evan,

[ quick note, I'm cc-ing the ipython-dev list so these technical
discussions on the new code happen there, so other developers benefit
as well ]

On Wed, Jul 14, 2010 at 12:10, Brian Granger <ellisonbg at gmail.com> wrote:
> On Wed, Jul 14, 2010 at 10:56 AM, Evan Patterson <epatters at enthought.com> wrote:
>> Hi guys,
>>
>> I've been making decent progress at connecting my FrontendWidget to a
>> KernelManager. I have, however, encountered one fairly serious problem:
>> since the XREQ and SUB channels of the KernelManager are in separate
>> threads, there is no guarantee about the order in which signals are emitted.
>> I'm finding that 'execute_reply' signals are frequently emitted *before* all
>> the output signals have been emitted.
>
> Yes, that is definitely possible and we really don't have control over
> it.  Part of the difficulty is that the SUB/SUB channel does buffering
> of stdout/stderr (just like sys.stdout/sys.stderr).  While it will
> make your application logic more difficult, I think this is something
> fundamental we have to live with.  Also, I wouldn't be surprised if
> the same were true of the regular Python shell because of the
> buffering of stdout.
>
>> It seems to me that we should be enforcing, to the extent that we can (i.e.
>> ignoring threads in the kernel for now), the assumption that when
>> 'execute_reply' is signaled, all output has been signaled. Is this
>> reasonable?
>
> I don't think so.  I would write the frontend to allow for arbitrary
> timing of the execute_reply and the SUB messages.  You will have to
> use the parent_id information to order things properly in the GUI.
> Does this make sense?  I think if we try to impose the timing of the
> signals, we will end up breaking the model or introducing extra
> latencies.  Let us know if you have questions.  I know this will
> probably be one of the more subtle parts of the frontend logic.
>
> Cheers,
>
> Brian
>
>
>
>
>> Evan
>>
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>

I was in the middle of writing my reply when Brian's arrived pretty
much along the same lines :)

The parent_id info is the key: clients should have enough information
to reconstruct the chain of messages with this, because every message
effectively has a 'pointer' to its parent.  It's possible that we may
need to extend the message spec with a bit more data to make this
easier; if you spot anything along those lines, we can look into it.
We cobbled together that message spec very quickly, so it should by no
means be considered final.

But the key idea is always: a client makes a request, and all outputs
that are products of honoring this request (on stdout/err, pyout, etc)
should have enough info in the messages to trace them back to that
original cause.  With this, the client should be able to put the
output in the right places as it arrives, since it can reconstruct
what output goes with what input.
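
In code, the client-side bookkeeping can be as simple as this rough
sketch (the field names follow the draft spec and may still change):

    pending = {}   # msg_id of our own requests -> outputs collected so far

    def on_request_sent(msg):
        pending[msg['header']['msg_id']] = []

    def on_sub_message(msg):
        parent = msg['parent_header'].get('msg_id')
        if parent in pending:
            pending[parent].append(msg)   # output caused by one of ours
        else:
            # someone else's traffic: show it as foreign output
            print '[OUT from %s]' % msg['header']['session'], msg['content']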

The simplest example of that is what we showed you with two terminal
clients talking to the same kernel, where each client would show the
other's inputs and outputs with [OUT from ...] messages.  The client
was receiving *all* outputs on its SUB socket, and disentangling what
came from its own inputs vs what was input/output from other clients
running simultaneously.

Let us know if this is clear...

Cheers,

f


From epatters at enthought.com  Wed Jul 14 17:21:18 2010
From: epatters at enthought.com (Evan Patterson)
Date: Wed, 14 Jul 2010 16:21:18 -0500
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
Message-ID: <AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>

On Wed, Jul 14, 2010 at 2:10 PM, Brian Granger <ellisonbg at gmail.com> wrote:

> On Wed, Jul 14, 2010 at 10:56 AM, Evan Patterson <epatters at enthought.com>
> wrote:
> > Hi guys,
> >
> > I've been making decent progress at connecting my FrontendWidget to a
> > KernelManager. I have, however, encountered one fairly serious problem:
> > since the XREQ and SUB channels of the KernelManager are in separate
> > threads, there is no guarantee about the order in which signals are
> emitted.
> > I'm finding that 'execute_reply' signals are frequently emitted *before*
> all
> > the output signals have been emitted.
>
> Yes, that is definitely possible and we really don't have control over
> it.  Part of the difficulty is that the SUB/SUB channel does buffering
> of stdout/stderr (just like sys.stdout/sys.stderr).  While it will
> make your application logic more difficult, I think this is something
> fundamental we have to live with.  Also, I wouldn't be surprised if
> the same were true of the regular python shell because of the
> buffering of stdout.
>

I'm not sure it's fair to call this problem fundamental (if we ignore the
corner case of the threads in the kernel). After all, output and execution
completion happen in a very predictable order in the kernel; it's only our
use of multiple frontend-side channel threads that has complicated the
issue.

In a regular same-process shell, this wouldn't be a problem because you
would simply flush stdout before writing the new prompt. It makes sense to
be able to request a flush here, I think. A 'flush' in this case would just
consist of making the SubChannel thread active, so that its event loop
would pick up whatever it needs to. I believe calling time.sleep(0) once in
the XReqChannel before sending an execute reply will be sufficient. The
latency introduced should be negligible. I'll experiment with this.
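
Concretely, the idea is no more than this (hypothetical names; whether a
single sleep(0) always suffices is exactly what I need to test):

    import time

    def send_execute_reply(reply, emit_signal):
        time.sleep(0)       # yield the GIL so the SubChannel thread can
                            # deliver any output it has already received
        emit_signal(reply)  # only then announce that execution finished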


> > It seems to me that we should be enforcing, to the extent that we can
> (i.e.
> > ignoring threads in the kernel for now), the assumption that when
> > 'execute_reply' is signaled, all output has been signaled. Is this
> > reasonable?
>
> I don't think so.  I would write the frontend to allow for arbitrary
> timing of the execute_reply and the SUB messages.  You will have to
> use the parent_id information to order things properly in the GUI.
> Does this make sense?  I think if we try to impose the timing of the
> signals, we will end up breaking the model or introducing extra
> latencies.  Let us know if you have questions.  I know this will
> probably be one of the more subtle parts of the frontend logic.
>

Yes, this is something that will be quite difficult to get right. For
frontend implementors who are interested only in console-style interaction,
it doesn't make sense for them to have to worry about this.

Evan


>
> Cheers,
>
> Brian
>
>
>
>
> > Evan
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100714/ba14ec44/attachment.html>

From justin.t.riley at gmail.com  Thu Jul 15 10:49:05 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Thu, 15 Jul 2010 10:49:05 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
Message-ID: <4C3F1FE1.4040000@gmail.com>

On 07/14/2010 01:05 AM, Brian Granger wrote:
> Hello all,
> 
> We wanted to update everyone on the IPython sprint at SciPy 2010.  We
> had lots of people sprinting on IPython (something like 7-10), which
> was fantastic.  Here are some highlights:
> 

Also just wanted to mention, for those using the EC2 cloud, that during
SciPy I also added a new plugin to the StarCluster project
(http://web.mit.edu/starcluster) that will automatically configure and
launch ipcluster on EC2:

http://github.com/jtriley/StarCluster/blob/master/starcluster/plugins/ipcluster.py

The ipcluster plugin will be released in the next version coming out soon.

<plug>
For those unfamiliar, StarCluster creates/configures scientific
computing clusters on EC2. The clusters launched have MPI and Sun Grid
Engine as well as NumPy/SciPy installations compiled against an
ATLAS/LAPACK that has been optimized for the 8-core instance types.</plug>

Thanks,

~Justin


From epatters at enthought.com  Thu Jul 15 12:23:57 2010
From: epatters at enthought.com (Evan Patterson)
Date: Thu, 15 Jul 2010 11:23:57 -0500
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
Message-ID: <AANLkTimSEKwpj18ILIdU9f-0D7L94Ai81nJdfEUt1x6a@mail.gmail.com>

I've added a 'flush' method to the KernelManager here:

http://github.com/epatters/ipython/commit/2ecde29e8f2a5e7012236f61819b2f7833248553

It works, although there may be a more intelligent way to do it. That being
said, I tried a number of different things, and none of the others worked.

Brian: since the 'flush' method must be called explicitly by clients, this
won't break our model or induce extra latencies for clients that want to
take a more sophisticated approach to SUB channel monitoring.

Evan

On Wed, Jul 14, 2010 at 4:21 PM, Evan Patterson <epatters at enthought.com>wrote:

> On Wed, Jul 14, 2010 at 2:10 PM, Brian Granger <ellisonbg at gmail.com>wrote:
>
>> On Wed, Jul 14, 2010 at 10:56 AM, Evan Patterson <epatters at enthought.com>
>> wrote:
>> > Hi guys,
>> >
>> > I've been making decent progress at connecting my FrontendWidget to a
>> > KernelManager. I have, however, encountered one fairly serious problem:
>> > since the XREQ and SUB channels of the KernelManager are in separate
>> > threads, there is no guarantee about the order in which signals are
>> emitted.
>> > I'm finding that 'execute_reply' signals are frequently emitted *before*
>> all
>> > the output signals have been emitted.
>>
>> Yes, that is definitely possible and we really don't have control over
>> it.  Part of the difficulty is that the SUB/SUB channel does buffering
>> of stdout/stderr (just like sys.stdout/sys.stderr).  While it will
>> make your application logic more difficult, I think this is something
>> fundamental we have to live with.  Also, I wouldn't be surprised if
>> the same were true of the regular python shell because of the
>> buffering of stdout.
>>
>
> I'm not sure it's fair to call this problem fundamental (if we ignore the
> corner case of the threads in the kernel). After all, output and execution
> completion happen in a very predictable order in the kernel; it's only our
> use of multiple frontend-side channel threads that has complicated the
> issue.
>
> In a regular same-process shell, this wouldn't be a problem because you
> would simply flush stdout before writing the new prompt. It makes sense to
> be able to request a flush here, I think. A 'flush' in this case would just
> consist of making the SubChannel thread active, so that its event loop
> would pick up whatever it needs to. I believe calling time.sleep(0) once in
> the XReqChannel before sending an execute reply will be sufficient. The
> latency introduced should be negligible. I'll experiment with this.
>
>
>> > It seems to me that we should be enforcing, to the extent that we can
>> (i.e.
>> > ignoring threads in the kernel for now), the assumption that when
>> > 'execute_reply' is signaled, all output has been signaled. Is this
>> > reasonable?
>>
>> I don't think so.  I would write the frontend to allow for arbitrary
>> timing of the execute_reply and the SUB messages.  You will have to
>> use the parent_id information to order things properly in the GUI.
>> Does this make sense?  I think if we try to impose the timing of the
>> signals, we will end up breaking the model or introducing extra
>> latencies.  Let us know if you have questions.  I know this will
>> probably be one of the more subtle parts of the frontend logic.
>>
>
> Yes, this is something that will be quite difficult to get right. For
> frontend implementors who are interested only in console-style interaction,
> it doesn't make sense for them to have to worry about this.
>
> Evan
>
>
>>
>> Cheers,
>>
>> Brian
>>
>>
>>
>>
>> > Evan
>> >
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100715/a3ce46d8/attachment.html>

From ellisonbg at gmail.com  Thu Jul 15 13:34:36 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 15 Jul 2010 10:34:36 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C3F1FE1.4040000@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
Message-ID: <AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>

Justin,

Thanks for the post.  You should also know that it looks like someone
is going to add native SGE support to ipcluster for 0.10.1.  This
should allow the starting of the engines on the compute nodes using
SGE.  I was quite excited with Amazon's announcement that they were
adding a new HPC instance type.  Sounds killer.

Cheers,

Brian

On Thu, Jul 15, 2010 at 7:49 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> On 07/14/2010 01:05 AM, Brian Granger wrote:
>> Hello all,
>>
>> We wanted to update everyone on the IPython sprint at SciPy 2010.  We
>> had lots of people sprinting on IPython (something like 7-10), which
>> was fantastic.  Here are some highlights:
>>
>
> Also just wanted to mention, for those using the EC2 cloud, that during
> SciPy I also added a new plugin to the StarCluster project
> (http://web.mit.edu/starcluster) that will automatically configure and
> launch ipcluster on EC2:
>
> http://github.com/jtriley/StarCluster/blob/master/starcluster/plugins/ipcluster.py
>
> The ipcluster plugin will be released in the next version coming out soon.
>
> <plug>
> For those unfamiliar, StarCluster creates/configures scientific
> computing clusters on EC2. The clusters launched have MPI and Sun Grid
> Engine as well as NumPy/SciPy installations compiled against an
> ATLAS/LAPACK that has been optimized for the 8-core instance types.</plug>
>
> Thanks,
>
> ~Justin
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Thu Jul 15 13:44:10 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 15 Jul 2010 10:44:10 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTimSEKwpj18ILIdU9f-0D7L94Ai81nJdfEUt1x6a@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
	<AANLkTimSEKwpj18ILIdU9f-0D7L94Ai81nJdfEUt1x6a@mail.gmail.com>
Message-ID: <AANLkTilNHdnaqtX1NCLKtGerXn4wxVC2QT9cjm0jZaLb@mail.gmail.com>

On Thu, Jul 15, 2010 at 9:23 AM, Evan Patterson <epatters at enthought.com> wrote:
> I've added a 'flush' method to the KernelManager here:
>
> http://github.com/epatters/ipython/commit/2ecde29e8f2a5e7012236f61819b2f7833248553
>
> It works, although there may be a more intelligent way to do it. That being
> said, I tried a number of different things, and none of the others worked.

The only issue that I see with this is that if the SUB channel keeps
getting incoming messages, flush will not return immediately.

> Brian: since the 'flush' method must be called explicitly by clients, this
> won't break our model or induce extra latencies for clients that want to
> take a more sophisticated approach to SUB channel monitoring.

That is true, so I think if this helps you to get going, it is worth
using for now.  But I still don't see why we can't reorder the messages in
the frontend based on the parent_ids.  Just so you know, Fernando and
I have set aside time starting this Sunday to work extensively on
this.  At that time we can talk more about this issue.
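
For concreteness, the kind of parent_id bookkeeping I mean would be
something like this (a sketch only; the class and render() are
hypothetical, and a real version would also have to handle output that
arrives after the reply):

from collections import defaultdict

class OutputOrderer(object):
    """Sketch: group SUB messages under the execute request that
    produced them, keyed by the parent header's msg_id."""

    def __init__(self):
        self.pending = defaultdict(list)

    def on_sub_message(self, msg):
        # Buffer stream/pyout/pyerr messages until their request is done.
        self.pending[msg['parent_header']['msg_id']].append(msg)

    def on_execute_reply(self, reply):
        # Everything buffered for this request can now be rendered in
        # arrival order, no matter when the reply itself landed.
        for msg in self.pending.pop(reply['parent_header']['msg_id'], []):
            self.render(msg)

    def render(self, msg):
        print msg['content']  # stand-in for the real GUI update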

Cheers,

Brian



> Evan
>
> On Wed, Jul 14, 2010 at 4:21 PM, Evan Patterson <epatters at enthought.com>
> wrote:
>>
>> On Wed, Jul 14, 2010 at 2:10 PM, Brian Granger <ellisonbg at gmail.com>
>> wrote:
>>>
>>> On Wed, Jul 14, 2010 at 10:56 AM, Evan Patterson <epatters at enthought.com>
>>> wrote:
>>> > Hi guys,
>>> >
>>> > I've been making decent progress at connecting my FrontendWidget to a
>>> > KernelManager. I have, however, encountered one fairly serious problem:
>>> > since the XREQ and SUB channels of the KernelManager are in separate
>>> > threads, there is no guarantee about the order in which signals are
>>> > emitted.
>>> > I'm finding that 'execute_reply' signals are frequently emitted
>>> > *before* all
>>> > the output signals have been emitted.
>>>
>>> Yes, that is definitely possible and we really don't have control over
>>> it.  Part of the difficulty is that the PUB/SUB channel does buffering
>>> of stdout/stderr (just like sys.stdout/sys.stderr).  While it will
>>> make your application logic more difficult, I think this is something
>>> fundamental we have to live with.  Also, I wouldn't be surprised if
>>> the same were true of the regular python shell because of the
>>> buffering of stdout.
>>
>> I'm not sure it's fair to call this problem fundamental (if we ignore the
>> corner case of the threads in the kernel). After all, output and execution
>> completion happen in a very predictable order in the kernel; it's only our
>> use of multiple frontend-side channel threads that has complicated the
>> issue.
>>
>> In a regular same-process shell, this wouldn't be a problem because you
>> would simply flush stdout before writing the new prompt. It makes sense to
>> be able to request a flush here, I think. A 'flush' in this case would just
>> consist of making the SubChannel thread active, so that its event loop
>> would pick up whatever it needs to. I believe calling time.sleep(0) once in
>> the XReqChannel before sending an execute reply will be sufficient. The
>> latency introduced should be negligible. I'll experiment with this.
>>
>>>
>>> > It seems to me that we should be enforcing, to the extent that we can
>>> > (i.e.
>>> > ignoring threads in the kernel for now), the assumption that when
>>> > 'execute_reply' is signaled, all output has been signaled. Is this
>>> > reasonable?
>>>
>>> I don't think so.  I would write the frontend to allow for arbitrary
>>> timing of the execute_reply and the SUB messages.  You will have to
>>> use the parent_id information to order things properly in the GUI.
>>> Does this make sense?  I think if we try to impose the timing of the
>>> signals, we will end up breaking the model or introducing extra
>>> latencies.  Let us know if you have questions.  I know this will
>>> probably be one of the more subtle parts of the frontend logic.
>>
>> Yes, this is something that will be quite difficult to get right. For
>> frontend implementors who are interested only in console-style interaction,
>> it doesn't make sense for them to have to worry about this.
>>
>> Evan
>>
>>>
>>> Cheers,
>>>
>>> Brian
>>>
>>>
>>>
>>>
>>> > Evan
>>> >
>>>
>>>
>>>
>>> --
>>> Brian E. Granger, Ph.D.
>>> Assistant Professor of Physics
>>> Cal Poly State University, San Luis Obispo
>>> bgranger at calpoly.edu
>>> ellisonbg at gmail.com
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From epatters at enthought.com  Thu Jul 15 14:00:12 2010
From: epatters at enthought.com (Evan Patterson)
Date: Thu, 15 Jul 2010 13:00:12 -0500
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTilNHdnaqtX1NCLKtGerXn4wxVC2QT9cjm0jZaLb@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
	<AANLkTimSEKwpj18ILIdU9f-0D7L94Ai81nJdfEUt1x6a@mail.gmail.com>
	<AANLkTilNHdnaqtX1NCLKtGerXn4wxVC2QT9cjm0jZaLb@mail.gmail.com>
Message-ID: <AANLkTilkmowbG_LsQfbEsQjHlB7tBkgoRu2YFRTo945Y@mail.gmail.com>

On Thu, Jul 15, 2010 at 12:44 PM, Brian Granger <ellisonbg at gmail.com> wrote:

> On Thu, Jul 15, 2010 at 9:23 AM, Evan Patterson <epatters at enthought.com>
> wrote:
> > I've added a 'flush' method to the KernelManager here:
> >
> >
> http://github.com/epatters/ipython/commit/2ecde29e8f2a5e7012236f61819b2f7833248553
> >
> > It works, although there may be a more intelligent way to do it. That
> being
> > said, I tried a number of different things, and none of the others
> worked.
>
> The only issue that I see with this is that if the SUB channel keeps
> getting incoming messages, flush will not return immediately.
>
> > Brian: since the 'flush' method must be called explicitly by clients, this
> > won't break our model or induce extra latencies for clients that want to
> > take a more sophisticated approach to SUB channel monitoring.
>
> That is true, so I think if this helps you to get going, it is worth
> using for now.  But I still don't see why we can't reorder the messages in
> the frontend based on the parent_ids.  Just so you know, Fernando and
> I have set aside time starting this Sunday to work extensively on
> this.  At that time we can talk more about this issue.
>

Just to clarify: the issue isn't so much that the messages themselves have to
be reordered, but what this implies for the text widget update. Currently, I
more or less blindly append text to the end of the text widget buffer as I
go. To support arbitrary order insertion, I would have to have a mechanism
whereby blocks of text are tagged according to the message that they
correspond to. Then, whenever output messages come in, I would have to find
the correct spot to insert them. Since this is considerably more complex than
just calling 'flush', doing this the "right" way is not a priority until more
important things get done.
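
Schematically, the tagging would be something like this (a sketch with a
plain string standing in for the Qt text buffer; none of these names
exist in the FrontendWidget):

class TaggedBuffer(object):
    """Sketch: remember where each request's output block ends so that
    late-arriving output can be spliced in at the right offset."""

    def __init__(self):
        self.text = ''
        self.block_end = {}  # parent msg_id -> offset into self.text

    def start_block(self, msg_id):
        # Called when the prompt/input for a request is written.
        self.block_end[msg_id] = len(self.text)

    def insert_output(self, msg_id, output):
        # Splice the output in at this request's insertion point...
        pos = self.block_end[msg_id]
        self.text = self.text[:pos] + output + self.text[pos:]
        # ...and shift every insertion point at or after the splice.
        for mid, p in self.block_end.items():
            if p >= pos:
                self.block_end[mid] = p + len(output)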

Evan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100715/6f8b78c1/attachment.html>

From ellisonbg at gmail.com  Thu Jul 15 14:07:08 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 15 Jul 2010 11:07:08 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
Message-ID: <AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>

Evan,

On Wed, Jul 14, 2010 at 2:21 PM, Evan Patterson <epatters at enthought.com> wrote:
> On Wed, Jul 14, 2010 at 2:10 PM, Brian Granger <ellisonbg at gmail.com> wrote:
>>
>> On Wed, Jul 14, 2010 at 10:56 AM, Evan Patterson <epatters at enthought.com>
>> wrote:
>> > Hi guys,
>> >
>> > I've been making decent progress at connecting my FrontendWidget to a
>> > KernelManager. I have, however, encountered one fairly serious problem:
>> > since the XREQ and SUB channels of the KernelManager are in separate
>> > threads, there is no guarantee about the order in which signals are
>> > emitted.
>> > I'm finding that 'execute_reply' signals are frequently emitted *before*
>> > all
>> > the output signals have been emitted.
>>
>> Yes, that is definitely possible and we really don't have control over
>> it.  Part of the difficulty is that the PUB/SUB channel does buffering
>> of stdout/stderr (just like sys.stdout/sys.stderr).  While it will
>> make your application logic more difficult, I think this is something
>> fundamental we have to live with.  Also, I wouldn't be surprised if
>> the same were true of the regular python shell because of the
>> buffering of stdout.
>
> I'm not sure it's fair to call this problem fundamental (if we ignore the
> corner case of the threads in the kernel). After all, output and execution
> completion happen in a very predictable order in the kernel; it's only our
> use of multiple frontend-side channel threads that has complicated the
> issue.

But this leaves out one of the main causes of unpredictability in a
distributed system: network and IO latency.  In our architecture,
this occurs when we ask 0MQ to send a message.  At that point, it is
up to 0MQ, the OS kernel, and the network stack (including routers,
etc.) to deliver the messages in the best way they can.  In a
multi-channel model like this, there is simply no promise that the
order in which we send messages is the order in which they will
arrive.  This is a fundamental issue that I consider a feature of our
current architecture, because, in my experience, if you artificially
try to impose determinacy on network traffic you end up with extremely
awkward error handling.

> In a regular same-process shell, this wouldn't be a problem because you
> would simply flush stdout before writing the new prompt. It makes sense to
> be able to request a flush here, I think. A 'flush' in this case would just
> consist of making the SubChannel thread active, so that its event loop
> would pick up whatever it needs to. I believe calling time.sleep(0) once in
> the XReqChannel before sending an execute reply will be sufficient. The
> latency introduced should be negligible. I'll experiment with this.

OK, I think this is worth a shot.

>>
>> > It seems to me that we should be enforcing, to the extent that we can
>> > (i.e.
>> > ignoring threads in the kernel for now), the assumption that when
>> > 'execute_reply' is signaled, all output has been signaled. Is this
>> > reasonable?

Because of the networking issues, I don't think so.

>> I don't think so.  I would write the frontend to allow for arbitrary
>> timing of the execute_reply and the SUB messages.  You will have to
>> use the parent_id information to order things properly in the GUI.
>> Does this make sense?  I think if we try to impose the timing of the
>> signals, we will end up breaking the model or introducing extra
>> latencies.  Let us know if you have questions.  I know this will
>> probably be one of the more subtle parts of the frontend logic.
>
> Yes, this is something that will be quite difficult to get right. For
> frontend implementors who are interested only in console-style interaction,
> it doesn't make sense for them to have to worry about this.

Definitely hard to get right, and terminal-based frontends will
definitely need something like flush.  Let's see how it goes with this
approach.

Brian

> Evan
>
>>
>> Cheers,
>>
>> Brian
>>
>>
>>
>>
>> > Evan
>> >
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Thu Jul 15 15:34:37 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 15 Jul 2010 12:34:37 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com> 
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
Message-ID: <AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>

On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger <ellisonbg at gmail.com> wrote:
Thanks for the post.  You should also know that it looks like someone
> is going to add native SGE support to ipcluster for 0.10.1.

Yes, Satra and I went over this last night in detail (thanks to Brian
for the pointers), and he said he might actually already have some
code for it.  I suspect we'll get this in soon.

Cheers,

f


From fperez.net at gmail.com  Thu Jul 15 16:22:40 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 15 Jul 2010 13:22:40 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com> 
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com> 
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com> 
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>
Message-ID: <AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>

On Thu, Jul 15, 2010 at 11:07 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>
> Definitely hard to get right, and terminal-based frontends will
> definitely need something like flush.  Let's see how it goes with this
> approach.

Though absent a real event loop with a callback model, it will
need to be implemented with a real sleep(epsilon) and a total timeout.
Terminal frontends will always simply be bound to flushing what they
can and then moving on if nothing has come in the window they wait
for.  Such is life when your 'event loop' is the human hitting the
RETURN key...

Evan, quick question: when I open your frontend_widget, I see 100% cpu
utilization all the time.  Do you see this on your end?

Cheers,

f


From justin.t.riley at gmail.com  Thu Jul 15 16:33:32 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Thu, 15 Jul 2010 16:33:32 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
Message-ID: <4C3F709C.5080505@gmail.com>

This is great news. Right now StarCluster just takes advantage of
password-less ssh already being installed and runs:

$ ipcluster ssh --clusterfile /path/to/cluster_file.py

This works fine for now; however, having SGE support would allow
ipcluster's load to be accounted for by the queue.

Is Satra on the list? I have experience with SGE and could help with the
code if needed. I can also help test this functionality.

~Justin

On 07/15/2010 03:34 PM, Fernando Perez wrote:
> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>> Thanks for the post.  You should also know that it looks like someone
>> is going to add native SGE support to ipcluster for 0.10.1.
> 
> Yes, Satra and I went over this last night in detail (thanks to Brian
> for the pointers), and he said he might actually already have some
> code for it.  I suspect we'll get this in soon.
> 
> Cheers,
> 
> f



From justin.t.riley at gmail.com  Thu Jul 15 16:40:02 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Thu, 15 Jul 2010 16:40:02 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
Message-ID: <4C3F7222.6020902@gmail.com>

Brian,

> I was quite excited with Amazon's announcement that they were
> adding a new HPC instance type.  Sounds killer.

Same here, this is very exciting. The new HPC instance type packs a
serious punch, especially with regard to network latency between
machines, which has really been the main problem for folks running MPI on
EC2. I'll be working on getting support for the new HPC instance type in
StarCluster soon.

~Justin


On 07/15/2010 01:34 PM, Brian Granger wrote:
> Justin,
> 
> Thanks for the post.  You should also know that it looks like someone
> is going to add native SGE support to ipcluster for 0.10.1.  This
> should allow the starting of the engines on the compute nodes using
> SGE.  I was quite excited with Amazon's announcement that they were
> adding a new HPC instance type.  Sounds killer.
> 
> Cheers,
> 
> Brian
> 
> On Thu, Jul 15, 2010 at 7:49 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
>> On 07/14/2010 01:05 AM, Brian Granger wrote:
>>> Hello all,
>>>
>>> We wanted to update everyone on the IPython sprint at SciPy 2010.  We
>>> had lots of people sprinting on IPython (something like 7-10), which
>>> was fantastic.  Here are some highlights:
>>>
>>
>> Also just wanted to mention, for those using the EC2 cloud, that during
>> SciPy I also added a new plugin to the StarCluster project
>> (http://web.mit.edu/starcluster) that will automatically configure and
>> launch ipcluster on EC2:
>>
>> http://github.com/jtriley/StarCluster/blob/master/starcluster/plugins/ipcluster.py
>>
>> The ipcluster plugin will be released in the next version coming out soon.
>>
>> <plug>
>> For those unfamiliar, StarCluster creates/configures scientific
>> computing clusters on EC2. The clusters launched have MPI and Sun Grid
>> Engine as well as NumPy/SciPy installations compiled against an
>> ATLAS/LAPACK that has been optimized for the 8-core instance types.</plug>
>>
>> Thanks,
>>
>> ~Justin
>>
> 
> 
> 



From epatters at enthought.com  Thu Jul 15 17:24:12 2010
From: epatters at enthought.com (Evan Patterson)
Date: Thu, 15 Jul 2010 16:24:12 -0500
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>
	<AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>
Message-ID: <AANLkTiknHzaaTwSwg8MeqoXFJcSUCr8-eeVSxBIEMKu-@mail.gmail.com>

On Thu, Jul 15, 2010 at 3:22 PM, Fernando Perez <fperez.net at gmail.com> wrote:

> On Thu, Jul 15, 2010 at 11:07 AM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >
> > Definitely hard to get right, and terminal-based frontends will
> > definitely need something like flush.  Let's see how it goes with this
> > approach.
>
> Though absent a real event loop with a callback model, it will
> need to be implemented with a real sleep(epsilon) and a total timeout.
> Terminal frontends will always simply be bound to flushing what they
> can and then moving on if nothing has come in the window they wait
> for.  Such is life when your 'event loop' is the human hitting the
> RETURN key...
>

This may have been lost in the stream of messages, but you can see my
current implementation of flush here:

http://github.com/epatters/ipython/commit/2ecde29e8f2a5e7012236f61819b2f7833248553

I'm not sure if my approach is better or worse than using an epsilon for
sleep.


>
> Evan, quick question: when I open your frontend_widget, I see 100% cpu
> utilization all the time.  Do you see this on your end?
>

I hadn't noticed this before (probably because I never pay attention to what
my CPU utilization is), but I am seeing this on my end. Thanks for pointing
it out; I'll look into it.

Evan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100715/20089a6a/attachment.html>

From fperez.net at gmail.com  Thu Jul 15 17:31:05 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 15 Jul 2010 14:31:05 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C3F7222.6020902@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com> 
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com> 
	<4C3F7222.6020902@gmail.com>
Message-ID: <AANLkTimkXn9IAbaM9e--OeELn48ONKKuv6KqiVXR0c4N@mail.gmail.com>

On Thu, Jul 15, 2010 at 1:40 PM, Justin Riley <justin.t.riley at gmail.com> wrote:
>
> Same here, this is very exciting. The new HPC instance type packs a
> serious punch especially with regards to network latency between
> machines

Have you tested it yet?  I saw they listed 10Gb interconnects, but I
don't recall if they specified the kind of backplane and any actual
latency data...

Cheers,

f


From fperez.net at gmail.com  Thu Jul 15 17:34:42 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 15 Jul 2010 14:34:42 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTiknHzaaTwSwg8MeqoXFJcSUCr8-eeVSxBIEMKu-@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com> 
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com> 
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com> 
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com> 
	<AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com> 
	<AANLkTiknHzaaTwSwg8MeqoXFJcSUCr8-eeVSxBIEMKu-@mail.gmail.com>
Message-ID: <AANLkTilPkvaxXsCD2OHsKk85MLEiU81Ccl8XKRTT5Tt4@mail.gmail.com>

Hi Evan,

On Thu, Jul 15, 2010 at 2:24 PM, Evan Patterson <epatters at enthought.com> wrote:
> This may have been lost in the stream of messages, but you can see my
> current implementation of flush here:
>
> http://github.com/epatters/ipython/commit/2ecde29e8f2a5e7012236f61819b2f7833248553
>
> I'm not sure if my approach is better or worse than using an epsilon for
> sleep.

Even a 0.01s sleep can help avoid hogging the CPU unnecessarily and is
completely below human thresholds.  It's also probably a good idea to
have a safety fallback, so the loop can't stay there forever.  Or do
we trust the ioloop to be bulletproof in terms of calling the flush
callback appropriately?  That part isn't clear to me yet.
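
Something along these lines, say (a sketch; epsilon and the total window
are illustrative numbers, and a real version would check specifically for
EAGAIN rather than treating every ZMQError as "nothing pending"):

import time
import zmq

def drain(sub_socket, handler, epsilon=0.01, total=0.25):
    # Pull everything currently queued on the SUB socket, sleeping
    # epsilon between empty polls, but never spinning for more than
    # `total` seconds even if messages keep trickling in.
    start = time.time()
    while time.time() - start < total:
        try:
            msg = sub_socket.recv_json(zmq.NOBLOCK)
        except zmq.ZMQError:
            time.sleep(epsilon)  # nothing pending right now
        else:
            handler(msg)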

>> Evan, quick question: when I open your frontend_widget, I see 100% cpu
>> utilization all the time.  Do you see this on your end?
>
> I hadn't noticed this before (probably because I never pay attention to what
> my CPU utilization is), but I am seeing this on my end. Thanks for pointing
> it out; I'll look into it.

I noticed it because my fans started making loud noises after a few
seconds of having your shell open.  A loud fan is a very good cpu
alert :)

Cheers,

f


From fperez.net at gmail.com  Thu Jul 15 17:42:01 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 15 Jul 2010 14:42:01 -0700
Subject: [IPython-dev] Paul Ivanov: Did you get any feedback from GH when I
	merged?
Message-ID: <AANLkTime9dtq7a9aFyMnbnVtIeQwlsH09J-DP44moogc@mail.gmail.com>

Hi Paul,

I just applied your pull request into trunk, thanks a lot for the bug
fix.  I used the GH interface to do it, and I'm curious whether it
generated any feedback to you when that happened or not.

Cheers,

f


From satra at mit.edu  Thu Jul 15 20:55:48 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Thu, 15 Jul 2010 20:55:48 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C3F709C.5080505@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
Message-ID: <AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>

hi justin,

i hope to test it out tonight. from what fernando and i discussed, this
should be relatively straightforward. once i'm done i'll push it to my fork
of ipython and announce it here for others to test.

cheers,

satra


On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley <justin.t.riley at gmail.com> wrote:

> This is great news. Right now StarCluster just takes advantage of
> password-less ssh already being installed and runs:
>
> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>
> This works fine for now, however, having SGE support would allow
> ipcluster's load to be accounted for by the queue.
>
> Is Satra on the list? I have experience with SGE and could help with the
> code if needed. I can also help test this functionality.
>
> ~Justin
>
> On 07/15/2010 03:34 PM, Fernando Perez wrote:
> > On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >> Thanks for the post.  You should also know that it looks like someone
> >> is going to add native SGE support to ipcluster for 0.10.1.
> >
> > Yes, Satra and I went over this last night in detail (thanks to Brian
> > for the pointers), and he said he might actually already have some
> > code for it.  I suspect we'll get this in soon.
> >
> > Cheers,
> >
> > f
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100715/347f3135/attachment.html>

From ellisonbg at gmail.com  Fri Jul 16 01:25:14 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Thu, 15 Jul 2010 22:25:14 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>
	<AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>
Message-ID: <AANLkTikSL5cplN12kwr0D--H289vQvyjdhFbBKRh1rrW@mail.gmail.com>

On Thu, Jul 15, 2010 at 1:22 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Thu, Jul 15, 2010 at 11:07 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>>
>> Definitely hard to get right, and terminal-based frontends will
>> definitely need something like flush.  Let's see how it goes with this
>> approach.
>
> Though absent a real event loop with a callback model, it will
> need to be implemented with a real sleep(epsilon) and a total timeout.
> Terminal frontends will always simply be bound to flushing what they
> can and then moving on if nothing has come in the window they wait
> for.  Such is life when your 'event loop' is the human hitting the
> RETURN key...
>
> Evan, quick question: when I open your frontend_widget, I see 100% cpu
> utilization all the time.  Do you see this on your end?

We should make sure we understand this.  Min and I found that our new
Tornado event loop in pyzmq was using 100% CPU because of a bug in the
poll timeout (units problems).  We have fixed this (so we think!), so
I am hopeful the current issue is coming from the flush logic.

Brian

> Cheers,
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From epatters at enthought.com  Fri Jul 16 10:40:55 2010
From: epatters at enthought.com (Evan Patterson)
Date: Fri, 16 Jul 2010 09:40:55 -0500
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTikSL5cplN12kwr0D--H289vQvyjdhFbBKRh1rrW@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>
	<AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>
	<AANLkTikSL5cplN12kwr0D--H289vQvyjdhFbBKRh1rrW@mail.gmail.com>
Message-ID: <AANLkTinlTz2e2ehIO4aFmCrdPvx3tJS_NF8fLPdfRPSD@mail.gmail.com>

On Fri, Jul 16, 2010 at 12:25 AM, Brian Granger <ellisonbg at gmail.com> wrote:

> On Thu, Jul 15, 2010 at 1:22 PM, Fernando Perez <fperez.net at gmail.com>
> wrote:
> > On Thu, Jul 15, 2010 at 11:07 AM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >>
> >> Definitely hard to get right, and terminal-based frontends will
> >> definitely need something like flush.  Let's see how it goes with this
> >> approach.
> >
> > Though absent a real event loop with a callback model, it will
> > need to be implemented with a real sleep(epsilon) and a total timeout.
> > Terminal frontends will always simply be bound to flushing what they
> > can and then moving on if nothing has come in the window they wait
> > for.  Such is life when your 'event loop' is the human hitting the
> > RETURN key...
> >
> > Evan, quick question: when I open your frontend_widget, I see 100% cpu
> > utilization all the time.  Do you see this on your end?
>
> We should make sure we understand this. Min and I found that our new
> Tornado event loop in pyzmq was using 100% CPU because of a bug in the
>  poll timeout (units problems).  We have fixed this (so we think!), so
> I am hopeful the current issue is coming from the flush logic.
>

Unfortunately, this does not seem to be the case. I have confirmed that the
problem is indeed with the IOLoops. They have the CPU pegged at 100%
even when the console is idle, i.e. when no flushing or communication of any
sort is occurring.

Did you commit your fix to the main branch of PyZMQ? Maybe I am not using
the right stuff.
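
For what it's worth, a bare loop with nothing registered is enough to show
it here (a minimal repro sketch, assuming the Tornado-based loop is
importable as zmq.eventloop.ioloop):

# Start the pyzmq event loop with no sockets or timeouts registered,
# then watch top: an idle loop should sit near 0% CPU, but with a bad
# poll timeout it busy-waits at 100%.
from zmq.eventloop import ioloop

ioloop.IOLoop.instance().start()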

Evan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100716/9af6d851/attachment.html>

From ellisonbg at gmail.com  Fri Jul 16 12:00:25 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 16 Jul 2010 09:00:25 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTinlTz2e2ehIO4aFmCrdPvx3tJS_NF8fLPdfRPSD@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>
	<AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>
	<AANLkTikSL5cplN12kwr0D--H289vQvyjdhFbBKRh1rrW@mail.gmail.com>
	<AANLkTinlTz2e2ehIO4aFmCrdPvx3tJS_NF8fLPdfRPSD@mail.gmail.com>
Message-ID: <AANLkTikNJzCQA2gx86fGOncSmYxrXmLX05Qg-GmAX6Ur@mail.gmail.com>

Here is the commit:

http://github.com/ellisonbg/pyzmq/commit/18f5d061558a176f5496aa8e049182c1a7da64f6

You will need to recompile pyzmq for this to go into effect.  Let me
know if this doesn't fix the problem.

Cheers,

Brian

On Fri, Jul 16, 2010 at 7:40 AM, Evan Patterson <epatters at enthought.com> wrote:
> On Fri, Jul 16, 2010 at 12:25 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>>
>> On Thu, Jul 15, 2010 at 1:22 PM, Fernando Perez <fperez.net at gmail.com>
>> wrote:
>> > On Thu, Jul 15, 2010 at 11:07 AM, Brian Granger <ellisonbg at gmail.com>
>> > wrote:
>> >>
>> >> Definitely hard to get right, and terminal-based frontends will
>> >> definitely need something like flush.  Let's see how it goes with this
>> >> approach.
>> >
>> > Though absent a real event loop with a callback model, it will
>> > need to be implemented with a real sleep(epsilon) and a total timeout.
>> > Terminal frontends will always simply be bound to flushing what they
>> > can and then moving on if nothing has come in the window they wait
>> > for.  Such is life when your 'event loop' is the human hitting the
>> > RETURN key...
>> >
>> > Evan, quick question: when I open your frontend_widget, I see 100% cpu
>> > utilization all the time.  Do you see this on your end?
>>
>> We should make sure we understand this. Min and I found that our new
>> Tornado event loop in pyzmq was using 100% CPU because of a bug in the
>> poll timeout (units problems).  We have fixed this (so we think!), so
>> I am hopeful the current issue is coming from the flush logic.
>
> Unfortunately, this does not seem to be the case. I have confirmed that the
> problem is indeed with the IOLoops. They have the CPU pegged at 100%
> even when the console is idle, i.e. when no flushing or communication of any
> sort is occurring.
>
> Did you commit your fix to the main branch of PyZMQ? Maybe I am not using
> the right stuff.
>
> Evan
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From epatters at enthought.com  Fri Jul 16 12:21:41 2010
From: epatters at enthought.com (Evan Patterson)
Date: Fri, 16 Jul 2010 11:21:41 -0500
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTikNJzCQA2gx86fGOncSmYxrXmLX05Qg-GmAX6Ur@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>
	<AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>
	<AANLkTikSL5cplN12kwr0D--H289vQvyjdhFbBKRh1rrW@mail.gmail.com>
	<AANLkTinlTz2e2ehIO4aFmCrdPvx3tJS_NF8fLPdfRPSD@mail.gmail.com>
	<AANLkTikNJzCQA2gx86fGOncSmYxrXmLX05Qg-GmAX6Ur@mail.gmail.com>
Message-ID: <AANLkTilvxIYOFP0kXPw55yr9SKQ7SgM8G5TcylIOpVWM@mail.gmail.com>

I verified that I have that commit and that I recompiled PyZMQ.
Unfortunately, the problem persists.

Fernando: as a sanity check, can you confirm that you have this problem with
the latest version of PyZMQ?

Evan

On Fri, Jul 16, 2010 at 11:00 AM, Brian Granger <ellisonbg at gmail.com> wrote:

> Here is the commit:
>
>
> http://github.com/ellisonbg/pyzmq/commit/18f5d061558a176f5496aa8e049182c1a7da64f6
>
> You will need to recompile pyzmq for this to go into effect.  Let me
> know if this doesn't fix the problem.
>
> Cheers,
>
> Brian
>
> On Fri, Jul 16, 2010 at 7:40 AM, Evan Patterson <epatters at enthought.com>
> wrote:
> > On Fri, Jul 16, 2010 at 12:25 AM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >>
> >> On Thu, Jul 15, 2010 at 1:22 PM, Fernando Perez <fperez.net at gmail.com>
> >> wrote:
> >> > On Thu, Jul 15, 2010 at 11:07 AM, Brian Granger <ellisonbg at gmail.com>
> >> > wrote:
> >> >>
> >> >> Definitely hard to get right, and terminal-based frontends will
> >> >> definitely need something like flush.  Let's see how it goes with this
> >> >> approach.
> >> >
> >> > Though absent a real event loop with a callback model, it will
> >> > need to be implemented with a real sleep(epsilon) and a total timeout.
> >> > Terminal frontends will always simply be bound to flushing what they
> >> > can and then moving on if nothing has come in the window they wait
> >> > for.  Such is life when your 'event loop' is the human hitting the
> >> > RETURN key...
> >> >
> >> > Evan, quick question: when I open your frontend_widget, I see 100% cpu
> >> > utilization all the time.  Do you see this on your end?
> >>
> >> We should make sure we understand this. Min and I found that our new
> >> Tornado event loop in pyzmq was using 100% CPU because of a bug in the
> >>  poll timeout (units problems).  We have fixed this (so we think!), so
> >> I am hopeful the current issue is coming from the flush logic.
> >
> > Unfortunately, this does not seem to be the case. I have confirmed that the
> > problem is indeed with the IOLoops. They have the CPU pegged at 100%
> > even when the console is idle, i.e. when no flushing or communication of
> > any sort is occurring.
> >
> > Did you commit your fix to the main branch of PyZMQ? Maybe I am not using
> > the right stuff.
> >
> > Evan
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100716/74c38d8d/attachment.html>

From fperez.net at gmail.com  Fri Jul 16 14:24:32 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 16 Jul 2010 11:24:32 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTilvxIYOFP0kXPw55yr9SKQ7SgM8G5TcylIOpVWM@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com> 
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com> 
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com> 
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com> 
	<AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com> 
	<AANLkTikSL5cplN12kwr0D--H289vQvyjdhFbBKRh1rrW@mail.gmail.com> 
	<AANLkTinlTz2e2ehIO4aFmCrdPvx3tJS_NF8fLPdfRPSD@mail.gmail.com> 
	<AANLkTikNJzCQA2gx86fGOncSmYxrXmLX05Qg-GmAX6Ur@mail.gmail.com> 
	<AANLkTilvxIYOFP0kXPw55yr9SKQ7SgM8G5TcylIOpVWM@mail.gmail.com>
Message-ID: <AANLkTike3Ew3HGLTDWd9iA15-nbC8EpJQWwmKqvd7v4C@mail.gmail.com>

On Fri, Jul 16, 2010 at 9:21 AM, Evan Patterson <epatters at enthought.com> wrote:
> I verified that I have that commit and that I recompiled PyZMQ.
> Unfortunately, the problem persists.
>
> Fernando: as a sanity check, can you confirm that you have this problem with
> the latest version of PyZMQ?

Same here.  I had rebuilt zmq/pyzmq to confirm whether the other
problem we'd seen was gone (kernel dying when clients disconnect), and
that one is indeed now fixed.

But the CPU 100% use is still there, even when Evan's qt frontend is
idle.  As a data point, Gerardo's, which does not yet use the ioloop,
doesn't show the problem.

Cheers,
f


From ellisonbg at gmail.com  Fri Jul 16 14:53:16 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Fri, 16 Jul 2010 11:53:16 -0700
Subject: [IPython-dev] Coordinating the XREQ and SUB channels
In-Reply-To: <AANLkTike3Ew3HGLTDWd9iA15-nbC8EpJQWwmKqvd7v4C@mail.gmail.com>
References: <AANLkTimVnN68XIog7IlD096GIDI9jFhkr9vodQVMYiYH@mail.gmail.com>
	<AANLkTinuk4mJmQOPlvivUfwGG6NhQnA8LaW_qmIQ54UV@mail.gmail.com>
	<AANLkTinnyf2hnhdSv6d-RPN_5HzjPx_7EnuT7uQMqGQD@mail.gmail.com>
	<AANLkTimkjfhXmFOAH7vBkV63E96HlwGuOqg9F5v4mwbY@mail.gmail.com>
	<AANLkTikvuYrFkZU45PhmwJCmRLExSjyRbtR82IZpKx_G@mail.gmail.com>
	<AANLkTikSL5cplN12kwr0D--H289vQvyjdhFbBKRh1rrW@mail.gmail.com>
	<AANLkTinlTz2e2ehIO4aFmCrdPvx3tJS_NF8fLPdfRPSD@mail.gmail.com>
	<AANLkTikNJzCQA2gx86fGOncSmYxrXmLX05Qg-GmAX6Ur@mail.gmail.com>
	<AANLkTilvxIYOFP0kXPw55yr9SKQ7SgM8G5TcylIOpVWM@mail.gmail.com>
	<AANLkTike3Ew3HGLTDWd9iA15-nbC8EpJQWwmKqvd7v4C@mail.gmail.com>
Message-ID: <AANLkTikVB4TmpoNtWiyV-Hop_NakGUxVYXuJJICMwvh5@mail.gmail.com>

The issue is the units of the timeout passed to poll in the ioloop.
Here is the line of code where you can see my comment about this:

http://github.com/ellisonbg/pyzmq/commit/18f5d061558a176f5496aa8e049182c1a7da64f6#L2R189

Can you try increasing/decreasing it by a factor of 1000?  As long as
you are using an in-place build, you shouldn't have to recompile.  I
need to install qt+pyqt and then I can give this a try as well.
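
Schematically, the failure mode is just a units mismatch (illustrative
numbers; the assumption is that zmq_poll in 0MQ 2.x takes microseconds
while the event loop computes its timeout in seconds):

# Sketch of the units issue.  If the seconds-based timeout from the
# event loop is converted with the wrong factor, poll() wakes up about
# 1000x too often and the loop busy-spins at 100% CPU.
timeout_s = 0.2                       # what the event loop computes
timeout_us = int(timeout_s * 1e6)     # correct for zmq_poll (microseconds)
timeout_wrong = int(timeout_s * 1e3)  # off by a factor of 1000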

Brian


On Fri, Jul 16, 2010 at 11:24 AM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Fri, Jul 16, 2010 at 9:21 AM, Evan Patterson <epatters at enthought.com> wrote:
>> I verified that I have that commit and that I recompiled PyZMQ.
>> Unfortunately, the problem persists.
>>
>> Fernando: as a sanity check, can you confirm that you have this problem with
>> the latest version of PyZMQ?
>
> Same here.  I had rebuilt zmq/pyzmq to confirm whether the other
> problem we'd seen was gone (kernel dying when clients disconnect), and
> that one is indeed now fixed.
>
> But the 100% CPU use is still there, even when Evan's qt frontend is
> idle.  As a data point, Gerardo's, which does not yet use the ioloop,
> doesn't show the problem.
>
> Cheers,
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From satra at mit.edu  Sat Jul 17 09:23:50 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Sat, 17 Jul 2010 09:23:50 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
Message-ID: <AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>

hi,

i've pushed my changes to:

http://github.com/satra/ipython/tree/0.10.1-sge

notes:

1. it starts cleanly. i can connect and execute things. when i kill using
ctrl-c, the messages appear to indicate that everything shut down well.
however, the sge ipengine jobs are still running.

2. the pbs option appears to require mpi to be present. i don't think one
can launch multiple engines using pbs without mpi or without the workaround
i've applied to the sge engine. basically it submits an sge job for each
engine that i want to run. i would love to know if a single job can launch
multiple engines on a sge/pbs cluster without mpi.

cheers,

satra

On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh <satra at mit.edu> wrote:

> hi justin,
>
> i hope to test it out tonight. from what fernando and i discussed, this
> should be relatively straightforward. once i'm done i'll push it to my fork
> of ipython and announce it here for others to test.
>
> cheers,
>
> satra
>
>
>
> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley <justin.t.riley at gmail.com> wrote:
>
>> This is great news. Right now StarCluster just takes advantage of
>> password-less ssh already being installed and runs:
>>
>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>
>> This works fine for now, however, having SGE support would allow
>> ipcluster's load to be accounted for by the queue.
>>
>> Is Satra on the list? I have experience with SGE and could help with the
>> code if needed. I can also help test this functionality.
>>
>> ~Justin
>>
>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>> > On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger <ellisonbg at gmail.com>
>> wrote:
>> >> Thanks for the post.  You should also know that it looks like someone
>> >> is going to add native SGE support to ipcluster for 0.10.1.
>> >
>> > Yes, Satra and I went over this last night in detail (thanks to Brian
>> > for the pointers), and he said he might actually already have some
>> > code for it.  I suspect we'll get this in soon.
>> >
>> > Cheers,
>> >
>> > f
>>
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100717/4b959d52/attachment.html>

From ellisonbg at gmail.com  Sun Jul 18 00:00:19 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sat, 17 Jul 2010 21:00:19 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
Message-ID: <AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>

On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh <satra at mit.edu> wrote:
> hi ,
>
> i've pushed my changes to:
>
> http://github.com/satra/ipython/tree/0.10.1-sge
>
> notes:
>
> 1. it starts cleanly. i can connect and execute things. when i kill using
> ctrl-c, the messages appear to indicate that everything shut down well.
> however, the sge ipengine jobs are still running.

What version of Python and Twisted are you running?

> 2. the pbs option appears to require mpi to be present. i don't think one
> can launch multiple engines using pbs without mpi or without the workaround
> i've applied to the sge engine. basically it submits an sge job for each
> engine that i want to run. i would love to know if a single job can launch
> multiple engines on a sge/pbs cluster without mpi.

I think you are right that pbs needs to use mpirun/mpiexec to start
multiple engines using a single PBS job.  I am not that familiar with
SGE; can you start multiple processes without mpi and with just a
single SGE job?  If so, let's try to get that working.

Cheers,

Brian

> cheers,
>
> satra
>
> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>
>> hi justin,
>>
>> i hope to test it out tonight. from what fernando and i discussed, this
>> should be relatively straightforward. once i'm done i'll push it to my fork
>> of ipython and announce it here for others to test.
>>
>> cheers,
>>
>> satra
>>
>>
>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley <justin.t.riley at gmail.com>
>> wrote:
>>>
>>> This is great news. Right now StarCluster just takes advantage of
>>> password-less ssh already being installed and runs:
>>>
>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>
>>> This works fine for now, however, having SGE support would allow
>>> ipcluster's load to be accounted for by the queue.
>>>
>>> Is Satra on the list? I have experience with SGE and could help with the
>>> code if needed. I can also help test this functionality.
>>>
>>> ~Justin
>>>
>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>> > On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger <ellisonbg at gmail.com>
>>> > wrote:
>>> >> Thanks for the post.  You should also know that it looks like someone
>>> >> is going to add native SGE support to ipcluster for 0.10.1.
>>> >
>>> > Yes, Satra and I went over this last night in detail (thanks to Brian
>>> > for the pointers), and he said he might actually already have some
>>> > code for it.  I suspect we'll get this in soon.
>>> >
>>> > Cheers,
>>> >
>>> > f
>>>
>>> _______________________________________________
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Sun Jul 18 00:05:32 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sat, 17 Jul 2010 21:05:32 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
Message-ID: <AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>

Is the array jobs feature what you want?

http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
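
That would let a single submission fan out one engine per task, roughly
like this (a sketch under that assumption; the qsub flags follow the SGE
docs above, and the ipengine --logfile flag is illustrative):

import os
import subprocess
import tempfile

def submit_engine_array(n_engines):
    # Sketch: write a one-line job script and submit it once as an SGE
    # array job; SGE then starts n_engines tasks, each running one
    # ipengine, instead of one qsub per engine.
    script = ("#!/bin/sh\n"
              "#$ -cwd -V\n"
              "ipengine --logfile=ipengine-$SGE_TASK_ID.log\n")
    fd, path = tempfile.mkstemp(suffix='.sh')
    os.write(fd, script)
    os.close(fd)
    subprocess.check_call(['qsub', '-t', '1-%d' % n_engines, path])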

Brian

On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh <satra at mit.edu> wrote:
>> hi ,
>>
>> i've pushed my changes to:
>>
>> http://github.com/satra/ipython/tree/0.10.1-sge
>>
>> notes:
>>
>> 1. it starts cleanly. i can connect and execute things. when i kill using
>> ctrl-c, the messages appear to indicate that everything shut down well.
>> however, the sge ipengine jobs are still running.
>
> What version of Python and Twisted are you running?
>
>> 2. the pbs option appears to require mpi to be present. i don't think one
>> can launch multiple engines using pbs without mpi or without the workaround
>> i've applied to the sge engine. basically it submits an sge job for each
>> engine that i want to run. i would love to know if a single job can launch
>> multiple engines on a sge/pbs cluster without mpi.
>
> I think you are right that pbs needs to use mpirun/mpiexec to start
> multiple engines using a single PBS job.  I am not that familiar with
> SGE; can you start multiple processes without mpi and with just a
> single SGE job?  If so, let's try to get that working.
>
> Cheers,
>
> Brian
>
>> cheers,
>>
>> satra
>>
>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>>
>>> hi justin,
>>>
>>> i hope to test it out tonight. from what fernando and i discussed, this
>>> should be relatively straightforward. once i'm done i'll push it to my fork
>>> of ipython and announce it here for others to test.
>>>
>>> cheers,
>>>
>>> satra
>>>
>>>
>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley <justin.t.riley at gmail.com>
>>> wrote:
>>>>
>>>> This is great news. Right now StarCluster just takes advantage of
>>>> password-less ssh already being installed and runs:
>>>>
>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>
>>>> This works fine for now, however, having SGE support would allow
>>>> ipcluster's load to be accounted for by the queue.
>>>>
>>>> Is Satra on the list? I have experience with SGE and could help with the
>>>> code if needed. I can also help test this functionality.
>>>>
>>>> ~Justin
>>>>
>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>> > On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger <ellisonbg at gmail.com>
>>>> > wrote:
>>>> >> Thanks for the post.  You should also know that it looks like someone
>>>> >> is going to add native SGE support to ipcluster for 0.10.1.
>>>> >
>>>> > Yes, Satra and I went over this last night in detail (thanks to Brian
>>>> > for the pointers), and he said he might actually already have some
>>>> > code for it.  I suspect we'll get this in soon.
>>>> >
>>>> > Cheers,
>>>> >
>>>> > f
>>>>
>>>> _______________________________________________
>>>> IPython-dev mailing list
>>>> IPython-dev at scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>
>>
>>
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>>
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From justin.t.riley at gmail.com  Sun Jul 18 00:40:13 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Sun, 18 Jul 2010 00:40:13 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
Message-ID: <4C4285AD.7080702@gmail.com>

Hi Satra/Brian,

I was just going to suggest array jobs and decided to give it a try 
before posting. I've hacked it in starting from Satra's 0.10.1-sge 
branch. I'll commit it to my fork after some testing.

~Justin

On 07/18/2010 12:05 AM, Brian Granger wrote:
> Is the array jobs feature what you want?
>
> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>
> Brian
>
> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>  wrote:
>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu>  wrote:
>>> hi ,
>>>
>>> i've pushed my changes to:
>>>
>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>
>>> notes:
>>>
>>> 1. it starts cleanly. i can connect and execute things. when i kill using
>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>> however, the sge ipengine jobs are still running.
>>
>> What version of Python and Twisted are you running?
>>
>>> 2. the pbs option appears to require mpi to be present. i don't think one
>>> can launch multiple engines using pbs without mpi or without the workaround
>>> i've applied to the sge engine. basically it submits an sge job for each
>>> engine that i want to run. i would love to know if a single job can launch
>>> multiple engines on a sge/pbs cluster without mpi.
>>
>> I think you are right that pbs needs to use mpirun/mpiexec to start
>> multiple engines using a single PBS job.  I am not that familiar with
>> SGE, can you start multiple processes without mpi and with just a
>> single SGE job?  If so, let's try to get that working.
>>
>> Cheers,
>>
>> Brian
>>
>>> cheers,
>>>
>>> satra
>>>
>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu>  wrote:
>>>>
>>>> hi justin,
>>>>
>>>> i hope to test it out tonight. from what fernando and i discussed, this
>>>> should be relatively straightforward. once i'm done i'll push it to my fork
>>>> of ipython and announce it here for others to test.
>>>>
>>>> cheers,
>>>>
>>>> satra
>>>>
>>>>
>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley<justin.t.riley at gmail.com>
>>>> wrote:
>>>>>
>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>> password-less ssh already being installed and runs:
>>>>>
>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>
>>>>> This works fine for now, however, having SGE support would allow
>>>>> ipcluster's load to be accounted for by the queue.
>>>>>
>>>>> Is Satra on the list? I have experience with SGE and could help with the
>>>>> code if needed. I can also help test this functionality.
>>>>>
>>>>> ~Justin
>>>>>
>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>> wrote:
>>>>>>> Thanks for the post.  You should also know that it looks like someone
>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>
>>>>>> Yes, Satra and I went over this last night in detail (thanks to Brian
>>>>>> for the pointers), and he said he might actually already have some
>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> f
>>>>>
>>>>> _______________________________________________
>>>>> IPython-dev mailing list
>>>>> IPython-dev at scipy.org
>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>
>>>
>>>
>>> _______________________________________________
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>
>>>
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>>
>
>
>



From justin.t.riley at gmail.com  Sun Jul 18 03:43:27 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Sun, 18 Jul 2010 03:43:27 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
Message-ID: <4C42B09F.50106@gmail.com>

Hi Satra/Brian,

I modified your code to use the job array feature of SGE. I've also made 
it so that users don't need to specify --sge-script if they don't need a 
custom SGE launch script. My guess is that most users will choose not to 
specify --sge-script first and resort to using --sge-script when the 
generated launch script no longer meets their needs. More details in the 
git log here:

http://github.com/jtriley/ipython/tree/0.10.1-sge

Also, I need to test this, but I believe this code will fail if the 
folder containing the furl file is not NFS-mounted on the SGE cluster. 
Another option besides requiring NFS is to scp the furl file to each 
host as is done in the ssh mode of ipcluster, however, this would 
require password-less ssh to be configured properly (maybe not so bad). 
Another option is to dump the generated furl file into the job script 
itself. This has the advantage of only needing SGE installed but 
certainly doesn't seem like the safest practice. Any thoughts on how to 
approach this?
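
Just to make that last option concrete, "dumping the furl into the job
script" could look roughly like this (a hand-wavy sketch, not what the
branch does; the furl filename is the 0.10 default):

    def job_script_with_furl(furl_path, n):
        """Build an SGE array-job script that recreates the furl on each node."""
        furl = open(furl_path).read().strip()
        lines = [
            '#!/bin/sh',
            '#$ -t 1-%d' % n,
            'mkdir -p $HOME/.ipython/security',
            # the weakness: anyone who can read the spooled job script
            # can read the capability string embedded below
            'echo "%s" > $HOME/.ipython/security/ipcontroller-engine.furl' % furl,
            'ipengine',
        ]
        return '\n'.join(lines)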

Let me know what you think.

~Justin

On 07/18/2010 12:05 AM, Brian Granger wrote:
> Is the array jobs feature what you want?
>
> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>
> Brian
>
> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>  wrote:
>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu>  wrote:
>>> hi ,
>>>
>>> i've pushed my changes to:
>>>
>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>
>>> notes:
>>>
>>> 1. it starts cleanly. i can connect and execute things. when i kill using
>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>> however, the sge ipengine jobs are still running.
>>
>> What version of Python and Twisted are you running?
>>
>>> 2. the pbs option appears to require mpi to be present. i don't think one
>>> can launch multiple engines using pbs without mpi or without the workaround
>>> i've applied to the sge engine. basically it submits an sge job for each
>>> engine that i want to run. i would love to know if a single job can launch
>>> multiple engines on a sge/pbs cluster without mpi.
>>
>> I think you are right that pbs needs to use mpirun/mpiexec to start
>> multiple engines using a single PBS job.  I am not that familiar with
>> SGE, can you start multiple processes without mpi and with just a
>> single SGE job?  If so, let's try to get that working.
>>
>> Cheers,
>>
>> Brian
>>
>>> cheers,
>>>
>>> satra
>>>
>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu>  wrote:
>>>>
>>>> hi justin,
>>>>
>>>> i hope to test it out tonight. from what fernando and i discussed, this
>>>> should be relatively straightforward. once i'm done i'll push it to my fork
>>>> of ipython and announce it here for others to test.
>>>>
>>>> cheers,
>>>>
>>>> satra
>>>>
>>>>
>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley<justin.t.riley at gmail.com>
>>>> wrote:
>>>>>
>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>> password-less ssh already being installed and runs:
>>>>>
>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>
>>>>> This works fine for now, however, having SGE support would allow
>>>>> ipcluster's load to be accounted for by the queue.
>>>>>
>>>>> Is Satra on the list? I have experience with SGE and could help with the
>>>>> code if needed. I can also help test this functionality.
>>>>>
>>>>> ~Justin
>>>>>
>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>> wrote:
>>>>>>> Thanks for the post.  You should also know that it looks like someone
>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>
>>>>>> Yes, Satra and I went over this last night in detail (thanks to Brian
>>>>>> for the pointers), and he said he might actually already have some
>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> f
>>>>>
>>>>> _______________________________________________
>>>>> IPython-dev mailing list
>>>>> IPython-dev at scipy.org
>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>
>>>
>>>
>>> _______________________________________________
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>
>>>
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>>
>
>
>



From tomspur at fedoraproject.org  Sun Jul 18 11:14:12 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Sun, 18 Jul 2010 17:14:12 +0200
Subject: [IPython-dev] correct test-suite
Message-ID: <20100718171412.42f4e970@earth>

Hi list,

I'm trying to fix the test suite at the moment and have run into a
problem I can't resolve...

There is now a Makefile, so it's nicer to run repetitive tasks in the
repository, but currently there is only 'make test-suite', which should
run the test suite.
(now = in branch my_fix_test_suite at github:
http://github.com/tomspur/ipython/commits/my_fix_test_suite)

One failing test pointed out a programming error in IPython/Shell.py,
which is now corrected in this commit:
http://github.com/tomspur/ipython/commit/7e7988ee9e7c35b2e5302725ebdf6c22135f334e

But now there is a problem with the test "Test that object's __del__
methods are called on exit." in IPython/core/tests/test_run.py:146.

Before that commit, this test was simply failing. Now it seems to be in
an infinite loop and there is no progress anymore...

Does someone know what's going on there?
(To run into this issue, run this:
'PYTHONPATH=. IPython/scripts/iptest -v IPython.core')

	Thomas

P.S. The same is happening with 'PYTHONPATH=. IPython/scripts/iptest
-v IPython.extensions' in the test
"IPython.extensions.tests.test_pretty.TestPrettyInteractively.test_printers".


From justin.t.riley at gmail.com  Sun Jul 18 12:58:45 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Sun, 18 Jul 2010 12:58:45 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C42B09F.50106@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
Message-ID: <4C4332C5.5050006@gmail.com>

Turns out that torque/pbs also support job arrays. I've updated my 
0.10.1-sge branch with PBS job array support. Works well with torque 
2.4.6. Also tested SGE support against 6.2u3.

Since the code is extremely similar between PBS/SGE I decided to update 
the BatchEngineSet base class to handle the core job array logic. Given 
that PBS/SGE are the only subclasses I figured this was OK. If not, it 
should be easy to break it out again.
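
In rough outline the refactor looks like this (a sketch of the shape
only, not the actual code; the attribute names are made up):

    import subprocess

    class BatchEngineSet(object):
        """Shared array-job logic; subclasses supply the batch-system bits."""
        submit_command = 'qsub'
        task_range_flag = None   # e.g. '-t 1-%d' requests an array of n tasks
        task_id_var = None       # env var holding the task index

        def __init__(self, script):
            self.script = script

        def start(self, n):
            # one submission starts all n engines as array tasks
            args = [self.submit_command]
            args.extend((self.task_range_flag % n).split())
            args.append(self.script)
            subprocess.check_call(args)

    class SGEEngineSet(BatchEngineSet):
        task_range_flag = '-t 1-%d'
        task_id_var = 'SGE_TASK_ID'

    class PBSEngineSet(BatchEngineSet):
        task_range_flag = '-t 1-%d'   # torque 2.x job-array syntax
        task_id_var = 'PBS_ARRAYID'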

~Justin

On 07/18/2010 03:43 AM, Justin Riley wrote:
> Hi Satra/Brian,
>
> I modified your code to use the job array feature of SGE. I've also made
> it so that users don't need to specify --sge-script if they don't need a
> custom SGE launch script. My guess is that most users will choose not to
> specify --sge-script first and resort to using --sge-script when the
> generated launch script no longer meets their needs. More details in the
> git log here:
>
> http://github.com/jtriley/ipython/tree/0.10.1-sge
>
> Also, I need to test this, but I believe this code will fail if the
> folder containing the furl file is not NFS-mounted on the SGE cluster.
> Another option besides requiring NFS is to scp the furl file to each
> host as is done in the ssh mode of ipcluster, however, this would
> require password-less ssh to be configured properly (maybe not so bad).
> Another option is to dump the generated furl file into the job script
> itself. This has the advantage of only needing SGE installed but
> certainly doesn't seem like the safest practice. Any thoughts on how to
> approach this?
>
> Let me know what you think.
>
> ~Justin
>
> On 07/18/2010 12:05 AM, Brian Granger wrote:
>> Is the array jobs feature what you want?
>>
>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>
>> Brian
>>
>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>> wrote:
>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>> hi ,
>>>>
>>>> i've pushed my changes to:
>>>>
>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>
>>>> notes:
>>>>
>>>> 1. it starts cleanly. i can connect and execute things. when i kill
>>>> using
>>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>>> however, the sge ipengine jobs are still running.
>>>
>>> What version of Python and Twisted are you running?
>>>
>>>> 2. the pbs option appears to require mpi to be present. i don't
>>>> think one
>>>> can launch multiple engines using pbs without mpi or without the
>>>> workaround
>>>> i've applied to the sge engine. basically it submits an sge job for
>>>> each
>>>> engine that i want to run. i would love to know if a single job can
>>>> launch
>>>> multiple engines on a sge/pbs cluster without mpi.
>>>
>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>> multiple engines using a single PBS job. I am not that familiar with
>>> SGE, can you start multiple processes without mpi and with just a
>>> single SGE job? If so, let's try to get that working.
>>>
>>> Cheers,
>>>
>>> Brian
>>>
>>>> cheers,
>>>>
>>>> satra
>>>>
>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>
>>>>> hi justin,
>>>>>
>>>>> i hope to test it out tonight. from what fernando and i discussed,
>>>>> this
>>>>> should be relatively straightforward. once i'm done i'll push it to
>>>>> my fork
>>>>> of ipython and announce it here for others to test.
>>>>>
>>>>> cheers,
>>>>>
>>>>> satra
>>>>>
>>>>>
>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin
>>>>> Riley<justin.t.riley at gmail.com>
>>>>> wrote:
>>>>>>
>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>> password-less ssh already being installed and runs:
>>>>>>
>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>
>>>>>> This works fine for now, however, having SGE support would allow
>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>
>>>>>> Is Satra on the list? I have experience with SGE and could help
>>>>>> with the
>>>>>> code if needed. I can also help test this functionality.
>>>>>>
>>>>>> ~Justin
>>>>>>
>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>>> wrote:
>>>>>>>> Thanks for the post. You should also know that it looks like
>>>>>>>> someone
>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>
>>>>>>> Yes, Satra and I went over this last night in detail (thanks to
>>>>>>> Brian
>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>> code for it. I suspect we'll get this in soon.
>>>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>>> f
>>>>>>
>>>>>> _______________________________________________
>>>>>> IPython-dev mailing list
>>>>>> IPython-dev at scipy.org
>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> IPython-dev mailing list
>>>> IPython-dev at scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Brian E. Granger, Ph.D.
>>> Assistant Professor of Physics
>>> Cal Poly State University, San Luis Obispo
>>> bgranger at calpoly.edu
>>> ellisonbg at gmail.com
>>>
>>
>>
>>
>



From justin.t.riley at gmail.com  Sun Jul 18 13:02:47 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Sun, 18 Jul 2010 13:02:47 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C4332C5.5050006@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com> <4C4332C5.5050006@gmail.com>
Message-ID: <4C4333B7.5080807@gmail.com>

Forgot to mention, in my fork PBS now automatically generates a launch 
script as well if one is not specified. So, assuming you have either SGE 
or Torque/PBS working, it *should* be as simple as:

$ ipcluster sge -n 4

or

$ ipcluster pbs -n 4

You can of course still pass the --sge-script/--pbs-script options but 
the user is no longer required to create a launch script themselves.

~Justin

On 07/18/2010 12:58 PM, Justin Riley wrote:
> Turns out that torque/pbs also support job arrays. I've updated my
> 0.10.1-sge branch with PBS job array support. Works well with torque
> 2.4.6. Also tested SGE support against 6.2u3.
>
> Since the code is extremely similar between PBS/SGE I decided to update
> the BatchEngineSet base class to handle the core job array logic. Given
> that PBS/SGE are the only subclasses I figured this was OK. If not,
> should be easy to break it out again.
>
> ~Justin
>
> On 07/18/2010 03:43 AM, Justin Riley wrote:
>> Hi Satra/Brian,
>>
>> I modified your code to use the job array feature of SGE. I've also made
>> it so that users don't need to specify --sge-script if they don't need a
>> custom SGE launch script. My guess is that most users will choose not to
>> specify --sge-script first and resort to using --sge-script when the
>> generated launch script no longer meets their needs. More details in the
>> git log here:
>>
>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>
>> Also, I need to test this, but I believe this code will fail if the
>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>> Another option besides requiring NFS is to scp the furl file to each
>> host as is done in the ssh mode of ipcluster, however, this would
>> require password-less ssh to be configured properly (maybe not so bad).
>> Another option is to dump the generated furl file into the job script
>> itself. This has the advantage of only needing SGE installed but
>> certainly doesn't seem like the safest practice. Any thoughts on how to
>> approach this?
>>
>> Let me know what you think.
>>
>> ~Justin
>>
>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>> Is the array jobs feature what you want?
>>>
>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>
>>> Brian
>>>
>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>>> wrote:
>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>> hi ,
>>>>>
>>>>> i've pushed my changes to:
>>>>>
>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>
>>>>> notes:
>>>>>
>>>>> 1. it starts cleanly. i can connect and execute things. when i kill
>>>>> using
>>>>> ctrl-c, the messages appear to indicate that everything shut down
>>>>> well.
>>>>> however, the sge ipengine jobs are still running.
>>>>
>>>> What version of Python and Twisted are you running?
>>>>
>>>>> 2. the pbs option appears to require mpi to be present. i don't
>>>>> think one
>>>>> can launch multiple engines using pbs without mpi or without the
>>>>> workaround
>>>>> i've applied to the sge engine. basically it submits an sge job for
>>>>> each
>>>>> engine that i want to run. i would love to know if a single job can
>>>>> launch
>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>
>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>> multiple engines using a single PBS job. I am not that familiar with
>>>> SGE, can you start multiple processes without mpi and with just a
>>>> single SGE job? If so, let's try to get that working.
>>>>
>>>> Cheers,
>>>>
>>>> Brian
>>>>
>>>>> cheers,
>>>>>
>>>>> satra
>>>>>
>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>>
>>>>>> hi justin,
>>>>>>
>>>>>> i hope to test it out tonight. from what fernando and i discussed,
>>>>>> this
>>>>>> should be relatively straightforward. once i'm done i'll push it to
>>>>>> my fork
>>>>>> of ipython and announce it here for others to test.
>>>>>>
>>>>>> cheers,
>>>>>>
>>>>>> satra
>>>>>>
>>>>>>
>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin
>>>>>> Riley<justin.t.riley at gmail.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>> password-less ssh already being installed and runs:
>>>>>>>
>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>
>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>
>>>>>>> Is Satra on the list? I have experience with SGE and could help
>>>>>>> with the
>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>
>>>>>>> ~Justin
>>>>>>>
>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian
>>>>>>>> Granger<ellisonbg at gmail.com>
>>>>>>>> wrote:
>>>>>>>>> Thanks for the post. You should also know that it looks like
>>>>>>>>> someone
>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>
>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to
>>>>>>>> Brian
>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>> code for it. I suspect we'll get this in soon.
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>>
>>>>>>>> f
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> IPython-dev mailing list
>>>>>>> IPython-dev at scipy.org
>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> IPython-dev mailing list
>>>>> IPython-dev at scipy.org
>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Brian E. Granger, Ph.D.
>>>> Assistant Professor of Physics
>>>> Cal Poly State University, San Luis Obispo
>>>> bgranger at calpoly.edu
>>>> ellisonbg at gmail.com
>>>>
>>>
>>>
>>>
>>
>



From matthieu.brucher at gmail.com  Sun Jul 18 13:13:46 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 18 Jul 2010 19:13:46 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C42B09F.50106@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
Message-ID: <AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>

Hi,

Does IPython now support sending engines to nodes that do not have the
same $HOME as the main instance? This is what kept me from properly
testing IPython with LSF some months ago :|

Matthieu

2010/7/18 Justin Riley <justin.t.riley at gmail.com>:
> Hi Satra/Brian,
>
> I modified your code to use the job array feature of SGE. I've also made
> it so that users don't need to specify --sge-script if they don't need a
> custom SGE launch script. My guess is that most users will choose not to
> specify --sge-script first and resort to using --sge-script when the
> generated launch script no longer meets their needs. More details in the
> git log here:
>
> http://github.com/jtriley/ipython/tree/0.10.1-sge
>
> Also, I need to test this, but I believe this code will fail if the
> folder containing the furl file is not NFS-mounted on the SGE cluster.
> Another option besides requiring NFS is to scp the furl file to each
> host as is done in the ssh mode of ipcluster, however, this would
> require password-less ssh to be configured properly (maybe not so bad).
> Another option is to dump the generated furl file into the job script
> itself. This has the advantage of only needing SGE installed but
> certainly doesn't seem like the safest practice. Any thoughts on how to
> approach this?
>
> Let me know what you think.
>
> ~Justin
>
> On 07/18/2010 12:05 AM, Brian Granger wrote:
>> Is the array jobs feature what you want?
>>
>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>
>> Brian
>>
>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>  wrote:
>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu>  wrote:
>>>> hi ,
>>>>
>>>> i've pushed my changes to:
>>>>
>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>
>>>> notes:
>>>>
>>>> 1. it starts cleanly. i can connect and execute things. when i kill using
>>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>>> however, the sge ipengine jobs are still running.
>>>
>>> What version of Python and Twisted are you running?
>>>
>>>> 2. the pbs option appears to require mpi to be present. i don't think one
>>>> can launch multiple engines using pbs without mpi or without the workaround
>>>> i've applied to the sge engine. basically it submits an sge job for each
>>>> engine that i want to run. i would love to know if a single job can launch
>>>> multiple engines on a sge/pbs cluster without mpi.
>>>
>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>> multiple engines using a single PBS job.  I am not that familiar with
>>> SGE, can you start multiple processes without mpi and with just a
>>> single SGE job?  If so, let's try to get that working.
>>>
>>> Cheers,
>>>
>>> Brian
>>>
>>>> cheers,
>>>>
>>>> satra
>>>>
>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu>  wrote:
>>>>>
>>>>> hi justin,
>>>>>
>>>>> i hope to test it out tonight. from what fernando and i discussed, this
>>>>> should be relatively straightforward. once i'm done i'll push it to my fork
>>>>> of ipython and announce it here for others to test.
>>>>>
>>>>> cheers,
>>>>>
>>>>> satra
>>>>>
>>>>>
>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley<justin.t.riley at gmail.com>
>>>>> wrote:
>>>>>>
>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>> password-less ssh already being installed and runs:
>>>>>>
>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>
>>>>>> This works fine for now, however, having SGE support would allow
>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>
>>>>>> Is Satra on the list? I have experience with SGE and could help with the
>>>>>> code if needed. I can also help test this functionality.
>>>>>>
>>>>>> ~Justin
>>>>>>
>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>>> wrote:
>>>>>>>> Thanks for the post.  You should also know that it looks like someone
>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>
>>>>>>> Yes, Satra and I went over this last night in detail (thanks to Brian
>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>>> f
>>>>>>
>>>>>> _______________________________________________
>>>>>> IPython-dev mailing list
>>>>>> IPython-dev at scipy.org
>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> IPython-dev mailing list
>>>> IPython-dev at scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Brian E. Granger, Ph.D.
>>> Assistant Professor of Physics
>>> Cal Poly State University, San Luis Obispo
>>> bgranger at calpoly.edu
>>> ellisonbg at gmail.com
>>>
>>
>>
>>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From satra at mit.edu  Sun Jul 18 14:01:03 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Sun, 18 Jul 2010 14:01:03 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C4333B7.5080807@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com> <4C4332C5.5050006@gmail.com>
	<4C4333B7.5080807@gmail.com>
Message-ID: <AANLkTimqPbfeSl6-yThMiGCa7zwj3GvmubUob7yMe9gD@mail.gmail.com>

hi justin,

this is fantastic. i think it would be good to at least expose the queue to
be used as a user option. most of the time installations of torque or sge
have multiple queues.
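
for what it's worth, exposing the queue should just mean adding one
directive to the generated script, e.g. (a sketch; the function and
option names are hypothetical):

    def queue_directive(queue, batch_type='sge'):
        # SGE and torque/pbs spell the same request differently
        if batch_type == 'sge':
            return '#$ -q %s' % queue
        return '#PBS -q %s' % queue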

cheers,

satra


On Sun, Jul 18, 2010 at 1:02 PM, Justin Riley <justin.t.riley at gmail.com> wrote:

> Forgot to mention, in my fork PBS now automatically generates a launch
> script as well if one is not specified. So, assuming you have either SGE or
> Torque/PBS working it *should* be as simple as:
>
> $ ipcluster sge -n 4
>
> or
>
> $ ipcluster pbs -n 4
>
> You can of course still pass the --sge-script/--pbs-script options but the
> user is no longer required to create a launch script themselves.
>
> ~Justin
>
>
> On 07/18/2010 12:58 PM, Justin Riley wrote:
>
>> Turns out that torque/pbs also support job arrays. I've updated my
>> 0.10.1-sge branch with PBS job array support. Works well with torque
>> 2.4.6. Also tested SGE support against 6.2u3.
>>
>> Since the code is extremely similar between PBS/SGE I decided to update
>> the BatchEngineSet base class to handle the core job array logic. Given
>> that PBS/SGE are the only subclasses I figured this was OK. If not,
>> should be easy to break it out again.
>>
>> ~Justin
>>
>> On 07/18/2010 03:43 AM, Justin Riley wrote:
>>
>>> Hi Satra/Brian,
>>>
>>> I modified your code to use the job array feature of SGE. I've also made
>>> it so that users don't need to specify --sge-script if they don't need a
>>> custom SGE launch script. My guess is that most users will choose not to
>>> specify --sge-script first and resort to using --sge-script when the
>>> generated launch script no longer meets their needs. More details in the
>>> git log here:
>>>
>>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>>
>>> Also, I need to test this, but I believe this code will fail if the
>>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>>> Another option besides requiring NFS is to scp the furl file to each
>>> host as is done in the ssh mode of ipcluster, however, this would
>>> require password-less ssh to be configured properly (maybe not so bad).
>>> Another option is to dump the generated furl file into the job script
>>> itself. This has the advantage of only needing SGE installed but
>>> certainly doesn't seem like the safest practice. Any thoughts on how to
>>> approach this?
>>>
>>> Let me know what you think.
>>>
>>> ~Justin
>>>
>>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>>
>>>> Is the array jobs feature what you want?
>>>>
>>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>>
>>>> Brian
>>>>
>>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>>>> wrote:
>>>>
>>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>
>>>>>> hi ,
>>>>>>
>>>>>> i've pushed my changes to:
>>>>>>
>>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>>
>>>>>> notes:
>>>>>>
>>>>>> 1. it starts cleanly. i can connect and execute things. when i kill
>>>>>> using
>>>>>> ctrl-c, the messages appear to indicate that everything shut down
>>>>>> well.
>>>>>> however, the sge ipengine jobs are still running.
>>>>>>
>>>>>
>>>>> What version of Python and Twisted are you running?
>>>>>
>>>>>  2. the pbs option appears to require mpi to be present. i don't
>>>>>> think one
>>>>>> can launch multiple engines using pbs without mpi or without the
>>>>>> workaround
>>>>>> i've applied to the sge engine. basically it submits an sge job for
>>>>>> each
>>>>>> engine that i want to run. i would love to know if a single job can
>>>>>> launch
>>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>>>
>>>>>
>>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>>> multiple engines using a single PBS job. I am not that familiar with
>>>>> SGE, can you start multiple processes without mpi and with just a
>>>>> single SGE job? If so, let's try to get that working.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Brian
>>>>>
>>>>>  cheers,
>>>>>>
>>>>>> satra
>>>>>>
>>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>>
>>>>>>>
>>>>>>> hi justin,
>>>>>>>
>>>>>>> i hope to test it out tonight. from what fernando and i discussed,
>>>>>>> this
>>>>>>> should be relatively straightforward. once i'm done i'll push it to
>>>>>>> my fork
>>>>>>> of ipython and announce it here for others to test.
>>>>>>>
>>>>>>> cheers,
>>>>>>>
>>>>>>> satra
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin
>>>>>>> Riley<justin.t.riley at gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>>> password-less ssh already being installed and runs:
>>>>>>>>
>>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>>
>>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>>
>>>>>>>> Is Satra on the list? I have experience with SGE and could help
>>>>>>>> with the
>>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>>
>>>>>>>> ~Justin
>>>>>>>>
>>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>>
>>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian
>>>>>>>>> Granger<ellisonbg at gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Thanks for the post. You should also know that it looks like
>>>>>>>>>> someone
>>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to
>>>>>>>>> Brian
>>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>>> code for it. I suspect we'll get this in soon.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>>
>>>>>>>>> f
>>>>>>>>>
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> IPython-dev mailing list
>>>>>>>> IPython-dev at scipy.org
>>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> IPython-dev mailing list
>>>>>> IPython-dev at scipy.org
>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Brian E. Granger, Ph.D.
>>>>> Assistant Professor of Physics
>>>>> Cal Poly State University, San Luis Obispo
>>>>> bgranger at calpoly.edu
>>>>> ellisonbg at gmail.com
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100718/f9be832d/attachment.html>

From satra at mit.edu  Sun Jul 18 14:03:18 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Sun, 18 Jul 2010 14:03:18 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
Message-ID: <AANLkTimpKnR4qJ9fBQwhSL7QCUOfpuQV6AOMBM4CyrAe@mail.gmail.com>

if i'm not mistaken this is related to the furl files. if we can implement
furl passing, we won't need the engines to have the same $HOME as the
controller. btw, since you have LSF, does it have the same options as
SGE/Torque? Assuming you have the same home, can you run ipython on an lsf
cluster?

cheers,

satra


On Sun, Jul 18, 2010 at 1:13 PM, Matthieu Brucher <
matthieu.brucher at gmail.com> wrote:

> Hi,
>
> Does IPython now support sending engines to nodes that do not have the
> same $HOME as the main instance? This is what kept me from properly
> testing IPython with LSF some months ago :|
>
> Matthieu
>
> 2010/7/18 Justin Riley <justin.t.riley at gmail.com>:
> > Hi Satra/Brian,
> >
> > I modified your code to use the job array feature of SGE. I've also made
> > it so that users don't need to specify --sge-script if they don't need a
> > custom SGE launch script. My guess is that most users will choose not to
> > specify --sge-script first and resort to using --sge-script when the
> > generated launch script no longer meets their needs. More details in the
> > git log here:
> >
> > http://github.com/jtriley/ipython/tree/0.10.1-sge
> >
> > Also, I need to test this, but I believe this code will fail if the
> > folder containing the furl file is not NFS-mounted on the SGE cluster.
> > Another option besides requiring NFS is to scp the furl file to each
> > host as is done in the ssh mode of ipcluster, however, this would
> > require password-less ssh to be configured properly (maybe not so bad).
> > Another option is to dump the generated furl file into the job script
> > itself. This has the advantage of only needing SGE installed but
> > certainly doesn't seem like the safest practice. Any thoughts on how to
> > approach this?
> >
> > Let me know what you think.
> >
> > ~Justin
> >
> > On 07/18/2010 12:05 AM, Brian Granger wrote:
> >> Is the array jobs feature what you want?
> >>
> >> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
> >>
> >> Brian
> >>
> >> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>  wrote:
> >>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu>  wrote:
> >>>> hi ,
> >>>>
> >>>> i've pushed my changes to:
> >>>>
> >>>> http://github.com/satra/ipython/tree/0.10.1-sge
> >>>>
> >>>> notes:
> >>>>
> >>>> 1. it starts cleanly. i can connect and execute things. when i kill
> using
> >>>> ctrl-c, the messages appear to indicate that everything shut down
> well.
> >>>> however, the sge ipengine jobs are still running.
> >>>
> >>> What version of Python and Twisted are you running?
> >>>
> >>>> 2. the pbs option appears to require mpi to be present. i don't think
> one
> >>>> can launch multiple engines using pbs without mpi or without the
> workaround
> >>>> i've applied to the sge engine. basically it submits an sge job for
> each
> >>>> engine that i want to run. i would love to know if a single job can
> launch
> >>>> multiple engines on a sge/pbs cluster without mpi.
> >>>
> >>> I think you are right that pbs needs to use mpirun/mpiexec to start
> >>> multiple engines using a single PBS job.  I am not that familiar with
> >>> SGE, can you start multiple processes without mpi and with just a
> >>> single SGE job?  If so, let's try to get that working.
> >>>
> >>> Cheers,
> >>>
> >>> Brian
> >>>
> >>>> cheers,
> >>>>
> >>>> satra
> >>>>
> >>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu>
>  wrote:
> >>>>>
> >>>>> hi justin,
> >>>>>
> >>>>> i hope to test it out tonight. from what fernando and i discussed,
> this
> >>>>> should be relatively straightforward. once i'm done i'll push it to
> my fork
> >>>>> of ipython and announce it here for others to test.
> >>>>>
> >>>>> cheers,
> >>>>>
> >>>>> satra
> >>>>>
> >>>>>
> >>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley<
> justin.t.riley at gmail.com>
> >>>>> wrote:
> >>>>>>
> >>>>>> This is great news. Right now StarCluster just takes advantage of
> >>>>>> password-less ssh already being installed and runs:
> >>>>>>
> >>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
> >>>>>>
> >>>>>> This works fine for now, however, having SGE support would allow
> >>>>>> ipcluster's load to be accounted for by the queue.
> >>>>>>
> >>>>>> Is Satra on the list? I have experience with SGE and could help with
> the
> >>>>>> code if needed. I can also help test this functionality.
> >>>>>>
> >>>>>> ~Justin
> >>>>>>
> >>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
> >>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<
> ellisonbg at gmail.com>
> >>>>>>> wrote:
> >>>>>>>> Thanks for the post.  You should also know that it looks like
> someone
> >>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
> >>>>>>>
> >>>>>>> Yes, Satra and I went over this last night in detail (thanks to
> Brian
> >>>>>>> for the pointers), and he said he might actually already have some
> >>>>>>> code for it.  I suspect we'll get this in soon.
> >>>>>>>
> >>>>>>> Cheers,
> >>>>>>>
> >>>>>>> f
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> IPython-dev mailing list
> >>>>>> IPython-dev at scipy.org
> >>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
> >>>>>
> >>>>
> >>>>
> >>>> _______________________________________________
> >>>> IPython-dev mailing list
> >>>> IPython-dev at scipy.org
> >>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>> --
> >>> Brian E. Granger, Ph.D.
> >>> Assistant Professor of Physics
> >>> Cal Poly State University, San Luis Obispo
> >>> bgranger at calpoly.edu
> >>> ellisonbg at gmail.com
> >>>
> >>
> >>
> >>
> >
> > _______________________________________________
> > IPython-dev mailing list
> > IPython-dev at scipy.org
> > http://mail.scipy.org/mailman/listinfo/ipython-dev
> >
>
>
>
> --
> Information System Engineer, Ph.D.
> Blog: http://matt.eifelle.com
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100718/d4445cae/attachment.html>

From matthieu.brucher at gmail.com  Sun Jul 18 14:06:43 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 18 Jul 2010 20:06:43 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTimpKnR4qJ9fBQwhSL7QCUOfpuQV6AOMBM4CyrAe@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<AANLkTimpKnR4qJ9fBQwhSL7QCUOfpuQV6AOMBM4CyrAe@mail.gmail.com>
Message-ID: <AANLkTimmqRNcDQV8q1dbFWp5aoIVAAzJsbz-aVExpK5_@mail.gmail.com>

Hi,

I don't know, as I only have access to clusters where $HOME is
different. I should be able to work around that by logging directly
into the nodes, but I have to ask the administrators. LSF has more
options than SGE/Torque; at least when I checked, I could implement the
same environment. Job arrays have been supported for a long time, so
this is not an issue.
The more worrying part may be the MPI backend, as MPI jobs are not
launched in the same way :|

Matthieu

2010/7/18 Satrajit Ghosh <satra at mit.edu>:
> if i'm not mistaken this is related to the furl files. if we can implement
> furl passing, we won't need the engines to have the same $HOME as the
> controller. btw, since you have LSF, does it have the same options as
> SGE/Torque? Assuming you have the same home, can you run ipython on an lsf
> cluster?
>
> cheers,
>
> satra


-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From satra at mit.edu  Sun Jul 18 14:12:09 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Sun, 18 Jul 2010 14:12:09 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTimmqRNcDQV8q1dbFWp5aoIVAAzJsbz-aVExpK5_@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<AANLkTimpKnR4qJ9fBQwhSL7QCUOfpuQV6AOMBM4CyrAe@mail.gmail.com>
	<AANLkTimmqRNcDQV8q1dbFWp5aoIVAAzJsbz-aVExpK5_@mail.gmail.com>
Message-ID: <AANLkTilYQENXDEcJcjiMRk0Gyxr4HfEUEBv6uaCuQ_xD@mail.gmail.com>

hi matthieu,

the new code that justin implemented doesn't require mpi. it just requires
a native torque/sge (and hopefully lsf) install.

cheers,

satra


On Sun, Jul 18, 2010 at 2:06 PM, Matthieu Brucher <
matthieu.brucher at gmail.com> wrote:

> Hi,
>
> I don't know, as I only have access to clusters where $HOME is
> different. I should be able to work around that by logging directly
> into the nodes, but I have to ask the administrators. LSF has more
> options than SGE/Torque; at least when I checked, I could implement the
> same environment. Job arrays have been supported for a long time, so
> this is not an issue.
> The more worrying part may be the MPI backend, as MPI jobs are not
> launched in the same way :|
>
> Matthieu
>
> 2010/7/18 Satrajit Ghosh <satra at mit.edu>:
> > if i'm not mistaken this is related to the furl files. if we can implement
> > furl passing, we won't need the engines to have the same $HOME as the
> > controller. btw, since you have LSF, does it have the same options as
> > SGE/Torque? Assuming you have the same home, can you run ipython on an
> > lsf cluster?
> >
> > cheers,
> >
> > satra
>
>
> --
> Information System Engineer, Ph.D.
> Blog: http://matt.eifelle.com
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100718/060a8e86/attachment.html>

From matthieu.brucher at gmail.com  Sun Jul 18 14:14:34 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 18 Jul 2010 20:14:34 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTimpKnR4qJ9fBQwhSL7QCUOfpuQV6AOMBM4CyrAe@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<AANLkTimpKnR4qJ9fBQwhSL7QCUOfpuQV6AOMBM4CyrAe@mail.gmail.com>
Message-ID: <AANLkTil5oWoLDMS_jagFvwwdDs-06O4-qPaShn-SSr6t@mail.gmail.com>

When you say furl passing, do you mean giving a path to the furl file?
The general issue may not be solved this way. On the clusters I have
access to, $HOME is the same path, but the submission node's $HOME is
available as /nf/$HOME on the compute nodes, where $HOME itself is a
different folder...

Cheers,

Matthieu

2010/7/18 Satrajit Ghosh <satra at mit.edu>:
> if i'm not mistaken this is related to the furl files. if we can implement
> furl passing, we won't need the engines to have the same $HOME as the
> controller. btw, since you have LSF, does it have the same options as
> SGE/Torque? Assuming you have the same home, can you run ipython on an lsf
> cluster?
>
> cheers,
>
> satra

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From matthieu.brucher at gmail.com  Sun Jul 18 14:15:20 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 18 Jul 2010 20:15:20 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTilYQENXDEcJcjiMRk0Gyxr4HfEUEBv6uaCuQ_xD@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<AANLkTimpKnR4qJ9fBQwhSL7QCUOfpuQV6AOMBM4CyrAe@mail.gmail.com>
	<AANLkTimmqRNcDQV8q1dbFWp5aoIVAAzJsbz-aVExpK5_@mail.gmail.com>
	<AANLkTilYQENXDEcJcjiMRk0Gyxr4HfEUEBv6uaCuQ_xD@mail.gmail.com>
Message-ID: <AANLkTilpZ7ctrZeBHqZZ_bA8qjSAqJgtg7G8bYI0E0wG@mail.gmail.com>

Of course, I forgot that we don't require MPI for IPython ;)

2010/7/18 Satrajit Ghosh <satra at mit.edu>:
> hi matthieu,
>
>> the new code that justin implemented doesn't require mpi. it just requires
>> a native torque/sge (and hopefully lsf) install.
>
> cheers,
>
> satra
>
>
> On Sun, Jul 18, 2010 at 2:06 PM, Matthieu Brucher
> <matthieu.brucher at gmail.com> wrote:
>>
>> Hi,
>>
>> I don't know, as I only have access to clusters where $HOME is
>> different. I should be able to get around by directly logging on
>> nodes, but I have to ask the administrators. LSF has more options than
>> SGE/Torque. At least when I checked, I could implement the same
>> environment. Job arrays are supported for a long time, so this is not
>> an issue.
>> The more worrying part may be the MPI backend. They are not launched
>> in the same way :|
>>
>> Matthieu
>>
>> 2010/7/18 Satrajit Ghosh <satra at mit.edu>:
>> > if i'm not mistaken this is related to the furl files. if we can
>> > implement furl passing, we won't need the engines to have the same
>> > $HOME as the controller. btw, since you have LSF, does it have the
>> > same options as SGE/Torque? Assuming you have the same home, can you
>> > run ipython on an lsf cluster?
>> >
>> > cheers,
>> >
>> > satra
>>
>>
>> --
>> Information System Engineer, Ph.D.
>> Blog: http://matt.eifelle.com
>> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>
>



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From justin.t.riley at gmail.com  Sun Jul 18 14:18:07 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Sun, 18 Jul 2010 14:18:07 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
Message-ID: <4C43455F.1050508@gmail.com>

Hi Matthieu,

At least for the modifications I made, no, not yet. This is exactly what 
I'm asking about in the second paragraph of my response. The new SGE/PBS 
support will work with multiple hosts assuming the ~/.ipython/security 
folder is NFS-shared on the cluster.

If that's not the case, then AFAIK we have two options:

1. scp the furl file from ~/.ipython/security to each host's 
~/.ipython/security folder.

2. put the contents of the furl file directly inside the job script
used to start the engines

The first option relies on the user having password-less ssh configured 
properly to each node on the cluster. ipcluster would first need to scp 
the furl and then launch the engines using PBS/SGE.

The second option is the easiest approach given that it only requires 
SGE to be installed; however, it's probably not the best idea to put the 
furl file in the job script itself for security reasons. I'm curious to 
get opinions on this. This would require slight code modifications.
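
A rough sketch of what option 1 would involve (illustrative only;
assumes password-less ssh is already set up and the default 0.10 furl
location):

    import os, subprocess

    def push_furl(hosts):
        """Copy the engine furl to every node before submitting the job."""
        furl = os.path.expanduser(
            '~/.ipython/security/ipcontroller-engine.furl')
        for host in hosts:
            # make sure the remote security dir exists, then copy the furl
            subprocess.check_call(
                ['ssh', host, 'mkdir -p ~/.ipython/security'])
            subprocess.check_call(
                ['scp', furl, '%s:.ipython/security/' % host])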

~Justin

On 07/18/2010 01:13 PM, Matthieu Brucher wrote:
> Hi,
>
> Does IPython now support sending engines to nodes that do not have the
> same $HOME as the main instance? This is what kept me from properly
> testing IPython with LSF some months ago :|
>
> Matthieu
>
> 2010/7/18 Justin Riley<justin.t.riley at gmail.com>:
>> Hi Satra/Brian,
>>
>> I modified your code to use the job array feature of SGE. I've also made
>> it so that users don't need to specify --sge-script if they don't need a
>> custom SGE launch script. My guess is that most users will choose not to
>> specify --sge-script first and resort to using --sge-script when the
>> generated launch script no longer meets their needs. More details in the
>> git log here:
>>
>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>
>> Also, I need to test this, but I believe this code will fail if the
>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>> Another option besides requiring NFS is to scp the furl file to each
>> host as is done in the ssh mode of ipcluster, however, this would
>> require password-less ssh to be configured properly (maybe not so bad).
>> Another option is to dump the generated furl file into the job script
>> itself. This has the advantage of only needing SGE installed but
>> certainly doesn't seem like the safest practice. Any thoughts on how to
>> approach this?
>>
>> Let me know what you think.
>>
>> ~Justin
>>
>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>> Is the array jobs feature what you want?
>>>
>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>
>>> Brian
>>>
>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>    wrote:
>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu>    wrote:
>>>>> hi ,
>>>>>
>>>>> i've pushed my changes to:
>>>>>
>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>
>>>>> notes:
>>>>>
>>>>> 1. it starts cleanly. i can connect and execute things. when i kill using
>>>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>>>> however, the sge ipengine jobs are still running.
>>>>
>>>> What version of Python and Twisted are you running?
>>>>
>>>>> 2. the pbs option appears to require mpi to be present. i don't think one
>>>>> can launch multiple engines using pbs without mpi or without the workaround
>>>>> i've applied to the sge engine. basically it submits an sge job for each
>>>>> engine that i want to run. i would love to know if a single job can launch
>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>
>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>> multiple engines using a single PBS job.  I am not that familiar with
>>>> SGE, can you start multiple processes without mpi and with just a
>>>> single SGE job?  If so, let's try to get that working.
>>>>
>>>> Cheers,
>>>>
>>>> Brian
>>>>
>>>>> cheers,
>>>>>
>>>>> satra
>>>>>
>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu>    wrote:
>>>>>>
>>>>>> hi justin,
>>>>>>
>>>>>> i hope to test it out tonight. from what fernando and i discussed, this
>>>>>> should be relatively straightforward. once i'm done i'll push it to my fork
>>>>>> of ipython and announce it here for others to test.
>>>>>>
>>>>>> cheers,
>>>>>>
>>>>>> satra
>>>>>>
>>>>>>
>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley<justin.t.riley at gmail.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>> password-less ssh already being installed and runs:
>>>>>>>
>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>
>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>
>>>>>>> Is Satra on the list? I have experience with SGE and could help with the
>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>
>>>>>>> ~Justin
>>>>>>>
>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>>>> wrote:
>>>>>>>>> Thanks for the post.  You should also know that it looks like someone
>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>
>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to Brian
>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>>
>>>>>>>> f
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> IPython-dev mailing list
>>>>>>> IPython-dev at scipy.org
>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> IPython-dev mailing list
>>>>> IPython-dev at scipy.org
>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Brian E. Granger, Ph.D.
>>>> Assistant Professor of Physics
>>>> Cal Poly State University, San Luis Obispo
>>>> bgranger at calpoly.edu
>>>> ellisonbg at gmail.com
>>>>
>>>
>>>
>>>
>>
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>
>
>



From justin.t.riley at gmail.com  Sun Jul 18 14:20:28 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Sun, 18 Jul 2010 14:20:28 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTimqPbfeSl6-yThMiGCa7zwj3GvmubUob7yMe9gD@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>	<4C42B09F.50106@gmail.com>	<4C4332C5.5050006@gmail.com>	<4C4333B7.5080807@gmail.com>
	<AANLkTimqPbfeSl6-yThMiGCa7zwj3GvmubUob7yMe9gD@mail.gmail.com>
Message-ID: <4C4345EC.2090601@gmail.com>

Hi Satra,

 > i think it will be good to at least put the queue to
 > be used as a user option. most of the time installations of torque or
 > sge have multiple queues.

Agreed, this would be useful. I'll try to hack it in later tonight 
unless you get to it first ;)
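
The change itself should be tiny; something along these lines (the
function and option names are hypothetical until it's actually written):

def queue_directive(scheduler, queue):
    """Header line selecting the queue in a generated launch script."""
    if scheduler == 'sge':
        return '#$ -q %s' % queue        # e.g. '#$ -q all.q'
    elif scheduler in ('pbs', 'torque'):
        return '#PBS -q %s' % queue
    raise ValueError('unknown scheduler: %r' % scheduler)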

~Justin


On 07/18/2010 02:01 PM, Satrajit Ghosh wrote:
> hi justin,
>
> this is fantastic. i think it will be good to at least put the queue to
> be used as a user option. most of the time installations of torque or
> sge have multiple queues.
>
> cheers,
>
> satra
>
>
> On Sun, Jul 18, 2010 at 1:02 PM, Justin Riley <justin.t.riley at gmail.com
> <mailto:justin.t.riley at gmail.com>> wrote:
>
>     Forgot to mention, in my fork PBS now automatically generates a
>     launch script as well if one is not specified. So, assuming you have
>     either SGE or Torque/PBS working it *should* be as simple as:
>
>     $ ipcluster sge -n 4
>
>     or
>
>     $ ipcluster pbs -n 4
>
>     You can of course still pass the --sge-script/--pbs-script options
>     but the user is no longer required to create a launch script themselves.
>
>     ~Justin
>
>
>     On 07/18/2010 12:58 PM, Justin Riley wrote:
>
>         Turns out that torque/pbs also support job arrays. I've updated my
>         0.10.1-sge branch with PBS job array support. Works well with torque
>         2.4.6. Also tested SGE support against 6.2u3.
>
>         Since the code is extremely similar between PBS/SGE I decided to
>         update
>         the BatchEngineSet base class to handle the core job array
>         logic. Given
>         that PBS/SGE are the only subclasses I figured this was OK. If not,
>         should be easy to break it out again.
>
>         ~Justin
>
>         On 07/18/2010 03:43 AM, Justin Riley wrote:
>
>             Hi Satra/Brian,
>
>             I modified your code to use the job array feature of SGE.
>             I've also made
>             it so that users don't need to specify --sge-script if they
>             don't need a
>             custom SGE launch script. My guess is that most users will
>             choose not to
>             specify --sge-script first and resort to using --sge-script
>             when the
>             generated launch script no longer meets their needs. More
>             details in the
>             git log here:
>
>             http://github.com/jtriley/ipython/tree/0.10.1-sge
>
>             Also, I need to test this, but I believe this code will fail
>             if the
>             folder containing the furl file is not NFS-mounted on the
>             SGE cluster.
>             Another option besides requiring NFS is to scp the furl file
>             to each
>             host as is done in the ssh mode of ipcluster, however, this
>             would
>             require password-less ssh to be configured properly (maybe
>             not so bad).
>             Another option is to dump the generated furl file into the
>             job script
>             itself. This has the advantage of only needing SGE installed but
>             certainly doesn't seem like the safest practice. Any
>             thoughts on how to
>             approach this?
>
>             Let me know what you think.
>
>             ~Justin
>
>             On 07/18/2010 12:05 AM, Brian Granger wrote:
>
>                 Is the array jobs feature what you want?
>
>                 http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>
>                 Brian
>
>                 On Sat, Jul 17, 2010 at 9:00 PM, Brian
>                 Granger<ellisonbg at gmail.com <mailto:ellisonbg at gmail.com>>
>                 wrote:
>
>                     On Sat, Jul 17, 2010 at 6:23 AM, Satrajit
>                     Ghosh<satra at mit.edu <mailto:satra at mit.edu>> wrote:
>
>                         hi,
>
>                         i've pushed my changes to:
>
>                         http://github.com/satra/ipython/tree/0.10.1-sge
>
>                         notes:
>
>                         1. it starts cleanly. i can connect and execute
>                         things. when i kill
>                         using
>                         ctrl-c, the messages appear to indicate that
>                         everything shut down
>                         well.
>                         however, the sge ipengine jobs are still running.
>
>
>                     What version of Python and Twisted are you running?
>
>                         2. the pbs option appears to require mpi to be
>                         present. i don't
>                         think one
>                         can launch multiple engines using pbs without
>                         mpi or without the
>                         workaround
>                         i've applied to the sge engine. basically it
>                         submits an sge job for
>                         each
>                         engine that i want to run. i would love to know
>                         if a single job can
>                         launch
>                         multiple engines on a sge/pbs cluster without mpi.
>
>
>                     I think you are right that pbs needs to use
>                     mpirun/mpiexec to start
>                     multiple engines using a single PBS job. I am not
>                     that familiar with
>                     SGE, can you start multiple processes without mpi
>                     and with just a
>                     single SGE job? If so, let's try to get that working.
>
>                     Cheers,
>
>                     Brian
>
>                         cheers,
>
>                         satra
>
>                         On Thu, Jul 15, 2010 at 8:55 PM, Satrajit
>                         Ghosh<satra at mit.edu <mailto:satra at mit.edu>> wrote:
>
>
>                             hi justin,
>
>                             i hope to test it out tonight. from what
>                             fernando and i discussed,
>                             this
>                             should be relatively straightforward. once
>                             i'm done i'll push it to
>                             my fork
>                             of ipython and announce it here for others
>                             to test.
>
>                             cheers,
>
>                             satra
>
>
>                             On Thu, Jul 15, 2010 at 4:33 PM, Justin
>                             Riley<justin.t.riley at gmail.com
>                             <mailto:justin.t.riley at gmail.com>>
>                             wrote:
>
>
>                                 This is great news. Right now
>                                 StarCluster just takes advantage of
>                                 password-less ssh already being
>                                 installed and runs:
>
>                                 $ ipcluster ssh --clusterfile
>                                 /path/to/cluster_file.py
>
>                                 This works fine for now, however, having
>                                 SGE support would allow
>                                 ipcluster's load to be accounted for by
>                                 the queue.
>
>                                 Is Satra on the list? I have experience
>                                 with SGE and could help
>                                 with the
>                                 code if needed. I can also help test
>                                 this functionality.
>
>                                 ~Justin
>
>                                 On 07/15/2010 03:34 PM, Fernando Perez
>                                 wrote:
>
>                                     On Thu, Jul 15, 2010 at 10:34 AM, Brian
>                                     Granger<ellisonbg at gmail.com
>                                     <mailto:ellisonbg at gmail.com>>
>                                     wrote:
>
>                                         Thanks for the post. You should
>                                         also know that it looks like
>                                         someone
>                                         is going to add native SGE
>                                         support to ipcluster for 0.10.1.
>
>
>                                     Yes, Satra and I went over this last
>                                     night in detail (thanks to
>                                     Brian
>                                     for the pointers), and he said he
>                                     might actually already have some
>                                     code for it. I suspect we'll get
>                                     this in soon.
>
>                                     Cheers,
>
>                                     f
>
>
>                                 _______________________________________________
>                                 IPython-dev mailing list
>                                 IPython-dev at scipy.org
>                                 <mailto:IPython-dev at scipy.org>
>                                 http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
>
>
>                         _______________________________________________
>                         IPython-dev mailing list
>                         IPython-dev at scipy.org <mailto:IPython-dev at scipy.org>
>                         http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
>
>
>
>                     --
>                     Brian E. Granger, Ph.D.
>                     Assistant Professor of Physics
>                     Cal Poly State University, San Luis Obispo
>                     bgranger at calpoly.edu <mailto:bgranger at calpoly.edu>
>                     ellisonbg at gmail.com <mailto:ellisonbg at gmail.com>
>
>
>
>
>
>
>
>



From matthieu.brucher at gmail.com  Sun Jul 18 14:24:41 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 18 Jul 2010 20:24:41 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C43455F.1050508@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
Message-ID: <AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>

Hi,

I also prefer the first option, as it is the configuration I'm most
comfortable with. Besides, people may already have this configured.

Matthieu

2010/7/18 Justin Riley <justin.t.riley at gmail.com>:
> Hi Matthieu,
>
>> At least for the modifications I made, no, not yet. This is exactly what I'm
> asking about in the second paragraph of my response. The new SGE/PBS support
> will work with multiple hosts assuming the ~/.ipython/security folder is
> NFS-shared on the cluster.
>
> If that's not the case, then AFAIK we have two options:
>
> 1. scp the furl file from ~/.ipython/security to each host's
> ~/.ipython/security folder.
>
> 2. put the contents of the furl file directly inside the job script
> used to start the engines
>
> The first option relies on the user having password-less ssh configured properly
> to each node on the cluster. ipcluster would first need to scp the furl and
> then launch the engines using PBS/SGE.
>
> The second option is the easiest approach given that it only requires SGE to
> be installed, however, it's probably not the best idea to put the furl file
> in the job script itself for security reasons. I'm curious to get opinions
> on this. This would require slight code modifications.
>
> ~Justin
>
> On 07/18/2010 01:13 PM, Matthieu Brucher wrote:
>>
>> Hi,
>>
>> Does IPython now support sending engines to nodes that do not have the
>> same $HOME as the main instance? This is what kept me from properly
>> testing IPython with LSF some months ago :|
>>
>> Matthieu
>>
>> 2010/7/18 Justin Riley<justin.t.riley at gmail.com>:
>>>
>>> Hi Satra/Brian,
>>>
>>> I modified your code to use the job array feature of SGE. I've also made
>>> it so that users don't need to specify --sge-script if they don't need a
>>> custom SGE launch script. My guess is that most users will choose not to
>>> specify --sge-script first and resort to using --sge-script when the
>>> generated launch script no longer meets their needs. More details in the
>>> git log here:
>>>
>>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>>
>>> Also, I need to test this, but I believe this code will fail if the
>>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>>> Another option besides requiring NFS is to scp the furl file to each
>>> host as is done in the ssh mode of ipcluster, however, this would
>>> require password-less ssh to be configured properly (maybe not so bad).
>>> Another option is to dump the generated furl file into the job script
>>> itself. This has the advantage of only needing SGE installed but
>>> certainly doesn't seem like the safest practice. Any thoughts on how to
>>> approach this?
>>>
>>> Let me know what you think.
>>>
>>> ~Justin
>>>
>>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>>>
>>>> Is the array jobs feature what you want?
>>>>
>>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>>
>>>> Brian
>>>>
>>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>>>> wrote:
>>>>>
>>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu>
>>>>> wrote:
>>>>>>
>>>>>> hi,
>>>>>>
>>>>>> i've pushed my changes to:
>>>>>>
>>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>>
>>>>>> notes:
>>>>>>
>>>>>> 1. it starts cleanly. i can connect and execute things. when i kill
>>>>>> using
>>>>>> ctrl-c, the messages appear to indicate that everything shut down
>>>>>> well.
>>>>>> however, the sge ipengine jobs are still running.
>>>>>
>>>>> What version of Python and Twisted are you running?
>>>>>
>>>>>> 2. the pbs option appears to require mpi to be present. i don't think
>>>>>> one
>>>>>> can launch multiple engines using pbs without mpi or without the
>>>>>> workaround
>>>>>> i've applied to the sge engine. basically it submits an sge job for
>>>>>> each
>>>>>> engine that i want to run. i would love to know if a single job can
>>>>>> launch
>>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>>
>>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>>> multiple engines using a single PBS job.  I am not that familiar with
>>>>> SGE, can you start multiple processes without mpi and with just a
>>>>> single SGE job?  If so, let's try to get that working.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Brian
>>>>>
>>>>>> cheers,
>>>>>>
>>>>>> satra
>>>>>>
>>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu>
>>>>>> wrote:
>>>>>>>
>>>>>>> hi justin,
>>>>>>>
>>>>>>> i hope to test it out tonight. from what fernando and i discussed,
>>>>>>> this
>>>>>>> should be relatively straightforward. once i'm done i'll push it to
>>>>>>> my fork
>>>>>>> of ipython and announce it here for others to test.
>>>>>>>
>>>>>>> cheers,
>>>>>>>
>>>>>>> satra
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin
>>>>>>> Riley<justin.t.riley at gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>>> password-less ssh already being installed and runs:
>>>>>>>>
>>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>>
>>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>>
>>>>>>>> Is Satra on the list? I have experience with SGE and could help with
>>>>>>>> the
>>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>>
>>>>>>>> ~Justin
>>>>>>>>
>>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>>>
>>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian
>>>>>>>>> Granger<ellisonbg at gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Thanks for the post.  You should also know that it looks like
>>>>>>>>>> someone
>>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>>
>>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to
>>>>>>>>> Brian
>>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>>
>>>>>>>>> f
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> IPython-dev mailing list
>>>>>>>> IPython-dev at scipy.org
>>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>>
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> IPython-dev mailing list
>>>>>> IPython-dev at scipy.org
>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Brian E. Granger, Ph.D.
>>>>> Assistant Professor of Physics
>>>>> Cal Poly State University, San Luis Obispo
>>>>> bgranger at calpoly.edu
>>>>> ellisonbg at gmail.com
>>>>>
>>>>
>>>>
>>>>
>>>
>>> _______________________________________________
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>
>>
>>
>>
>
>



-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From justin.t.riley at gmail.com  Sun Jul 18 15:05:16 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Sun, 18 Jul 2010 15:05:16 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>	<4C42B09F.50106@gmail.com>	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>	<4C43455F.1050508@gmail.com>
	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>
Message-ID: <4C43506C.8070907@gmail.com>

Matthieu,

I agree that password-less ssh is a common configuration on HPC clusters 
and it would be useful to have the option of using SSH to copy the furl 
file to each host before launching engines with SGE/PBS/LSF. I'll see 
about hacking this in when I get some more time.

BTW, I just added experimental support for LSF to my fork. I can't test 
the code given that I don't have access to an LSF system, but in theory 
it should work (again using job arrays) provided the ~/.ipython/security 
folder is shared.
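
For anyone who can test it: the generated script should look roughly
like the output of the sketch below (LSF directive syntax from memory,
so treat it as an approximation of what my branch emits):

LSF_TEMPLATE = """#!/bin/sh
#BSUB -J ipengine[1-%(n)d]
#BSUB -o ipengine.%%J.%%I.out
ipengine
"""

def make_lsf_script(n_engines):
    # Submitted with `bsub < script`; LSF expands ipengine[1-n]
    # into n array tasks, i.e. one ipengine per task.
    return LSF_TEMPLATE % {'n': n_engines}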

~Justin

On 07/18/2010 02:24 PM, Matthieu Brucher wrote:
> Hi,
>
> I also prefer the first option, as it is the configuration I'm most
> comfortable with. Besides, people may already have this configured.
>
> Matthieu
>
> 2010/7/18 Justin Riley<justin.t.riley at gmail.com>:
>> Hi Matthieu,
>>
>> At least for the modifications I made, no, not yet. This is exactly what I'm
>> asking about in the second paragraph of my response. The new SGE/PBS support
>> will work with multiple hosts assuming the ~/.ipython/security folder is
>> NFS-shared on the cluster.
>>
>> If that's not the case, then AFAIK we have two options:
>>
>> 1. scp the furl file from ~/.ipython/security to each host's
>> ~/.ipython/security folder.
>>
>> 2. put the contents of the furl file directly inside the job script
>> used to start the engines
>>
>> The first option relies on the user having password-less ssh configured properly
>> to each node on the cluster. ipcluster would first need to scp the furl and
>> then launch the engines using PBS/SGE.
>>
>> The second option is the easiest approach given that it only requires SGE to
>> be installed, however, it's probably not the best idea to put the furl file
>> in the job script itself for security reasons. I'm curious to get opinions
>> on this. This would require slight code modifications.
>>
>> ~Justin
>>
>> On 07/18/2010 01:13 PM, Matthieu Brucher wrote:
>>>
>>> Hi,
>>>
>>> Does IPython now support sending engines to nodes that do not have the
>>> same $HOME as the main instance? This is what kept me from properly
>>> testing IPython with LSF some months ago :|
>>>
>>> Matthieu
>>>
>>> 2010/7/18 Justin Riley<justin.t.riley at gmail.com>:
>>>>
>>>> Hi Satra/Brian,
>>>>
>>>> I modified your code to use the job array feature of SGE. I've also made
>>>> it so that users don't need to specify --sge-script if they don't need a
>>>> custom SGE launch script. My guess is that most users will choose not to
>>>> specify --sge-script first and resort to using --sge-script when the
>>>> generated launch script no longer meets their needs. More details in the
>>>> git log here:
>>>>
>>>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>>>
>>>> Also, I need to test this, but I believe this code will fail if the
>>>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>>>> Another option besides requiring NFS is to scp the furl file to each
>>>> host as is done in the ssh mode of ipcluster, however, this would
>>>> require password-less ssh to be configured properly (maybe not so bad).
>>>> Another option is to dump the generated furl file into the job script
>>>> itself. This has the advantage of only needing SGE installed but
>>>> certainly doesn't seem like the safest practice. Any thoughts on how to
>>>> approach this?
>>>>
>>>> Let me know what you think.
>>>>
>>>> ~Justin
>>>>
>>>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>>>>
>>>>> Is the array jobs feature what you want?
>>>>>
>>>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>>>
>>>>> Brian
>>>>>
>>>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>>>>>   wrote:
>>>>>>
>>>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu>
>>>>>>   wrote:
>>>>>>>
>>>>>>> hi,
>>>>>>>
>>>>>>> i've pushed my changes to:
>>>>>>>
>>>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>>>
>>>>>>> notes:
>>>>>>>
>>>>>>> 1. it starts cleanly. i can connect and execute things. when i kill
>>>>>>> using
>>>>>>> ctrl-c, the messages appear to indicate that everything shut down
>>>>>>> well.
>>>>>>> however, the sge ipengine jobs are still running.
>>>>>>
>>>>>> What version of Python and Twisted are you running?
>>>>>>
>>>>>>> 2. the pbs option appears to require mpi to be present. i don't think
>>>>>>> one
>>>>>>> can launch multiple engines using pbs without mpi or without the
>>>>>>> workaround
>>>>>>> i've applied to the sge engine. basically it submits an sge job for
>>>>>>> each
>>>>>>> engine that i want to run. i would love to know if a single job can
>>>>>>> launch
>>>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>>>
>>>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>>>> multiple engines using a single PBS job.  I am not that familiar with
>>>>>> SGE, can you start multiple processes without mpi and with just a
>>>>>> single SGE job?  If so, let's try to get that working.
>>>>>>
>>>>>> Cheers,
>>>>>>
>>>>>> Brian
>>>>>>
>>>>>>> cheers,
>>>>>>>
>>>>>>> satra
>>>>>>>
>>>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu>
>>>>>>>   wrote:
>>>>>>>>
>>>>>>>> hi justin,
>>>>>>>>
>>>>>>>> i hope to test it out tonight. from what fernando and i discussed,
>>>>>>>> this
>>>>>>>> should be relatively straightforward. once i'm done i'll push it to
>>>>>>>> my fork
>>>>>>>> of ipython and announce it here for others to test.
>>>>>>>>
>>>>>>>> cheers,
>>>>>>>>
>>>>>>>> satra
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin
>>>>>>>> Riley<justin.t.riley at gmail.com>
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>>>> password-less ssh already being installed and runs:
>>>>>>>>>
>>>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>>>
>>>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>>>
>>>>>>>>> Is Satra on the list? I have experience with SGE and could help with
>>>>>>>>> the
>>>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>>>
>>>>>>>>> ~Justin
>>>>>>>>>
>>>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>>>>
>>>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian
>>>>>>>>>> Granger<ellisonbg at gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Thanks for the post.  You should also know that it looks like
>>>>>>>>>>> someone
>>>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>>>
>>>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to
>>>>>>>>>> Brian
>>>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>>>>>
>>>>>>>>>> Cheers,
>>>>>>>>>>
>>>>>>>>>> f
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> IPython-dev mailing list
>>>>>>>>> IPython-dev at scipy.org
>>>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> IPython-dev mailing list
>>>>>>> IPython-dev at scipy.org
>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Brian E. Granger, Ph.D.
>>>>>> Assistant Professor of Physics
>>>>>> Cal Poly State University, San Luis Obispo
>>>>>> bgranger at calpoly.edu
>>>>>> ellisonbg at gmail.com
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> IPython-dev mailing list
>>>> IPython-dev at scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>
>>>
>>>
>>>
>>
>>
>
>
>



From benjaminrk at gmail.com  Sun Jul 18 15:06:37 2010
From: benjaminrk at gmail.com (MinRK)
Date: Sun, 18 Jul 2010 12:06:37 -0700
Subject: [IPython-dev] Engine Queue sockets
Message-ID: <AANLkTinyyHkWxIMaPZn_2Wwo7RquCNga_CFdsUO0tMAU@mail.gmail.com>

I'm working on the controller, making pretty decent progress, but keep
running into design questions.

The current one:
I have been thinking of the Controller-Engine connection as PAIR sockets,
but it seems like it could also be just one XREP on the Controller and XREQs
on the engines. Then the controller, rather than switching on actual
sockets, switches on engine IDs, since XREP uses the first entry in
send_multipart to determine the destination.

For a crude task model, we could reverse those connections and allow XREQ on
the controller to load balance. Of course, then we would lose any
information about which engine is doing what until jobs complete.
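
Roughly what I mean, sketched with pyzmq (the addresses, identities, and
payloads are made up for the example):

import time
import zmq

ctx = zmq.Context()

# One XREP on the controller instead of N PAIR sockets: messages are
# routed by engine ID, taken from the first frame of send_multipart.
controller = ctx.socket(zmq.XREP)
controller.bind('tcp://127.0.0.1:5570')

engine = ctx.socket(zmq.XREQ)
engine.setsockopt(zmq.IDENTITY, 'engine-0')  # the ID we switch on
engine.connect('tcp://127.0.0.1:5570')
time.sleep(0.1)  # let the connection finish before routing to it

controller.send_multipart(['engine-0', 'execute: a=5'])
print engine.recv()  # -> 'execute: a=5'

# Reversed for the crude task model: XREQ on the controller fans tasks
# out round-robin, but we no longer pick (or even see) which engine got
# which task until a reply comes back.
task_out = ctx.socket(zmq.XREQ)
task_out.bind('tcp://127.0.0.1:5571')

task_in = ctx.socket(zmq.XREP)  # engine side
task_in.connect('tcp://127.0.0.1:5571')
time.sleep(0.1)

task_out.send('task-0')
print task_in.recv_multipart()  # [<peer identity>, 'task-0']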

Could there be an issue in terms of scaling for the controller to be
creating thousands of PAIR sockets versus managing thousands of connections
to one XREP socket?

-MinRK

Also: I'm in the #ipython IRC channel now, and am generally trying to be
online there while I work on IPython stuff.

From ellisonbg at gmail.com  Mon Jul 19 00:21:01 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 18 Jul 2010 21:21:01 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
Message-ID: <AANLkTinuahJ9UfhZ4V-hgxBw7Mxaf87erJJ-Oo86Z9bz@mail.gmail.com>

Satra,

On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh <satra at mit.edu> wrote:
> hi,
>
> i've pushed my changes to:
>
> http://github.com/satra/ipython/tree/0.10.1-sge

This looks like a great start.  I looked through it quickly just now
and will follow up by looking at Justin's revisions.

> notes:
>
> 1. it starts cleanly. i can connect and execute things. when i kill using
> ctrl-c, the messages appear to indicate that everything shut down well.
> however, the sge ipengine jobs are still running.

I think this is a bug with Twisted on Python 2.6.  I see this in other
contexts as well.

> 2. the pbs option appears to require mpi to be present. i don't think one
> can launch multiple engines using pbs without mpi or without the workaround
> i've applied to the sge engine. basically it submits an sge job for each
> engine that i want to run. i would love to know if a single job can launch
> multiple engines on a sge/pbs cluster without mpi.
>
> cheers,
>
> satra
>
> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>
>> hi justin,
>>
>> i hope to test it out tonight. from what fernando and i discussed, this
>> should be relatively straightforward. once i'm done i'll push it to my fork
>> of ipython and announce it here for others to test.
>>
>> cheers,
>>
>> satra
>>
>>
>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley <justin.t.riley at gmail.com>
>> wrote:
>>>
>>> This is great news. Right now StarCluster just takes advantage of
>>> password-less ssh already being installed and runs:
>>>
>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>
>>> This works fine for now, however, having SGE support would allow
>>> ipcluster's load to be accounted for by the queue.
>>>
>>> Is Satra on the list? I have experience with SGE and could help with the
>>> code if needed. I can also help test this functionality.
>>>
>>> ~Justin
>>>
>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>> > On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger <ellisonbg at gmail.com>
>>> > wrote:
>>> >> Thanks for the post.  You should also know that it looks like someone
>>> >> is going to add native SGE support to ipcluster for 0.10.1.
>>> >
>>> > Yes, Satra and I went over this last night in detail (thanks to Brian
>>> > for the pointers), and he said he might actually already have some
>>> > code for it.  I suspect we'll get this in soon.
>>> >
>>> > Cheers,
>>> >
>>> > f
>>>
>>> _______________________________________________
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Jul 19 00:25:01 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 18 Jul 2010 21:25:01 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C42B09F.50106@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
Message-ID: <AANLkTimTEDEhJ3NKHKdurTlLEIP48MByZAon_RbFwCgm@mail.gmail.com>

Justin,

On Sun, Jul 18, 2010 at 12:43 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> Hi Satra/Brian,
>
> I modified your code to use the job array feature of SGE. I've also made it
> so that users don't need to specify --sge-script if they don't need a custom
> SGE launch script. My guess is that most users will choose not to specify
> --sge-script first and resort to using --sge-script when the generated
> launch script no longer meets their needs. More details in the git log here:

Very nice.  I will do a code review in a few minutes.

> http://github.com/jtriley/ipython/tree/0.10.1-sge
>
> Also, I need to test this, but I believe this code will fail if the folder
> containing the furl file is not NFS-mounted on the SGE cluster. Another
> option besides requiring NFS is to scp the furl file to each host as is done
> in the ssh mode of ipcluster, however, this would require password-less ssh
> to be configured properly (maybe not so bad). Another option is to dump the
> generated furl file into the job script itself. This has the advantage of
> only needing SGE installed but certainly doesn't seem like the safest
> practice. Any thoughts on how to approach this?

Currently we do assume that the user has a shared $HOME directory that
is used to propagate the furl files.  There are obviously many ways of
setting up a cluster, but this is a common approach.  I think that
should be the default.  The idea of using scp to copy the furl files
is obviously another good option and if we can make it work with the
existing approach that would be great.  Just a warning though.  The
version of ipcluster in 0.10.1 has been completely changed in 0.11 to
support multiple cluster profiles and the new configuration system.
For now the new ipcluster is based on Twisted, but before we release,
I think we will get rid of it.  All that is to say, I don't think it
is worth putting too much time into the ipcluster in 0.10.1.  Just
enough to get it working OK with PBS and SGE is a good target.

Brian

> Let me know what you think.
>
> ~Justin
>
> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>
>> Is the array jobs feature what you want?
>>
>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>
>> Brian
>>
>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>> wrote:
>>>
>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>
>>>> hi,
>>>>
>>>> i've pushed my changes to:
>>>>
>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>
>>>> notes:
>>>>
>>>> 1. it starts cleanly. i can connect and execute things. when i kill
>>>> using
>>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>>> however, the sge ipengine jobs are still running.
>>>
>>> What version of Python and Twisted are you running?
>>>
>>>> 2. the pbs option appears to require mpi to be present. i don't think
>>>> one
>>>> can launch multiple engines using pbs without mpi or without the
>>>> workaround
>>>> i've applied to the sge engine. basically it submits an sge job for each
>>>> engine that i want to run. i would love to know if a single job can
>>>> launch
>>>> multiple engines on a sge/pbs cluster without mpi.
>>>
>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>> multiple engines using a single PBS job.  I am not that familiar with
>>> SGE, can you start multiple processes without mpi and with just a
>>> single SGE job?  If so, let's try to get that working.
>>>
>>> Cheers,
>>>
>>> Brian
>>>
>>>> cheers,
>>>>
>>>> satra
>>>>
>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>
>>>>> hi justin,
>>>>>
>>>>> i hope to test it out tonight. from what fernando and i discussed, this
>>>>> should be relatively straightforward. once i'm done i'll push it to my
>>>>> fork
>>>>> of ipython and announce it here for others to test.
>>>>>
>>>>> cheers,
>>>>>
>>>>> satra
>>>>>
>>>>>
>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley<justin.t.riley at gmail.com>
>>>>> wrote:
>>>>>>
>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>> password-less ssh already being installed and runs:
>>>>>>
>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>
>>>>>> This works fine for now, however, having SGE support would allow
>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>
>>>>>> Is Satra on the list? I have experience with SGE and could help with
>>>>>> the
>>>>>> code if needed. I can also help test this functionality.
>>>>>>
>>>>>> ~Justin
>>>>>>
>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>
>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> Thanks for the post.  You should also know that it looks like
>>>>>>>> someone
>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>
>>>>>>> Yes, Satra and I went over this last night in detail (thanks to Brian
>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>>> f
>>>>>>
>>>>>> _______________________________________________
>>>>>> IPython-dev mailing list
>>>>>> IPython-dev at scipy.org
>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> IPython-dev mailing list
>>>> IPython-dev at scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Brian E. Granger, Ph.D.
>>> Assistant Professor of Physics
>>> Cal Poly State University, San Luis Obispo
>>> bgranger at calpoly.edu
>>> ellisonbg at gmail.com
>>>
>>
>>
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Jul 19 00:26:13 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 18 Jul 2010 21:26:13 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C4332C5.5050006@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com> <4C4332C5.5050006@gmail.com>
Message-ID: <AANLkTila84t0lFW1eLlpfLrZzlA1AkizJToYpePy-zBW@mail.gmail.com>

On Sun, Jul 18, 2010 at 9:58 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> Turns out that torque/pbs also support job arrays. I've updated my
> 0.10.1-sge branch with PBS job array support. Works well with torque 2.4.6.
> Also tested SGE support against 6.2u3.

Very nice!

> Since the code is extremely similar between PBS/SGE I decided to update the
> BatchEngineSet base class to handle the core job array logic. Given that
> PBS/SGE are the only subclasses I figured this was OK. If not, should be
> easy to break it out again.

Yes, this definitely makes sense.

Cheers,

Brian

> ~Justin
>
> On 07/18/2010 03:43 AM, Justin Riley wrote:
>>
>> Hi Satra/Brian,
>>
>> I modified your code to use the job array feature of SGE. I've also made
>> it so that users don't need to specify --sge-script if they don't need a
>> custom SGE launch script. My guess is that most users will choose not to
>> specify --sge-script first and resort to using --sge-script when the
>> generated launch script no longer meets their needs. More details in the
>> git log here:
>>
>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>
>> Also, I need to test this, but I believe this code will fail if the
>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>> Another option besides requiring NFS is to scp the furl file to each
>> host as is done in the ssh mode of ipcluster, however, this would
>> require password-less ssh to be configured properly (maybe not so bad).
>> Another option is to dump the generated furl file into the job script
>> itself. This has the advantage of only needing SGE installed but
>> certainly doesn't seem like the safest practice. Any thoughts on how to
>> approach this?
>>
>> Let me know what you think.
>>
>> ~Justin
>>
>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>>
>>> Is the array jobs feature what you want?
>>>
>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>
>>> Brian
>>>
>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>>> wrote:
>>>>
>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>
>>>>> hi,
>>>>>
>>>>> i've pushed my changes to:
>>>>>
>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>
>>>>> notes:
>>>>>
>>>>> 1. it starts cleanly. i can connect and execute things. when i kill
>>>>> using
>>>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>>>> however, the sge ipengine jobs are still running.
>>>>
>>>> What version of Python and Twisted are you running?
>>>>
>>>>> 2. the pbs option appears to require mpi to be present. i don't
>>>>> think one
>>>>> can launch multiple engines using pbs without mpi or without the
>>>>> workaround
>>>>> i've applied to the sge engine. basically it submits an sge job for
>>>>> each
>>>>> engine that i want to run. i would love to know if a single job can
>>>>> launch
>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>
>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>> multiple engines using a single PBS job. I am not that familiar with
>>>> SGE, can you start multiple processes without mpi and with just a
>>>> single SGE job? If so, let's try to get that working.
>>>>
>>>> Cheers,
>>>>
>>>> Brian
>>>>
>>>>> cheers,
>>>>>
>>>>> satra
>>>>>
>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>>
>>>>>> hi justin,
>>>>>>
>>>>>> i hope to test it out tonight. from what fernando and i discussed,
>>>>>> this
>>>>>> should be relatively straightforward. once i'm done i'll push it to
>>>>>> my fork
>>>>>> of ipython and announce it here for others to test.
>>>>>>
>>>>>> cheers,
>>>>>>
>>>>>> satra
>>>>>>
>>>>>>
>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin
>>>>>> Riley<justin.t.riley at gmail.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>> password-less ssh already being installed and runs:
>>>>>>>
>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>
>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>
>>>>>>> Is Satra on the list? I have experience with SGE and could help
>>>>>>> with the
>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>
>>>>>>> ~Justin
>>>>>>>
>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>>
>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>> Thanks for the post. You should also know that it looks like
>>>>>>>>> someone
>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>
>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to
>>>>>>>> Brian
>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>> code for it. I suspect we'll get this in soon.
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>>
>>>>>>>> f
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> IPython-dev mailing list
>>>>>>> IPython-dev at scipy.org
>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> IPython-dev mailing list
>>>>> IPython-dev at scipy.org
>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Brian E. Granger, Ph.D.
>>>> Assistant Professor of Physics
>>>> Cal Poly State University, San Luis Obispo
>>>> bgranger at calpoly.edu
>>>> ellisonbg at gmail.com
>>>>
>>>
>>>
>>>
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Jul 19 00:28:11 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 18 Jul 2010 21:28:11 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C4333B7.5080807@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com> <4C4332C5.5050006@gmail.com>
	<4C4333B7.5080807@gmail.com>
Message-ID: <AANLkTindqaMnYh23tGOIThq0_3ZUJSqFzDh04wXTitMy@mail.gmail.com>

Justin,

On Sun, Jul 18, 2010 at 10:02 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> Forgot to mention, in my fork PBS now automatically generates a launch
> script as well if one is not specified. So, assuming you have either SGE or
> Torque/PBS working it *should* be as simple as:
>
> $ ipcluster sge -n 4
>
> or
>
> $ ipcluster pbs -n 4

Great, this is definitely how it should work.  As Satra mentions
though, let's make it so the user can specify the queue name to use at
the command line.  That is a pretty common option that many users will
want to modify.
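
Something like the following, with the flag name being hypothetical
until it's actually implemented:

$ ipcluster sge -n 4 --queue all.q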

> You can of course still pass the --sge-script/--pbs-script options but the
> user is no longer required to create a launch script themselves.

Great.

Brian

> ~Justin
>
> On 07/18/2010 12:58 PM, Justin Riley wrote:
>>
>> Turns out that torque/pbs also support job arrays. I've updated my
>> 0.10.1-sge branch with PBS job array support. Works well with torque
>> 2.4.6. Also tested SGE support against 6.2u3.
>>
>> Since the code is extremely similar between PBS/SGE I decided to update
>> the BatchEngineSet base class to handle the core job array logic. Given
>> that PBS/SGE are the only subclasses I figured this was OK. If not,
>> should be easy to break it out again.
>>
>> ~Justin
>>
>> On 07/18/2010 03:43 AM, Justin Riley wrote:
>>>
>>> Hi Satra/Brian,
>>>
>>> I modified your code to use the job array feature of SGE. I've also made
>>> it so that users don't need to specify --sge-script if they don't need a
>>> custom SGE launch script. My guess is that most users will choose not to
>>> specify --sge-script first and resort to using --sge-script when the
>>> generated launch script no longer meets their needs. More details in the
>>> git log here:
>>>
>>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>>
>>> Also, I need to test this, but I believe this code will fail if the
>>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>>> Another option besides requiring NFS is to scp the furl file to each
>>> host as is done in the ssh mode of ipcluster, however, this would
>>> require password-less ssh to be configured properly (maybe not so bad).
>>> Another option is to dump the generated furl file into the job script
>>> itself. This has the advantage of only needing SGE installed but
>>> certainly doesn't seem like the safest practice. Any thoughts on how to
>>> approach this?
>>>
>>> Let me know what you think.
>>>
>>> ~Justin
>>>
>>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>>>
>>>> Is the array jobs feature what you want?
>>>>
>>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>>
>>>> Brian
>>>>
>>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com>
>>>> wrote:
>>>>>
>>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>>
>>>>>> hi,
>>>>>>
>>>>>> i've pushed my changes to:
>>>>>>
>>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>>
>>>>>> notes:
>>>>>>
>>>>>> 1. it starts cleanly. i can connect and execute things. when i kill
>>>>>> using
>>>>>> ctrl-c, the messages appear to indicate that everything shut down
>>>>>> well.
>>>>>> however, the sge ipengine jobs are still running.
>>>>>
>>>>> What version of Python and Twisted are you running?
>>>>>
>>>>>> 2. the pbs option appears to require mpi to be present. i don't
>>>>>> think one
>>>>>> can launch multiple engines using pbs without mpi or without the
>>>>>> workaround
>>>>>> i've applied to the sge engine. basically it submits an sge job for
>>>>>> each
>>>>>> engine that i want to run. i would love to know if a single job can
>>>>>> launch
>>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>>
>>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>>> multiple engines using a single PBS job. I am not that familiar with
>>>>> SGE, can you start multiple processes without mpi and with just a
>>>>> single SGE job? If so, let's try to get that working.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Brian
>>>>>
>>>>>> cheers,
>>>>>>
>>>>>> satra
>>>>>>
>>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>>>
>>>>>>> hi justin,
>>>>>>>
>>>>>>> i hope to test it out tonight. from what fernando and i discussed,
>>>>>>> this
>>>>>>> should be relatively straightforward. once i'm done i'll push it to
>>>>>>> my fork
>>>>>>> of ipython and announce it here for others to test.
>>>>>>>
>>>>>>> cheers,
>>>>>>>
>>>>>>> satra
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin
>>>>>>> Riley<justin.t.riley at gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>>> password-less ssh already being installed and runs:
>>>>>>>>
>>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>>
>>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>>
>>>>>>>> Is Satra on the list? I have experience with SGE and could help
>>>>>>>> with the
>>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>>
>>>>>>>> ~Justin
>>>>>>>>
>>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>>>
>>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian
>>>>>>>>> Granger<ellisonbg at gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>> Thanks for the post. You should also know that it looks like
>>>>>>>>>> someone
>>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>>
>>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to
>>>>>>>>> Brian
>>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>>> code for it. I suspect we'll get this in soon.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>>
>>>>>>>>> f
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> IPython-dev mailing list
>>>>>>>> IPython-dev at scipy.org
>>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>>
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> IPython-dev mailing list
>>>>>> IPython-dev at scipy.org
>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Brian E. Granger, Ph.D.
>>>>> Assistant Professor of Physics
>>>>> Cal Poly State University, San Luis Obispo
>>>>> bgranger at calpoly.edu
>>>>> ellisonbg at gmail.com
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Jul 19 00:32:21 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 18 Jul 2010 21:32:21 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C43455F.1050508@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
Message-ID: <AANLkTinEMjzt92psngpT-sb9qLY9XxcpPbMNNhwNcVuJ@mail.gmail.com>

On Sun, Jul 18, 2010 at 11:18 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> Hi Matthieu,
>
> At least for the modifications I made, no not yet. This is exactly what
> I'm asking about in the second paragraph of my response. The new SGE/PBS
> support will work with multiple hosts assuming the ~/.ipython/security
> folder is NFS-shared on the cluster.

Without mpi being required, as I understand it.

> If that's not the case, then AFAIK we have two options:
>
> 1. scp the furl file from ~/.ipython/security to each host's
> ~/.ipython/security folder.
>
> 2. put the contents of the furl file directly inside the job script
> used to start the engines

This is not that bad of an idea.  Remember that the furl file the
engine uses is only between the engines and controller and this
connection is not that vulnerable.  My only question is who can see
the script?  I don't know PBS/SGE well enough to know where the script
ends up and with what permissions.

> The first option relies on the user having password-less configured
> properly to each node on the cluster. ipcluster would first need to scp
> the furl and then launch the engines using PBS/SGE.
>
> The second option is the easiest approach given that it only requires
> SGE to be installed, however, it's probably not the best idea to put the
> furl file in the job script itself for security reasons. I'm curious to
> get opinions on this. This would require slight code modifications.

Do you know anything about what SGE/PBS does with the script?  I
honestly think this might not be a bad idea.  But, again, maybe for
0.10.1 this is not worth the effort because things will change so
incredibly much with 0.11.

Brian

> ~Justin
>
> On 07/18/2010 01:13 PM, Matthieu Brucher wrote:
>> Hi,
>>
>> Does IPython support now sending engines to nodes that do not have the
>> same $HOME as the main instance? This is what kept me from testing
>> correctly IPython with LSF some months ago :|
>>
>> Matthieu
>>
>> 2010/7/18 Justin Riley<justin.t.riley at gmail.com>:
>>> Hi Satra/Brian,
>>>
>>> I modified your code to use the job array feature of SGE. I've also made
>>> it so that users don't need to specify --sge-script if they don't need a
>>> custom SGE launch script. My guess is that most users will choose not to
>>> specify --sge-script first and resort to using --sge-script when the
>>> generated launch script no longer meets their needs. More details in the
>>> git log here:
>>>
>>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>>
>>> Also, I need to test this, but I believe this code will fail if the
>>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>>> Another option besides requiring NFS is to scp the furl file to each
>>> host as is done in the ssh mode of ipcluster, however, this would
>>> require password-less ssh to be configured properly (maybe not so bad).
>>> Another option is to dump the generated furl file into the job script
>>> itself. This has the advantage of only needing SGE installed but
>>> certainly doesn't seem like the safest practice. Any thoughts on how to
>>> approach this?
>>>
>>> Let me know what you think.
>>>
>>> ~Justin
>>>
>>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>>> Is the array jobs feature what you want?
>>>>
>>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>>
>>>> Brian
>>>>
>>>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com> wrote:
>>>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>> hi,
>>>>>>
>>>>>> i've pushed my changes to:
>>>>>>
>>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>>
>>>>>> notes:
>>>>>>
>>>>>> 1. it starts cleanly. i can connect and execute things. when i kill using
>>>>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>>>>> however, the sge ipengine jobs are still running.
>>>>>
>>>>> What version of Python and Twisted are you running?
>>>>>
>>>>>> 2. the pbs option appears to require mpi to be present. i don't think one
>>>>>> can launch multiple engines using pbs without mpi or without the workaround
>>>>>> i've applied to the sge engine. basically it submits an sge job for each
>>>>>> engine that i want to run. i would love to know if a single job can launch
>>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>>
>>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>>> multiple engines using a single PBS job.  I am not that familiar with
>>>>> SGE, can you start multiple processes without mpi and with just a
>>>>> single SGE job?  If so, let's try to get that working.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Brian
>>>>>
>>>>>> cheers,
>>>>>>
>>>>>> satra
>>>>>>
>>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>>>
>>>>>>> hi justin,
>>>>>>>
>>>>>>> i hope to test it out tonight. from what fernando and i discussed, this
>>>>>>> should be relatively straightforward. once i'm done i'll push it to my fork
>>>>>>> of ipython and announce it here for others to test.
>>>>>>>
>>>>>>> cheers,
>>>>>>>
>>>>>>> satra
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley<justin.t.riley at gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>>> password-less ssh already being installed and runs:
>>>>>>>>
>>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>>
>>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>>
>>>>>>>> Is Satra on the list? I have experience with SGE and could help with the
>>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>>
>>>>>>>> ~Justin
>>>>>>>>
>>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>> Thanks for the post.  You should also know that it looks like someone
>>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>>
>>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to Brian
>>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>>
>>>>>>>>> f
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> IPython-dev mailing list
>>>>>>>> IPython-dev at scipy.org
>>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>>
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> IPython-dev mailing list
>>>>>> IPython-dev at scipy.org
>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Brian E. Granger, Ph.D.
>>>>> Assistant Professor of Physics
>>>>> Cal Poly State University, San Luis Obispo
>>>>> bgranger at calpoly.edu
>>>>> ellisonbg at gmail.com
>>>>>
>>>>
>>>>
>>>>
>>>
>>> _______________________________________________
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>
>>
>>
>>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Jul 19 01:06:31 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 18 Jul 2010 22:06:31 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C43455F.1050508@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
Message-ID: <AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>

Justin,

Here is a quick code review:

* I like the design of the BatchEngineSet.  This will be easy to port to
  0.11.
* I think if we are going to have default submission templates, we need to
  expose the queue name to the command line.  This shouldn't be too tough.
* Have you tested this with Python 2.6?  I saw that you mentioned that
  the engines were shutting down cleanly now.  What did you do to fix that?
  I am even running into that in 0.11 so any info you can provide would
  be helpful.
* For now, let's stick with the assumption of a shared $HOME for the furl files.
* The biggest thing is if people can test this thoroughly.  I don't have
  SGE/PBS/LSF access right now, so it is a bit difficult for me to help. I
  have a cluster coming later in the summer, but it is not here yet.  Once
  people have tested it well and are satisfied with it, let's merge it.
* If we can update the documentation about how the PBS/SGE support works
  that would be great.  The file is here:

http://github.com/jtriley/ipython/blob/8fef6d80ee4f69898351653b773029b36e118a64/docs/source/parallel/parallel_process.txt

Once these small changes have been made and everyone has tested, we
can merge it for the 0.10.1 release.
Thanks for doing this work Justin and Satra!  It is fantastic!  Just
so you all know where this is going in 0.11:

* We are going to get rid of using Twisted in ipcluster.  This means we have
  to re-write the process management stuff to use things like popen.
* We have a new configuration system in 0.11.  This allows users to maintain
  cluster profiles that are a set of configuration files for a particular
  cluster setup.  This makes it easy for a user to have multiple clusters
  configured, which they can then start by name.  The logging, security, etc.
  is also different for each cluster profile.
* It will be quite a bit of work to get everything working in 0.11, so I am
  glad we are getting good PBS/SGE support in 0.10.1.
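
To make the popen direction concrete, here is a minimal sketch of a
Twisted-free launcher using only the stdlib (the class and names are
made up for illustration, not actual IPython code):

    from subprocess import Popen

    class ProcessLauncher(object):
        # hypothetical stand-in for the Twisted-based launchers
        def __init__(self, cmd_and_args):
            self.cmd = cmd_and_args  # e.g. ['ipengine']
            self.process = None

        def start(self):
            self.process = Popen(self.cmd)

        def stop(self):
            # Popen.terminate() only exists from Python 2.6 on
            if self.process is not None and self.process.poll() is None:
                self.process.terminate()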

Cheers,

Brian

On Sun, Jul 18, 2010 at 11:18 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> Hi Matthieu,
>
> At least for the modifications I made, no not yet. This is exactly what
> I'm asking about in the second paragraph of my response. The new SGE/PBS
> support will work with multiple hosts assuming the ~/.ipython/security
> folder is NFS-shared on the cluster.
>
> If that's not the case, then AFAIK we have two options:
>
> 1. scp the furl file from ~/.ipython/security to each host's
> ~/.ipython/security folder.
>
> 2. put the contents of the furl file directly inside the job script
> used to start the engines
>
> The first option relies on the user having password-less configured
> properly to each node on the cluster. ipcluster would first need to scp
> the furl and then launch the engines using PBS/SGE.
>
> The second option is the easiest approach given that it only requires
> SGE to be installed, however, it's probably not the best idea to put the
> furl file in the job script itself for security reasons. I'm curious to
> get opinions on this. This would require slight code modifications.
>
> ~Justin
>
> On 07/18/2010 01:13 PM, Matthieu Brucher wrote:
>> Hi,
>>
>> Does IPython support now sending engines to nodes that do not have the
>> same $HOME as the main instance? This is what kept me from testing
>> correctly IPython with LSF some months ago :|
>>
>> Matthieu
>>
>> 2010/7/18 Justin Riley<justin.t.riley at gmail.com>:
>>> Hi Satra/Brian,
>>>
>>> I modified your code to use the job array feature of SGE. I've also made
>>> it so that users don't need to specify --sge-script if they don't need a
>>> custom SGE launch script. My guess is that most users will choose not to
>>> specify --sge-script first and resort to using --sge-script when the
>>> generated launch script no longer meets their needs. More details in the
>>> git log here:
>>>
>>> http://github.com/jtriley/ipython/tree/0.10.1-sge
>>>
>>> Also, I need to test this, but I believe this code will fail if the
>>> folder containing the furl file is not NFS-mounted on the SGE cluster.
>>> Another option besides requiring NFS is to scp the furl file to each
>>> host as is done in the ssh mode of ipcluster, however, this would
>>> require password-less ssh to be configured properly (maybe not so bad).
>>> Another option is to dump the generated furl file into the job script
>>> itself. This has the advantage of only needing SGE installed but
>>> certainly doesn't seem like the safest practice. Any thoughts on how to
>>> approach this?
>>>
>>> Let me know what you think.
>>>
>>> ~Justin
>>>
>>> On 07/18/2010 12:05 AM, Brian Granger wrote:
>>>> Is the array jobs feature what you want?
>>>>
>>>> http://wikis.sun.com/display/gridengine62u6/Submitting+Jobs
>>>>
>>>> Brian
>>>>
>>>> On Sat, Jul 17, 2010 at 9:00 PM, Brian Granger<ellisonbg at gmail.com> wrote:
>>>>> On Sat, Jul 17, 2010 at 6:23 AM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>> hi,
>>>>>>
>>>>>> i've pushed my changes to:
>>>>>>
>>>>>> http://github.com/satra/ipython/tree/0.10.1-sge
>>>>>>
>>>>>> notes:
>>>>>>
>>>>>> 1. it starts cleanly. i can connect and execute things. when i kill using
>>>>>> ctrl-c, the messages appear to indicate that everything shut down well.
>>>>>> however, the sge ipengine jobs are still running.
>>>>>
>>>>> What version of Python and Twisted are you running?
>>>>>
>>>>>> 2. the pbs option appears to require mpi to be present. i don't think one
>>>>>> can launch multiple engines using pbs without mpi or without the workaround
>>>>>> i've applied to the sge engine. basically it submits an sge job for each
>>>>>> engine that i want to run. i would love to know if a single job can launch
>>>>>> multiple engines on a sge/pbs cluster without mpi.
>>>>>
>>>>> I think you are right that pbs needs to use mpirun/mpiexec to start
>>>>> multiple engines using a single PBS job.  I am not that familiar with
>>>>> SGE, can you start multiple processes without mpi and with just a
>>>>> single SGE job?  If so, let's try to get that working.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Brian
>>>>>
>>>>>> cheers,
>>>>>>
>>>>>> satra
>>>>>>
>>>>>> On Thu, Jul 15, 2010 at 8:55 PM, Satrajit Ghosh<satra at mit.edu> wrote:
>>>>>>>
>>>>>>> hi justin,
>>>>>>>
>>>>>>> i hope to test it out tonight. from what fernando and i discussed, this
>>>>>>> should be relatively straightforward. once i'm done i'll push it to my fork
>>>>>>> of ipython and announce it here for others to test.
>>>>>>>
>>>>>>> cheers,
>>>>>>>
>>>>>>> satra
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Jul 15, 2010 at 4:33 PM, Justin Riley<justin.t.riley at gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> This is great news. Right now StarCluster just takes advantage of
>>>>>>>> password-less ssh already being installed and runs:
>>>>>>>>
>>>>>>>> $ ipcluster ssh --clusterfile /path/to/cluster_file.py
>>>>>>>>
>>>>>>>> This works fine for now, however, having SGE support would allow
>>>>>>>> ipcluster's load to be accounted for by the queue.
>>>>>>>>
>>>>>>>> Is Satra on the list? I have experience with SGE and could help with the
>>>>>>>> code if needed. I can also help test this functionality.
>>>>>>>>
>>>>>>>> ~Justin
>>>>>>>>
>>>>>>>> On 07/15/2010 03:34 PM, Fernando Perez wrote:
>>>>>>>>> On Thu, Jul 15, 2010 at 10:34 AM, Brian Granger<ellisonbg at gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>> Thanks for the post.  You should also know that it looks like someone
>>>>>>>>>> is going to add native SGE support to ipcluster for 0.10.1.
>>>>>>>>>
>>>>>>>>> Yes, Satra and I went over this last night in detail (thanks to Brian
>>>>>>>>> for the pointers), and he said he might actually already have some
>>>>>>>>> code for it.  I suspect we'll get this in soon.
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>>
>>>>>>>>> f
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> IPython-dev mailing list
>>>>>>>> IPython-dev at scipy.org
>>>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>>
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> IPython-dev mailing list
>>>>>> IPython-dev at scipy.org
>>>>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Brian E. Granger, Ph.D.
>>>>> Assistant Professor of Physics
>>>>> Cal Poly State University, San Luis Obispo
>>>>> bgranger at calpoly.edu
>>>>> ellisonbg at gmail.com
>>>>>
>>>>
>>>>
>>>>
>>>
>>> _______________________________________________
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
>>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>>>
>>
>>
>>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Mon Jul 19 01:16:57 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 18 Jul 2010 22:16:57 -0700
Subject: [IPython-dev] subprocess and Python 2.6
Message-ID: <AANLkTilgo6S6vimEIoBdMIcovB-aHzWKT1VYyBrSvJAK@mail.gmail.com>

Hi,

In IPython 0.11, we will be moving away from Twisted in many cases.
One of the biggest areas we use Twisted in 0.10 is for cross platform
process management.  In Python 2.5 and below, subprocess.Popen objects
did not have a kill or terminate method and os.kill didn't work on
Windows.  This is one of the big reasons we were using Twisted for
process management.  You could still kill a process on Windows, but it
took some hacks.

With Python 2.6, Popen objects have a kill and terminate method that
will work with Windows.  This will make it much easier to transition
away from using Twisted for process management.  BUT, this would mean
that Python 2.5 users are left in the dark.  If we want to keep 2.5
support in 0.11, we will need to spend some time thinking about this
issue.  Thoughts?
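
For reference, the usual cross-version workaround looks roughly like
this sketch (the ctypes branch relies on Popen's private _handle
attribute, which is exactly the kind of hack I mean):

    import os
    import signal
    import sys

    def kill_process(p):
        # terminate a subprocess.Popen instance p, old and new Python
        if hasattr(p, 'terminate'):
            # Python 2.6+: works on both POSIX and Windows
            p.terminate()
        elif sys.platform == 'win32':
            # Python 2.5 on Windows: drop to the Win32 API, poking at
            # Popen's private _handle attribute
            import ctypes
            ctypes.windll.kernel32.TerminateProcess(int(p._handle), -1)
        else:
            # Python 2.5 on POSIX: plain os.kill is enough
            os.kill(p.pid, signal.SIGTERM)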

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From gael.varoquaux at normalesup.org  Mon Jul 19 01:20:26 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Mon, 19 Jul 2010 07:20:26 +0200
Subject: [IPython-dev] subprocess and Python 2.6
In-Reply-To: <AANLkTilgo6S6vimEIoBdMIcovB-aHzWKT1VYyBrSvJAK@mail.gmail.com>
References: <AANLkTilgo6S6vimEIoBdMIcovB-aHzWKT1VYyBrSvJAK@mail.gmail.com>
Message-ID: <20100719052026.GA29336@phare.normalesup.org>

On Sun, Jul 18, 2010 at 10:16:57PM -0700, Brian Granger wrote:
> With Python 2.6, Popen objects have a kill and terminate method that
> will work with Windows.  This will make it much easier to transition
> away from using Twisted for process management.  BUT, this would mean
> that Python 2.5 users are left in the dark.  If we want to keep 2.5
> support in 0.11, we will need to spend some time thinking about this
> issue.  Thoughts?

Maybe this could come in handy:
http://github.com/ipython/ipython/tree/master/IPython/frontend/process/
in particular 'killable_process.py', and the file 'winprocess.py' it
depends on.

Gaël


From ellisonbg at gmail.com  Mon Jul 19 01:24:37 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 18 Jul 2010 22:24:37 -0700
Subject: [IPython-dev] subprocess and Python 2.6
In-Reply-To: <20100719052026.GA29336@phare.normalesup.org>
References: <AANLkTilgo6S6vimEIoBdMIcovB-aHzWKT1VYyBrSvJAK@mail.gmail.com>
	<20100719052026.GA29336@phare.normalesup.org>
Message-ID: <AANLkTikv_J0bQIvrQSTqZjRQm192-Z7ZlVfOu5GvGEyH@mail.gmail.com>

Gael,

On Sun, Jul 18, 2010 at 10:20 PM, Gael Varoquaux
<gael.varoquaux at normalesup.org> wrote:
> On Sun, Jul 18, 2010 at 10:16:57PM -0700, Brian Granger wrote:
>> With Python 2.6, Popen objects have a kill and terminate method that
>> will work with Windows.  This will make it much easier to transition
>> away from using Twisted for process management.  BUT, this would mean
>> that Python 2.5 users are left in the dark.  If we want to keep 2.5
>> support in 0.11, we will need to spend some time thinking about this
>> issue.  Thoughts?
>
> Maybe this could come in handy:
> http://github.com/ipython/ipython/tree/master/IPython/frontend/process/
> in particular 'killable_process.py', and the file 'winprocess.py' it
> depends on.

I am definitely aware of this, and it is exactly this type of thing we
will use to kill processes if we keep 2.5 support in 0.11.  It would
just be nice to be able to use Popen and be done with it.

Cheers,

Brian

> Gaël
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From benjaminrk at gmail.com  Mon Jul 19 20:33:03 2010
From: benjaminrk at gmail.com (MinRK)
Date: Mon, 19 Jul 2010 17:33:03 -0700
Subject: [IPython-dev] Load Balanced PyZMQ multikernel example
Message-ID: <AANLkTimK-zXWLT2nvnGjQbZcFHvDpdv0vdHOhuZfJndE@mail.gmail.com>

I thought this might be of some interest to the zmq related IPython folks:

pyzmq has a basic multiple client-one kernel remote process example called
'kernel'.  This morning, to explore zmq devices, I wrote a derived example
that is multiple client - multiple kernel, and load balanced across kernels,
and called it 'multikernel'. It took about an hour.

The code is trivial, and uses the zmq XREQ socket's round robin load
balancing.
o The main addition is a relay process containing two zmq devices: a queue
device for the XREQ/XREP connection, and a forwarder for PUB/SUB.
o kernel.py had to change a little, since two socket IDs are contained in
each message instead of just one, and its sockets connect instead of bind.
o frontend.py and other code didn't have to change a letter.
o Exactly zero work is done in Python in the relay process after the
creation of the ØMQ devices.
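
In case it helps to picture it, the relay reduces to roughly the
following sketch (port numbers are made up, and it assumes the device
API from my fork looks like zmq.device(device_type, in_socket,
out_socket)):

    import threading
    import zmq

    ctx = zmq.Context()

    # queue device: XREP faces the clients, XREQ round-robins
    # requests across whatever kernels connect
    clients = ctx.socket(zmq.XREP)
    clients.bind('tcp://*:5555')
    kernels = ctx.socket(zmq.XREQ)
    kernels.bind('tcp://*:5556')

    # forwarder device: SUB collects the kernels' output streams,
    # PUB fans them out to every frontend
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, '')
    sub.bind('tcp://*:5557')
    pub = ctx.socket(zmq.PUB)
    pub.bind('tcp://*:5558')

    # both devices just shovel messages inside libzmq; no Python
    # runs per message once they are started
    t = threading.Thread(target=zmq.device, args=(zmq.FORWARDER, sub, pub))
    t.daemon = True
    t.start()
    zmq.device(zmq.QUEUE, clients, kernels)  # blocks forever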

It does have some weird behavior, since even the tab-completion requests are
load balanced, so if you have two kernels, and you do:
>>>a=5
>>>a='asdf'
>>>a.<tab>
...
>>>a.<tab>
...
each press of the tab key will produce different results - which is fun to
watch, if not especially useful.

I even did a quick and dirty screencast to show 30 seconds of using it with
2 clients and 2 kernels.
http://ptsg.berkeley.edu/~minrk/multikernel.m4v

The example is pushed to my pyzmq fork on github, and depends on that fork
for its implementation of ØMQ devices, not yet merged into Brian's trunk.
http://github.com/minrk/pyzmq

ØMQ really is spiffy.
-MinRK
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100719/3cf8aee5/attachment.html>

From fperez.net at gmail.com  Mon Jul 19 21:08:49 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 19 Jul 2010 18:08:49 -0700
Subject: [IPython-dev] Load Balanced PyZMQ multikernel example
In-Reply-To: <AANLkTimK-zXWLT2nvnGjQbZcFHvDpdv0vdHOhuZfJndE@mail.gmail.com>
References: <AANLkTimK-zXWLT2nvnGjQbZcFHvDpdv0vdHOhuZfJndE@mail.gmail.com>
Message-ID: <AANLkTikmIVKMcg7Y3CIlOCPvH8ZmNFUImUC8DsenQL4_@mail.gmail.com>

On Mon, Jul 19, 2010 at 5:33 PM, MinRK <benjaminrk at gmail.com> wrote:
>
> The example is pushed to my pyzmq fork on github, and depends on that fork
> for its implementation of ØMQ devices, not yet merged into Brian's trunk.
> http://github.com/minrk/pyzmq
> ØMQ really is spiffy.

I just saw this in person, and it was very neat :)

f


From ellisonbg at gmail.com  Mon Jul 19 23:18:04 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 19 Jul 2010 20:18:04 -0700
Subject: [IPython-dev] Load Balanced PyZMQ multikernel example
In-Reply-To: <AANLkTimK-zXWLT2nvnGjQbZcFHvDpdv0vdHOhuZfJndE@mail.gmail.com>
References: <AANLkTimK-zXWLT2nvnGjQbZcFHvDpdv0vdHOhuZfJndE@mail.gmail.com>
Message-ID: <AANLkTimPLd7hn8pPL74JnzzUJMjPWdyYTt_O0dLfBBy9@mail.gmail.com>

Min,

This is a very nice demonstration of what you can do by hooking up
some 0mq sockets and devices.  I find that the possibilities are so
many that it takes a while to really let it sink in.

Cheers,

Brian

On Mon, Jul 19, 2010 at 5:33 PM, MinRK <benjaminrk at gmail.com> wrote:
> I thought this might be of some interest to the zmq related IPython folks:
> pyzmq has a basic multiple client-one kernel remote process example called
> 'kernel'.  This morning, to explore zmq devices, I wrote a derived example
> that is multiple client - multiple kernel, and load balanced across kernels,
> and called it 'multikernel'. It took about an hour.
> The code is trivial, and uses the zmq XREQ socket's round robin load
> balancing.
> o The main addition is a relay process containing two zmq devices: a queue
> device for the XREQ/XREP connection, and a forwarder for PUB/SUB.
> o kernel.py had to change a little, since two socket IDs are contained in
> each message instead of just one, and its sockets connect instead of bind.
> o frontend.py and other code didn't have to change a letter.
> o Exactly zero work is done in Python in the relay process after the
> creation of the ØMQ devices.
> It does have some weird behavior, since even the tab-completion requests are
> load balanced, so if you have two kernels, and you do:
>>>>a=5
>>>>a='asdf'
>>>>a.<tab>
> ...
>>>>a.<tab>
> ...
> each press of the tab key will produce different results - which is fun to
> watch, if not especially useful.
> I even did a quick and dirty screencast to show 30 seconds of using it with
> 2 clients and 2 kernels.
> http://ptsg.berkeley.edu/~minrk/multikernel.m4v
> The example is pushed to my pyzmq fork on github, and depends on that fork
> for its implementation of ØMQ devices, not yet merged into Brian's trunk.
> http://github.com/minrk/pyzmq
> ØMQ really is spiffy.
> -MinRK



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From justin.t.riley at gmail.com  Tue Jul 20 10:48:15 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Tue, 20 Jul 2010 10:48:15 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>	<4C42B09F.50106@gmail.com>	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
Message-ID: <4C45B72F.5020000@gmail.com>

On 07/19/2010 01:06 AM, Brian Granger wrote:
> * I like the design of the BatchEngineSet.  This will be easy to port to
>   0.11.
Excellent :D

> * I think if we are going to have default submission templates, we need to
>   expose the queue name to the command line.  This shouldn't be too tough.

Added --queue option to my 0.10.1-sge branch and tested this with SGE
62u3 and Torque 2.4.6. I don't have LSF to test but I added in the code
that *should* work with LSF.
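
(For reference, usage would look something like "ipcluster sge -n 4
--queue all.q", where the queue name here is just an example.)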

> * Have you tested this with Python 2.6?  I saw that you mentioned that
>   the engines were shutting down cleanly now.  What did you do to fix that?
>   I am even running into that in 0.11 so any info you can provide would
>   be helpful.

I've been testing the code with Python 2.6. I didn't do anything special
other than switch the BatchEngineSet to using job arrays (ie a single
qsub command instead of N qsubs). Now when I run "ipcluster sge -n 4"
the controller starts and the engines are launched and at that point the
ipcluster session is running indefinitely. If I then ctrl-c the
ipcluster session it catches the signal and calls kill() which
terminates the engines by canceling the job. Is this the same situation
you're trying to get working?
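
In case it clarifies things, the array-job approach boils down to
generating one submission script along these lines (a rough sketch,
not the exact template in my branch, and the ipengine flag is from
memory):

    # one qsub submits an array of n tasks, so a single job id covers
    # every engine and one qdel (called from kill()) cancels them all
    sge_template = '''#!/bin/sh
    #$ -V
    #$ -cwd
    #$ -t 1-%(n)i
    #$ -N ipengine
    ipengine --logfile=ipengine-$SGE_TASK_ID.log
    '''

    open('sge_engines.sh', 'w').write(sge_template % {'n': 4})
    # then: qsub sge_engines.sh ... qdel <jobid>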

> * For now, let's stick with the assumption of a shared $HOME for the furl files.
> * The biggest thing is if people can test this thoroughly.  I don't have
>   SGE/PBS/LSF access right now, so it is a bit difficult for me to help. I
>   have a cluster coming later in the summer, but it is not here yet.  Once
>   people have tested it well and are satisfied with it, let's merge it.
> * If we can update the documentation about how the PBS/SGE support works
>   that would be great.  The file is here:

That sounds fine to me. I'm testing this stuff on my workstation's local
sge/torque queues and it works fine. I'll also test this with
StarCluster and make sure it works on a real cluster. If someone else
can test using LSF on a real cluster (with shared $HOME) that'd be
great. I'll try to update the docs some time this week.

> 
> Once these small changes have been made and everyone has tested, we
> can merge it for the 0.10.1 release.
Excellent :D

> Thanks for doing this work Justin and Satra!  It is fantastic!  Just
> so you all know where this is going in 0.11:
> 
> * We are going to get rid of using Twisted in ipcluster.  This means we have
>   to re-write the process management stuff to use things like popen.
> * We have a new configuration system in 0.11.  This allows users to maintain
>   cluster profiles that are a set of configuration files for a particular
>   cluster setup.  This makes it easy for a user to have multiple clusters
>   configured, which they can then start by name.  The logging, security, etc.
>   is also different for each cluster profile.
> * It will be quite a bit of work to get everything working in 0.11, so I am
>   glad we are getting good PBS/SGE support in 0.10.1.

I'm willing to help out with the PBS/SGE/LSF portion of ipcluster in
0.11, I guess just let me know when is appropriate to start hacking.

Thanks!

~Justin


From justin.t.riley at gmail.com  Tue Jul 20 10:53:30 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Tue, 20 Jul 2010 10:53:30 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTinEMjzt92psngpT-sb9qLY9XxcpPbMNNhwNcVuJ@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>	<4C42B09F.50106@gmail.com>	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>	<4C43455F.1050508@gmail.com>
	<AANLkTinEMjzt92psngpT-sb9qLY9XxcpPbMNNhwNcVuJ@mail.gmail.com>
Message-ID: <4C45B86A.9030801@gmail.com>

On 07/19/2010 12:32 AM, Brian Granger wrote:
> Without mpi being required as I understand it.

Yes, no MPI involved with SGE/PBS/LSF support

> This is not that bad of an idea.  Remember that the furl file the
> engine uses is only between the engines and controller and this
> connection is not that vulnerable.  My only question is who can see
> the script?  I don't know PBS/SGE well enough to know where the script
> ends up and with what permissions.

So I decided to test this and found that SGE spools all job scripts into
a location in the $SGE_ROOT that is readable by everyone (at least for
my installation). Given that this is the case, it's probably best not to
store the contents of the furl file directly in the job script.

> Do you know anything about what SGE/PBS does with the script?  I
> honestly think this might not be a bad idea.  But, again, maybe for
> 0.10.1 this is not worth the effort because things will change so
> incredibly much with 0.11.

It's certainly a clever way to get around needing to transfer furl files
between hosts but I'd say not worth the effort given that it's not
completely secure.

~Justin


From ellisonbg at gmail.com  Tue Jul 20 13:02:58 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 20 Jul 2010 10:02:58 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C45B72F.5020000@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
Message-ID: <AANLkTimAcPHjmZOqF7ioen5WAwhwJgO0KMHmoUr8sHqW@mail.gmail.com>

On Tue, Jul 20, 2010 at 7:48 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> On 07/19/2010 01:06 AM, Brian Granger wrote:
>> * I like the design of the BatchEngineSet.  This will be easy to port to
>>   0.11.
> Excellent :D
>
>> * I think if we are going to have default submission templates, we need to
>>   expose the queue name to the command line.  This shouldn't be too tough.
>
> Added --queue option to my 0.10.1-sge branch and tested this with SGE
> 62u3 and Torque 2.4.6. I don't have LSF to test but I added in the code
> that *should* work with LSF.

Awesome!

>> * Have you tested this with Python 2.6?  I saw that you mentioned that
>>   the engines were shutting down cleanly now.  What did you do to fix that?
>>   I am even running into that in 0.11 so any info you can provide would
>>   be helpful.
>
> I've been testing the code with Python 2.6. I didn't do anything special
> other than switch the BatchEngineSet to using job arrays (ie a single
> qsub command instead of N qsubs). Now when I run "ipcluster sge -n 4"
> the controller starts and the engines are launched and at that point the
> ipcluster session is running indefinitely. If I then ctrl-c the
> ipcluster session it catches the signal and calls kill() which
> terminates the engines by canceling the job. Is this the same situation
> you're trying to get working?

Basically yes, but sometimes the signal is not killing the batch job.
I need to just debug this further.

>> * For now, let's stick with the assumption of a shared $HOME for the furl files.
>> * The biggest thing is if people can test this thoroughly.  I don't have
>>   SGE/PBS/LSF access right now, so it is a bit difficult for me to help. I
>>   have a cluster coming later in the summer, but it is not here yet.  Once
>>   people have tested it well and are satisfied with it, let's merge it.
>> * If we can update the documentation about how the PBS/SGE support works
>>   that would be great.  The file is here:
>
> That sounds fine to me. I'm testing this stuff on my workstation's local
> sge/torque queues and it works fine. I'll also test this with
> StarCluster and make sure it works on a real cluster. If someone else
> can test using LSF on a real cluster (with shared $HOME) that'd be
> great. I'll try to update the docs some time this week.

That would be great.  Also when this is working I would like to test it myself.

>>
>> Once these small changes have been made and everyone has tested, we
>> can merge it for the 0.10.1 release.
> Excellent :D
>
>> Thanks for doing this work Justin and Satra!  It is fantastic!  Just
>> so you all know where this is going in 0.11:
>>
>> * We are going to get rid of using Twisted in ipcluster.  This means we have
>>   to re-write the process management stuff to use things like popen.
>> * We have a new configuration system in 0.11.  This allows users to maintain
>>   cluster profiles that are a set of configuration files for a particular
>>   cluster setup.  This makes it easy for a user to have multiple clusters
>>   configured, which they can then start by name.  The logging, security, etc.
>>   is also different for each cluster profile.
>> * It will be quite a bit of work to get everything working in 0.11, so I am
>>   glad we are getting good PBS/SGE support in 0.10.1.
>
> I'm willing to help out with the PBS/SGE/LSF portion of ipcluster in
> 0.11, I guess just let me know when is appropriate to start hacking.

That is great, we will keep you posted.

Cheers,

Brian

> Thanks!
>
> ~Justin
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Tue Jul 20 13:10:05 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 20 Jul 2010 10:10:05 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C45B86A.9030801@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTinEMjzt92psngpT-sb9qLY9XxcpPbMNNhwNcVuJ@mail.gmail.com>
	<4C45B86A.9030801@gmail.com>
Message-ID: <AANLkTil_vb-nnL5v90LWGwoWHpaCsA6Fg6sdiF8yTnqP@mail.gmail.com>

On Tue, Jul 20, 2010 at 7:53 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> On 07/19/2010 12:32 AM, Brian Granger wrote:
>> Without mpi being required as I understand it.
>
> Yes, no MPI involved with SGE/PBS/LSF support
>
>> This is not that bad of an idea.  Remember that the furl file the
>> engine uses is only between the engines and controller and this
>> connection is not that vulnerable.  My only question is who can see
>> the script?  I don't know PBS/SGE well enough to know where the script
>> ends up and with what permissions.
>
> So I decided to test this and found that SGE spools all job scripts into
> a location in the $SGE_ROOT that is readable by everyone (at least for
> my installation). Given that this is the case, it's probably best not to
> store the contents of the furl file directly in the job script.

Thanks for investigating that.

>> Do you know anything about what SGE/PBS does with the script?  I
>> honestly think this might not be a bad idea.  But, again, maybe for
>> 0.10.1 this is not worth the effort because things will change so
>> incredibly much with 0.11.
>
> It's certainly a clever way to get around needing to transfer furl files
> between hosts but I'd say not worth the effort given that it's not
> completely secure.

I think your conclusion is right.

Cheers,

Brian


> ~Justin
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Tue Jul 20 15:19:44 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 20 Jul 2010 12:19:44 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C45B72F.5020000@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
Message-ID: <AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>

Satra,

If you could test this as well, that would be great.  Thanks.  Justin,
let us know when you think it is ready to go with the documentation
and testing.

Cheers,

Brian

On Tue, Jul 20, 2010 at 7:48 AM, Justin Riley <justin.t.riley at gmail.com> wrote:
> On 07/19/2010 01:06 AM, Brian Granger wrote:
>> * I like the design of the BatchEngineSet.  This will be easy to port to
>>   0.11.
> Excellent :D
>
>> * I think if we are going to have default submission templates, we need to
>>   expose the queue name to the command line.  This shouldn't be too tough.
>
> Added --queue option to my 0.10.1-sge branch and tested this with SGE
> 62u3 and Torque 2.4.6. I don't have LSF to test but I added in the code
> that *should* work with LSF.
>
>> * Have you tested this with Python 2.6?  I saw that you mentioned that
>>   the engines were shutting down cleanly now.  What did you do to fix that?
>>   I am even running into that in 0.11 so any info you can provide would
>>   be helpful.
>
> I've been testing the code with Python 2.6. I didn't do anything special
> other than switch the BatchEngineSet to using job arrays (ie a single
> qsub command instead of N qsubs). Now when I run "ipcluster sge -n 4"
> the controller starts and the engines are launched and at that point the
> ipcluster session is running indefinitely. If I then ctrl-c the
> ipcluster session it catches the signal and calls kill() which
> terminates the engines by canceling the job. Is this the same situation
> you're trying to get working?
>
>> * For now, let's stick with the assumption of a shared $HOME for the furl files.
>> * The biggest thing is if people can test this thoroughly.  I don't have
>>   SGE/PBS/LSF access right now, so it is a bit difficult for me to help. I
>>   have a cluster coming later in the summer, but it is not here yet.  Once
>>   people have tested it well and are satisfied with it, let's merge it.
>> * If we can update the documentation about how the PBS/SGE support works
>>   that would be great.  The file is here:
>
> That sounds fine to me. I'm testing this stuff on my workstation's local
> sge/torque queues and it works fine. I'll also test this with
> StarCluster and make sure it works on a real cluster. If someone else
> can test using LSF on a real cluster (with shared $HOME) that'd be
> great. I'll try to update the docs some time this week.
>
>>
>> Once these small changes have been made and everyone has tested, we
>> can merge it for the 0.10.1 release.
> Excellent :D
>
>> Thanks for doing this work Justin and Satra!  It is fantastic!  Just
>> so you all know where this is going in 0.11:
>>
>> * We are going to get rid of using Twisted in ipcluster.  This means we have
>>   to re-write the process management stuff to use things like popen.
>> * We have a new configuration system in 0.11.  This allows users to maintain
>>   cluster profiles that are a set of configuration files for a particular
>>   cluster setup.  This makes it easy for a user to have multiple clusters
>>   configured, which they can then start by name.  The logging, security, etc.
>>   is also different for each cluster profile.
>> * It will be quite a bit of work to get everything working in 0.11, so I am
>>   glad we are getting good PBS/SGE support in 0.10.1.
>
> I'm willing to help out with the PBS/SGE/LSF portion of ipcluster in
> 0.11, I guess just let me know when is appropriate to start hacking.
>
> Thanks!
>
> ~Justin
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From satra at mit.edu  Tue Jul 20 16:01:24 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Tue, 20 Jul 2010 16:01:24 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
	<AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>
Message-ID: <AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com>

hi brian,

i ran into a problem (my engines were not starting) and justin and i are
going to try and figure out what's causing it.

cheers,

satra


On Tue, Jul 20, 2010 at 3:19 PM, Brian Granger <ellisonbg at gmail.com> wrote:

> Satra,
>
> If you could test this as well, that would be great.  Thanks.  Justin,
> let us know when you think it is ready to go with the documentation
> and testing.
>
> Cheers,
>
> Brian
>
> On Tue, Jul 20, 2010 at 7:48 AM, Justin Riley <justin.t.riley at gmail.com>
> wrote:
> > On 07/19/2010 01:06 AM, Brian Granger wrote:
> >> * I like the design of the BatchEngineSet.  This will be easy to port to
> >>   0.11.
> > Excellent :D
> >
> >> * I think if we are going to have default submission templates, we need
> to
> >>   expose the queue name to the command line.  This shouldn't be too
> tough.
> >
> > Added --queue option to my 0.10.1-sge branch and tested this with SGE
> > 62u3 and Torque 2.4.6. I don't have LSF to test but I added in the code
> > that *should* work with LSF.
> >
> >> * Have you tested this with Python 2.6?  I saw that you mentioned that
> >>   the engines were shutting down cleanly now.  What did you do to fix
> that?
> >>   I am even running into that in 0.11 so any info you can provide would
> >>   be helpful.
> >
> > I've been testing the code with Python 2.6. I didn't do anything special
> > other than switch the BatchEngineSet to using job arrays (ie a single
> > qsub command instead of N qsubs). Now when I run "ipcluster sge -n 4"
> > the controller starts and the engines are launched and at that point the
> > ipcluster session is running indefinitely. If I then ctrl-c the
> > ipcluster session it catches the signal and calls kill() which
> > terminates the engines by canceling the job. Is this the same situation
> > you're trying to get working?
> >
> >> * For now, let's stick with the assumption of a shared $HOME for the
> furl files.
> >> * The biggest thing is if people can test this thoroughly.  I don't have
> >>   SGE/PBS/LSF access right now, so it is a bit difficult for me to help.
> I
> >>   have a cluster coming later in the summer, but it is not here yet.
>  Once
> >>   people have tested it well and are satisfied with it, let's merge it.
> >> * If we can update the documentation about how the PBS/SGE support works
> >>   that would be great.  The file is here:
> >
> > That sounds fine to me. I'm testing this stuff on my workstation's local
> > sge/torque queues and it works fine. I'll also test this with
> > StarCluster and make sure it works on a real cluster. If someone else
> > can test using LSF on a real cluster (with shared $HOME) that'd be
> > great. I'll try to update the docs some time this week.
> >
> >>
> >> Once these small changes have been made and everyone has tested, we
> >> can merge it for the 0.10.1 release.
> > Excellent :D
> >
> >> Thanks for doing this work Justin and Satra!  It is fantastic!  Just
> >> so you all know where this is going in 0.11:
> >>
> >> * We are going to get rid of using Twisted in ipcluster.  This means we
> have
> >>   to re-write the process management stuff to use things like popen.
> >> * We have a new configuration system in 0.11.  This allows users to
> maintain
> >>   cluster profiles that are a set of configuration files for a
> particular
> >>   cluster setup.  This makes it easy for a user to have multiple
> clusters
> >>   configured, which they can then start by name.  The logging, security,
> etc.
> >>   is also different for each cluster profile.
> >> * It will be quite a bit of work to get everything working in 0.11, so I
> am
> >>   glad we are getting good PBS/SGE support in 0.10.1.
> >
> > I'm willing to help out with the PBS/SGE/LSF portion of ipcluster in
> > 0.11, I guess just let me know when is appropriate to start hacking.
> >
> > Thanks!
> >
> > ~Justin
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100720/0032ce73/attachment.html>

From ellisonbg at gmail.com  Tue Jul 20 16:04:04 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 20 Jul 2010 13:04:04 -0700
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
	<AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>
	<AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com>
Message-ID: <AANLkTinE2gz627iSeHrZSN52ZViY_8JbXqu2tDux1gPN@mail.gmail.com>

Great!  I mean great that you and Justin are testing and debugging this.

Brian

On Tue, Jul 20, 2010 at 1:01 PM, Satrajit Ghosh <satra at mit.edu> wrote:
> hi brian,
>
> i ran into a problem (my engines were not starting) and justin and i are
> going to try and figure out what's causing it.
>
> cheers,
>
> satra
>
>
> On Tue, Jul 20, 2010 at 3:19 PM, Brian Granger <ellisonbg at gmail.com> wrote:
>>
>> Satra,
>>
>> If you could test this as well, that would be great.  Thanks.  Justin,
>> let us know when you think it is ready to go with the documentation
>> and testing.
>>
>> Cheers,
>>
>> Brian
>>
>> On Tue, Jul 20, 2010 at 7:48 AM, Justin Riley <justin.t.riley at gmail.com>
>> wrote:
>> > On 07/19/2010 01:06 AM, Brian Granger wrote:
>> >> * I like the design of the BatchEngineSet.  This will be easy to port
>> >> to
>> >>   0.11.
>> > Excellent :D
>> >
>> >> * I think if we are going to have default submission templates, we need
>> >> to
>> >>   expose the queue name to the command line.  This shouldn't be too
>> >> tough.
>> >
>> > Added --queue option to my 0.10.1-sge branch and tested this with SGE
>> > 62u3 and Torque 2.4.6. I don't have LSF to test but I added in the code
>> > that *should* work with LSF.
>> >
>> >> * Have you tested this with Python 2.6?  I saw that you mentioned that
>> >>   the engines were shutting down cleanly now.  What did you do to fix
>> >> that?
>> >>   I am even running into that in 0.11 so any info you can provide would
>> >>   be helpful.
>> >
>> > I've been testing the code with Python 2.6. I didn't do anything special
>> > other than switch the BatchEngineSet to using job arrays (ie a single
>> > qsub command instead of N qsubs). Now when I run "ipcluster sge -n 4"
>> > the controller starts and the engines are launched and at that point the
>> > ipcluster session is running indefinitely. If I then ctrl-c the
>> > ipcluster session it catches the signal and calls kill() which
>> > terminates the engines by canceling the job. Is this the same situation
>> > you're trying to get working?
>> >
>> >> * For now, let's stick with the assumption of a shared $HOME for the
>> >> furl files.
>> >> * The biggest thing is if people can test this thoroughly.  I don't
>> >> have
>> >>   SGE/PBS/LSF access right now, so it is a bit difficult for me to
>> >> help. I
>> >>   have a cluster coming later in the summer, but it is not here yet.
>> >> Once
>> >>   people have tested it well and are satisfied with it, let's merge it.
>> >> * If we can update the documentation about how the PBS/SGE support
>> >> works
>> >>   that would be great.  The file is here:
>> >
>> > That sounds fine to me. I'm testing this stuff on my workstation's local
>> > sge/torque queues and it works fine. I'll also test this with
>> > StarCluster and make sure it works on a real cluster. If someone else
>> > can test using LSF on a real cluster (with shared $HOME) that'd be
>> > great. I'll try to update the docs some time this week.
>> >
>> >>
>> >> Once these small changes have been made and everyone has tested, we
>> >> can merge it for the 0.10.1 release.
>> > Excellent :D
>> >
>> >> Thanks for doing this work Justin and Satra!  It is fantastic!  Just
>> >> so you all know where this is going in 0.11:
>> >>
>> >> * We are going to get rid of using Twisted in ipcluster.  This means we
>> >> have
>> >>   to re-write the process management stuff to use things like popen.
>> >> * We have a new configuration system in 0.11.  This allows users to
>> >> maintain
>> >>   cluster profiles that are a set of configuration files for a
>> >> particular
>> >>   cluster setup.  This makes it easy for a user to have multiple
>> >> clusters
>> >>   configured, which they can then start by name.  The logging,
>> >> security, etc.
>> >>   is also different for each cluster profile.
>> >> * It will be quite a bit of work to get everything working in 0.11, so
>> >> I am
>> >>   glad we are getting good PBS/SGE support in 0.10.1.
>> >
>> > I'm willing to help out with the PBS/SGE/LSF portion of ipcluster in
>> > 0.11, I guess just let me know when is appropriate to start hacking.
>> >
>> > Thanks!
>> >
>> > ~Justin
>> >
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From pivanov314 at gmail.com  Tue Jul 20 20:40:56 2010
From: pivanov314 at gmail.com (Paul Ivanov)
Date: Tue, 20 Jul 2010 17:40:56 -0700
Subject: [IPython-dev] Paul Ivanov: Did you get any feedback from GH
 when I merged?
In-Reply-To: <AANLkTime9dtq7a9aFyMnbnVtIeQwlsH09J-DP44moogc@mail.gmail.com>
References: <AANLkTime9dtq7a9aFyMnbnVtIeQwlsH09J-DP44moogc@mail.gmail.com>
Message-ID: <4C464218.8000701@gmail.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hi Fernando,

yes, I did get an email from GH, and left another comment there, though
I'm not sure you got notified of that, see it here:
<http://github.com/ivanov/ipython/commit/8d86d579df0e76154fe9e9b7eddc8525cad3c343>

One unfortunate thing about commenting on the commits, is that the
comments don't seem to carry over across forks. The commits you merged
into trunk (ipython/ipython) don't have any reference to the comments we
made about them in my fork (ivanov/ipython).

best,
Paul


Fernando Perez, on 2010-07-15 14:42, wrote:
> Hi Paul,
> 
> I just applied your pull request into trunk, thanks a lot for the bug
> fix.  I used the GH interface to do it, and I'm curious whether it
> generated any feedback to you when that happened or not.
> 
> Cheers,
> 
> f



From benjaminrk at gmail.com  Wed Jul 21 05:35:30 2010
From: benjaminrk at gmail.com (MinRK)
Date: Wed, 21 Jul 2010 02:35:30 -0700
Subject: [IPython-dev] Named Engines
Message-ID: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com>

I now have my MonitoredQueue object on git, which is the three socket Queue
device that can be the core of the lightweight ME and Task models (depending
on whether it is XREP on both sides for ME, or XREP/XREQ for load balanced
tasks).
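
Roughly, the wiring would look like the following sketch (names follow the
pyzmq devices API as I expect it to land; the exact class location and the
addresses here are made up):

import zmq
from zmq.devices import MonitoredQueue  # assumed name/location of the device

# XREP on both sides for the multiengine case; every message passing
# through is mirrored on a PUB socket so the controller can watch it.
mq = MonitoredQueue(zmq.XREP, zmq.XREP, zmq.PUB)
mq.bind_in('tcp://127.0.0.1:5570')    # client-facing side
mq.bind_out('tcp://127.0.0.1:5571')   # engine-facing side
mq.bind_mon('tcp://127.0.0.1:5572')   # monitor stream for the controller
mq.start()                            # run the device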

The biggest difference in terms of design between Python in the Controller
picking the destination and this new device is that the client code actually
needs to know the XREQ identity of each engine, and all the switching logic
lives in the client code (if not the user exposed code) instead of the
controller - if the client says 'do x in [1,2,3]' they actually issue 3
sends, unlike before, when they issued 1 and the controller issued 3. This
will increase traffic between the client and the controller, but
dramatically reduce work done in the controller.
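
Concretely, the addressing works like this sketch (the address and the
engine identities are invented for the example):

import zmq

ctx = zmq.Context()
router = ctx.socket(zmq.XREP)        # engine-facing socket
router.bind('tcp://127.0.0.1:5555')  # made-up address

# 'do x in [1,2,3]' becomes one send per target; the first frame
# selects the connected XREQ peer with that identity.
for ident in ['engine-1', 'engine-2', 'engine-3']:
    router.send_multipart([ident, 'do x'])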

Since the engines' XREP IDs are known at the client level, and these are
roughly any string, it brings up the question: should we have strictly
integer ID engines, or should we allow engines to have names, like
'franklin1', corresponding directly to their XREP identity?

I think people might like using names, but I imagine it could get confusing.
 It would be unambiguous in code, since we use integer IDs and XREP
identities must be strings, so if someone keys on a string it must be the
XREP id, and if they key on a number it must be by engine ID.

-MinRK

From fperez.net at gmail.com  Wed Jul 21 05:45:11 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 21 Jul 2010 02:45:11 -0700
Subject: [IPython-dev] Named Engines
In-Reply-To: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com>
References: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com>
Message-ID: <AANLkTimAcln9YlIR8xq77nMUx3LRABEr1xOhYsVyI6ll@mail.gmail.com>

On Wed, Jul 21, 2010 at 2:35 AM, MinRK <benjaminrk at gmail.com> wrote:
> I now have my MonitoredQueue object on git, which is the three socket Queue
> device that can be the core of the lightweight ME and Task models (depending
> on whether it is XREP on both sides for ME, or XREP/XREQ for load balanced
> tasks).

Great!

> The biggest difference in terms of design between Python in the Controller
> picking the destination and this new device is that the client code actually
> needs to know the XREQ identity of each engine, and all the switching logic
> lives in the client code (if not the user exposed code) instead of the
> controller - if the client says 'do x in [1,2,3]' they actually issue 3
> sends, unlike before, when they issued 1 and the controller issued 3. This
> will increase traffic between the client and the controller, but
> dramatically reduce work done in the controller.

As best I can see, that's actually a net win, as long as we hide it
from user-visible APIs: the simpler the controller code, the less our
chances of it bottlenecking (since the controller has also other
things to do).  I really like the idea of having most of the logic
effectively embedded in the 0mq device.

> Since the engines' XREP IDs are known at the client level, and these are
> roughly any string, it brings up the question: should we have strictly
> integer ID engines, or should we allow engines to have names, like
> 'franklin1', corresponding directly to their XREP identity?
> I think people might like using names, but I imagine it could get confusing.
>  It would be unambiguous in code, since we use integer IDs and XREP
> identities must be strings, so if someone keys on a string it must be the
> XREP id, and if they key on a number it must be by engine ID.

I suspect having named IDs could be useful, as an optional feature.
People may have naming conventions for their hosts and we could expose
a way to auto-collect the hostname as default ID, and then assign
-0...-N suffixes to each engine in a multicore host (host-0, host-1,
...).
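
e.g. something like this sketch of the naming scheme (the hostname and the
core-count helper are just for illustration):

import socket
from multiprocessing import cpu_count  # stdlib in Python 2.6+

host = socket.gethostname()  # e.g. 'franklin1'
default_ids = ['%s-%i' % (host, i) for i in range(cpu_count())]
# -> ['franklin1-0', 'franklin1-1', ...]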

As long as internally these don't cause problems, I don't see why not have them.

Cheers,

f


From fperez.net at gmail.com  Wed Jul 21 06:32:47 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 21 Jul 2010 03:32:47 -0700
Subject: [IPython-dev] Interactive input block handling
Message-ID: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com>

Hi folks,

here:

http://github.com/fperez/ipython/commit/37182fcaaa893488c4655cd37049bb71b1f9152a

is the code that Evan can start using now (and so can Omar as we
refactor the terminal code) for properly handling incremental
interactive input.  I ran out of time to add the block-splitting
capabilities for Gerardo, but that should be easy tomorrow.

It would be a good habit to get into for all new code, to attempt as
best as possible 100% test coverage:

(blockbreaker)amirbar[core]> nosetests -vvs --with-coverage
--cover-package=IPython.core.blockbreaker blockbreaker.py
test_dedent (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_indent (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_indent2 (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_interactive_block_ready
(IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_interactive_block_ready2
(IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_interactive_block_ready3
(IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_interactive_block_ready4
(IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_push (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_push2 (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
Test input with leading whitespace ... ok
test_reset (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
test_source (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
IPython.core.blockbreaker.test_spaces ... ok
IPython.core.blockbreaker.test_remove_comments ... ok
IPython.core.blockbreaker.test_get_input_encoding ... ok

Name                        Stmts   Exec  Cover   Missing
---------------------------------------------------------
IPython.core.blockbreaker     171    171   100%
----------------------------------------------------------------------
Ran 15 tests in 0.022s

OK


###

In this case it actually helped me a lot, because in going from ~85%
to 100% I actually found that the untested codepaths were indeed
buggy.  As the saying goes, 'untested code is broken code'...

Cheers,

f


From matthieu.brucher at gmail.com  Wed Jul 21 07:08:03 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 21 Jul 2010 13:08:03 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C43506C.8070907@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>
	<4C43506C.8070907@gmail.com>
Message-ID: <AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>

2010/7/18 Justin Riley <justin.t.riley at gmail.com>:
> Matthieu,
>
> I agree that password-less ssh is a common configuration on HPC clusters and
> it would be useful to have the option of using SSH to copy the furl file to
> each host before launching engines with SGE/PBS/LSF. I'll see about hacking
> this in when I get some more time.
>
> BTW, I just added experimental support for LSF to my fork. I can't test the
> code given that I don't have access to a LSF system but in theory it should
> work (again using job arrays) provided the ~/.ipython/security folder is
> shared.

I've tried just a few minutes ago, but I got this:

/JOB_SPOOL_DIR/1279710223.17444: line 8: /tmp/tmphM4RKl: Permission denied

It seems that you may have to set execute permissions before running the file.

-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From matthieu.brucher at gmail.com  Wed Jul 21 07:23:15 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 21 Jul 2010 13:23:15 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>
	<4C43506C.8070907@gmail.com>
	<AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>
Message-ID: <AANLkTikRoPQMaeP6_anHXGf5Q-8kEsaIQ0nD3jDpKm_T@mail.gmail.com>

2010/7/21 Matthieu Brucher <matthieu.brucher at gmail.com>:
> 2010/7/18 Justin Riley <justin.t.riley at gmail.com>:
>> Matthieu,
>>
>> I agree that password-less ssh is a common configuration on HPC clusters and
>> it would be useful to have the option of using SSH to copy the furl file to
>> each host before launching engines with SGE/PBS/LSF. I'll see about hacking
>> this in when I get some more time.
>>
>> BTW, I just added experimental support for LSF to my fork. I can't test the
>> code given that I don't have access to a LSF system but in theory it should
>> work (again using job arrays) provided the ~/.ipython/security folder is
>> shared.
>
> I've tried just a few minutes ago, but I got this:
>
> /JOB_SPOOL_DIR/1279710223.17444: line 8: /tmp/tmphM4RKl: Permission denied
>
> It seems that you may have to set execute permissions before running the file.

I've added an os.chmod right after the file was created, but I still
have this error:

line 8: /tmp/tmpDEQR0U: Text file busy

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From matthieu.brucher at gmail.com  Wed Jul 21 09:32:27 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 21 Jul 2010 15:32:27 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTikRoPQMaeP6_anHXGf5Q-8kEsaIQ0nD3jDpKm_T@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>
	<4C43506C.8070907@gmail.com>
	<AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>
	<AANLkTikRoPQMaeP6_anHXGf5Q-8kEsaIQ0nD3jDpKm_T@mail.gmail.com>
Message-ID: <AANLkTim51k7WuIj5LnnEa9Z8CbXjk4XOp7CaLlU_lxL_@mail.gmail.com>

>
> I've added an os.chmod right after the file was created, but I still
> have this error:
>
> line 8: /tmp/tmpDEQR0U: Text file busy
>
> Matthieu

OK, I've managed to get LSF working. I had to modify this in
ipcluster.py at line 335 as well as import stat:

        self._temp_file.file.flush()
+        os.chmod(self._temp_file.name,
+                 stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
+        self._temp_file.file.close()
        d = getProcessOutput(self.submit_command,
                             [self.template_file],
                             env=os.environ)

Unfortunately, I only get one engine instead of four:
2010-07-21 15:27:20+0200 [-] Log opened.
2010-07-21 15:27:20+0200 [-] Process ['ipcontroller',
'--logfile=/users/brucher/.ipython/log/ipcontroller', '-x', '-y'] has
started with pid=204117
2010-07-21 15:27:20+0200 [-] Waiting for controller to finish starting...
2010-07-21 15:27:22+0200 [-] Controller started
2010-07-21 15:27:22+0200 [-] starting 4 engines
2010-07-21 15:27:22+0200 [-] using default ipengine LSF script:
        #BSUB -J ipengine[1-4]
        eid=$(($LSB_JOBINDEX - 1))
        ipengine --logfile=ipengine${eid}.log

2010-07-21 15:27:22+0200 [-] Job started with job id: '17448'


And then in IPython:

In [1]: from IPython.kernel import client

In [9]: mec = client.MultiEngineClient()

In [10]: mec.get_ids()
Out[10]: [0]

In [13]: mec.activate()

In [14]: %px print "test"
Parallel execution on engines: all
Out[14]:
<Results List>
[0] In [1]: print "test"
[0] Out[1]: test

In [15]: mec.kill()
Out[15]: [None]

Two issues:
- only one engine visible instead of four
- when I kill the mec, the job is finished, but ipcluster still runs.

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From epatters at enthought.com  Wed Jul 21 10:55:57 2010
From: epatters at enthought.com (Evan Patterson)
Date: Wed, 21 Jul 2010 09:55:57 -0500
Subject: [IPython-dev] Interactive input block handling
In-Reply-To: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com>
References: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com>
Message-ID: <AANLkTinL_zKE_SLcV7OXlPCCTiAQVKXaqcPuWGVYgwPg@mail.gmail.com>

Great! I've merged your 'blockbreaker' branch into my 'qtfrontend' branch
and will integrate BlockBreaker today.

Evan

On Wed, Jul 21, 2010 at 5:32 AM, Fernando Perez <fperez.net at gmail.com>wrote:

> Hi folks,
>
> here:
>
>
> http://github.com/fperez/ipython/commit/37182fcaaa893488c4655cd37049bb71b1f9152a
>
> is the code that Evan can start using now (and so can Omar as we
> refactor the terminal code) for properly handling incremental
> interactive input.  I ran out of time to add the block-splitting
> capabilities for Gerardo, but that should be easy tomorrow.
>
> It would be a good habit to get into for all new code, to attempt as
> best as possible 100% test coverage:
>
> (blockbreaker)amirbar[core]> nosetests -vvs --with-coverage
> --cover-package=IPython.core.blockbreaker blockbreaker.py
> test_dedent (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_indent (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_indent2 (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_interactive_block_ready
> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_interactive_block_ready2
> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_interactive_block_ready3
> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_interactive_block_ready4
> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_push (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_push2 (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> Test input with leading whitespace ... ok
> test_reset (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_source (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> IPython.core.blockbreaker.test_spaces ... ok
> IPython.core.blockbreaker.test_remove_comments ... ok
> IPython.core.blockbreaker.test_get_input_encoding ... ok
>
> Name                        Stmts   Exec  Cover   Missing
> ---------------------------------------------------------
> IPython.core.blockbreaker     171    171   100%
> ----------------------------------------------------------------------
> Ran 15 tests in 0.022s
>
> OK
>
>
> ###
>
> In this case it actually helped me a lot, because in going from ~85%
> to 100% I actually found that the untested codepaths were indeed
> buggy.  As the saying goes, 'untested code is broken code'...
>
> Cheers,
>
> f

From epatters at enthought.com  Wed Jul 21 11:10:31 2010
From: epatters at enthought.com (Evan Patterson)
Date: Wed, 21 Jul 2010 10:10:31 -0500
Subject: [IPython-dev] Interactive input block handling
In-Reply-To: <AANLkTinL_zKE_SLcV7OXlPCCTiAQVKXaqcPuWGVYgwPg@mail.gmail.com>
References: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com>
	<AANLkTinL_zKE_SLcV7OXlPCCTiAQVKXaqcPuWGVYgwPg@mail.gmail.com>
Message-ID: <AANLkTimo2nP4fCBFjHOMyyUf30IF2t1FmvdkEnBHH7W0@mail.gmail.com>

Done. That was very easy and it works well. Thanks, Fernando.

Evan

On Wed, Jul 21, 2010 at 9:55 AM, Evan Patterson <epatters at enthought.com>wrote:

> Great! I've merged your 'blockbreaker' branch into my 'qtfrontend' branch
> and will integrate BlockBreaker today.
>
> Evan
>
>
> On Wed, Jul 21, 2010 at 5:32 AM, Fernando Perez <fperez.net at gmail.com>wrote:
>
>> Hi folks,
>>
>> here:
>>
>>
>> http://github.com/fperez/ipython/commit/37182fcaaa893488c4655cd37049bb71b1f9152a
>>
>> is the code that Evan can start using now (and so can Omar as we
>> refactor the terminal code) for properly handling incremental
>> interactive input.  I ran out of time to add the block-splitting
>> capabilities for Gerardo, but that should be easy tomorrow.
>>
>> It would be a good habit to get into for all new code, to attempt as
>> best as possible 100% test coverage:
>>
>> (blockbreaker)amirbar[core]> nosetests -vvs --with-coverage
>> --cover-package=IPython.core.blockbreaker blockbreaker.py
>> test_dedent (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_indent (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_indent2 (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_interactive_block_ready
>> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_interactive_block_ready2
>> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_interactive_block_ready3
>> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_interactive_block_ready4
>> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_push (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_push2 (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> Test input with leading whitespace ... ok
>> test_reset (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> test_source (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
>> IPython.core.blockbreaker.test_spaces ... ok
>> IPython.core.blockbreaker.test_remove_comments ... ok
>> IPython.core.blockbreaker.test_get_input_encoding ... ok
>>
>> Name                        Stmts   Exec  Cover   Missing
>> ---------------------------------------------------------
>> IPython.core.blockbreaker     171    171   100%
>> ----------------------------------------------------------------------
>> Ran 15 tests in 0.022s
>>
>> OK
>>
>>
>> ###
>>
>> In this case it actually helped me a lot, because in going from ~85%
>> to 100% I actually found that the untested codepaths were indeed
>> buggy.  As the saying goes, 'untested code is broken code'...
>>
>> Cheers,
>>
>> f

From justin.t.riley at gmail.com  Wed Jul 21 11:38:33 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Wed, 21 Jul 2010 11:38:33 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTim51k7WuIj5LnnEa9Z8CbXjk4XOp7CaLlU_lxL_@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>	<4C42B09F.50106@gmail.com>	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>	<4C43455F.1050508@gmail.com>	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>	<4C43506C.8070907@gmail.com>	<AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>	<AANLkTikRoPQMaeP6_anHXGf5Q-8kEsaIQ0nD3jDpKm_T@mail.gmail.com>
	<AANLkTim51k7WuIj5LnnEa9Z8CbXjk4XOp7CaLlU_lxL_@mail.gmail.com>
Message-ID: <4C471479.70403@gmail.com>

On 07/21/2010 09:32 AM, Matthieu Brucher wrote:
> Two issues:
> - only one engine visible instead of four
> - when I kill the mec, the job is finished, but ipcluster still runs

Thanks for testing this with LSF.

1. Is your ~/.ipython/security folder shared on the cluster? Currently
the code assumes that this is the case.

2. By killing the mec do you mean ctrl-c'ing the ipcluster process? If
not, could you try that?

Also, with the changes you made you'll need to pass delete=False to
NamedTemporaryFile, otherwise I believe the file is deleted when it's
closed. I'll try to merge your ipcluster changes later today.
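
Something like this sketch, I mean (the script contents and mode bits are
only illustrative):

import os
import stat
import tempfile

# delete=False (new in Python 2.6) keeps the file on disk after close(),
# so the batch system can still read and execute it.
script = tempfile.NamedTemporaryFile(delete=False)
script.write('#!/bin/sh\necho engine would start here\n')
script.close()
os.chmod(script.name, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)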

~Justin


From matthieu.brucher at gmail.com  Wed Jul 21 11:49:47 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 21 Jul 2010 17:49:47 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C471479.70403@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>
	<4C43506C.8070907@gmail.com>
	<AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>
	<AANLkTikRoPQMaeP6_anHXGf5Q-8kEsaIQ0nD3jDpKm_T@mail.gmail.com>
	<AANLkTim51k7WuIj5LnnEa9Z8CbXjk4XOp7CaLlU_lxL_@mail.gmail.com>
	<4C471479.70403@gmail.com>
Message-ID: <AANLkTikJLW-KrCJ9OAzkpxeQB9KBu119YVNAOpcqqP57@mail.gmail.com>

2010/7/21 Justin Riley <justin.t.riley at gmail.com>:
> On 07/21/2010 09:32 AM, Matthieu Brucher wrote:
>> Two issues:
>> - only one engine visible instead of four
>> - when I kill the mec, the job is finished, but ipcluster still runs
>
> Thanks for testing this with LSF
>
> 1. Is your ~/.ipython/security folder shared on the cluster? Currently
> the code assumes that this is the case.

Yes, we have a test machine with the same $HOME.

> 2. By killing the mec do you mean ctrl-c'ing the ipcluster process? If
> not, could you try that?

By killing, I meant mec.kill(). If I then kill ipcluster with Ctrl+C, I get:
2010-07-21 15:34:49+0200 [-] Stopping LSF cluster
2010-07-21 15:34:49+0200 [-]
2010-07-21 15:34:49+0200 [-] Process ['ipcontroller',
'--logfile=/users/brucher/.ipython/log/ipcontroller', '-x', '-y'] has
stopped with 0
2010-07-21 15:34:51+0200 [-] Main loop terminated.
2010-07-21 15:34:51+0200 [-] Unhandled error in Deferred:
2010-07-21 15:34:51+0200 [-] Unhandled Error
        Traceback (most recent call last):
        Failure: twisted.internet.error.ProcessTerminated: A process
has ended with a probable error condition: process ended with exit
code 255.

If I kill it directly with Ctrl+C, it doesn't display an error.

> Also, with the changes you made you'll need to pass delete=False to
> NamedTemporaryFile, otherwise I believe the file is deleted when it's
> closed. I'll try to merge your ipcluster changes later today.

I can create a repository on GitHub with the 3 changes. I don't think
the file is deleted: I'm calling close() on the inner file object, and
that shouldn't propagate the closing to its parent, should it?

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From ellisonbg at gmail.com  Wed Jul 21 12:46:46 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 21 Jul 2010 09:46:46 -0700
Subject: [IPython-dev] pyzmq has moved to the zeromq organization on github
Message-ID: <AANLkTilxHknuxKgKxYHGkiFUqEsrVkFHsfbSBrcQNiBa@mail.gmail.com>

Hi,

In order to enable more community involvement in the development of
PyZMQ (the Python bindings to 0MQ), we have moved the main pyzmq from
ellisonbg/pyzmq to zeromq/pyzmq.  Here is the new repo:

http://github.com/zeromq/pyzmq

Please use this for all pyzmq development in the future.

Cheers,

Brian


From ellisonbg at gmail.com  Wed Jul 21 13:00:57 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 21 Jul 2010 10:00:57 -0700
Subject: [IPython-dev] Interactive input block handling
In-Reply-To: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com>
References: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com>
Message-ID: <AANLkTilOTsZ8RYNdPFqz6znVU4l8Jd08E91BUv0kXLsO@mail.gmail.com>

Fernando,

Fantastic!  Great work.  Ping me when you wake up and we can
strategize the next steps and do a code review of this.

Cheers,

Brian

On Wed, Jul 21, 2010 at 3:32 AM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hi folks,
>
> here:
>
> http://github.com/fperez/ipython/commit/37182fcaaa893488c4655cd37049bb71b1f9152a
>
> is the code that Evan can start using now (and so can Omar as we
> refactor the terminal code) for properly handling incremental
> interactive input.  I ran out of time to add the block-splitting
> capabilities for Gerardo, but that should be easy tomorrow.
>
> It would be a good habit to get into for all new code, to attempt as
> best as possible 100% test coverage:
>
> (blockbreaker)amirbar[core]> nosetests -vvs --with-coverage
> --cover-package=IPython.core.blockbreaker blockbreaker.py
> test_dedent (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_indent (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_indent2 (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_interactive_block_ready
> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_interactive_block_ready2
> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_interactive_block_ready3
> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_interactive_block_ready4
> (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_push (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_push2 (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> Test input with leading whitespace ... ok
> test_reset (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> test_source (IPython.core.blockbreaker.BlockBreakerTestCase) ... ok
> IPython.core.blockbreaker.test_spaces ... ok
> IPython.core.blockbreaker.test_remove_comments ... ok
> IPython.core.blockbreaker.test_get_input_encoding ... ok
>
> Name                        Stmts   Exec  Cover   Missing
> ---------------------------------------------------------
> IPython.core.blockbreaker     171    171   100%
> ----------------------------------------------------------------------
> Ran 15 tests in 0.022s
>
> OK
>
>
> ###
>
> In this case it actually helped me a lot, because in going from ~85%
> to 100% I actually found that the untested codepaths were indeed
> buggy.  As the saying goes, 'untested code is broken code'...
>
> Cheers,
>
> f



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From ellisonbg at gmail.com  Wed Jul 21 13:07:23 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 21 Jul 2010 10:07:23 -0700
Subject: [IPython-dev] Named Engines
In-Reply-To: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com>
References: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com>
Message-ID: <AANLkTimJ0taipQVzdHwI2IHUlTaTK0RZg1nKkRUU1Prx@mail.gmail.com>

On Wed, Jul 21, 2010 at 2:35 AM, MinRK <benjaminrk at gmail.com> wrote:
> I now have my MonitoredQueue object on git, which is the three socket Queue
> device that can be the core of the lightweight ME and Task models (depending
> on whether it is XREP on both sides for ME, or XREP/XREQ for load balanced
> tasks).

This sounds very cool.  What repo is this in?

> The biggest difference in terms of design between Python in the Controller
> picking the destination and this new device is that the client code actually
> needs to know the XREQ identity of each engine, and all the switching logic
> lives in the client code (if not the user exposed code) instead of the
> controller - if the client says 'do x in [1,2,3]' they actually issue 3
> sends, unlike before, when they issued 1 and the controller issued 3. This
> will increase traffic between the client and the controller, but
> dramatically reduce work done in the controller.

But because 0MQ has such low latency it might be a win.  Each request
the controller gets will be smaller and easier to handle.  The idea of
allowing clients to specify the names is something I have thought
about before.  One question though:  what does 0MQ do when you try to
send on an XREP socket to an identity that doesn't exist?  Will the
client be able to know that the engine wasn't there?  That seems like
an important failure case.

> Since the engines' XREP IDs are known at the client level, and these are
> roughly any string, it brings up the question: should we have strictly
> integer ID engines, or should we allow engines to have names, like
> 'franklin1', corresponding directly to their XREP identity?

The idea of having names is pretty cool.  Maybe default to numbers,
but allow named prefixes as well as raw names?

> I think people might like using names, but I imagine it could get confusing.
>  It would be unambiguous in code, since we use integer IDs and XREP
> identities must be strings, so if someone keys on a string it must be the
> XREP id, and if they key on a number it must be by engine ID.

Right.  I will have a look at the code.

Cheers,

Brian

> -MinRK
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From matthieu.brucher at gmail.com  Wed Jul 21 13:12:28 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 21 Jul 2010 19:12:28 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTikJLW-KrCJ9OAzkpxeQB9KBu119YVNAOpcqqP57@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>
	<4C43506C.8070907@gmail.com>
	<AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>
	<AANLkTikRoPQMaeP6_anHXGf5Q-8kEsaIQ0nD3jDpKm_T@mail.gmail.com>
	<AANLkTim51k7WuIj5LnnEa9Z8CbXjk4XOp7CaLlU_lxL_@mail.gmail.com>
	<4C471479.70403@gmail.com>
	<AANLkTikJLW-KrCJ9OAzkpxeQB9KBu119YVNAOpcqqP57@mail.gmail.com>
Message-ID: <AANLkTinV9mM9p2IoGyaDxPKtJKIEu92WwHEIv27v-DTe@mail.gmail.com>

> I can create a repository on github with the 3 changes. I don't think
> the file is deleted, I'm calling close() on the inner file object, it
> shouldn't propagate the closing to its parent, should it?

Available on http://github.com/mbrucher/ipython (Finally, I had to
fight with git to understand how it works with remote branches)

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From benjaminrk at gmail.com  Wed Jul 21 13:51:11 2010
From: benjaminrk at gmail.com (MinRK)
Date: Wed, 21 Jul 2010 10:51:11 -0700
Subject: [IPython-dev] Named Engines
In-Reply-To: <AANLkTimJ0taipQVzdHwI2IHUlTaTK0RZg1nKkRUU1Prx@mail.gmail.com>
References: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com> 
	<AANLkTimJ0taipQVzdHwI2IHUlTaTK0RZg1nKkRUU1Prx@mail.gmail.com>
Message-ID: <AANLkTimVVEAY50QHkReYdikaV2fB0TEUnOZzRqz7meJ1@mail.gmail.com>

On Wed, Jul 21, 2010 at 10:07, Brian Granger <ellisonbg at gmail.com> wrote:

> On Wed, Jul 21, 2010 at 2:35 AM, MinRK <benjaminrk at gmail.com> wrote:
> > I now have my MonitoredQueue object on git, which is the three socket
> Queue
> > device that can be the core of the lightweight ME and Task models
> (depending
> > on whether it is XREP on both sides for ME, or XREP/XREQ for load
> balanced
> > tasks).
>
> This sounds very cool.  What repo is this in?
>

all on my pyzmq master: github.com/minrk/pyzmq

The Devices are specified in the growing _zmq.pyx. Should I move them?  I
don't have enough Cython experience (this is my first nontrivial Cython
work) to know how to correctly move it to a new file still with all the
right zmq imports.


> > The biggest difference in terms of design between Python in the
> Controller
> > picking the destination and this new device is that the client code
> actually
> > needs to know the XREQ identity of each engine, and all the switching
> logic
> > lives in the client code (if not the user exposed code) instead of the
> > controller - if the client says 'do x in [1,2,3]' they actually issue 3
> > sends, unlike before, when they issued 1 and the controller issued 3.
> This
> > will increase traffic between the client and the controller, but
> > dramatically reduce work done in the controller.
>
> But because 0MQ has such low latency it might be a win.  Each request
> the controller gets will be smaller and easier to handle.  The idea of
> allowing clients to specify the names is something I have thought
> about before.  One question though:  what does 0MQ do when you try to
> send on an XREP socket to an identity that doesn't exist?  Will the
> client be able to know that the engine wasn't there?  That seems like
> an important failure case.
>

As far as I can tell, the XREP socket sends messages out to XREQ ids, and
trusts that such an XREQ exists. If no such id is connected, the message is
silently lost to the aether.  However, with the controller monitoring the
queue, it knows when you have sent a message to an engine that is not
_registered_, and can tell you about it. This should be sufficient, since
presumably all the connected XREQ sockets should be registered engines.

To test:
a = ctx.socket(zmq.XREP)
a.bind('tcp://127.0.0.1:1234')
b = ctx.socket(zmq.XREQ)
b.setsockopt(zmq.IDENTITY, 'hello')
# 'hello' is not connected yet, so this message is silently dropped:
a.send_multipart(['hello', 'mr. b'])
time.sleep(.2)
b.connect('tcp://127.0.0.1:1234')
a.send_multipart(['hello', 'again'])
b.recv()
# 'again' -- only the second send arrives



>
> > Since the engines' XREP IDs are known at the client level, and these are
> > roughly any string, it brings up the question: should we have strictly
> > integer ID engines, or should we allow engines to have names, like
> > 'franklin1', corresponding directly to their XREP identity?
>
> The idea of having names is pretty cool.  Maybe default to numbers,
> but allow named prefixes as well as raw names?
>

This part is purely up to our user-facing side of the client code. It
certainly doesn't affect how anything works inside. It's just a question of
what a valid `targets' argument (or key for a dictionary interface) would be
in the multiengine.


>
> > I think people might like using names, but I imagine it could get
> confusing.
> >  It would be unambiguous in code, since we use integer IDs and XREP
> > identities must be strings, so if someone keys on a string it must be the
> > XREP id, and if they key on a number it must be by engine ID.
>
> Right.  I will have a look at the code.
>
> Cheers,
>
> Brian
>
> > -MinRK
> >
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>

From ellisonbg at gmail.com  Wed Jul 21 15:17:33 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Wed, 21 Jul 2010 12:17:33 -0700
Subject: [IPython-dev] Named Engines
In-Reply-To: <AANLkTimVVEAY50QHkReYdikaV2fB0TEUnOZzRqz7meJ1@mail.gmail.com>
References: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com>
	<AANLkTimJ0taipQVzdHwI2IHUlTaTK0RZg1nKkRUU1Prx@mail.gmail.com>
	<AANLkTimVVEAY50QHkReYdikaV2fB0TEUnOZzRqz7meJ1@mail.gmail.com>
Message-ID: <AANLkTikHtqFs5ybHJdZvtPkRLmsf1Aj-fztD0WMVdjHp@mail.gmail.com>

On Wed, Jul 21, 2010 at 10:51 AM, MinRK <benjaminrk at gmail.com> wrote:
>
>
> On Wed, Jul 21, 2010 at 10:07, Brian Granger <ellisonbg at gmail.com> wrote:
>>
>> On Wed, Jul 21, 2010 at 2:35 AM, MinRK <benjaminrk at gmail.com> wrote:
>> > I now have my MonitoredQueue object on git, which is the three socket
>> > Queue
>> > device that can be the core of the lightweight ME and Task models
>> > (depending
>> > on whether it is XREP on both sides for ME, or XREP/XREQ for load
>> > balanced
>> > tasks).
>>
>> This sounds very cool.  What repo is this in?
>
> all on my pyzmq master: github.com/minrk/pyzmq
> The Devices are specified in the growing _zmq.pyx. Should I move them?  I
> don't have enough Cython experience (this is my first nontrivial Cython
> work) to know how to correctly move it to a new file still with all the
> right zmq imports.

Yes, I think we do want to move them.  We should look at how mpi4py
splits things up.  My guess is that we want to have the declaration of
the 0MQ C API in a single file that other files can use.  Then have
files for the individual things like Socket, Message, Poller, Device,
etc.  That will make the code base much easier to work with.  But
splitting things like this in Cython is a bit suble.  I have done it
before, but I will ask Lisandro Dalcin the best way to approach it.
For now, I would keep going with the single file approach (unless you
want to learn about how to split things using pxi and pxd files).

>>
>> > The biggest difference in terms of design between Python in the
>> > Controller
>> > picking the destination and this new device is that the client code
>> > actually
>> > needs to know the XREQ identity of each engine, and all the switching
>> > logic
>> > lives in the client code (if not the user exposed code) instead of the
>> > controller - if the client says 'do x in [1,2,3]' they actually issue 3
>> > sends, unlike before, when they issued 1 and the controller issued 3.
>> > This
>> > will increase traffic between the client and the controller, but
>> > dramatically reduce work done in the controller.
>>
>> But because 0MQ has such low latency it might be a win.  Each request
>> the controller gets will be smaller and easier to handle.  The idea of
>> allowing clients to specify the names is something I have thought
>> about before.  One question though:  what does 0MQ do when you try to
>> send on an XREP socket to an identity that doesn't exist?  Will the
>> client be able to know that the engine wasn't there?  That seems like
>> an important failure case.
>
> As far as I can tell, the XREP socket sends messages out to XREQ ids, and
> trusts that such an XREQ exists. If no such id is connected, the message is
> silently lost to the aether.  However, with the controller monitoring the
> queue, it knows when you have sent a message to an engine that is not
> _registered_, and can tell you about it. This should be sufficient, since
> presumably all the connected XREQ sockets should be registered engines.

I guess I don't quite see how the monitoring is used yet, but it does
worry me that the message is silently lost.  So you think 0MQ should
raise on that?  I have a feeling that the identities were designed to be
a private API thing in 0MQ and we are challenging that.

> To test:
> a = ctx.socket(zmq.XREP)
> a.bind('tcp://127.0.0.1:1234')
> b = ctx.socket(zmq.XREQ)
> b.setsockopt(zmq.IDENTITY, 'hello')
> a.send_multipart(['hello', 'mr. b'])
> time.sleep(.2)
> b.connect('tcp://127.0.0.1:1234')
> a.send_multipart(['hello', 'again'])
> b.recv()
> # 'again'
>
>>
>> > Since the engines' XREP IDs are known at the client level, and these are
>> > roughly any string, it brings up the question: should we have strictly
>> > integer ID engines, or should we allow engines to have names, like
>> > 'franklin1', corresponding directly to their XREP identity?
>>
>> The idea of having names is pretty cool.  Maybe default to numbers,
>> but allow named prefixes as well as raw names?
>
>
> This part is purely up to our user-facing side of the client code. It
> certainly doesn't affect how anything works inside. It's just a question of
> what a valid `targets' argument (or key for a dictionary interface) would be
> in the multiengine.

Any string or list of strings?

>>
>> > I think people might like using names, but I imagine it could get
>> > confusing.
>> >  It would be unambiguous in code, since we use integer IDs and XREP
>> > identities must be strings, so if someone keys on a string it must be
>> > the
>> > XREP id, and if they key on a number it must be by engine ID.
>>
>> Right.  I will have a look at the code.
>>
>> Cheers,
>>
>> Brian
>>
>> > -MinRK
>> >
>> >
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com


From fperez.net at gmail.com  Wed Jul 21 15:28:26 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 21 Jul 2010 12:28:26 -0700
Subject: [IPython-dev] Interactive input block handling
In-Reply-To: <AANLkTimo2nP4fCBFjHOMyyUf30IF2t1FmvdkEnBHH7W0@mail.gmail.com>
References: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com> 
	<AANLkTinL_zKE_SLcV7OXlPCCTiAQVKXaqcPuWGVYgwPg@mail.gmail.com> 
	<AANLkTimo2nP4fCBFjHOMyyUf30IF2t1FmvdkEnBHH7W0@mail.gmail.com>
Message-ID: <AANLkTimvKDUzrE7tLpJRhQoGyu-dGXaKJbrN2ssZDsl9@mail.gmail.com>

Hey Evan,

On Wed, Jul 21, 2010 at 8:10 AM, Evan Patterson <epatters at enthought.com> wrote:
> Done. That was very easy and it works well. Thanks, Fernando.
>

glad to hear it went so smoothly on your side, good job!

f


From erik.tollerud at gmail.com  Wed Jul 21 15:29:46 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Wed, 21 Jul 2010 12:29:46 -0700
Subject: [IPython-dev] Practices for .10 or .11 profile formats
Message-ID: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com>

Hello all,

I've been meaning to add some ipython profiles to a project I'm
working on ("recommended interactive environments" as it were), but
I'm a little unclear as to what the best way is to do this.  I
personally much prefer the .11 style profiles, but of course that's
still in development, so I can't put it as a profile for general use
until there's been some kind of release.  So is .11 in some form
likely to be out soon?  Or a 10.1 that might include support for
.11-style profiles?  Or is it best to include both .11 and .10
profiles?

-- 
Erik Tollerud


From benjaminrk at gmail.com  Wed Jul 21 16:58:06 2010
From: benjaminrk at gmail.com (MinRK)
Date: Wed, 21 Jul 2010 13:58:06 -0700
Subject: [IPython-dev] Named Engines
In-Reply-To: <AANLkTikHtqFs5ybHJdZvtPkRLmsf1Aj-fztD0WMVdjHp@mail.gmail.com>
References: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com> 
	<AANLkTimJ0taipQVzdHwI2IHUlTaTK0RZg1nKkRUU1Prx@mail.gmail.com> 
	<AANLkTimVVEAY50QHkReYdikaV2fB0TEUnOZzRqz7meJ1@mail.gmail.com> 
	<AANLkTikHtqFs5ybHJdZvtPkRLmsf1Aj-fztD0WMVdjHp@mail.gmail.com>
Message-ID: <AANLkTimtYS0lyNLDUbq9h3e4vE4ug6MCo0MipVNhK6Ss@mail.gmail.com>

On Wed, Jul 21, 2010 at 12:17, Brian Granger <ellisonbg at gmail.com> wrote:

> On Wed, Jul 21, 2010 at 10:51 AM, MinRK <benjaminrk at gmail.com> wrote:
> >
> >
> > On Wed, Jul 21, 2010 at 10:07, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >>
> >> On Wed, Jul 21, 2010 at 2:35 AM, MinRK <benjaminrk at gmail.com> wrote:
> >> > I now have my MonitoredQueue object on git, which is the three socket
> >> > Queue
> >> > device that can be the core of the lightweight ME and Task models
> >> > (depending
> >> > on whether it is XREP on both sides for ME, or XREP/XREQ for load
> >> > balanced
> >> > tasks).
> >>
> >> This sounds very cool.  What repo is this in?
> >
> > all on my pyzmq master: github.com/minrk/pyzmq
> > The Devices are specified in the growing _zmq.pyx. Should I move them?  I
> > don't have enough Cython experience (this is my first nontrivial Cython
> > work) to know how to correctly move it to a new file still with all the
> > right zmq imports.
>
> Yes, I think we do want to move them.  We should look at how mpi4py
> splits things up.  My guess is that we want to have the declaration of
> the 0MQ C API in a single file that other files can use.  Then have
> files for the individual things like Socket, Message, Poller, Device,
> etc.  That will make the code base much easier to work with.  But
> splitting things like this in Cython is a bit suble.  I have done it
> before, but I will ask Lisandro Dalcin the best way to approach it.
> For now, I would keep going with the single file approach (unless you
> want to learn about how to split things using pxi and pxd files).
>

I'd be happy to help split it up if you find out the best way to go about
it.


>
> >>
> >> > The biggest difference in terms of design between Python in the
> >> > Controller
> >> > picking the destination and this new device is that the client code
> >> > actually
> >> > needs to know the XREQ identity of each engine, and all the switching
> >> > logic
> >> > lives in the client code (if not the user exposed code) instead of the
> >> > controller - if the client says 'do x in [1,2,3]' they actually issue
> 3
> >> > sends, unlike before, when they issued 1 and the controller issued 3.
> >> > This
> >> > will increase traffic between the client and the controller, but
> >> > dramatically reduce work done in the controller.
> >>
> >> But because 0MQ has such low latency it might be a win.  Each request
> >> the controller gets will be smaller and easier to handle.  The idea of
> >> allowing clients to specify the names is something I have thought
> >> about before.  One question though:  what does 0MQ do when you try to
> >> send on an XREP socket to an identity that doesn't exist?  Will the
> >> client be able to know that the engine wasn't there?  That seems like
> >> an important failure case.
> >
> > As far as I can tell, the XREP socket sends messages out to XREQ ids, and
> > trusts that such an XREQ exists. If no such id is connected, the message
> is
> > silently lost to the aether.  However, with the controller monitoring the
> > queue, it knows when you have sent a message to an engine that is not
> > _registered_, and can tell you about it. This should be sufficient, since
> > presumably all the connected XREQ sockets should be registered engines.
>
> I guess I don't quite see how the monitoring is used yet, but it does
> worry me that the message is silently lost.  So you think 0MQ should
> raise on that?  I have a feeling that the identities were designed to be
> a private API thing in 0MQ and we are challenging that.
>

I don't know what 0MQ should do, but I imagine the silent loss is based on
thinking of XREP messages as always being replies. That way, a reply sent to
a nonexistent key is interpreted as being a reply to a message whose
requester is gone, and 0MQ presumes that nobody else would be interested in
the result, and drops it. As far as 0MQ is concerned, you wouldn't want the
following to happen:
A makes a request of B
A dies
B replies to A
B crashes because A didn't receive the reply

nothing went wrong in B, so it shouldn't crash.

For us, the XREP messages are not replies on the engine side (they are
replies on the client side). We are using the identities to treat the
engine-facing XREP as a keyed multiplexer. The result is that if you send a
message to nobody, nobody receives it. It's not that nobody knows about it -
the controller can tell, because it sees every message as it goes by, and
knows what the valid keys are, but the send itself will not fail.  In the
client code, you can easily check if a key is valid with the controller, so
I don't see a problem here.

The only source of a problem I can think of comes from the fact that the
client has a copy of the registration table, and presumably doesn't want to
update it every time.  In that case, an engine could go away between the
client's updates of the registration, and some requests would vanish.  Note
that the controller still does receive them, and the client can check with
the controller on the status of requests that are taking too long.  The
controller can use a PUB socket to notify of engines coming/going, which
would mean the window for the client to not be up to date would be very
small, and it wouldn't even be a big problem if it happened, since the client
would be notified that its request won't be received.
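
A rough sketch of that notification channel (the address and the message
format are invented here):

import time
import zmq

ctx = zmq.Context()

# controller side: publish registration changes
pub = ctx.socket(zmq.PUB)
pub.bind('tcp://127.0.0.1:5559')

# client side: subscribe before anything is published
sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:5559')
sub.setsockopt(zmq.SUBSCRIBE, '')  # receive all notifications
time.sleep(0.2)                    # let the connection settle

pub.send('unregister engine-3')    # made-up wire format
print sub.recv()                   # client patches its local table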


>
> > To test:
> > a = ctx.socket(zmq.XREP)
> > a.bind('tcp://127.0.0.1:1234')
> > b = ctx.socket(zmq.XREQ)
> > b.setsockopt(zmq.IDENTITY, 'hello')
> > a.send_multipart(['hello', 'mr. b'])
> > time.sleep(.2)
> > b.connect('tcp://127.0.0.1:1234')
> > a.send_multipart(['hello', 'again'])
> > b.recv()
> > # 'again'
> >
> >>
> >> > Since the engines' XREP IDs are known at the client level, and these
> are
> >> > roughly any string, it brings up the question: should we have strictly
> >> > integer ID engines, or should we allow engines to have names, like
> >> > 'franklin1', corresponding directly to their XREP identity?
> >>
> >> The idea of having names is pretty cool.  Maybe default to numbers,
> >> but allow named prefixes as well as raw names?
> >
> >
> > This part is purely up to our user-facing side of the client code. It
> > certainly doesn't affect how anything works inside. It's just a question
> of
> > what a valid `targets' argument (or key for a dictionary interface) would
> be
> > in the multiengine.
>
> Any string or list of strings?
>

Well, for now targets is any int or list of ints. I don't see any reason
that you couldn't use a string anywhere an int would be used. It's perfectly
unambiguous, since the two key sets are of a different type.

you could do:
execute('a=5', targets=[0,1,'odin', 'franklin474'])
and the _build_targets method does:

target_idents = []
for t in targets:
    if isinstance(t, int):
        # integer targets are engine IDs, mapped to XREP identities
        ident = identities[t]
    elif isinstance(t, str) and t in identities.itervalues():
        # string targets must already be a registered XREP identity
        ident = t
    else:
        raise KeyError("bad target: %s" % t)
    target_idents.append(ident)
return target_idents



> >>
> >> > I think people might like using names, but I imagine it could get
> >> > confusing.
> >> >  It would be unambiguous in code, since we use integer IDs and XREP
> >> > identities must be strings, so if someone keys on a string it must be
> >> > the
> >> > XREP id, and if they key on a number it must be by engine ID.
> >>
> >> Right.  I will have a look at the code.
> >>
> >> Cheers,
> >>
> >> Brian
> >>
> >> > -MinRK
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Brian E. Granger, Ph.D.
> >> Assistant Professor of Physics
> >> Cal Poly State University, San Luis Obispo
> >> bgranger at calpoly.edu
> >> ellisonbg at gmail.com
> >
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>

From fperez.net at gmail.com  Wed Jul 21 20:11:20 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 21 Jul 2010 17:11:20 -0700
Subject: [IPython-dev] Gerardo: merge questions?
Message-ID: <AANLkTinInC88iskjPihxPIn3TZ35zPPA-DN9BzExqTiV@mail.gmail.com>

Hi Gerardo,

sorry I missed your question on IRC and when I saw it you were gone.
What problems have you had regarding the integration with Evan's code?
 I hope we can help out here...

Cheers,

f


From fperez.net at gmail.com  Wed Jul 21 20:33:29 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 21 Jul 2010 17:33:29 -0700
Subject: [IPython-dev] Interactive input block handling
In-Reply-To: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com>
References: <AANLkTim01goFSEzHWkvsaNizTkZxgsMP4EcShs-iBiCl@mail.gmail.com>
Message-ID: <AANLkTilVvWg-W4Qsn03JDIoWLU-kntRGeHQ0meQs6ocg@mail.gmail.com>

On Wed, Jul 21, 2010 at 3:32 AM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hi folks,
>
> here:
>
> http://github.com/fperez/ipython/commit/37182fcaaa893488c4655cd37049bb71b1f9152a
>
> is the code that Evan can start using now (and so can Omar as we
> refactor the terminal code) for properly handling incremental
> interactive input.  I ran out of time to add the block-splitting
> capabilities for Gerardo, but that should be easy tomorrow.

I've updated this code now with a bunch of fixes from this morning's
code review with Brian and other discussions:

http://github.com/fperez/ipython/tree/blockbreaker

This should let you guys use it more cleanly; I'm starting the last
missing step, the full block breaking but I need to leave soon.  I'll
ping if I can finish it before heading out.

Cheers,

f


From satra at mit.edu  Wed Jul 21 21:05:25 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Wed, 21 Jul 2010 21:05:25 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTinE2gz627iSeHrZSN52ZViY_8JbXqu2tDux1gPN@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
	<AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>
	<AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com>
	<AANLkTinE2gz627iSeHrZSN52ZViY_8JbXqu2tDux1gPN@mail.gmail.com>
Message-ID: <AANLkTine__xHN3Gt9TWL36DpTsPa8LWpLG8nWMMoVOKn@mail.gmail.com>

hi justin.

i really don't know what the difference is, but i clean installed everything
and it works beautifully on SGE.

cheers,

satra


On Tue, Jul 20, 2010 at 4:04 PM, Brian Granger <ellisonbg at gmail.com> wrote:

> Great!  I mean great that you and Justin are testing and debugging this.
>
> Brian
>
> On Tue, Jul 20, 2010 at 1:01 PM, Satrajit Ghosh <satra at mit.edu> wrote:
> > hi brian,
> >
> > i ran into a problem (my engines were not starting) and justin and i are
> > going to try and figure out what's causing it.
> >
> > cheers,
> >
> > satra
> >
> >
> > On Tue, Jul 20, 2010 at 3:19 PM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >>
> >> Satra,
> >>
> >> If you could test this as well, that would be great.  Thanks.  Justin,
> >> let us know when you think it is ready to go with the documentation
> >> and testing.
> >>
> >> Cheers,
> >>
> >> Brian
> >>
> >> On Tue, Jul 20, 2010 at 7:48 AM, Justin Riley <justin.t.riley at gmail.com
> >
> >> wrote:
> >> > On 07/19/2010 01:06 AM, Brian Granger wrote:
> >> >> * I like the design of the BatchEngineSet.  This will be easy to port
> >> >> to
> >> >>   0.11.
> >> > Excellent :D
> >> >
> >> >> * I think if we are going to have default submission templates, we
> need
> >> >> to
> >> >>   expose the queue name to the command line.  This shouldn't be too
> >> >> tough.
> >> >
> >> > Added --queue option to my 0.10.1-sge branch and tested this with SGE
> >> > 62u3 and Torque 2.4.6. I don't have LSF to test but I added in the
> code
> >> > that *should* work with LSF.
> >> >
> >> >> * Have you tested this with Python 2.6.  I saw that you mentioned
> that
> >> >>   the engines were shutting down cleanly now.  What did you do to fix
> >> >> that?
> >> >>   I am even running into that in 0.11 so any info you can provide
> would
> >> >>   be helpful.
> >> >
> >> > I've been testing the code with Python 2.6. I didn't do anything
> special
> >> > other than switch the BatchEngineSet to using job arrays (ie a single
> >> > qsub command instead of N qsubs). Now when I run "ipcluster sge -n 4"
> >> > the controller starts and the engines are launched and at that point
> the
> >> > ipcluster session is running indefinitely. If I then ctrl-c the
> >> > ipcluster session it catches the signal and calls kill() which
> >> > terminates the engines by canceling the job. Is this the same
> situation
> >> > you're trying to get working?
> >> >
> >> >> * For now, let's stick with the assumption of a shared $HOME for the
> >> >> furl files.
> >> >> * The biggest thing is if people can test this thoroughly.  I don't
> >> >> have
> >> >>   SGE/PBS/LSF access right now, so it is a bit difficult for me to
> >> >> help. I
> >> >>   have a cluster coming later in the summer, but it is not here yet.
> >> >>  Once
> >> >>   people have tested it well and are satisfied with it, let's merge
> it.
> >> >> * If we can update the documentation about how the PBS/SGE support
> >> >> works
> >> >>   that would be great.  The file is here:
> >> >
> >> > That sounds fine to me. I'm testing this stuff on my workstation's
> local
> >> > sge/torque queues and it works fine. I'll also test this with
> >> > StarCluster and make sure it works on a real cluster. If someone else
> >> > can test using LSF on a real cluster (with shared $HOME) that'd be
> >> > great. I'll try to update the docs some time this week.
> >> >
> >> >>
> >> >> Once these small changes have been made and everyone has tested, we
> >> >> can merge it for the 0.10.1 release.
> >> > Excellent :D
> >> >
> >> >> Thanks for doing this work Justin and Satra!  It is fantastic!  Just
> >> >> so you all know where this is going in 0.11:
> >> >>
> >> >> * We are going to get rid of using Twisted in ipcluster.  This means
> we
> >> >> have
> >> >>   to re-write the process management stuff to use things like popen.
> >> >> * We have a new configuration system in 0.11.  This allows users to
> >> >> maintain
> >> >>   cluster profiles that are a set of configuration files for a
> >> >> particular
> >> >>   cluster setup.  This makes it easy for a user to have multiple
> >> >> clusters
> >> >>   configured, which they can then start by name.  The logging,
> >> >> security, etc.
> >> >>   is also different for each cluster profile.
> >> >> * It will be quite a bit of work to get everything working in 0.11,
> so
> >> >> I am
> >> >>   glad we are getting good PBS/SGE support in 0.10.1.
> >> >
> >> > I'm willing to help out with the PBS/SGE/LSF portion of ipcluster in
> >> > 0.11, I guess just let me know when is appropriate to start hacking.
> >> >
> >> > Thanks!
> >> >
> >> > ~Justin
> >> >
> >>
> >>
> >>
> >> --
> >> Brian E. Granger, Ph.D.
> >> Assistant Professor of Physics
> >> Cal Poly State University, San Luis Obispo
> >> bgranger at calpoly.edu
> >> ellisonbg at gmail.com
> >
> >
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>

From justin.t.riley at gmail.com  Thu Jul 22 10:40:59 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Thu, 22 Jul 2010 10:40:59 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTikAS-guPz0bvdfjzTxbz8QuMwb-GKa4Jop1tH42@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>	<4C3F1FE1.4040000@gmail.com>	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>	<4C3F709C.5080505@gmail.com>	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>	<4C42B09F.50106@gmail.com>	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>	<4C43455F.1050508@gmail.com>	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>	<4C43506C.8070907@gmail.com>	<AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>	<AANLkTimLbgxxhGWDEnKL0wnT-wjQwTFJf9zxInOm61TR@mail.gmail.com>
	<AANLkTikAS-guPz0bvdfjzTxbz8QuMwb-GKa4Jop1tH42@mail.gmail.com>
Message-ID: <4C48587B.1000306@gmail.com>

Hi Matthieu,

I forgot to cc the list on my last email. Details below. Thanks for
testing this.

~Justin

On 07/22/2010 10:20 AM, Matthieu Brucher wrote:
> 2010/7/22 Justin Riley <justin.t.riley at gmail.com>:
>> On Wed, Jul 21, 2010 at 7:08 AM, Matthieu Brucher
>> <matthieu.brucher at gmail.com> wrote:
>>> I've tried just a few minutes ago, but I got this:
>>>
>>> /JOB_SPOOL_DIR/1279710223.17444: line 8: /tmp/tmphM4RKl: Permission denied
>>>
>>> It seems that you may have to add some authorizations before excuting the file.
>>
>> You're right about needing to set permissions for LSF and I merged
>> your fork code. I found the following detail about using job scripts
>> with LSF (retrieved from
>> http://www.cisl.ucar.edu/docs/bluefire/lsf.html):
>>
>> --------
>> LSF command bsub allows submission of an executable, e.g.
>>
>>  $ bsub -i infile -o outfile -e errfile a.out
>>
>> LSF can also be used to submit a job script. However, then the LSF
>> command bsub requires redirection of its command file, specifically
>>
>>  $ bsub < myscript
>> --------
>>
>> I can't test this but I suspect LSF is ignoring our job script
>> variables given that we're submitting the job as an executable. If
>> this is the case then we'll either need to use redirection of the
>> script or just pass the -J option to the bsub command we're using now.
>>
>> With that said, would you mind running the following test?
>>
>> 1. download this job script: http://gist.github.com/485618
>> 2. run ipcontroller manually in a separate shell
>> 3. chmod +x the script and then bsub it the "executable" way (bsub -i
>> infile -o outfile -e errfile testscript.sh)
>> 4. check the output/error files for errors. you can also use the job's
>> id to tail it's output (bpeek -J jobid -f) and look at its history
>> info (bhist jobid)
>> 5. run ipython, obtain a mec and see how many ids you get (gist should
>> request 4)
>> 6. re-run 1-5 and use the "redirection" way (bsub < testscript.sh)
>>
>> If the redirection approach works and it's possible to do this with
>> twisted we could avoid the chmod'ing (although it doesn't hurt) and it
>> would "fit the mold" so far. Otherwise we'll need to chmod +x AND pass
>> in the -J option to the bsub command. This isn't too bad but would
>> require remolding.
>>
>> Thanks for testing,
>>
>> ~Justin
>>
> 
> Hi Justin,
> 
> I will try this, but I think you're right. I've just checked my own
> launch framework, and I submit the script by doing bsub <
> complex_temporaray_file. It should be done this way because we can set
> a lot of things in the script as arguments for LSF that we can't if we
> use bsub -i ...
> 
> I'll keep you posted.
> 
> Matthieu



From matthieu.brucher at gmail.com  Thu Jul 22 11:26:27 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 22 Jul 2010 17:26:27 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <4C48587B.1000306@gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTingP5flb2-AbD01CgxwQiw3SWwOhunUk84euZZv@mail.gmail.com>
	<4C43506C.8070907@gmail.com>
	<AANLkTilbzzonibo1BKeM-JHfloj8R6biomFD2lsUEtDM@mail.gmail.com>
	<AANLkTimLbgxxhGWDEnKL0wnT-wjQwTFJf9zxInOm61TR@mail.gmail.com>
	<AANLkTikAS-guPz0bvdfjzTxbz8QuMwb-GKa4Jop1tH42@mail.gmail.com>
	<4C48587B.1000306@gmail.com>
Message-ID: <AANLkTikcYCmMjKwA2oxSuMrSWsV-ThuIwSlx-AT3WSqe@mail.gmail.com>

>>> With that said, would you mind running the following test?
>>>
>>> 1. download this job script: http://gist.github.com/485618
>>> 2. run ipcontroller manually in a separate shell
>>> 3. chmod +x the script and then bsub it the "executable" way (bsub -i
>>> infile -o outfile -e errfile testscript.sh)
>>> 4. check the output/error files for errors. you can also use the job's
>>> id to tail it's output (bpeek -J jobid -f) and look at its history
>>> info (bhist jobid)
>>> 5. run ipython, obtain a mec and see how many ids you get (gist should
>>> request 4)
>>> 6. re-run 1-5 and use the "redirection" way (bsub < testscript.sh)
>>>
>>> If the redirection approach works and it's possible to do this with
>>> twisted we could avoid the chmod'ing (although it doesn't hurt) and it
>>> would "fit the mold" so far. Otherwise we'll need to chmod +x AND pass
>>> in the -J option to the bsub command. This isn't too bad but would
>>> require remolding.

bsub testscript.sh didn't manage to launch a single engine. It more or
less launched a bash and waited.
bsub < testscript.sh worked very well, as I suspected it would ;)

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From wackywendell at gmail.com  Thu Jul 22 13:04:50 2010
From: wackywendell at gmail.com (Wendell Smith)
Date: Thu, 22 Jul 2010 13:04:50 -0400
Subject: [IPython-dev] [IPython-User] How to build ipython documentation
In-Reply-To: <AANLkTilol1CGLZPBkwTDeSJnenkiLitxJR4vmTciZCk5@mail.gmail.com>
References: <AANLkTikwpiu6r6nAhijPjHVgj8E4WByax6r3WKpy7liC@mail.gmail.com>
	<AANLkTilol1CGLZPBkwTDeSJnenkiLitxJR4vmTciZCk5@mail.gmail.com>
Message-ID: <4C487A32.4000200@gmail.com>

An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100722/51bd4056/attachment.html>

From fperez.net at gmail.com  Thu Jul 22 16:28:39 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 22 Jul 2010 13:28:39 -0700
Subject: [IPython-dev] [IPython-User] How to build ipython documentation
In-Reply-To: <4C487A32.4000200@gmail.com>
References: <AANLkTikwpiu6r6nAhijPjHVgj8E4WByax6r3WKpy7liC@mail.gmail.com> 
	<AANLkTilol1CGLZPBkwTDeSJnenkiLitxJR4vmTciZCk5@mail.gmail.com> 
	<4C487A32.4000200@gmail.com>
Message-ID: <AANLkTilSv0riao-WGLABhmw5OksP10wg-zZyqHrnf3p8@mail.gmail.com>

Hey Wendell,

On Thu, Jul 22, 2010 at 10:04 AM, Wendell Smith <wackywendell at gmail.com> wrote:
> This looks like an error involved with the old IPython setup; I did
> previously have 0.10 on here (I think through easy_install), but I have no
> idea why sphinx is searching for IPython.ColorANSI. Any ideas?

make sure you run 'make clean' first, it may be finding old
auto-generated files from when you had 0.10 around...

Cheers,

f


From fperez.net at gmail.com  Thu Jul 22 16:31:53 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 22 Jul 2010 13:31:53 -0700
Subject: [IPython-dev] First Performance Result
In-Reply-To: <AANLkTim1IBZFxorqe0AY19iOxxtd1hbwxUxGV77yLufM@mail.gmail.com>
References: <AANLkTim1IBZFxorqe0AY19iOxxtd1hbwxUxGV77yLufM@mail.gmail.com>
Message-ID: <AANLkTinF2zDZpjRh0fa0NmHv2MdxRmuIzhuAYt83iuTZ@mail.gmail.com>

Hey Min,

On Thu, Jul 22, 2010 at 2:22 AM, MinRK <benjaminrk at gmail.com> wrote:
>
> It would appear that json is contributing 50% to the overall run time.

any chance you could re-test using pickle instead of cPickle?  I want
to see if the difference vs json is just from the faster C
implementation of cPickle.  If that's the case, we could later
consider implementing a cython-based version of the json dump/load
code.
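
Something like this quick (untested) sketch is what I have in mind --
the message dict is invented:

    import json, pickle, cPickle
    from timeit import Timer

    msg = {'header': {'msg_id': 0, 'session': 'some-uuid'},
           'parent_header': {},
           'msg_type': 'execute_request',
           'content': {'code': 'a = 1'}}

    for name, mod in [('json', json), ('pickle', pickle), ('cPickle', cPickle)]:
        # round-trip cost of serializing one message, 10000 times
        print name, Timer(lambda: mod.loads(mod.dumps(msg))).timeit(10000)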

Cheers,

f


From fperez.net at gmail.com  Thu Jul 22 16:51:14 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 22 Jul 2010 13:51:14 -0700
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com>
Message-ID: <AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com>

Hi Erik,

On Wed, Jul 21, 2010 at 12:29 PM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
> Hello all,
>
> I've been meaning to add some ipython profiles to a project I'm
> working on ("recommended interactive environments" as it were), but
> I'm a little unclear as to what is the best way is to do this. ?I
> personally much prefer the .11 style profiles, but of course that's
> still in development, so I can't put it as a profile for general use
> until there's been some kind of release. ?So is .11 in some form
> likely to be out soon? ?Or a 10.1 that might include support for
> .11-style profiles? ?Or is it best to include both .11 and .10
> profiles?

I'm afraid right now, shipping both is the only viable solution.  I'd
recommend you simply put all non-trivial code in a common file, and
refer to that file from both profiles.  That way the duplication is
minimal and trivial to maintain.

Because we're doing so much deep work on 0.11, I think it will be a
couple of months before it's ready for release.  While 0.10.1 is
almost ready, we're just waiting for:

- the work Justin, Matthieu, Satra et al are doing on batch systems
- for one of us to take a couple of hours to merge in Tom's git pull
requests with a lot of nice cleanup.
- Jonathan March's bugfix
- A fix for a small wx bug I think I introduced.
- anything I'm missing?

Basically I think 0.10.1 is very close to getting out, while 0.11 is
certainly a few months away (hopefully no more than 3).

Cheers,

f


From tomspur at fedoraproject.org  Thu Jul 22 16:56:22 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Thu, 22 Jul 2010 22:56:22 +0200
Subject: [IPython-dev] [IPython-User] How to build ipython documentation
In-Reply-To: <AANLkTilSv0riao-WGLABhmw5OksP10wg-zZyqHrnf3p8@mail.gmail.com>
References: <AANLkTikwpiu6r6nAhijPjHVgj8E4WByax6r3WKpy7liC@mail.gmail.com>
	<AANLkTilol1CGLZPBkwTDeSJnenkiLitxJR4vmTciZCk5@mail.gmail.com>
	<4C487A32.4000200@gmail.com>
	<AANLkTilSv0riao-WGLABhmw5OksP10wg-zZyqHrnf3p8@mail.gmail.com>
Message-ID: <20100722225622.307707ff@earth>

On Thu, 22 Jul 2010 13:28:39 -0700,
Fernando Perez <fperez.net at gmail.com> wrote:

> Hey Wendell,
> 
> On Thu, Jul 22, 2010 at 10:04 AM, Wendell Smith
> <wackywendell at gmail.com> wrote:
> > This looks like an error involved with the old IPython setup; I did
> > previously have 0.10 on here (I think through easy_install), but I
> > have no idea why sphinx is searching for IPython.ColorANSI. Any
> > ideas?
> 
> make sure you run 'make clean' first, it may be finding old
> auto-generated files from when you had 0.10 around...

Here it's still failing with:

[snip]
sphinx-build -b html -d build/doctrees   source build/html
Running Sphinx v0.6.6
loading pickled environment... not found
building [html]: targets for 219 source files that are out of date
updating environment: 219 added, 0 changed, 0 removed
reading sources... [  5%] api/generated/IPython.Extensions.InterpreterPasteInput
*** Pasting of code with ">>>" or "..." has been enabled.
*** Simplified input for physical quantities enabled.
reading sources... [  5%] api/generated/IPython.Extensions.PhysicalQInput
reading sources... [  5%] api/generated/IPython.Extensions.PhysicalQInteractive
Exception occurred:
  File "/home/tom/programming/repositories/github/ipython.git/docs/sphinxext/inheritance_diagram.py",
  line 107, in _import_class_or_module
    "Could not import class or module '%s' specified for inheritance diagram" % name)
ValueError: Could not import class or module
'IPython.Extensions.PhysicalQInteractive' specified for inheritance diagram
The full traceback has been saved in /tmp/sphinx-err-N3wivW.log, if you
want to report the issue to the developers. Please also report this if it
was a user error, so that a better error message can be provided next time.
Either send bugs to the mailing list at
<http://groups.google.com/group/sphinx-dev/>, or report them in the tracker
at <http://bitbucket.org/birkenfeld/sphinx/issues/>. Thanks!
make: *** [html] Fehler 1

	Thomas


From benjaminrk at gmail.com  Thu Jul 22 16:59:42 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 22 Jul 2010 13:59:42 -0700
Subject: [IPython-dev] First Performance Result
In-Reply-To: <AANLkTinF2zDZpjRh0fa0NmHv2MdxRmuIzhuAYt83iuTZ@mail.gmail.com>
References: <AANLkTim1IBZFxorqe0AY19iOxxtd1hbwxUxGV77yLufM@mail.gmail.com> 
	<AANLkTinF2zDZpjRh0fa0NmHv2MdxRmuIzhuAYt83iuTZ@mail.gmail.com>
Message-ID: <AANLkTilU-795cDaHfBYhSUB0p7LkLY7RhqGSxmzmlyLI@mail.gmail.com>

It would appear to be just the C implementation. Regular pickle takes
approximately the same amount of time as JSON.  The integer key issue is a
serious one.

Any JSON-serialized dict with integer keys will get reconstructed
incorrectly, with string keys. Approximately 100% of controller messages
include such a dict (keyed by engine IDs). I can get around it easily
enough in the controller/client code (in ugly ways), but it's user code that
I'm worried about. It means that we cannot, in general, allow user dicts to
be sent if we use JSON, unless on every send we walk all iterables and
convert every dict we find to a custom dict subclass, which is certainly
unacceptable.

This is not a problem with Python's json module; it's a problem with JSON itself.
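
Two lines make the point:

    import json
    d = {1: 'engine one'}
    print json.loads(json.dumps(d))  # {u'1': u'engine one'} -- the int key came back as a string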

On Thu, Jul 22, 2010 at 13:31, Fernando Perez <fperez.net at gmail.com> wrote:

> Hey Min,
>
> On Thu, Jul 22, 2010 at 2:22 AM, MinRK <benjaminrk at gmail.com> wrote:
> >
> > It would appear that json is contributing 50% to the overall run time.
>
> any chance you could re-test using pickle instead of cPickle?  I want
> to see if the difference vs json is just from the faster C
> implementationo of cPickle.  If that's the case, we could later
> consider implementing a cython-based version of the json dump/load
> code.
>
> Cheers,
>
> f
>

From JDM at MarchRay.net  Thu Jul 22 17:48:21 2010
From: JDM at MarchRay.net (Jonathan March)
Date: Thu, 22 Jul 2010 16:48:21 -0500
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com>
	<AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com>
Message-ID: <AANLkTin1KBe6cPQf5ZKz2ohVi5BviUkTWR-rK7jqG5B-@mail.gmail.com>

On Thu, Jul 22, 2010 at 3:51 PM, Fernando Perez <fperez.net at gmail.com> wrote:
>
> While 0.10.1 is almost ready, we're just waiting for:
>
> - the work Justin, Matthieu, Satra et al are doing on batch systems
> - for one of us to take a couple of hours to merge in Tom's git pull
> requests with a lot of nice cleanup.
> - Jonathan March's bugfix
> - A fix for a small wx bug I think I introduced.
> - anything I'm missing?

Just wanted to let y'all know (I told Fernando and Brian last week)
that I'm in the middle of changing jobs and cities on very short
notice, so my proffered help with processing small ipython patches has
to be on hold for now. I hope to be back and helping as soon as the
move settles down.
Adelante...
Jonathan March


From erik.tollerud at gmail.com  Fri Jul 23 13:31:27 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Fri, 23 Jul 2010 10:31:27 -0700
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com> 
	<AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com>
Message-ID: <AANLkTi=omUaObeZ=c_eQ8qeTnBCSsj20c0bGZ02TcTYD@mail.gmail.com>

Ok, great, thanks for the info!

And is the plan to leave the profile API pretty much as-is for
post-0.11 releases?  I understand that the .10 and .11 changes weren't
necessarily expected, but it would be good to know if I can plan around
the .11-style syntax as the structure for future releases.

Along those lines, there is one thing I've noticed in the custom
profiles I've built that I'd like to eliminate if possible.  If I
inject startup lines using c.Global.exec_lines, they seem to
increment the line count, so I never start out with the line count
actually at 1.  Is there a way to override this behavior, either by
setting the line count manually or by using a version of exec_lines
that doesn't get stored in the history?


On Thu, Jul 22, 2010 at 1:51 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hi Erik,
>
> On Wed, Jul 21, 2010 at 12:29 PM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
>> Hello all,
>>
>> I've been meaning to add some ipython profiles to a project I'm
>> working on ("recommended interactive environments" as it were), but
>> I'm a little unclear as to what the best way is to do this.  I
>> personally much prefer the .11 style profiles, but of course that's
>> still in development, so I can't put it as a profile for general use
>> until there's been some kind of release.  So is .11 in some form
>> likely to be out soon?  Or a 10.1 that might include support for
>> .11-style profiles?  Or is it best to include both .11 and .10
>> profiles?
>
> I'm afraid right now, shipping both is the only viable solution.  I'd
> recommend you simply put all non-trivial code in a common file, and
> refer to that file from both profiles.  That way the duplication is
> minimal and trivial to maintain.
>
> Because we're doing so much deep work on 0.11, I think it will be a
> couple of months before it's ready for release.  While 0.10.1 is
> almost ready, we're just waiting for:
>
> - the work Justin, Matthieu, Satra et al are doing on batch systems
> - for one of us to take a couple of hours to merge in Tom's git pull
> requests with a lot of nice cleanup.
> - Jonathan March's bugfix
> - A fix for a small wx bug I think I introduced.
> - anything I'm missing?
>
> Basically I think 0.10.1 is very close to getting out, while 0.11 is
> certainly a few months away (hopefully no more than 3).
>
> Cheers,
>
> f
>



-- 
Erik Tollerud


From fperez.net at gmail.com  Fri Jul 23 15:00:30 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 23 Jul 2010 12:00:30 -0700
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTi=omUaObeZ=c_eQ8qeTnBCSsj20c0bGZ02TcTYD@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com> 
	<AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com> 
	<AANLkTi=omUaObeZ=c_eQ8qeTnBCSsj20c0bGZ02TcTYD@mail.gmail.com>
Message-ID: <AANLkTikONNAObHwS=LTAyj5u+bZgp4SZ+9=m224zkvCM@mail.gmail.com>

Hi Erik,

On Fri, Jul 23, 2010 at 10:31 AM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
>
> And is the plan to leave the profile API pretty much as-is for
> post-0.11 releases?  I understand that the .10 and .11 changes weren't
> necessarily expected, but it would be good to know if I can plan around
> the .11-style syntax as the structure for future releases.

Barring any unforeseen problems, we expect the 0.11 system for
profiles to remain compatible from now on.  We have a plan to make it
easier for new projects to provide IPython profiles in *their own
tree*, but the syntax would be backwards-compatible.  Whereas now you
say

ipython -p profname

we'd like to allow also (optionally, of course):

ipython -p project:profname

which would search, in the installed tree of the project, a subdirectory
called ipython/profiles/ for that profile name.  This would let
projects offer and update their own profiles without users having to
install anything in their own ~/.ipython directories.  But that's not
implemented yet :)
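
In rough pseudo-code, the lookup would be something like this (nothing
here exists yet, and the profile file naming is purely illustrative):

    import os

    def find_profile(spec):
        # hypothetical sketch of the 'project:profname' lookup above
        if ':' in spec:
            project, profname = spec.split(':', 1)
            mod = __import__(project)
            base = os.path.join(os.path.dirname(mod.__file__),
                                'ipython', 'profiles')
        else:
            profname, base = spec, os.path.expanduser('~/.ipython')
        return os.path.join(base, 'ipy_profile_%s.py' % profname)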

> Along those lines, there is one thing I've noticed in the custom
> profiles I've built that I'd like to eliminate if possible.  If I
> inject startup lines using c.Global.exec_lines, they seem to
> increment the line count, so I never start out with the line count
> actually at 1.  Is there a way to override this behavior, either by
> setting the line count manually or by using a version of exec_lines
> that doesn't get stored in the history?

That's a bug, plain and simple, sorry :)  For actual code, instead of
exec_lines, I use this:

c.Global.exec_files = ['extras.py']

That way I can put everything I want in a file, and I can also load
that file from my old-style profiles.  My new-style profile consists
of very few lines these days, just setting a few startup options, and
I leave the heavy lifting to my little 'extras' file.
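
So the pairing amounts to something like this (contents abbreviated;
the helper is just an example of mine):

    # ipython_config.py -- the new-style profile, a few options only
    c = get_config()
    c.Global.exec_files = ['extras.py']

    # extras.py -- the heavy lifting, also loadable from old-style profiles
    import os

    def cdw():
        """Example startup helper: jump to my work directory."""
        os.chdir(os.path.expanduser('~/work'))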

Does this help?

Cheers,

f


From satra at mit.edu  Fri Jul 23 15:19:02 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Fri, 23 Jul 2010 15:19:02 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTikNCmON98MtPTj7Zgxk7XIbCk1FScB3OZkEfEpq@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
	<AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>
	<AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com>
	<AANLkTinE2gz627iSeHrZSN52ZViY_8JbXqu2tDux1gPN@mail.gmail.com>
	<AANLkTine__xHN3Gt9TWL36DpTsPa8LWpLG8nWMMoVOKn@mail.gmail.com>
	<AANLkTikJU_XgWFylavDth0WVdnv3EzjtIP6I-F2nzDDM@mail.gmail.com>
	<AANLkTikNCmON98MtPTj7Zgxk7XIbCk1FScB3OZkEfEpq@mail.gmail.com>
Message-ID: <AANLkTi=5M6gz13u8TsqCogXB_=O1WW6qFCcXWxM3ijyN@mail.gmail.com>

if i add the following line to the sge script to match my shell, it works
fine. perhaps we should allow specifying the shell as an option, like queue,
and by default set it to the user's shell?

#$ -S /bin/bash
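
e.g. the generated submission script could take the shell as a template
parameter, roughly like this (a sketch only -- the option wiring is
invented):

    import os

    # sketch: parametrize the generated SGE script on the user's shell
    sge_template = """#!%(shell)s
    #$ -S %(shell)s
    #$ -V
    #$ -cwd
    ipengine
    """
    print sge_template % {'shell': os.environ.get('SHELL', '/bin/sh')}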

cheers,

satra



On Wed, Jul 21, 2010 at 11:58 PM, Satrajit Ghosh <satra at mit.edu> wrote:

> hi justin,
>
> 1. By cleanly installed, do you mean SGE in addition to ipython/ipcluster?
>>
>
> no just the python environment.
>
>
>> 2. From the job output you sent me previously (when it wasn't working) it
>> seems that there might have been a mismatch in the shell that was used given
>> that the output was complaining about "Illegal variable name". I've noticed
>> that SGE likes to assign csh by default on my system if I don't specify a
>> shell at install time.  What is the output of "qconf -sq all.q | grep -i
>> shell" for you?
>>
>
> (nipype0.3)satra at sub:/tmp$ qconf -sq all.q | grep -i shell
> shell                 /bin/sh
> shell_start_mode      unix_behavior
>
>  (nipype0.3)satra at sub:/tmp$ qconf -sq sub | grep -i shell
> shell                 /bin/csh
> shell_start_mode      posix_compliant
>
> (nipype0.3)satra at sub:/tmp$ qconf -sq twocore | grep -i shell
> shell                 /bin/bash
> shell_start_mode      posix_compliant
>
> only twocore worked. all.q and sub didn't. choosing the latter two puts the
> job in qw state.
>
> my default shell is bash.
>
> cheers,
>
> satra
>
>
>> Thanks!
>>
>> ~Justin
>>
>> On Wed, Jul 21, 2010 at 9:05 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>
>>> hi justin.
>>>
>>> i really don't know what the difference is, but i clean installed
>>> everything and it works beautifully on SGE.
>>>
>>> cheers,
>>>
>>> satra
>>>
>>>
>>>
>>> On Tue, Jul 20, 2010 at 4:04 PM, Brian Granger <ellisonbg at gmail.com>wrote:
>>>
>>>> Great!  I mean great that you and Justin are testing and debugging this.
>>>>
>>>> Brian
>>>>
>>>> On Tue, Jul 20, 2010 at 1:01 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>>> > hi brian,
>>>> >
>>>> > i ran into a problem (my engines were not starting) and justin and i
>>>> are
>>>> > going to try and figure out what's causing it.
>>>> >
>>>> > cheers,
>>>> >
>>>> > satra
>>>> >
>>>> >
>>>> > On Tue, Jul 20, 2010 at 3:19 PM, Brian Granger <ellisonbg at gmail.com>
>>>> wrote:
>>>> >>
>>>> >> Satra,
>>>> >>
>>>> >> If you could test this as well, that would be great.  Thanks.
>>>>  Justin,
>>>> >> let us know when you think it is ready to go with the documentation
>>>> >> and testing.
>>>> >>
>>>> >> Cheers,
>>>> >>
>>>> >> Brian
>>>> >>
>>>> >> On Tue, Jul 20, 2010 at 7:48 AM, Justin Riley <
>>>> justin.t.riley at gmail.com>
>>>> >> wrote:
>>>> >> > On 07/19/2010 01:06 AM, Brian Granger wrote:
>>>> >> >> * I like the design of the BatchEngineSet.  This will be easy to
>>>> port
>>>> >> >> to
>>>> >> >>   0.11.
>>>> >> > Excellent :D
>>>> >> >
>>>> >> >> * I think if we are going to have default submission templates, we
>>>> need
>>>> >> >> to
>>>> >> >>   expose the queue name to the command line.  This shouldn't be
>>>> too
>>>> >> >> tough.
>>>> >> >
>>>> >> > Added --queue option to my 0.10.1-sge branch and tested this with
>>>> SGE
>>>> >> > 62u3 and Torque 2.4.6. I don't have LSF to test but I added in the
>>>> code
>>>> >> > that *should* work with LSF.
>>>> >> >
>>>> >> >> * Have you tested this with Python 2.6.  I saw that you mentioned
>>>> that
>>>> >> >>   the engines were shutting down cleanly now.  What did you do to
>>>> fix
>>>> >> >> that?
>>>> >> >>   I am even running into that in 0.11 so any info you can provide
>>>> would
>>>> >> >>   be helpful.
>>>> >> >
>>>> >> > I've been testing the code with Python 2.6. I didn't do anything
>>>> special
>>>> >> > other than switch the BatchEngineSet to using job arrays (ie a
>>>> single
>>>> >> > qsub command instead of N qsubs). Now when I run "ipcluster sge -n
>>>> 4"
>>>> >> > the controller starts and the engines are launched and at that
>>>> point the
>>>> >> > ipcluster session is running indefinitely. If I then ctrl-c the
>>>> >> > ipcluster session it catches the signal and calls kill() which
>>>> >> > terminates the engines by canceling the job. Is this the same
>>>> situation
>>>> >> > you're trying to get working?
>>>> >> >
>>>> >> >> * For now, let's stick with the assumption of a shared $HOME for
>>>> the
>>>> >> >> furl files.
>>>> >> >> * The biggest thing is if people can test this thoroughly.  I
>>>> don't
>>>> >> >> have
>>>> >> >>   SGE/PBS/LSF access right now, so it is a bit difficult for me to
>>>> >> >> help. I
>>>> >> >>   have a cluster coming later in the summer, but it is not here
>>>> yet.
>>>> >> >>  Once
>>>> >> >>   people have tested it well and are satisfied with it, let's
>>>> merge it.
>>>> >> >> * If we can update the documentation about how the PBS/SGE support
>>>> >> >> works
>>>> >> >>   that would be great.  The file is here:
>>>> >> >
>>>> >> > That sounds fine to me. I'm testing this stuff on my workstation's
>>>> local
>>>> >> > sge/torque queues and it works fine. I'll also test this with
>>>> >> > StarCluster and make sure it works on a real cluster. If someone
>>>> else
>>>> >> > can test using LSF on a real cluster (with shared $HOME) that'd be
>>>> >> > great. I'll try to update the docs some time this week.
>>>> >> >
>>>> >> >>
>>>> >> >> Once these small changes have been made and everyone has tested,
>>>> we
>>>> >> >> can merge it for the 0.10.1 release.
>>>> >> > Excellent :D
>>>> >> >
>>>> >> >> Thanks for doing this work Justin and Satra!  It is fantastic!
>>>>  Just
>>>> >> >> so you all know where this is going in 0.11:
>>>> >> >>
>>>> >> >> * We are going to get rid of using Twisted in ipcluster.  This
>>>> means we
>>>> >> >> have
>>>> >> >>   to re-write the process management stuff to use things like
>>>> popen.
>>>> >> >> * We have a new configuration system in 0.11.  This allows users
>>>> to
>>>> >> >> maintain
>>>> >> >>   cluster profiles that are a set of configuration files for a
>>>> >> >> particular
>>>> >> >>   cluster setup.  This makes it easy for a user to have multiple
>>>> >> >> clusters
>>>> >> >>   configured, which they can then start by name.  The logging,
>>>> >> >> security, etc.
>>>> >> >>   is also different for each cluster profile.
>>>> >> >> * It will be quite a bit of work to get everything working in
>>>> 0.11, so
>>>> >> >> I am
>>>> >> >>   glad we are getting good PBS/SGE support in 0.10.1.
>>>> >> >
>>>> >> > I'm willing to help out with the PBS/SGE/LSF portion of ipcluster
>>>> in
>>>> >> > 0.11, I guess just let me know when is appropriate to start
>>>> hacking.
>>>> >> >
>>>> >> > Thanks!
>>>> >> >
>>>> >> > ~Justin
>>>> >> >
>>>> >>
>>>> >>
>>>> >>
>>>> >> --
>>>> >> Brian E. Granger, Ph.D.
>>>> >> Assistant Professor of Physics
>>>> >> Cal Poly State University, San Luis Obispo
>>>> >> bgranger at calpoly.edu
>>>> >> ellisonbg at gmail.com
>>>> >
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> Brian E. Granger, Ph.D.
>>>> Assistant Professor of Physics
>>>> Cal Poly State University, San Luis Obispo
>>>> bgranger at calpoly.edu
>>>> ellisonbg at gmail.com
>>>>
>>>
>>>
>>
>

From wackywendell at gmail.com  Fri Jul 23 16:21:48 2010
From: wackywendell at gmail.com (Wendell Smith)
Date: Fri, 23 Jul 2010 16:21:48 -0400
Subject: [IPython-dev] [IPython-User] How to build ipython documentation
In-Reply-To: <AANLkTilSv0riao-WGLABhmw5OksP10wg-zZyqHrnf3p8@mail.gmail.com>
References: <AANLkTikwpiu6r6nAhijPjHVgj8E4WByax6r3WKpy7liC@mail.gmail.com>
	<AANLkTilol1CGLZPBkwTDeSJnenkiLitxJR4vmTciZCk5@mail.gmail.com>
	<4C487A32.4000200@gmail.com>
	<AANLkTilSv0riao-WGLABhmw5OksP10wg-zZyqHrnf3p8@mail.gmail.com>
Message-ID: <4C49F9DC.3000909@gmail.com>

Hi,

You're absolutely right, and I should have known better - I did need to 
'make clean'.

However, in order to get it to continue all the way through, I had to 
install twisted, foolscap, and wxpython - none of which are necessary 
for basic ipython. Is it supposed to be that way?

-Wendell

On 07/22/2010 04:28 PM, Fernando Perez wrote:
> Hey Wendell,
>
> On Thu, Jul 22, 2010 at 10:04 AM, Wendell Smith<wackywendell at gmail.com>  wrote:
>    
>> This looks like an error involved with the old IPython setup; I did
>> previously have 0.10 on here (I think through easy_install), but I have no
>> idea why sphinx is searching for IPython.ColorANSI. Any ideas?
>>      
> make sure you run 'make clean' first, it may be finding old
> auto-generated files from when you had 0.10 around...
>
> Cheers,
>
> f
>    



From justin.t.riley at gmail.com  Fri Jul 23 17:54:50 2010
From: justin.t.riley at gmail.com (Justin Riley)
Date: Fri, 23 Jul 2010 17:54:50 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTi=5M6gz13u8TsqCogXB_=O1WW6qFCcXWxM3ijyN@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
	<AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>
	<AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com>
	<AANLkTinE2gz627iSeHrZSN52ZViY_8JbXqu2tDux1gPN@mail.gmail.com>
	<AANLkTine__xHN3Gt9TWL36DpTsPa8LWpLG8nWMMoVOKn@mail.gmail.com>
	<AANLkTikJU_XgWFylavDth0WVdnv3EzjtIP6I-F2nzDDM@mail.gmail.com>
	<AANLkTikNCmON98MtPTj7Zgxk7XIbCk1FScB3OZkEfEpq@mail.gmail.com>
	<AANLkTi=5M6gz13u8TsqCogXB_=O1WW6qFCcXWxM3ijyN@mail.gmail.com>
Message-ID: <AANLkTi=M0FyJ8ZmyurafQ45aaZDHOeYgmjgYQjEM0RX+@mail.gmail.com>

Hi Satrajit/Matthieu,

Satrajit, so for now I set /bin/sh to be the shell for all generated
scripts (PBS/SGE/LSF) given that it's probably the most commonly
included shell on *NIXs. Should we still add a --shell option? If the
user passes their own script they can of course customize the shell,
but otherwise I would imagine /bin/sh with the generated code should
work for most folks. If it still makes sense to have a --shell option
I'll add it in.

Matthieu, I updated my 0.10.1-sge branch to address the LSF shell
redirection issue. Basically I create a bsub wrapper that does the
shell redirection and then pass the wrapper to getProcessOutput. I
don't believe Twisted's getProcessOutput will handle stdin redirection
so this is my solution for now. Would you mind testing this new code
with LSF?
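
The idea, boiled down (a sketch -- the real code is in the branch):

    import os

    def make_bsub_wrapper(job_script, wrapper_path='bsub_wrapper.sh'):
        # write a tiny wrapper that feeds the job script to bsub on stdin,
        # so its embedded #BSUB directives are honored; the wrapper itself
        # can then be handed to Twisted's getProcessOutput
        f = open(wrapper_path, 'w')
        f.write('#!/bin/sh\nexec bsub < "%s"\n' % job_script)
        f.close()
        os.chmod(wrapper_path, 0755)
        return wrapper_path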

I've also updated the parallel_process.txt docs for ipcluster. Let me
know what you guys think.

~Justin

On Fri, Jul 23, 2010 at 3:19 PM, Satrajit Ghosh <satra at mit.edu> wrote:
> if i add the following line to sge script to match my shell, it works fine.
> perhaps we should allow adding shell as an option like queue and by default
> set it to the user's shell?
>
> #$ -S /bin/bash
>
> cheers,
>
> satra
>
>
>
> On Wed, Jul 21, 2010 at 11:58 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>
>> hi justin,
>>
>>> 1. By cleanly installed, do you mean SGE in addition to
>>> ipython/ipcluster?
>>
>> no just the python environment.
>>
>>>
>>> 2. From the job output you sent me previously (when it wasn't working) it
>>> seems that there might have been a mismatch in the shell that was used given
>>> that the output was complaining about "Illegal variable name". I've noticed
>>> that SGE likes to assign csh by default on my system if I don't specify a
>>> shell at install time.  What is the output of "qconf -sq all.q | grep -i
>>> shell" for you?
>>
>> (nipype0.3)satra at sub:/tmp$ qconf -sq all.q | grep -i shell
>> shell                 /bin/sh
>> shell_start_mode      unix_behavior
>>
>>  (nipype0.3)satra at sub:/tmp$ qconf -sq sub | grep -i shell
>> shell                 /bin/csh
>> shell_start_mode      posix_compliant
>>
>> (nipype0.3)satra at sub:/tmp$ qconf -sq twocore | grep -i shell
>> shell                 /bin/bash
>> shell_start_mode      posix_compliant
>>
>> only twocore worked. all.q and sub didn't. choosing the latter two puts
>> the job in qw state.
>>
>> my default shell is bash.
>>
>> cheers,
>>
>> satra
>>
>>>
>>> Thanks!
>>> ~Justin
>>> On Wed, Jul 21, 2010 at 9:05 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>>>
>>>> hi justin.
>>>>
>>>> i really don't know what the difference is, but i clean installed
>>>> everything and it works beautifully on SGE.
>>>>
>>>> cheers,
>>>>
>>>> satra
>>>>
>>>>
>>>> On Tue, Jul 20, 2010 at 4:04 PM, Brian Granger <ellisonbg at gmail.com>
>>>> wrote:
>>>>>
>>>>> Great!  I mean great that you and Justin are testing and debugging
>>>>> this.
>>>>>
>>>>> Brian
>>>>>
>>>>> On Tue, Jul 20, 2010 at 1:01 PM, Satrajit Ghosh <satra at mit.edu> wrote:
>>>>> > hi brian,
>>>>> >
>>>>> > i ran into a problem (my engines were not starting) and justin and i
>>>>> > are
>>>>> > going to try and figure out what's causing it.
>>>>> >
>>>>> > cheers,
>>>>> >
>>>>> > satra
>>>>> >
>>>>> >
>>>>> > On Tue, Jul 20, 2010 at 3:19 PM, Brian Granger <ellisonbg at gmail.com>
>>>>> > wrote:
>>>>> >>
>>>>> >> Satra,
>>>>> >>
>>>>> >> If you could test this as well, that would be great.  Thanks.
>>>>> >>  Justin,
>>>>> >> let us know when you think it is ready to go with the documentation
>>>>> >> and testing.
>>>>> >>
>>>>> >> Cheers,
>>>>> >>
>>>>> >> Brian
>>>>> >>
>>>>> >> On Tue, Jul 20, 2010 at 7:48 AM, Justin Riley
>>>>> >> <justin.t.riley at gmail.com>
>>>>> >> wrote:
>>>>> >> > On 07/19/2010 01:06 AM, Brian Granger wrote:
>>>>> >> >> * I like the design of the BatchEngineSet.  This will be easy to
>>>>> >> >> port
>>>>> >> >> to
>>>>> >> >>   0.11.
>>>>> >> > Excellent :D
>>>>> >> >
>>>>> >> >> * I think if we are going to have default submission templates,
>>>>> >> >> we need
>>>>> >> >> to
>>>>> >> >>   expose the queue name to the command line.  This shouldn't be
>>>>> >> >> too
>>>>> >> >> tough.
>>>>> >> >
>>>>> >> > Added --queue option to my 0.10.1-sge branch and tested this with
>>>>> >> > SGE
>>>>> >> > 62u3 and Torque 2.4.6. I don't have LSF to test but I added in the
>>>>> >> > code
>>>>> >> > that *should* work with LSF.
>>>>> >> >
>>>>> >> >> * Have you tested this with Python 2.6.  I saw that you mentioned
>>>>> >> >> that
>>>>> >> >>   the engines were shutting down cleanly now.  What did you do to
>>>>> >> >> fix
>>>>> >> >> that?
>>>>> >> >>   I am even running into that in 0.11 so any info you can provide
>>>>> >> >> would
>>>>> >> >>   be helpful.
>>>>> >> >
>>>>> >> > I've been testing the code with Python 2.6. I didn't do anything
>>>>> >> > special
>>>>> >> > other than switch the BatchEngineSet to using job arrays (ie a
>>>>> >> > single
>>>>> >> > qsub command instead of N qsubs). Now when I run "ipcluster sge -n
>>>>> >> > 4"
>>>>> >> > the controller starts and the engines are launched and at that
>>>>> >> > point the
>>>>> >> > ipcluster session is running indefinitely. If I then ctrl-c the
>>>>> >> > ipcluster session it catches the signal and calls kill() which
>>>>> >> > terminates the engines by canceling the job. Is this the same
>>>>> >> > situation
>>>>> >> > you're trying to get working?
>>>>> >> >
>>>>> >> >> * For now, let's stick with the assumption of a shared $HOME for
>>>>> >> >> the
>>>>> >> >> furl files.
>>>>> >> >> * The biggest thing is if people can test this thoroughly.  I
>>>>> >> >> don't
>>>>> >> >> have
>>>>> >> >>   SGE/PBS/LSF access right now, so it is a bit difficult for me
>>>>> >> >> to
>>>>> >> >> help. I
>>>>> >> >>   have a cluster coming later in the summer, but it is not here
>>>>> >> >> yet.
>>>>> >> >>  Once
>>>>> >> >>   people have tested it well and are satisfied with it, let's
>>>>> >> >> merge it.
>>>>> >> >> * If we can update the documentation about how the PBS/SGE
>>>>> >> >> support
>>>>> >> >> works
>>>>> >> >>   that would be great.  The file is here:
>>>>> >> >
>>>>> >> > That sounds fine to me. I'm testing this stuff on my workstation's
>>>>> >> > local
>>>>> >> > sge/torque queues and it works fine. I'll also test this with
>>>>> >> > StarCluster and make sure it works on a real cluster. If someone
>>>>> >> > else
>>>>> >> > can test using LSF on a real cluster (with shared $HOME) that'd be
>>>>> >> > great. I'll try to update the docs some time this week.
>>>>> >> >
>>>>> >> >>
>>>>> >> >> Once these small changes have been made and everyone has tested,
>>>>> >> >> we
>>>>> >> >> can merge it for the 0.10.1 release.
>>>>> >> > Excellent :D
>>>>> >> >
>>>>> >> >> Thanks for doing this work Justin and Satra!  It is fantastic!
>>>>> >> >>  Just
>>>>> >> >> so you all know where this is going in 0.11:
>>>>> >> >>
>>>>> >> >> * We are going to get rid of using Twisted in ipcluster.  This
>>>>> >> >> means we
>>>>> >> >> have
>>>>> >> >>   to re-write the process management stuff to use things like
>>>>> >> >> popen.
>>>>> >> >> * We have a new configuration system in 0.11.  This allows users
>>>>> >> >> to
>>>>> >> >> maintain
>>>>> >> >>   cluster profiles that are a set of configuration files for a
>>>>> >> >> particular
>>>>> >> >>   cluster setup.  This makes it easy for a user to have multiple
>>>>> >> >> clusters
>>>>> >> >>   configured, which they can then start by name.  The logging,
>>>>> >> >> security, etc.
>>>>> >> >>   is also different for each cluster profile.
>>>>> >> >> * It will be quite a bit of work to get everything working in
>>>>> >> >> 0.11, so
>>>>> >> >> I am
>>>>> >> >>   glad we are getting good PBS/SGE support in 0.10.1.
>>>>> >> >
>>>>> >> > I'm willing to help out with the PBS/SGE/LSF portion of ipcluster
>>>>> >> > in
>>>>> >> > 0.11, I guess just let me know when is appropriate to start
>>>>> >> > hacking.
>>>>> >> >
>>>>> >> > Thanks!
>>>>> >> >
>>>>> >> > ~Justin
>>>>> >> >
>>>>> >>
>>>>> >>
>>>>> >>
>>>>> >> --
>>>>> >> Brian E. Granger, Ph.D.
>>>>> >> Assistant Professor of Physics
>>>>> >> Cal Poly State University, San Luis Obispo
>>>>> >> bgranger at calpoly.edu
>>>>> >> ellisonbg at gmail.com
>>>>> >
>>>>> >
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Brian E. Granger, Ph.D.
>>>>> Assistant Professor of Physics
>>>>> Cal Poly State University, San Luis Obispo
>>>>> bgranger at calpoly.edu
>>>>> ellisonbg at gmail.com
>>>>
>>>
>>
>
>


From ben.root at ou.edu  Fri Jul 23 19:31:07 2010
From: ben.root at ou.edu (Benjamin Root)
Date: Fri, 23 Jul 2010 18:31:07 -0500
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTi=M0FyJ8ZmyurafQ45aaZDHOeYgmjgYQjEM0RX+@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com> 
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com> 
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com> 
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com> 
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com> 
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com> 
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com> 
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com> 
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com> 
	<4C45B72F.5020000@gmail.com>
	<AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com> 
	<AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com> 
	<AANLkTinE2gz627iSeHrZSN52ZViY_8JbXqu2tDux1gPN@mail.gmail.com> 
	<AANLkTine__xHN3Gt9TWL36DpTsPa8LWpLG8nWMMoVOKn@mail.gmail.com> 
	<AANLkTikJU_XgWFylavDth0WVdnv3EzjtIP6I-F2nzDDM@mail.gmail.com> 
	<AANLkTikNCmON98MtPTj7Zgxk7XIbCk1FScB3OZkEfEpq@mail.gmail.com> 
	<AANLkTi=5M6gz13u8TsqCogXB_=O1WW6qFCcXWxM3ijyN@mail.gmail.com> 
	<AANLkTi=M0FyJ8ZmyurafQ45aaZDHOeYgmjgYQjEM0RX+@mail.gmail.com>
Message-ID: <AANLkTinLb-jx8r12LALgINdB3Dysoypg6sasdZU9mfmF@mail.gmail.com>

On Fri, Jul 23, 2010 at 4:54 PM, Justin Riley <justin.t.riley at gmail.com>wrote:

> Hi Satrajit/Matthieu,
>
> Satrajit, so for now I set /bin/sh to be the shell for all generated
> scripts (PBS/SGE/LSF) given that it's probably the most commonly
> included shell on *NIXs. Should we still add a --shell option? If the
> user passes their own script they can of course customize the shell,
> but otherwise I would imagine /bin/sh with the generated code should
> work for most folks. If it still makes sense to have a --shell option
> I'll add it in.
>
>
If I might interject for a moment, this was a major issue a few years ago
with Ubuntu: https://wiki.ubuntu.com/DashAsBinSh

Essentially, people were using Bash-isms without realizing it and starting
their shell scripts with /bin/sh.  In Debian, it is policy that all shell
scripts that specify /bin/sh should use only POSIX features.  So Ubuntu
changed the /bin/sh alias from /bin/bash to /bin/dash.  Dash is a lot like
bash, but not quite.  This caused some... interesting... issues.

The link I provided above mentions some of the usual gotchas.  Whether they
apply to the issue at hand or not, I wouldn't know, but it has been a handy
reference for me before.
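
For instance, '==' inside [ ] is a bashism; a quick way to see the
difference for yourself (assuming dash is installed):

    import subprocess

    for sh in ('/bin/bash', '/bin/dash'):
        # POSIX test only defines '=', so dash rejects '==' here
        rc = subprocess.call([sh, '-c', '[ "a" == "a" ] && echo ok'])
        print sh, 'exit status:', rc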

I'll go back to my hole...

Ben Root

From fperez.net at gmail.com  Fri Jul 23 20:17:55 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 23 Jul 2010 17:17:55 -0700
Subject: [IPython-dev] Paul Ivanov: Did you get any feedback from GH
	when I merged?
In-Reply-To: <4C464218.8000701@gmail.com>
References: <AANLkTime9dtq7a9aFyMnbnVtIeQwlsH09J-DP44moogc@mail.gmail.com> 
	<4C464218.8000701@gmail.com>
Message-ID: <AANLkTimAZwdbHz9Cix1+ZomnyY6+O6xZzvxwhYfMfR+F@mail.gmail.com>

On Tue, Jul 20, 2010 at 5:40 PM, Paul Ivanov <pivanov314 at gmail.com> wrote:
>
> One unfortunate thing about commenting on the commits, is that the
> comments don't seem to carry over across forks. The commits you merged
> into trunk (ipython/ipython) don't have any reference to the comments we
> made about them in my fork (ivanov/ipython).

Yes, there's some metadata that's tied to each repo in github (issues
are similar) and they have no mechanism for moving/copying that
metadata around.

Cheers

f


From fperez.net at gmail.com  Fri Jul 23 20:51:40 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 23 Jul 2010 17:51:40 -0700
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTin1KBe6cPQf5ZKz2ohVi5BviUkTWR-rK7jqG5B-@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com> 
	<AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com> 
	<AANLkTin1KBe6cPQf5ZKz2ohVi5BviUkTWR-rK7jqG5B-@mail.gmail.com>
Message-ID: <AANLkTi=JzerS48CoZ_4_c=5p+d7h87E2gbDF7Hix818m@mail.gmail.com>

Hi Jonathan,

On Thu, Jul 22, 2010 at 2:48 PM, Jonathan March <JDM at marchray.net> wrote:
>> - Jonathan March's bugfix
>> - A fix for a small wx bug I think I introduced.
>> - anything I'm missing?
>
> Just wanted to let ya'll know (I told Fernando and Brian last week)
> that I'm in the middle of changing jobs and cities on very short
> notice, so my proffered help with processing small ipython patches has
> to be on hold for now. I hope to be back and helping as soon as the
> move settles down.

No worries at all, I didn't mean to make it appear as if I was calling
you out :)

In this case I'll just apply your local change (since the actual fix
is a one-liner) and we'll be more than happy to have you plug back in
as your move/logistics allow.

Best of luck with the move.

Regards,

f


From satra at mit.edu  Fri Jul 23 22:07:43 2010
From: satra at mit.edu (Satrajit Ghosh)
Date: Fri, 23 Jul 2010 22:07:43 -0400
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTinLb-jx8r12LALgINdB3Dysoypg6sasdZU9mfmF@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
	<AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>
	<AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com>
	<AANLkTinE2gz627iSeHrZSN52ZViY_8JbXqu2tDux1gPN@mail.gmail.com>
	<AANLkTine__xHN3Gt9TWL36DpTsPa8LWpLG8nWMMoVOKn@mail.gmail.com>
	<AANLkTikJU_XgWFylavDth0WVdnv3EzjtIP6I-F2nzDDM@mail.gmail.com>
	<AANLkTikNCmON98MtPTj7Zgxk7XIbCk1FScB3OZkEfEpq@mail.gmail.com>
	<AANLkTi=5M6gz13u8TsqCogXB_=O1WW6qFCcXWxM3ijyN@mail.gmail.com>
	<AANLkTi=M0FyJ8ZmyurafQ45aaZDHOeYgmjgYQjEM0RX+@mail.gmail.com>
	<AANLkTinLb-jx8r12LALgINdB3Dysoypg6sasdZU9mfmF@mail.gmail.com>
Message-ID: <AANLkTi=jh8RGji0wPXpJHqwe_mJg1osGSBvPj9o21sM4@mail.gmail.com>

hi justin,

Satrajit, so for now I set /bin/sh to be the shell for all generated
>> scripts (PBS/SGE/LSF) given that it's probably the most commonly
>> included shell on *NIXs. Should we still add a --shell option? If the
>> user passes their own script they can of course customize the shell,
>> but otherwise I would imagine /bin/sh with the generated code should
>> work for most folks. If it still makes sense to have a --shell option
>> I'll add it in.
>>
>
i think it still makes sense to add it in. it should be identical to the
--queue option in that it's a switch. unfortunately, i do know of a lot of
places where tcsh is the default shell!

cheers,

satra

From matthieu.brucher at gmail.com  Sat Jul 24 02:41:55 2010
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 24 Jul 2010 08:41:55 +0200
Subject: [IPython-dev] SciPy Sprint summary
In-Reply-To: <AANLkTi=jh8RGji0wPXpJHqwe_mJg1osGSBvPj9o21sM4@mail.gmail.com>
References: <AANLkTinwimsB-o2Ix9UhVi8Rzh16AFwUr8pyuFd22GY1@mail.gmail.com>
	<4C3F1FE1.4040000@gmail.com>
	<AANLkTin1yuaUuQuuyv6KiNowjiF8hj6KO2decyXW_WKi@mail.gmail.com>
	<AANLkTil5BMI47Y6vIWOGG1BARbxmrr3wD2Fh95XaVD6C@mail.gmail.com>
	<4C3F709C.5080505@gmail.com>
	<AANLkTilSbJE5MkBPoNiIFc3GRmvPuZxGv5SC2ZqOnlzx@mail.gmail.com>
	<AANLkTinQ44h2k_13t-lnRwEWKJwLlX9jSNfPFnHvb_Zo@mail.gmail.com>
	<AANLkTiks5OCKKKafVrn5LBgPRHA6vZXCkau93wn4eLpo@mail.gmail.com>
	<AANLkTil7OfIaUZsTCZr6kMu8A6cnsutCSe_f77jeSWon@mail.gmail.com>
	<4C42B09F.50106@gmail.com>
	<AANLkTilFGVt8Z6mbpt-IX4ZJ_P5-aXDvbgCzFYp9DmtM@mail.gmail.com>
	<4C43455F.1050508@gmail.com>
	<AANLkTikgwUwk0yuiwD7buF_X_aQjnZoV6vvAeb7G9-UG@mail.gmail.com>
	<4C45B72F.5020000@gmail.com>
	<AANLkTimcwMLlftx46JKATbzIhNwLua_2-SNviAeso3aE@mail.gmail.com>
	<AANLkTimsi38lu0mbVMBihM63rp3C5_P5DYrAI-OBqMBt@mail.gmail.com>
	<AANLkTinE2gz627iSeHrZSN52ZViY_8JbXqu2tDux1gPN@mail.gmail.com>
	<AANLkTine__xHN3Gt9TWL36DpTsPa8LWpLG8nWMMoVOKn@mail.gmail.com>
	<AANLkTikJU_XgWFylavDth0WVdnv3EzjtIP6I-F2nzDDM@mail.gmail.com>
	<AANLkTikNCmON98MtPTj7Zgxk7XIbCk1FScB3OZkEfEpq@mail.gmail.com>
	<AANLkTi=5M6gz13u8TsqCogXB_=O1WW6qFCcXWxM3ijyN@mail.gmail.com>
	<AANLkTi=M0FyJ8ZmyurafQ45aaZDHOeYgmjgYQjEM0RX+@mail.gmail.com>
	<AANLkTinLb-jx8r12LALgINdB3Dysoypg6sasdZU9mfmF@mail.gmail.com>
	<AANLkTi=jh8RGji0wPXpJHqwe_mJg1osGSBvPj9o21sM4@mail.gmail.com>
Message-ID: <AANLkTi=Uv9k7SVzsH74969CaysGCofvsPUsz818SEYsW@mail.gmail.com>

2010/7/24 Satrajit Ghosh <satra at mit.edu>:
> hi justin,
>
>>> Satrajit, so for now I set /bin/sh to be the shell for all generated
>>> scripts (PBS/SGE/LSF) given that it's probably the most commonly
>>> included shell on *NIXs. Should we still add a --shell option? If the
>>> user passes their own script they can of course customize the shell,
>>> but otherwise I would imagine /bin/sh with the generated code should
>>> work for most folks. If it still makes sense to have a --shell option
>>> I'll add it in.
>
> i think it still makes sense to add it in. it should be identical to the
> --queue option in that it's a switch. unfortunately, i do know of a lot of
> places where tcsh is the default shell!

Well, for instance, we have csh as our default shell...

Matthieu
-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher


From JDM at MarchRay.net  Sat Jul 24 07:45:49 2010
From: JDM at MarchRay.net (Jonathan March)
Date: Sat, 24 Jul 2010 06:45:49 -0500
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTi=JzerS48CoZ_4_c=5p+d7h87E2gbDF7Hix818m@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com>
	<AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com>
	<AANLkTin1KBe6cPQf5ZKz2ohVi5BviUkTWR-rK7jqG5B-@mail.gmail.com>
	<AANLkTi=JzerS48CoZ_4_c=5p+d7h87E2gbDF7Hix818m@mail.gmail.com>
Message-ID: <AANLkTimHwGuVvUUaTfjSuO5Kw8dr-67EXpfrduzL9e4f@mail.gmail.com>

On Fri, Jul 23, 2010 at 7:51 PM, Fernando Perez <fperez.net at gmail.com> wrote:

> On Thu, Jul 22, 2010 at 2:48 PM, Jonathan March <JDM at marchray.net> wrote:
>> I hope to be back and helping as soon as the move settles down.
>
> No worries at all, I didn't mean to make it appear as if I was calling you out :)

No worries either. That possibility never crossed my mind -
just thought I owed the list an explanation too for vanishing from my
little station :)


From andresete.chaos at gmail.com  Sat Jul 24 18:03:47 2010
From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=)
Date: Sat, 24 Jul 2010 17:03:47 -0500
Subject: [IPython-dev] about ipython-zmq
Message-ID: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com>

Hi everyone.
I'm working on a very important issue in ipython-zmq.
Let's suppose the following code at the prompt:
In [1]: for i in range(100000):
   ...:     print i
   ...:
This will take a long time to run, and if the user wants to stop the
process he will normally do it with Ctrl-C.
By capturing KeyboardInterrupt I was experimenting with a message sent to
the kernel to stop such a process, but the kernel hangs until the "for"
loop is over.
The solution I see is to run the kernel processes in a thread. What do you
think?

And another question:
What magic commands do you think ipython-zmq should have?

Omar Andres Zapata Mesa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100724/fae66ee5/attachment.html>

From fperez.net at gmail.com  Sat Jul 24 18:12:51 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 24 Jul 2010 15:12:51 -0700
Subject: [IPython-dev] about ipython-zmq
In-Reply-To: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com>
References: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com>
Message-ID: <AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com>

Hi Omar,

2010/7/24 Omar Andrés Zapata Mesa <andresete.chaos at gmail.com>:
> Let's suppose the following code at the prompt:
> In [1]: for i in range(100000):
>    ...:     print i
>    ...:
> This will take a long time to run, and if the user wants to stop the
> process he will normally do it with Ctrl-C.
> By capturing KeyboardInterrupt I was experimenting with a message sent to
> the kernel to stop such a process, but the kernel hangs until the "for"
> loop is over.
> The solution I see is to run the kernel processes in a thread. What do you
> think?

No, the kernel will be in a separate process, and what needs to be done is:

1. capture Ctrl-C in the frontend side with the usual try/except.

2. Send the Ctrl-C as a signal to the kernel process.

In order to do this, you'll need to know the PID of the kernel
process, but Evan has already been making progress in this direction
so you can benefit from his work.  This code:

http://github.com/epatters/ipython/blob/qtfrontend/IPython/zmq/kernel.py#L316

already has a kernel launcher prototype with the necessary PID information.

To send the signal, you can use os.kill  for now.  This has problems
on Windows, but let's get signal handling working on *nix first and
once things are in place nicely, we'll look into more general options.
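
Roughly, the frontend side could look like this (just a sketch;
send_request, wait_for_reply and kernel_pid are hypothetical stand-ins
for the real messaging helpers and the PID you get from the launcher):

import os
import signal

def run_cell(send_request, wait_for_reply, kernel_pid, code):
    """Send `code` to the kernel; forward Ctrl-C to it as SIGINT."""
    send_request(code)
    try:
        return wait_for_reply()
    except KeyboardInterrupt:
        # Interrupt the kernel process instead of killing the frontend.
        # os.kill works on *nix; Windows will need something else.
        os.kill(kernel_pid, signal.SIGINT)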

> And another question:
> What magic commands do you think ipython-zmq should have?

For now don't worry about magics, as they should all happen
kernel-wise for you.  I'll send an email regarding some ideas about
magics separately shortly.

Cheers,

f


From fperez.net at gmail.com  Sun Jul 25 05:03:21 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 02:03:21 -0700
Subject: [IPython-dev] Full block handling finished for interactive frontends
Message-ID: <AANLkTikvpqLjGZdqxZwf2q-j4iiN9=6F67pHukzK2BxG@mail.gmail.com>

Hi folks,

[especially Gerardo] with this commit:

http://github.com/fperez/ipython/commit/df85a15e64ca20ac6cb9f32721bd59343397d276

we now have a fully working block splitter that handles a reasonable
amount of test cases.  I haven't yet added static support for ipython
special syntax (magics, !, etc), but for pure python syntax it's fully
functional.  A second pair of eyes on this code would be much
appreciated, as it's the core of our interactive input handling and
getting it right (at least to this point) took me a surprising amount
of effort.

I'll try to complete the special syntax tomorrow, but even now that
can be sent to the kernel just fine.
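
For anyone trying it, rough usage looks like this (module and method
names as they stand in my branch right now, so treat this as a sketch
that may still change):

from IPython.core.inputsplitter import InputSplitter

isp = InputSplitter()
isp.push('for i in range(3):')
print isp.push_accepts_more()   # True: the block is not complete yet
isp.push('    print i')
isp.push('')                    # a blank line closes the block
print isp.push_accepts_more()   # False: ready to send to the kernel
block = isp.source_reset()      # the full block; the splitter is cleared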

Gerardo, let me know if you have any  problems using this method.  As
things stand now, Evan and Omar should be OK using the line-based
workflow, and you should be able to get your blocks with this code.
Over the next few days we'll work on landing all of this, and I think
our architecture is starting to shape up very nicely.

Cheers,

f


From gael.varoquaux at normalesup.org  Sun Jul 25 14:10:42 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 25 Jul 2010 20:10:42 +0200
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
Message-ID: <20100725181042.GB16987@phare.normalesup.org>

With the 0.11 series of IPython, I no longer understand how the
interaction with the GUI mainloop occurs:

----------------------------------------------------------------------
$ ipython -wthread

In [1]: import wx

In [2]: wx.App.IsMainLoopRunning()
Out[2]: False
----------------------------------------------------------------------

----------------------------------------------------------------------
$ ipython -q4thread
In [1]: from PyQt4 import QtGui

In [2]: type(QtGui.QApplication.instance())
Out[2]: <type 'NoneType'>
----------------------------------------------------------------------

Is there a mainloop running or not? If not, I really don't understand how
I get interactivity with GUI windows and I'd love an explanation or a
pointer.

The problem with this behavior is that there is a lot of code that checks
if a mainloop is running, and if not starts one. This code thus blocks
IPython and more or less defeats the purpose of the GUI options.

Cheers,

Gaël


From ellisonbg at gmail.com  Sun Jul 25 15:10:12 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 12:10:12 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <20100725181042.GB16987@phare.normalesup.org>
References: <20100725181042.GB16987@phare.normalesup.org>
Message-ID: <AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>

Gael,

Great questions.  The short answer is that the traditional methods of
discovering if the event loop is running won't work.  This issue will become
even more complicated when we implement GUI integration in the new 2 process
frontend/kernel.  We still need to decide how we are going to handle this.
Here was the last email we sent out a long time ago that didn't really get
any response:

Current situation
=============

Both matplotlib and ets have code that tries to:

* See what GUI toolkit is being used
* Get the global App object if it already exists, if not create it.
* See if the main loop is running, if not possibly start it.

All of this logic makes many assumptions about how IPython affects the
answers to these questions.  Because IPython's GUI support has changed in
significant ways, current matplotlib and ets make incorrect decisions
about these issues (such as trying to start the event loop a second time,
creating a second main App object, etc.) under IPython 0.11.  This leads
to crashes...

Description of GUI support in 0.11
==========================

IPython allows GUI event loops to be run in an interactive IPython session.
This is done using Python's PyOS_InputHook hook which Python calls
when the :func:`raw_input` function is called and is waiting for user input.
IPython has versions of this hook for wx, pyqt4 and pygtk.  When the
inputhook
is called, it iterates the GUI event loop until a user starts to type
again.  When the user stops typing, the event loop iterates again.  This is
how tk works.

When a GUI program is used interactively within IPython, the event loop of
the GUI should *not* be started. This is because the PyOS_InputHook itself
is responsible for iterating the GUI event loop.

IPython has facilities for installing the needed input hook for each GUI
toolkit and for creating the needed main GUI application object. Usually,
these main application objects should be created only once and for some
GUI toolkits, special options have to be passed to the application object
to enable it to function properly in IPython.
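
Concretely, enabling and disabling a hook from Python looks something
like this (a minimal sketch; names as I recall them from the current
trunk's IPython.lib.inputhook, so double-check against the source):

from IPython.lib import inputhook

inputhook.enable_wx()    # install the wx PyOS_InputHook, set up the App
# ... create frames, plot, interact; do NOT call app.MainLoop() ...
inputhook.disable_wx()   # uninstall the hook again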

What we need to decide
===================

We need to answer the following questions:

* Who is responsible for creating the main GUI application object, IPython
 or third parties (matplotlib, enthought.traits, etc.)?

* What is the proper way for third party code to detect if a GUI application
 object has already been created?  If one has been created, how should
 the existing instance be retrieved?

* If a GUI application object has been created, how should third party code
 detect if the GUI event loop is running. It is not sufficient to call the
 relevant function methods in the GUI toolkits (like ``IsMainLoopRunning``)
 because those don't know if the GUI event loop is running through the
 input hook.

* We might need a way for third party code to determine if it is running
 in IPython or not.  Currently, the only way of running GUI code in IPython
 is by using the input hook, but eventually, GUI based versions of IPython
 will allow the GUI event loop to run in the more traditional manner. We will need
 a way for third party code to distinguish between these two cases.

While we are focused on other things right now (the kernel/frontend) we
would love to hear your thoughts on these issues.  Implementing a solution
shouldn't be too difficult.

Cheers,

Brian



On Sun, Jul 25, 2010 at 11:10 AM, Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:

> With the 0.11 series of IPython, I no longer understand how the
> interaction with the GUI mainloop occurs:
>
> ----------------------------------------------------------------------
> $ ipython -wthread
>
> In [1]: import wx
>
> In [2]: wx.App.IsMainLoopRunning()
> Out[2]: False
> ----------------------------------------------------------------------
>
> ----------------------------------------------------------------------
> $ ipython -q4thread
> In [1]: from PyQt4 import QtGui
>
> In [2]: type(QtGui.QApplication.instance())
> Out[2]: <type 'NoneType'>
> ----------------------------------------------------------------------
>
> Is there a mainloop running or not? If not, I really don't understand how
> I get interactivity with GUI windows and I'd love an explanation or a
> pointer.
>
> The problem with this behavior is that there is a lot of code that checks
> if a mainloop is running, and if not starts one. This code thus blocks
> IPython and more or less defeats the purpose of the GUI options.
>
> Cheers,
>
> Gaël
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100725/4b4758b1/attachment.html>

From efiring at hawaii.edu  Sun Jul 25 16:35:02 2010
From: efiring at hawaii.edu (Eric Firing)
Date: Sun, 25 Jul 2010 10:35:02 -1000
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
Message-ID: <4C4C9FF6.8090503@hawaii.edu>

On 07/25/2010 09:10 AM, Brian Granger wrote:
> Gael,
>
> Great questions.  The short answer is that the traditional methods of
> discovering if the event loop is running won't work.  This issue will
> become even more complicated when we implement GUI integration in the
> new 2 process frontend/kernel.  We still need to decide how we are going
> to handle this.  Here was the last email we sent out a long time ago
> that didn't really get any response:

Brian,

I've been looking at that old message for a couple of weeks, trying to 
figure out how to respond from the mpl perspective.  I'm still quite 
uncertain, and I would be pleased to see people with a better 
understanding of gui toolkits, event loops, and ipython step in.

Preliminary thoughts:

Although ipython has provided invaluable service to mpl by enabling 
interactive plotting for all gui backends, I am not at all sure that 
this functionality should be left to ipython in the long run.  The 
problem is that mpl is used in a variety of ways and environments.  Gui 
functionality is central to mpl; it seems odd, and unnecessarily 
complicated, to have to delegate part of that to an environment, or 
shell, like ipython.

At present, for most backends, interactive mpl plotting is possible in 
ipython without any of ipython's gui logic.  That is, running vanilla 
ipython one can:

In [1]: from pylab import *

In [2]: ion()

In [3]: plot([1,2,3])
Out[3]: [<matplotlib.lines.Line2D object at 0x3f3c350>]

and the plot appears with full interaction, courtesy of the 
PyOS_InputHook mechanism used by default in tk, gtk, and qt4.  If mpl 
simply adopted the new ipython code to add this capability to wx, then 
wx* backends would be included.  The advantage over leaving this in 
ipython is that it would give mpl more uniform behavior regardless of 
whether it is run in ipython or elsewhere.

Sometimes one wants mpl's show() to have blocking behavior.  At present 
it blocks when mpl is not in interactive mode.  The blocking is 
implemented by starting the gui event loop.

One very useful service ipython provides is enabling mpl scripts with 
show() to be run in non-blocking mode.  I think this would be even 
better if one could easily choose whether to respect the interactive 
setting.  Then, one could either run a script in ipython exactly as it 
would be run from the command line--that is, blocking at each show() if 
not in interactive mode--or one could run it as at present in pylab 
mode.  I think this could be done with simple modifications of the pylab 
mode code.

I have no idea how all this will be affected by the proposed two-process 
model for ipython.

>
> Current situation
> =============
>
> Both matplotlib and ets have code that tries to:
>
> * See what GUI toolkit is being used
> * Get the global App object if it already exists, if not create it.
> * See if the main loop is running, if not possibly start it.
>
> All of this logic makes many assumptions about how IPython affects the
> answers to these questions.  Because IPython's GUI support has changed
> in significant
> ways, current matplotlib and ets make incorrect decisions about these
> issues (such as trying to
> start the event loop a second time, creating a second main App object,
> etc.) under IPython
> 0.11.  This leads to crashes...

This complexity is the reason why I would like to delegate all gui 
control back to mpl.

>
> Description of GUI support in 0.11
> ==========================
>
> IPython allows GUI event loops to be run in an interactive IPython session.
> This is done using Python's PyOS_InputHook hook which Python calls
> when the :func:`raw_input` function is called and is waiting for user input.
> IPython has versions of this hook for wx, pyqt4 and pygtk.  When the
> inputhook
> is called, it iterates the GUI event loop until a user starts to type
> again.  When the user stops typing, the event loop iterates again.  This
> is how tk works.
>
> When a GUI program is used interactively within IPython, the event loop of
> the GUI should *not* be started. This is because the PyOS_InputHook itself
> is responsible for iterating the GUI event loop.
>
> IPython has facilities for installing the needed input hook for each GUI
> toolkit and for creating the needed main GUI application object. Usually,
> these main application objects should be created only once and for some
> GUI toolkits, special options have to be passed to the application object
> to enable it to function properly in IPython.

I don't know anything about these options.  I think that presently, mpl 
is always making the app object--but it is hard to keep all this 
straight in my head.

>
> What we need to decide
> ===================
>
> We need to answer the following questions:
>
> * Who is responsible for creating the main GUI application object, IPython
>   or third parties (matplotlib, enthought.traits, etc.)?
>

At least for mpl, mpl always needs to be *able* to make it, since it 
can't depend on being run in ipython.  Therefore it seems simpler if mpl 
always *does* make it.

> * What is the proper way for third party code to detect if a GUI application
>   object has already been created?  If one has been created, how should
>   the existing instance be retrieved?
>

It would be simpler if third party code (mpl) did not *have* to do all 
this--if it could simply assume that it was responsible for creating and 
destroying the app object.  But maybe this is naive.


> * If a GUI application object has been created, how should third party code
>   detect if the GUI event loop is running. It is not sufficient to call the
>   relevant function methods in the GUI toolkits (like ``IsMainLoopRunning``)
>   because those don't know if the GUI event loop is running through the
>   input hook.
>

Again, it seems so much simpler if the third party code can be left in 
control of all this, so the question does not even arise.

> * We might need a way for third party code to determine if it is running
>   in IPython or not.  Currently, the only way of running GUI code in IPython
>   is by using the input hook, but eventually, GUI based versions of IPython
>  will allow the GUI event loop to run in the more traditional manner. We will need
>   a way for third party code to distinguish between these two cases.
>

What are the non-hook methods you have in mind?  Maybe this option makes 
my proposed, or hoped-for, simplification impossible.

> While we are focused on other things right now (the kernel/frontend) we
> would love to hear your thoughts on these issues.  Implementing a
> solution shouldn't be too difficult.

Another vague thought:  If we really need a more flexible environment, 
then maybe the way to achieve it is with a separate package or module 
that provides the API for collaboration between, e.g., ipython and mpl. 
  Perhaps all the toolkit-specific event loop code could be factored out 
and wrapped in a toolkit-neutral API.  Then, an mpl interactive backend 
would use this API regardless of whether mpl is running in a script, or 
inside ipython.  In the latter case, ipython would be using the same 
API, providing centralized knowledge of, and control over, the app 
object and the loop.  I think that such a refactoring, largely combining 
existing functionality in ipython and mpl, might not be terribly 
difficult, and might make future improvements in functionality much 
easier.  It would also make it easier for other libraries to plug into 
ipython, collaborate with mpl, etc.

Even if the idea above is sound--and it may be completely 
impractical--the devil is undoubtedly in the details.
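
To make it a bit more concrete, here is a purely hypothetical sketch of
what such a toolkit-neutral module might export (every name below is
invented for illustration; nothing like this exists yet):

# Maintained by whoever drives the loop (an input hook, a script that
# called start_event_loop, ...), so the answer is always centralized.
_loop_owners = {}   # e.g. {'wx': 'inputhook'} or {'gtk': 'mainloop'}

def is_event_loop_running(toolkit):
    """True if the loop runs normally *or* via PyOS_InputHook."""
    return toolkit in _loop_owners

def start_event_loop(toolkit, app):
    """Start the native loop only if nobody else is driving it."""
    if is_event_loop_running(toolkit):
        return
    _loop_owners[toolkit] = 'mainloop'
    app.MainLoop()   # wx spelling; each toolkit would get its own branch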

Eric

>
> Cheers,
>
> Brian
>
>
>
> On Sun, Jul 25, 2010 at 11:10 AM, Gael Varoquaux
> <gael.varoquaux at normalesup.org <mailto:gael.varoquaux at normalesup.org>>
> wrote:
>
>     With the 0.11 series of IPython, I no longer understand how the
>     interaction with the GUI mainloop occurs:
>
>     ----------------------------------------------------------------------
>     $ ipython -wthread
>
>     In [1]: import wx
>
>     In [2]: wx.App.IsMainLoopRunning()
>     Out[2]: False
>     ----------------------------------------------------------------------
>
>     ----------------------------------------------------------------------
>     $ ipython -q4thread
>     In [1]: from PyQt4 import QtGui
>
>     In [2]: type(QtGui.QApplication.instance())
>     Out[2]: <type 'NoneType'>
>     ----------------------------------------------------------------------
>
>     Is there a mainloop running or not? If not, I really don't
>     understand how
>     I get interactivity with GUI windows and I'd love an explanation or a
>     pointer.
>
>     The problem with this behavior is that there is a lot of code that
>     checks
>     if a mainloop is running, and if not starts one. This code thus blocks
>     IPython and more or less defeats the purpose of the GUI options.
>
>     Cheers,
>
>     Gaël
>     _______________________________________________
>     IPython-dev mailing list
>     IPython-dev at scipy.org <mailto:IPython-dev at scipy.org>
>     http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu <mailto:bgranger at calpoly.edu>
> ellisonbg at gmail.com <mailto:ellisonbg at gmail.com>
>
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev



From gael.varoquaux at normalesup.org  Sun Jul 25 16:35:00 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 25 Jul 2010 22:35:00 +0200
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
Message-ID: <20100725203500.GA2587@phare.normalesup.org>

On Sun, Jul 25, 2010 at 12:10:12PM -0700, Brian Granger wrote:
>    * It is not sufficient to call the relevant function methods in the
>    GUI toolkits (like ``IsMainLoopRunning``) because those don't know
>    if the GUI event loop is running through the input hook.

OK, so this is the key part that I had missed. I could even call this a
bug of the various toolkits.

Is there any way to find out if the GUI event loop is running through
the input hook at all?

>    Both matplotlib and ets have code that tries to [snip]

The problem is a bit larger: it's not only about matplotlib, ets and
IPython, it's a fairly general practice for plugin-like code to check if
the eventloop is running before starting it.

So, we have a problem and no solution (yet). The good news (I guess) is
that IPython 0.11 is not in production yet. I am just worried about the
bug reports landing in the various packages.

Thanks for your explanations. If you have any suggestions, I am open to
try things out in Mayavi (which apart for this problem works just fine
with 0.11).

Gaël


From gael.varoquaux at normalesup.org  Sun Jul 25 16:56:07 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 25 Jul 2010 22:56:07 +0200
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <4C4C9FF6.8090503@hawaii.edu>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
Message-ID: <20100725205607.GB2587@phare.normalesup.org>

On Sun, Jul 25, 2010 at 10:35:02AM -1000, Eric Firing wrote:
> Although ipython has provided invaluable service to mpl by enabling 
> interactive plotting for all gui backends, I am not at all sure that 
> this functionality should be left to ipython in the long run.  The 
> problem is that mpl is used in a variety of ways and environments.  Gui 
> functionality is central to mpl; it seems odd, and unnecessarily 
> complicated, to have to delegate part of that to an environment, or 
> shell, like ipython.

Wow, I just did a little experiment, and I really don't understand the
outcome. Please bear with me:

$ ipython

In [1]: !cat /home/varoquau/.matplotlib/matplotlibrc
backend     : GtkAgg 

In [2]: from pylab import *

In [3]: ion()

In [4]: plot([1,2,3])
Out[4]: [<matplotlib.lines.Line2D object at 0xccb4dac>]

In [5]: from enthought.mayavi import mlab

In [6]: mlab.test_surf()
Out[6]: <enthought.mayavi.modules.surface.Surface object at 0xd58ce0c>

Two things I do not understand:

    1) I can interact alright with the Mayavi plot, nice and fine,
       even though there is no wx event loop running, and I did not
       register an InputHook

    2) I did not get a segfault, while I am running GTK and Wx at the
       same time. This used to be a big no-no.

I believe that 1 is due to matplotlib registering an InputHook, but I
cannot find where it is done. Also, does this seem to mean that under
Linux GTK input hooks work for Wx (and they are nicer since they don't
poll)?

Anyhow, this is good news, even though I don't understand it at all.

Gaël


From ellisonbg at gmail.com  Sun Jul 25 17:05:07 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:05:07 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <20100725203500.GA2587@phare.normalesup.org>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<20100725203500.GA2587@phare.normalesup.org>
Message-ID: <AANLkTinZwqVqn9j9qt_T-kgmnofa94Jm_cON-6bsw0yx@mail.gmail.com>

On Sun, Jul 25, 2010 at 1:35 PM, Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:

> On Sun, Jul 25, 2010 at 12:10:12PM -0700, Brian Granger wrote:
> >    * It is not sufficient to call the  relevant function methods in the
> >    GUI toolkits (like ``IsMainLoopRunning``)  because those don't know
> >    if the GUI event loop is running through the  input hook.
>
> OK, so this is the key part that I had missed. I could even call this a
> bug of the various toolkits.
>
> Is there any way to find out if the GUI event loop is running through
> the input hook at all?
>

Yes:

from IPython.lib import inputhook
inputhook.current_gui()

The possible values are:

GUI_WX = 'wx'
GUI_QT = 'qt'
GUI_QT4 = 'qt4'
GUI_GTK = 'gtk'
GUI_TK = 'tk'
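
So a library like Mayavi or mpl could do something like this (a sketch;
guard the import since the code may run outside IPython entirely):

def gui_via_inputhook():
    """Name of the GUI driven by IPython's input hook, or None."""
    try:
        from IPython.lib import inputhook
    except ImportError:
        return None   # not running anywhere near IPython 0.11
    return inputhook.current_gui()

def maybe_start_wx_loop(app):
    if gui_via_inputhook() == 'wx':
        return            # the input hook already iterates the loop
    if not app.IsMainLoopRunning():
        app.MainLoop()    # plain script: start the loop ourselves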

>    Both matplotlib and ets have code that tries to [snip]
>
> The problem is a bit larger: it's not only about matplotlib, ets and
> IPython, it's a fairly general practice for plugin-like code to check if
> the eventloop is running before starting it.
>
>
Yes, and if such projects want to run in IPython, they are going to have to
add additional logic.  We spent a very long time seeing whether this could
be avoided, and it cannot.  Our hope is that the additional logic can be
absolutely minimal.


> So, we have a problem and no solution (yet). The good news (I guess) is
> that IPython 0.11 is not in production yet. I am just worried about the
> bug reports landing in the various packages.
>

Yes, this stuff is definitely not release ready.

Thanks for your explanations. If you have any suggestions, I am open to
> try things out in Mayavi (which apart for this problem works just fine
> with 0.11).
>
>
Great.  I think we will need to wait until we have done the GUI integration
for the kernel/frontend before finalizing things, because there, the GUI
integration will be quite different.

Cheers,

Brian


> Gaël
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100725/3ab4cabc/attachment.html>

From efiring at hawaii.edu  Sun Jul 25 17:05:53 2010
From: efiring at hawaii.edu (Eric Firing)
Date: Sun, 25 Jul 2010 11:05:53 -1000
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <20100725205607.GB2587@phare.normalesup.org>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
	<20100725205607.GB2587@phare.normalesup.org>
Message-ID: <4C4CA731.6030802@hawaii.edu>

On 07/25/2010 10:56 AM, Gael Varoquaux wrote:
> On Sun, Jul 25, 2010 at 10:35:02AM -1000, Eric Firing wrote:
>> Although ipython has provided invaluable service to mpl by enabling
>> interactive plotting for all gui backends, I am not at all sure that
>> this functionality should be left to ipython in the long run.  The
>> problem is that mpl is used in a variety of ways and environments.  Gui
>> functionality is central to mpl; it seems odd, and unnecessarily
>> complicated, to have to delegate part of that to an environment, or
>> shell, like ipython.
>
> Wow, I just did a little experiment, and I really don't understand the
> outcome. Please bear with me:
>
> $ ipython
>
> In [1]: !cat /home/varoquau/.matplotlib/matplotlibrc
> backend     : GtkAgg
>
> In [2]: from pylab import *
>
> In [3]: ion()
>
> In [4]: plot([1,2,3])
> Out[4]: [<matplotlib.lines.Line2D object at 0xccb4dac>]
>
> In [5]: from enthought.mayavi import mlab
>
> In [6]: mlab.test_surf()
> Out[6]:<enthought.mayavi.modules.surface.Surface object at 0xd58ce0c>
>
> Two things I do not understand:
>
>      1) I can interact alright with the Mayavi plot, nice and fine,
>         even though there is no wx event loop running, and I did not
>         register an InputHook
>
>      2) I did not get a segfault, while I am running GTK and Wx at the
>         same time. This used to be a big no-no.
>
> I believe that 1 is due to matplotlib registering an InputHook, but I
> cannot find where it is done. Also, does this seem to mean that under
> Linux GTK input hooks work for Wx (and they are nicer since they don't
> poll)?

No, mpl is not registering an InputHook, but pygtk is.  Maybe this is 
having a side effect because wx on linux is a wrapper around gtk.

To get a hook registered explicitly for wx, you need to use "ipython 
--gui wx"

Eric

>
> Anyhow, this is good news, even though I don't understand it at all.
>
> Gaël



From gael.varoquaux at normalesup.org  Sun Jul 25 17:12:16 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 25 Jul 2010 23:12:16 +0200
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <AANLkTinZwqVqn9j9qt_T-kgmnofa94Jm_cON-6bsw0yx@mail.gmail.com>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<20100725203500.GA2587@phare.normalesup.org>
	<AANLkTinZwqVqn9j9qt_T-kgmnofa94Jm_cON-6bsw0yx@mail.gmail.com>
Message-ID: <20100725211216.GA8338@phare.normalesup.org>

On Sun, Jul 25, 2010 at 02:05:07PM -0700, Brian Granger wrote:
>      Is there any way to find out if the GUI event loop is running through
>      the input hook at all?

>    Yes:
>    from IPython.lib import inputhook
>    inputhook.current_gui()

OK, but that's IPython-specific. It's an option for Mayavi, though.

>    Great.  I think we will need to wait until we have done the GUI
>    integration for the kernel/frontend before finalizing things, because
>    there, the GUI integration will be quite different.

Indeed. I guess that in the meantime I'll just do nothing, and if people
want to work with IPython 0.11, they should avoid calling mlab.show().

Thanks for your advice.

Gaël


From gael.varoquaux at normalesup.org  Sun Jul 25 17:14:16 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 25 Jul 2010 23:14:16 +0200
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <4C4CA731.6030802@hawaii.edu>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
	<20100725205607.GB2587@phare.normalesup.org>
	<4C4CA731.6030802@hawaii.edu>
Message-ID: <20100725211416.GB8338@phare.normalesup.org>

On Sun, Jul 25, 2010 at 11:05:53AM -1000, Eric Firing wrote:
>> I believe that 1 is due to matplotlib registering an InputHook, but I
>> cannot find where it is done. Also, does this seems to mean that under
>> Linux GTK input hooks work for Wx (and they are nicer since they don't
>> poll).
>
> No, mpl is not registering an InputHook, but pygtk is.  Maybe this is  
> having a side effect because wx on linux is a wrapper around gtk.

Interesting. It's actually very nice. I wonder if IPython could use this
to avoid the current polling loop in wx which is fairly annoying.

Gaël


From efiring at hawaii.edu  Sun Jul 25 17:16:48 2010
From: efiring at hawaii.edu (Eric Firing)
Date: Sun, 25 Jul 2010 11:16:48 -1000
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <20100725211216.GA8338@phare.normalesup.org>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<20100725203500.GA2587@phare.normalesup.org>
	<AANLkTinZwqVqn9j9qt_T-kgmnofa94Jm_cON-6bsw0yx@mail.gmail.com>
	<20100725211216.GA8338@phare.normalesup.org>
Message-ID: <4C4CA9C0.50805@hawaii.edu>

On 07/25/2010 11:12 AM, Gael Varoquaux wrote:
> On Sun, Jul 25, 2010 at 02:05:07PM -0700, Brian Granger wrote:
>>       Is there any way to find out if the GUI event loop is running through
>>       the input hook at all?
>
>>     Yes:
>>     from IPython.lib import inputhook
>>     inputhook.current_gui()
>
> OK, but that's IPython-specific. It's an option for Mayavi, though.
>
>>     Great.  I think we will need to wait until we have done the GUI
>>     integration for the kernel/frontend before finalizing things, because
>>     there, the GUI integration will be quite different.
>
> Indeed. I guess that in the meantime I'll just do nothing, and if people
> want to work with IPython 0.11, they should avoid calling mlab.show().

Gael,

I haven't looked at mlab.show(), but if it is derived from earlier 
matplotlib show(), then you might want to take a look at how show() is 
now implemented in mpl.  It works well with ipython 0.10 and 0.11.

Eric

>
> Thanks for your advice.
>
> Gaël
>
>
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev



From ellisonbg at gmail.com  Sun Jul 25 17:22:42 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:22:42 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <4C4C9FF6.8090503@hawaii.edu>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
Message-ID: <AANLkTi=6zgwTKzCfbvUOocMccDa93+iK-Yq-ausmzVxz@mail.gmail.com>

On Sun, Jul 25, 2010 at 1:35 PM, Eric Firing <efiring at hawaii.edu> wrote:

> On 07/25/2010 09:10 AM, Brian Granger wrote:
> > Gael,
> >
> > Great questions.  The short answer is that the traditional methods of
> > discovering if the event loop is running won't work.  This issue will
> > become even more complicated when we implement GUI integration in the
> > new 2 process frontend/kernel.  We still need to decide how we are going
> > to handle this.  Here was the last email we sent out a long time ago
> > that didn't really get any response:
>
> Brian,
>
> I've been looking at that old message for a couple of weeks, trying to
> figure out how to respond from the mpl perspective.  I'm still quite
> uncertain, and I would be pleased to see people with a better
> understanding of gui toolkits, event loops, and ipython step in.
>

Part of the challenge is that Fernando and I (who have done the GUI work in
IPython) don't know GUI toolkits very well.



> Preliminary thoughts:
>
> Although ipython has provided invaluable service to mpl by enabling
> interactive plotting for all gui backends, I am not at all sure that
> this functionality should be left to ipython in the long run.  The
> problem is that mpl is used in a variety of ways and environments.  Gui
> functionality is central to mpl; it seems odd, and unnecessarily
> complicated, to have to delegate part of that to an environment, or
> shell, like ipython.
>

The challenge is that other projects (traits, mayavi, chaco, etc.) need
these capabilities as well.  They either need to be in IPython or a separate
project.

At present, for most backends, interactive mpl plotting is possible in
> ipython without any of ipython's gui logic.  That is, running vanilla
> ipython one can:
>
> In [1]: from pylab import *
>
> In [2]: ion()
>
> In [3]: plot([1,2,3])
> Out[3]: [<matplotlib.lines.Line2D object at 0x3f3c350>]
>
> and the plot appears with full interaction, courtesy of the
> PyOS_InputHook mechanism used by default in tk, gtk, and qt4.  If mpl
> simply adopted the new ipython code to add this capability to wx, then
> wx* backends would be included.  The advantage over leaving this in
> ipython is that it would give mpl more uniform behavior regardless of
> whether it is run in ipython or elsewhere.
>

Yes, tk, gtk and qt4 already use the input hook mechanism and it doesn't
require IPython in any way.  At some level all the 0.11 IPython GUI support
does is implement a PyOS_InputHook for wx and then provide a single
interface for managing any GUI toolkit.  Maybe this code will eventually
make its way into wx, but even then, it makes sense to have a single nice
interface for this.


> Sometimes one wants mpl's show() to have blocking behavior.  At present
> it blocks when mpl is not in interactive mode.  The blocking is
> implemented by starting the gui event loop.
>
> One very useful service ipython provides is enabling mpl scripts with
> show() to be run in non-blocking mode.  I think this would be even
> better if one could easily choose whether to respect the interactive
> setting.  Then, one could either run a script in ipython exactly as it
> would be run from the command line--that is, blocking at each show() if
> not in interactive mode--or one could run it as at present in pylab
> mode.  I think this could be done with simple modifications of the pylab
> mode code.
>
>
Yes.


> I have no idea how all this will be affected by the proposed two-process
> model for ipython.
>
>
The two process model will not use the inputhook stuff at all.  It will
simply start a full GUI eventloop in the kernel process.  Because it is a
very different approach from the inputhook approach, we will need to think
carefully about what interface we provide to projects like mpl.
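
To illustrate the shape of it, a very rough sketch of a Qt-based kernel
(Qt chosen arbitrarily; the socket setup and message handling here are
simplified stand-ins for the real kernel code):

import zmq
from PyQt4 import QtCore, QtGui

app = QtGui.QApplication([])
ctx = zmq.Context()
reply_socket = ctx.socket(zmq.XREP)
reply_socket.bind('tcp://127.0.0.1:5555')

def poll_requests():
    # Drain pending requests without ever blocking the GUI loop.
    while True:
        try:
            ident = reply_socket.recv(zmq.NOBLOCK)  # routing identity
        except zmq.ZMQError:
            return                                  # EAGAIN: queue empty
        msg = reply_socket.recv_json()              # the request itself
        # ... dispatch to execute_request & co., reply via `ident` ...

timer = QtCore.QTimer()
timer.timeout.connect(poll_requests)
timer.start(50)     # check the socket every 50 ms
app.exec_()         # the kernel's *real*, traditionally-started loop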


> >
> > Current situation
> > =============
> >
> > Both matplotlib and ets have code that tries to:
> >
> > * See what GUI toolkit is being used
> > * Get the global App object if it already exists, if not create it.
> > * See if the main loop is running, if not possibly start it.
> >
> > All of this logic makes many assumptions about how IPython affects the
> > answers to these questions.  Because IPython's GUI support has changed
> > in significant
> > ways, current matplotlib and ets make incorrect decisions about these
> > issues (such as trying to
> > start the event loop a second time, creating a second main App object,
> > etc.) under IPython
> > 0.11.  This leads to crashes...
>
> This complexity is the reason why I would like to delegate all gui
> control back to mpl.
>
>
We can't really do that.  The issue is that people want to use both mpl and
traits and chaco in the same code.  If mpl is completely responsible for the
GUI stuff, how will traits and chaco configure their GUI stuff?  The usual
approach of seeing if there is a global app and using it won't always work.
If the event loop is running using PyOS_InputHook, how will
mpl/chaco/traits tell if the event loop is running?


> >
> > Description of GUI support in 0.11
> > ==========================
> >
> > IPython allows GUI event loops to be run in an interactive IPython
> session.
> > This is done using Python's PyOS_InputHook hook which Python calls
> > when the :func:`raw_input` function is called and is waiting for user
> input.
> > IPython has versions of this hook for wx, pyqt4 and pygtk.  When the
> > inputhook
> > is called, it iterates the GUI event loop until a user starts to type
> > again.  When the user stops typing, the event loop iterates again.  This
> > is how tk works.
> >
> > When a GUI program is used interactively within IPython, the event loop
> of
> > the GUI should *not* be started. This is because the PyOS_InputHook
> itself
> > is responsible for iterating the GUI event loop.
> >
> > IPython has facilities for installing the needed input hook for each GUI
> > toolkit and for creating the needed main GUI application object. Usually,
> > these main application objects should be created only once and for some
> > GUI toolkits, special options have to be passed to the application object
> > to enable it to function properly in IPython.
>
> I don't know anything about these options.  I think that presently, mpl
> is always making the app object--but it is hard to keep all this
> straight in my head.
>
>
Not quite.  When mpl is run in pylab mode in IPython, IPython always creates
the App object.  It then monkey patches the App creation methods to return
the existing version.  Thus, while it looks like mpl is creating the App
objects, it isn't.  This type of monkey patching doesn't play well with the
inputhook stuff.
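
The patching amounts to roughly this (a sketch from memory, not the
actual IPython code; wx shown, the other toolkits get the same
treatment):

import wx

_app = wx.App(redirect=False)     # IPython creates the one real App

def _return_existing(*args, **kwargs):
    return _app                   # hand back the existing instance

wx.App = _return_existing         # later wx.App(...) / wx.PySimpleApp(...)
wx.PySimpleApp = _return_existing # calls silently become no-ops

It is also why it breaks: once the name is rebound, code that subclasses
wx.App or does isinstance checks against it stops working.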


> >
> > What we need to decide
> > ===================
> >
> > We need to answer the following questions:
> >
> > * Who is responsible for creating the main GUI application object,
> IPython
> >   or third parties (matplotlib, enthought.traits, etc.)?
> >
>
> At least for mpl, mpl always needs to be *able* to make it, since it
> can't depend on being run in ipython.  Therefore it seems simpler if mpl
> always *does* make it.
>
>
This logic has to be conditional.  mpl will have to first look somewhere???
to see if someone else (IPython, traits, chaco) has created it and if the
event loop is running.  This is the trick.


> > * What is the proper way for third party code to detect if a GUI
> application
> >   object has already been created?  If one has been created, how should
> >   the existing instance be retrieved?
> >
>
> It would be simpler if third party code (mpl) did not *have* to do all
> this--if it could simply assume that it was responsible for creating and
> destroying the app object.  But maybe this is naive.
>
>
Because multiple libraries all want to do GUI stuff simultaneously, there
has to be a way for all of them to coordinate.


>
> > * If a GUI application object has been created, how should third party
> code
> >   detect if the GUI event loop is running. It is not sufficient to call
> the
> >   relevant function methods in the GUI toolkits (like
> ``IsMainLoopRunning``)
> >   because those don't know if the GUI event loop is running through the
> >   input hook.
> >
>
> Again, it seems so much simpler if the third party code can be left in
> control of all this, so the question does not even arise.
>
> > * We might need a way for third party code to determine if it is running
> >   in IPython or not.  Currently, the only way of running GUI code in
> IPython
> >   is by using the input hook, but eventually, GUI based versions of
> IPython
> >   will allow the GUI event loop to run in the more traditional manner. We will
> need
> >   a way for third party code to distinguish between these two cases.
> >
>
> What are the non-hook methods you have in mind?  Maybe this option makes
> my proposed, or hoped-for, simplification impossible.
>
>
The two process kernel/frontend will simply start the event loop in the
kernel in the traditional way (not via the inputhook).  It has to do this
because the entire kernel will be based on that event loop.  We have
thought about whether we could reuse the inputhook stuff there, and it
won't work.


> > While we are focused on other things right now (the kernel/frontend) we
> > would love to hear your thoughts on these issues.  Implementing a
> > solution shouldn't be too difficult.
>
> Another vague thought:  If we really need a more flexible environment,
> then maybe the way to achieve it is with a separate package or module
> that provides the API for collaboration between, e.g., ipython and mpl.
>  Perhaps all the toolkit-specific event loop code could be factored out
> and wrapped in a toolkit-neutral API.  Then, an mpl interactive backend
> would use this API regardless of whether mpl is running in a script, or
> inside ipython.  In the latter case, ipython would be using the same
> API, providing centralized knowledge of, and control over, the app
> object and the loop.  I think that such a refactoring, largely combining
> existing functionality in ipython and mpl, might not be terribly
> difficult, and might make future improvements in functionality much
> easier.  It would also make it easier for other libraries to plug into
> ipython, collaborate with mpl, etc.
>
>
This might make sense, and as we move forward we should see whether it
does.  My first thought, though, is that I don't want to have to track
yet another project.


> Even if the idea above is sound--and it may be completely
> impractical--the devil is undoubtedly in the details.
>
>
And there are many in this case.  Thanks for participating in the
discussion.

Brian



> Eric
>
> >
> > Cheers,
> >
> > Brian
> >
> >
> >
> > On Sun, Jul 25, 2010 at 11:10 AM, Gael Varoquaux
> > <gael.varoquaux at normalesup.org <mailto:gael.varoquaux at normalesup.org>>
> > wrote:
> >
> >     With the 0.11 series of IPython, I no longer understand how the
> >     interaction with the GUI mainloop occurs:
> >
> >
> ----------------------------------------------------------------------
> >     $ ipython -wthread
> >
> >     In [1]: import wx
> >
> >     In [2]: wx.App.IsMainLoopRunning()
> >     Out[2]: False
> >
> ----------------------------------------------------------------------
> >
> >
> ----------------------------------------------------------------------
> >     $ ipython -q4thread
> >     In [1]: from PyQt4 import QtGui
> >
> >     In [2]: type(QtGui.QApplication.instance())
> >     Out[2]: <type 'NoneType'>
> >
> ----------------------------------------------------------------------
> >
> >     Is there a mainloop running or not? If not, I really don't
> >     understand how
> >     I get interactivity with GUI windows and I'd love an explanation or
> a
> >     pointer.
> >
> >     The problem with this behavior is that there is a lot of code that
> >     checks
> >     if a mainloop is running, and if not starts one. This code thus
> blocks
> >     IPython and more or less defeats the purpose of the GUI options.
> >
> >     Cheers,
> >
> >     Gaël
> >     _______________________________________________
> >     IPython-dev mailing list
> >     IPython-dev at scipy.org <mailto:IPython-dev at scipy.org>
> >     http://mail.scipy.org/mailman/listinfo/ipython-dev
> >
> >
> >
> >
> > --
> > Brian E. Granger, Ph.D.
> > Assistant Professor of Physics
> > Cal Poly State University, San Luis Obispo
> > bgranger at calpoly.edu <mailto:bgranger at calpoly.edu>
> > ellisonbg at gmail.com <mailto:ellisonbg at gmail.com>
> >
> >
> >
> > _______________________________________________
> > IPython-dev mailing list
> > IPython-dev at scipy.org
> > http://mail.scipy.org/mailman/listinfo/ipython-dev
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100725/8ba1a657/attachment.html>

From ellisonbg at gmail.com  Sun Jul 25 17:25:00 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:25:00 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <4C4CA731.6030802@hawaii.edu>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
	<20100725205607.GB2587@phare.normalesup.org>
	<4C4CA731.6030802@hawaii.edu>
Message-ID: <AANLkTimNJhr7TY9V5N6bZgjsB+JZqHE29GHnpcSSm7Lk@mail.gmail.com>

On Sun, Jul 25, 2010 at 2:05 PM, Eric Firing <efiring at hawaii.edu> wrote:

> On 07/25/2010 10:56 AM, Gael Varoquaux wrote:
> > On Sun, Jul 25, 2010 at 10:35:02AM -1000, Eric Firing wrote:
> >> Although ipython has provided invaluable service to mpl by enabling
> >> interactive plotting for all gui backends, I am not at all sure that
> >> this functionality should be left to ipython in the long run.  The
> >> problem is that mpl is used in a variety of ways and environments.  Gui
> >> functionality is central to mpl; it seems odd, and unnecessarily
> >> complicated, to have to delegate part of that to an environment, or
> >> shell, like ipython.
> >
> > Wow, I just did a little experiment, and I really don't understand the
> > outcome. Please bear with me:
> >
> > $ ipython
> >
> > In [1]: !cat /home/varoquau/.matplotlib/matplotlibrc
> > backend     : GtkAgg
> >
> > In [2]: from pylab import *
> >
> > In [3]: ion()
> >
> > In [4]: plot([1,2,3])
> > Out[4]: [<matplotlib.lines.Line2D object at 0xccb4dac>]
> >
> > In [5]: from enthought.mayavi import mlab
> >
> > In [6]: mlab.test_surf()
> > Out[6]:<enthought.mayavi.modules.surface.Surface object at 0xd58ce0c>
> >
> > Two things I do not understand:
> >
> >      1) I can interact alright with the Mayavi plot, nice and fine,
> >         even though there is no wx event loop running, and I did not
> >         register an InputHook
> >
> >      2) I did not get a segfault, while I am running GTK and Wx at the
> >         same time. This used to be a big no-no.
> >
> > I believe that 1 is due to matplotlib registering an InputHook, but I
> > cannot find where it is done. Also, does this seem to mean that under
> > Linux GTK input hooks work for Wx (and they are nicer since they don't
> > poll)?
>
> No, mpl is not registering an InputHook, but pygtk is.  Maybe this is
> having a side effect because wx on linux is a wrapper around gtk.
>
> To get a hook registered explicitly for wx, you need to use "ipython
> --gui wx"
>
>
I should clarify.  All IPython is doing in 0.11 for qt4, gtk and tk is to
tell each GUI toolkit to install its inputhook.  Here is the gtk version:

http://github.com/ipython/ipython/blob/master/IPython/lib/inputhook.py#L457

Part of the difficulty is that each GUI toolkit has a different API for
doing this.  We make the API uniform and add a wx inputhook using ctypes.
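
For gtk, for example, the whole thing boils down to something like this
(a sketch; if memory serves, the call is the one below, and recent pygtk
installs its hook on import anyway, which is why Gael's GtkAgg session
"just worked"):

import gtk
gtk.set_interactive(True)   # ask pygtk to install its PyOS_InputHook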

Cheers,

Brian


> Eric
>
> >
> > Anyhow, this is good news, even though I don't understand it at all.
> >
> Gaël
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100725/9f84dc63/attachment.html>

From ellisonbg at gmail.com  Sun Jul 25 17:28:44 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:28:44 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <20100725205607.GB2587@phare.normalesup.org>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
	<20100725205607.GB2587@phare.normalesup.org>
Message-ID: <AANLkTiksKrk_xL0-ijGTsi7-ZKvHpbXP9P8=zP=HoWTn@mail.gmail.com>

On Sun, Jul 25, 2010 at 1:56 PM, Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:

> On Sun, Jul 25, 2010 at 10:35:02AM -1000, Eric Firing wrote:
> > Although ipython has provided invaluable service to mpl by enabling
> > interactive plotting for all gui backends, I am not at all sure that
> > this functionality should be left to ipython in the long run.  The
> > problem is that mpl is used in a variety of ways and environments.  Gui
> > functionality is central to mpl; it seems odd, and unnecessarily
> > complicated, to have to delegate part of that to an environment, or
> > shell, like ipython.
>
> Wow, I just did a little experiment, and I really don't understand the
> outcome. Please bear with me:
>
> $ ipython
>
> In [1]: !cat /home/varoquau/.matplotlib/matplotlibrc
> backend     : GtkAgg
>
> In [2]: from pylab import *
>
> In [3]: ion()
>
> In [4]: plot([1,2,3])
> Out[4]: [<matplotlib.lines.Line2D object at 0xccb4dac>]
>
> In [5]: from enthought.mayavi import mlab
>
> In [6]: mlab.test_surf()
> Out[6]: <enthought.mayavi.modules.surface.Surface object at 0xd58ce0c>
>
> Two things I do not understand:
>
>    1) I can interact alright with the Mayavi plot, nice and fine,
>       even though there is no wx event loop running, and I did not
>       register an InputHook
>
>
This is because pygtk automagically and by default registers a
PyOS_InputHook.


>    2) I did not get a segfault, while I am running GTK and Wx at the
>       same time. This used to be a big no-no.
>
>
The reason this works is that on Linux both GTK and Wx use the same
underlying eventloop.  The same is true of qt4+wx on Mac.  As long as the
underlying eventloop is iterated, events from both GUI toolkits can get
handled.  But this is a very OS dependent trick.


> I believe that 1 is due to matplotlib registering an InputHook, but I
> cannot find where it is done. Also, does this seem to mean that under
> Linux GTK input hooks work for Wx (and they are nicer since they don't
> poll)?
>
>
Yes, you are right, the gtk inputhook does work for wx on Linux, but that
requires gtk to be installed to use wx.  Don't get used to this, though,
as this type of thing won't work in the two process kernel.

Cheers,

Brian


> Anyhow, this is good news, even though I don't understand it at all.
>
> Gaël
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100725/aca40d2e/attachment.html>

From ellisonbg at gmail.com  Sun Jul 25 17:30:11 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:30:11 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <20100725211416.GB8338@phare.normalesup.org>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
	<20100725205607.GB2587@phare.normalesup.org>
	<4C4CA731.6030802@hawaii.edu>
	<20100725211416.GB8338@phare.normalesup.org>
Message-ID: <AANLkTikZxedwg24cdHr+pmnwuu0yDTJ8C0dK8bxf_U-a@mail.gmail.com>

On Sun, Jul 25, 2010 at 2:14 PM, Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:

> On Sun, Jul 25, 2010 at 11:05:53AM -1000, Eric Firing wrote:
> >> I believe that 1 is due to matplotlib registering an InputHook, but I
> >> cannot find where it is done. Also, does this seem to mean that under
> >> Linux GTK input hooks work for Wx (and they are nicer, since they don't
> >> poll)?
> >
> > No, mpl is not registering an InputHook, but pygtk is.  Maybe this is
> > having a side effect because wx on linux is a wrapper around gtk.
>
> Interesting. It's actually very nice. I wonder if IPython could use this
> to avoid the current polling loop in wx, which is fairly annoying.
>
>
As you noted, on Linux, the gtk inputhook will work for wx (OK, there could
be weird edge cases that fail).  But the reason the wx inputhook has to
poll is that wx does not support triggering events on file descriptor
reads/writes.  It is a limitation of wx.
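To make the polling concrete, here is a rough sketch of what such a wx
input hook looks like (modeled on the idea above, not IPython's actual
implementation; the stdin check shown is *nix-only):

import select, sys, time
import wx

def stdin_ready():
    # *nix-only readiness check; Windows would use msvcrt.kbhit() instead.
    return bool(select.select([sys.stdin], [], [], 0)[0])

def inputhook_wx():
    # Pump pending wx events, then sleep briefly; repeat until the user
    # types something.  The sleep is the polling cost wx forces on us,
    # since it cannot wake up on file-descriptor activity.
    app = wx.GetApp()
    if app is not None:
        evtloop = wx.EventLoop()
        activator = wx.EventLoopActivator(evtloop)
        while not stdin_ready():
            while evtloop.Pending():
                evtloop.Dispatch()
            app.ProcessIdle()
            time.sleep(0.01)
        del activator
    return 0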

Cheers,

Brian

> Gaël
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From gael.varoquaux at normalesup.org  Sun Jul 25 17:32:29 2010
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 25 Jul 2010 23:32:29 +0200
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <4C4CA9C0.50805@hawaii.edu>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<20100725203500.GA2587@phare.normalesup.org>
	<AANLkTinZwqVqn9j9qt_T-kgmnofa94Jm_cON-6bsw0yx@mail.gmail.com>
	<20100725211216.GA8338@phare.normalesup.org>
	<4C4CA9C0.50805@hawaii.edu>
Message-ID: <20100725213229.GC8338@phare.normalesup.org>

On Sun, Jul 25, 2010 at 11:16:48AM -1000, Eric Firing wrote:
> I haven't looked at mlab.show(), but if it is derived from earlier 
> matplotlib show(), then you might want to take a look at how show() is 
> now implemented in mpl.  It works well with ipython 0.10 and 0.11.

Thanks Eric. mlab's show was not derived from any matplotlib code.
Nonetheless, I had a quick look at SVN matplotlib to figure out how it was
done.

It seems to me that it is done in 'backend_bases.py', line 81, by
checking if IPython added a special attribute to the ShowBase instance.
Thus, it seems to rely on a collaboration between IPython and matplotlib.

Can anyone confirm or refute this?

Cheers,

Gaël


From ellisonbg at gmail.com  Sun Jul 25 17:34:04 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:34:04 -0700
Subject: [IPython-dev] Full block handling finished for interactive
	frontends
In-Reply-To: <AANLkTikvpqLjGZdqxZwf2q-j4iiN9=6F67pHukzK2BxG@mail.gmail.com>
References: <AANLkTikvpqLjGZdqxZwf2q-j4iiN9=6F67pHukzK2BxG@mail.gmail.com>
Message-ID: <AANLkTimfGASkrRiavBLnUQ+A_Nwc1AV-1cw2L=otExb+@mail.gmail.com>

On Sun, Jul 25, 2010 at 2:03 AM, Fernando Perez <fperez.net at gmail.com>wrote:

> Hi folks,
>
> [especially Gerardo] with this commit:
>
>
> http://github.com/fperez/ipython/commit/df85a15e64ca20ac6cb9f32721bd59343397d276
>
> we now have a fully working block splitter that handles a reasonable
> number of test cases.  I haven't yet added static support for ipython
> special syntax (magics, !, etc), but for pure python syntax it's fully
> functional.  A second pair of eyes on this code would be much
> appreciated, as it's the core of our interactive input handling and
> getting it right (at least to this point) took me a surprising amount
> of effort.
>
>
Awesome, this is great and will really help the entire code base.  We can do
a good code review of this when you are ready.


> I'll try to complete the special syntax tomorrow, but even now that
> can be sent to the kernel just fine.
>
>
Cool.



> Gerardo, let me know if you have any  problems using this method.  As
> things stand now, Evan and Omar should be OK using the line-based
> workflow, and you should be able to get your blocks with this code.
> Over the next few days we'll work on landing all of this, and I think
> our architecture is starting to shape up very nicely.
>
>
Brian


> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From ellisonbg at gmail.com  Sun Jul 25 17:35:18 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:35:18 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <20100725213229.GC8338@phare.normalesup.org>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<20100725203500.GA2587@phare.normalesup.org>
	<AANLkTinZwqVqn9j9qt_T-kgmnofa94Jm_cON-6bsw0yx@mail.gmail.com>
	<20100725211216.GA8338@phare.normalesup.org>
	<4C4CA9C0.50805@hawaii.edu>
	<20100725213229.GC8338@phare.normalesup.org>
Message-ID: <AANLkTin8TfnuRgDpO=h-8qj-G5=pnbxugFXwY+M_5KYw@mail.gmail.com>

On Sun, Jul 25, 2010 at 2:32 PM, Gael Varoquaux <
gael.varoquaux at normalesup.org> wrote:

> On Sun, Jul 25, 2010 at 11:16:48AM -1000, Eric Firing wrote:
> > I haven't looked at mlab.show(), but if it is derived from earlier
> > matplotlib show(), then you might want to take a look at how show() is
> > now implemented in mpl.  It works well with ipython 0.10 and 0.11.
>
> Thanks Eric. mlab's show was not derived from any matplotlib code.
> Nonetheless, I had a quick look at SVN matplotlib to figure out how it
> was done.
>
> It seems to me that it is done in 'backend_bases.py', line 81, by
> checking if IPython added a special attribute to the ShowBase instance.
> Thus, it seems to rely on a collaboration between IPython and matplotlib.
>
>
I am not sure, but I wouldn't be surprised.


> Can anyone confirm or refute this?
>
>
Not I.

Brian


> Cheers,
>
> Gaël
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From ellisonbg at gmail.com  Sun Jul 25 17:38:57 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:38:57 -0700
Subject: [IPython-dev] about ipython-zmq
In-Reply-To: <AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com>
References: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com>
	<AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com>
Message-ID: <AANLkTimCAG+mLt=iV2=TqqAxEnxKYq4q+uJ+VSnNHbxX@mail.gmail.com>

2010/7/24 Fernando Perez <fperez.net at gmail.com>

> Hi Omar,
>
> 2010/7/24 Omar Andrés Zapata Mesa <andresete.chaos at gmail.com>:
> > .
> > Let's suppose the following code in the prompt:
> > In [1]: for i in range(100000):
> >    ...:     print i
> >    ...:
> > This will take a lot of time to run, and if the user wants to stop the
> > process he will normally do it with ctrl+c.
> > By capturing KeyboardInterrupt I was experimenting with a message sent
> > to the kernel to stop such a process, but the kernel hangs until the
> > "for" loop is over.
> > The solution I see is to run the kernel processes on a thread. What do
> > you think?
>
> No, the kernel will be in a separate process, and what needs to be done is:
>
> 1. capture Ctrl-C in the frontend side with the usual try/except.
>
> 2. Send the Ctrl-C as a signal to the kernel process.
>
>
I think it is a little dangerous to forward Ctrl-C.  When there are two
processes like this I think it is very ambiguous as to what it means.  I
would rather go with a frontend magic:

:kernel 0 kill


> In order to do this, you'll  need to know the PID of the kernel
> process, but Evan has already been making progress in this direction
> so you can benefit from his work.  This code:
>
>
> http://github.com/epatters/ipython/blob/qtfrontend/IPython/zmq/kernel.py#L316
>
> already has a kernel launcher prototype with the necessary PID information.
>
>
Let's start to use the Popen interface of Python 2.6.  It has terminate
and kill methods that get around the PID stuff in a cross-platform manner.
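A minimal sketch of what that looks like (with a stand-in child process,
not the real kernel launcher):

import subprocess, sys

# Launch a child; a dummy sleeping interpreter stands in for the kernel.
kernel = subprocess.Popen([sys.executable, '-c',
                           'import time; time.sleep(3600)'])
kernel.terminate()   # SIGTERM on *nix, TerminateProcess() on Windows
kernel.wait()        # reap the child so it doesn't linger as a zombie
# kernel.kill()      # SIGKILL on *nix; same as terminate() on Windows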


> To send the signal, you can use os.kill  for now.  This has problems
> on Windows, but let's get signal handling working on *nix first and
> once things are in place nicely, we'll look into more general options.
>
> > And another question:
> > What magi commands do you think ipython-zmq should have?
>
> For now don't worry about magics, as they should all happen
> kernel-wise for you.  I'll send an email regarding some ideas about
> magics separately shortly.
>
> Cheers,
>
> f
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From ellisonbg at gmail.com  Sun Jul 25 17:49:03 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:49:03 -0700
Subject: [IPython-dev] First Performance Result
In-Reply-To: <AANLkTim1IBZFxorqe0AY19iOxxtd1hbwxUxGV77yLufM@mail.gmail.com>
References: <AANLkTim1IBZFxorqe0AY19iOxxtd1hbwxUxGV77yLufM@mail.gmail.com>
Message-ID: <AANLkTimgHcbzBpk+D7uS-ucY7FC3R_SiQXiZ-5+g8HQf@mail.gmail.com>

Min,

Thanks for this!  Sorry I have been so quiet, I have been sick for the last
few days.

On Thu, Jul 22, 2010 at 2:22 AM, MinRK <benjaminrk at gmail.com> wrote:

> I have the basic queue built into the controller, and a kernel embedded
> into the Engine, enough to make a simple performance test.
>
> I submitted 32k simple execute requests in a row (round robin to engines,
> explicit multiplexing), then timed the receipt of the results (tic each 1k).
> I did it once with 2 engines, once with 32. (still on a 2-core machine, all
> over tcp on loopback).
>
> Messages went out at an average of 5400 msgs/s, and the results came back
> at ~900 msgs/s.
> So that's 32k jobs submitted in 5.85s, and the last job completed and
> returned its result 43.24s  after the submission of the first one (37.30s
> for 32 engines). On average, a message is sent and received every 1.25 ms.
> When sending a very small number of requests (1-10) in this way to just one
> engine, it gets closer to 1.75 ms round trip.
>
>
This is great!  For reference, what is your ping time on localhost?


> In all, it seems to be a good order of magnitude quicker than the Twisted
> implementation for these small messages.
>
>
That is what I would expect.


> Identifying the cost of json for small messages:
>
> Outgoing messages go at 9500/s if I use cPickle for serialization instead
> of json. Round trip to 1 engine for 32k messages: 35s. Round trip to 1
> engine for 32k messages with json: 53s.
>
> It would appear that json is contributing 50% to the overall run time.
>
>
Seems like we know what to do about json now, right?
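For reference, a crude stand-alone version of that loads(dumps(msg))
comparison (the message dict is illustrative; module names are the
Python 2-era ones):

import json, cPickle, timeit

msg = {'header': {'msg_id': 0, 'session': 'abc', 'username': 'user'},
       'parent_header': {},
       'msg_type': 'execute_request',
       'content': {'code': 'id=0'}}

for name, mod in [('json', json), ('cPickle', cPickle)]:
    # Time a serialize/deserialize round trip, as in Min's measurement.
    secs = timeit.timeit(lambda: mod.loads(mod.dumps(msg)), number=10000)
    print '%8s: %.3fs for 10k round trips' % (name, secs)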


> With %timeit x.loads(x.dumps(msg))
> on a basic message, I find that json is ~15x slower than cPickle.
> And by these crude estimates, with json, we spend about 35% of our time
> serializing, as opposed to just 2.5% with pickle.
>
> I attached a bar plot of the average replies per second over each 1000 msg
> block, overlaying numbers for 2 engines and for 32. I did the same comparing
> pickle and json for 1 and 2 engines.
>
> The messages are small, but a tiny amount of work is done in the kernel.
> The jobs were submitted like this:
>         for i in xrange(int(32e3) // len(engines)):
>             for eid, key in engines.iteritems():
>                 thesession.send(queue, "execute_request",
>                                 dict(code='id=%i' % (int(eid) + i)),
>                                 ident=str(key))
>
>
>

One thing that is *really* significant is that the requests per second go
up with 2 engines connected!  Not sure why this is the case, but my guess
is that 0MQ does the queuing/networking in a separate thread and it is able
to overlap logic and communication.  This is wonderful and bodes well for us.

Cheers,

Brian




-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From efiring at hawaii.edu  Sun Jul 25 17:50:31 2010
From: efiring at hawaii.edu (Eric Firing)
Date: Sun, 25 Jul 2010 11:50:31 -1000
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <AANLkTi=6zgwTKzCfbvUOocMccDa93+iK-Yq-ausmzVxz@mail.gmail.com>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
	<AANLkTi=6zgwTKzCfbvUOocMccDa93+iK-Yq-ausmzVxz@mail.gmail.com>
Message-ID: <4C4CB1A7.9050009@hawaii.edu>

On 07/25/2010 11:22 AM, Brian Granger wrote:
[...]
>
>     What are the non-hook methods you have in mind?  Maybe this option makes
>     my proposed, or hoped-for, simplification impossible.
>
>
> The two process kernel/frontend will simply start the event loop in the
> kernel in the traditional way (non inputhook).  It has to do this
> because the entire kernel will be based on that event loop.  We have
> thought about if we could reuse the inputhook stuff there and it won't work.

I suspect this will require major changes in mpl's gui event code.
What is your time scale for switching to the two-process version?  Is 
there a document outlining how it will work?  Or a prototype?

>
>      > While we are focused on other things right now (the
>     kernel/frontend) we
>      > would love to hear your thoughts on these issues.  Implementing a
>      > solution shouldn't be too difficult.
>
>     Another vague thought:  If we really need a more flexible environment,
>     then maybe the way to achieve it is with a separate package or module
>     that provides the API for collaboration between, e.g., ipython and mpl.
>       Perhaps all the toolkit-specific event loop code could be factored out
>     and wrapped in a toolkit-neutral API.  Then, an mpl interactive backend
>     would use this API regardless of whether mpl is running in a script, or
>     inside ipython.  In the latter case, ipython would be using the same
>     API, providing centralized knowledge of, and control over, the app
>     object and the loop.  I think that such a refactoring, largely combining
>     existing functionality in ipython and mpl, might not be terribly
>     difficult, and might make future improvements in functionality much
>     easier.  It would also make it easier for other libraries to plug into
>     ipython, collaborate with mpl, etc.
>
>
> This might make sense, and as we move forward we should see if it does.
> My first thought, though, is that I don't want to track yet another
> project.

I certainly sympathize with that. It could live in ipython as a single 
module or subpackage.  Maybe ipython would end up being an mpl dependency.

>
>     Even if the idea above is sound--and it may be completely
>     impractical--the devil is undoubtedly in the details.
>
>
> And there are many in this case.  Thanks for participating in the
> discussion.

Everything you said in your response to my post points in the direction 
of really needing a clean central API to coordinate the gui activities 
of all the potential players.

Eric
>
> Brian


From ellisonbg at gmail.com  Sun Jul 25 17:51:51 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:51:51 -0700
Subject: [IPython-dev] Named Engines
In-Reply-To: <AANLkTimtYS0lyNLDUbq9h3e4vE4ug6MCo0MipVNhK6Ss@mail.gmail.com>
References: <AANLkTinr79xqPOuXNFWECudnxQCOjt3sMwtkdrIzcJFU@mail.gmail.com>
	<AANLkTimJ0taipQVzdHwI2IHUlTaTK0RZg1nKkRUU1Prx@mail.gmail.com>
	<AANLkTimVVEAY50QHkReYdikaV2fB0TEUnOZzRqz7meJ1@mail.gmail.com>
	<AANLkTikHtqFs5ybHJdZvtPkRLmsf1Aj-fztD0WMVdjHp@mail.gmail.com>
	<AANLkTimtYS0lyNLDUbq9h3e4vE4ug6MCo0MipVNhK6Ss@mail.gmail.com>
Message-ID: <AANLkTikZxoAhYnezBRsJEt_zGV_TLNNrmPu2KP3HYMvt@mail.gmail.com>

On Wed, Jul 21, 2010 at 1:58 PM, MinRK <benjaminrk at gmail.com> wrote:

>
>
> On Wed, Jul 21, 2010 at 12:17, Brian Granger <ellisonbg at gmail.com> wrote:
>
>> On Wed, Jul 21, 2010 at 10:51 AM, MinRK <benjaminrk at gmail.com> wrote:
>> >
>> >
>> > On Wed, Jul 21, 2010 at 10:07, Brian Granger <ellisonbg at gmail.com>
>> wrote:
>> >>
>> >> On Wed, Jul 21, 2010 at 2:35 AM, MinRK <benjaminrk at gmail.com> wrote:
>> >> > I now have my MonitoredQueue object on git, which is the three socket
>> >> > Queue
>> >> > device that can be the core of the lightweight ME and Task models
>> >> > (depending
>> >> > on whether it is XREP on both sides for ME, or XREP/XREQ for load
>> >> > balanced
>> >> > tasks).
>> >>
>> >> This sounds very cool.  What repos is this in?
>> >
>> > all on my pyzmq master: github.com/minrk/pyzmq
>> > The Devices are specified in the growing _zmq.pyx. Should I move them?
>>  I
>> > don't have enough Cython experience (this is my first nontrivial Cython
>> > work) to know how to correctly move it to a new file still with all the
>> > right zmq imports.
>>
>> Yes, I think we do want to move them.  We should look at how mpi4py
>> splits things up.  My guess is that we want to have the declaration of
>> the 0MQ C API in a single file that other files can use.  Then have
>> files for the individual things like Socket, Message, Poller, Device,
>> etc.  That will make the code base much easier to work with.  But
>> splitting things like this in Cython is a bit suble.  I have done it
>> before, but I will ask Lisandro Dalcin the best way to approach it.
>> For now, I would keep going with the single file approach (unless you
>> want to learn about how to split things using pxi and pxd files).
>>
>
> I'd be happy to help split it up if you find out the best way to go about
> it.
>
>

OK, I am a bit behind on things from being sick, but I may look into this
when I review+merge your branch.


>
>> >>
>> >> > The biggest difference in terms of design between Python in the
>> >> > Controller
>> >> > picking the destination and this new device is that the client code
>> >> > actually
>> >> > needs to know the XREQ identity of each engine, and all the switching
>> >> > logic
>> >> > lives in the client code (if not the user exposed code) instead of
>> the
>> >> > controller - if the client says 'do x in [1,2,3]' they actually issue
>> 3
>> >> > sends, unlike before, when they issued 1 and the controller issued 3.
>> >> > This
>> >> > will increase traffic between the client and the controller, but
>> >> > dramatically reduce work done in the controller.
>> >>
>> >> But because 0MQ has such low latency it might be a win.  Each request
>> >> the controller gets will be smaller and easier to handle.  The idea of
>> >> allowing clients to specify the names is something I have thought
>> >> about before.  One question though:  what does 0MQ do when you try to
>> >> send on an XREP socket to an identity that doesn't exist?  Will the
>> >> client be able to know that the client wasn't there?  That seems like
>> >> an important failure case.
>> >
>> > As far as I can tell, the XREP socket sends messages out to XREQ ids,
>> and
>> > trusts that such an XREQ exists. If no such id is connected, the message
>> is
>> > silently lost to the aether.  However, with the controller monitoring
>> the
>> > queue, it knows when you have sent a message to an engine that is not
>> > _registered_, and can tell you about it. This should be sufficient,
>> since
>> > presumably all the connected XREQ sockets should be registered engines.
>>
>> I guess I don't quite see how the monitoring is used yet, but it does
>> worry me that the message is silently lost.  So you think 0MQ should
>> raise on that?  I have a feeling that the identities were designed to be
>> a private API thing in 0MQ and we are challenging that.
>>
>
> I don't know what 0MQ should do, but I imagine the silent loss is based on
> thinking of XREP messages as always being replies. That way, a reply sent to
> a nonexistent key is interpreted as being a reply to a message whose
> requester is gone, and 0MQ presumes that nobody else would be interested in
> the result, and drops it. As far as 0MQ is concerned, you wouldn't want the
> following to happen:
> A makes a request of B
> A dies
> B replies to A
> B crashes because A didn't receive the reply
>
> nothing went wrong in B, so it shouldn't crash.
>
>  For us, the XREP messages are not replies on the engine side (they are
> replies on the client side). We are using the identities to treat the
> engine-facing XREP as a keyed multiplexer. The result is that if you send a
> message to nobody, nobody receives it. It's not that nobody knows about it -
> the controller can tell, because it sees every message as it goes by, and
> knows what the valid keys are, but the send itself will not fail.  In the
> client code, you can easily check if a key is valid with the controller, so
> I don't see a problem here.
>
>
OK


> The only source of a problem I can think of comes from the fact that the
> client has a copy of the registration table, and presumably doesn't want to
> update it every time.  That way, an engine could go away between the
> client's updates of the registration, and some requests would vanish.  Note
> that the controller still does receive them, and the client can check with
> the controller on the status of requests that are taking too long.  The
> controller can use a PUB socket to notify of engines coming/going, which
> would mean the window for the client to not be up to date would be very
> small, and it wouldn't even be a big problem if it happened, since the client
> would be notified that its request won't be received.
>

I think this approach makes sense.  At some level the same issue exists
today for us in the twisted version.  If you do mec.get_ids(), that
information could become stale at any moment in time.  I think this is an
intrinsic limitation of the multiengine approach (MPI included).
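As an aside, the PUB-based registration notification Min describes above
might look roughly like this in pyzmq (the port and topic names here are
hypothetical):

import time
import zmq

ctx = zmq.Context()

# Controller side: a PUB socket announcing registration changes.
pub = ctx.socket(zmq.PUB)
pub.bind('tcp://127.0.0.1:5555')

# Client side: a SUB socket keeping the local registration table fresh.
sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:5555')
sub.setsockopt(zmq.SUBSCRIBE, 'engine.')
time.sleep(0.2)   # give the connection time to establish

pub.send_multipart(['engine.registered', 'franklin474'])
topic, ident = sub.recv_multipart()
print topic, ident    # -> engine.registered franklin474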

Cheers,

Brian


>
>
>>
>> > To test:
>> > import time
>> > import zmq
>> > ctx = zmq.Context()
>> > a = ctx.socket(zmq.XREP)
>> > a.bind('tcp://127.0.0.1:1234')
>> > b = ctx.socket(zmq.XREQ)
>> > b.setsockopt(zmq.IDENTITY, 'hello')
>> > a.send_multipart(['hello', 'mr. b'])  # no peer 'hello' yet: dropped
>> > time.sleep(.2)
>> > b.connect('tcp://127.0.0.1:1234')
>> > time.sleep(.2)  # let the connection establish before sending again
>> > a.send_multipart(['hello', 'again'])
>> > b.recv()
>> > # 'again' -- only the second message arrives
>> >
>> >>
>> >> > Since the engines' XREP IDs are known at the client level, and these
>> are
>> >> > roughly any string, it brings up the question: should we have
>> strictly
>> >> > integer ID engines, or should we allow engines to have names, like
>> >> > 'franklin1', corresponding directly to their XREP identity?
>> >>
>> >> The idea of having names is pretty cool.  Maybe default to numbers,
>> >> but allow named prefixes as well as raw names?
>> >
>> >
>> > This part is purely up to our user-facing side of the client code. It
>> > certainly doesn't affect how anything works inside. It's just a question
>> of
>> > what a valid `targets' argument (or key for a dictionary interface)
>> would be
>> > in the multiengine.
>>
>> Any string or list of strings?
>>
>
> Well, for now targets is any int or list of ints. I don't see any reason
> that you couldn't use a string anywhere an int would be used. It's perfectly
> unambiguous, since the two key sets are of a different type.
>
> you could do:
> execute('a=5', targets=[0,1,'odin', 'franklin474'])
> and the _build_targets method does:
>
> target_idents = []
> for t in targets:
>     if isinstance(t, int):
>         ident = identities[t]
>     elif isinstance(t, str) and t in identities.itervalues():
>         ident = t
>     else:
>         raise KeyError("bad target: %s" % t)
>     target_idents.append(ident)
> return target_idents
>
>
>
>> >>
>> >> > I think people might like using names, but I imagine it could get
>> >> > confusing.
>> >> >  It would be unambiguous in code, since we use integer IDs and XREP
>> >> > identities must be strings, so if someone keys on a string it must be
>> >> > the
>> >> > XREP id, and if they key on a number it must be by engine ID.
>> >>
>> >> Right.  I will have a look at the code.
>> >>
>> >> Cheers,
>> >>
>> >> Brian
>> >>
>> >> > -MinRK
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Brian E. Granger, Ph.D.
>> >> Assistant Professor of Physics
>> >> Cal Poly State University, San Luis Obispo
>> >> bgranger at calpoly.edu
>> >> ellisonbg at gmail.com
>> >
>> >
>>
>>
>>
>> --
>> Brian E. Granger, Ph.D.
>> Assistant Professor of Physics
>> Cal Poly State University, San Luis Obispo
>> bgranger at calpoly.edu
>> ellisonbg at gmail.com
>>
>
>


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From ellisonbg at gmail.com  Sun Jul 25 17:55:37 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 14:55:37 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <4C4CB1A7.9050009@hawaii.edu>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<4C4C9FF6.8090503@hawaii.edu>
	<AANLkTi=6zgwTKzCfbvUOocMccDa93+iK-Yq-ausmzVxz@mail.gmail.com>
	<4C4CB1A7.9050009@hawaii.edu>
Message-ID: <AANLkTind4N2A_fMenJB37McVT3mi6QZ2o97yAdguiV_5@mail.gmail.com>

On Sun, Jul 25, 2010 at 2:50 PM, Eric Firing <efiring at hawaii.edu> wrote:

> On 07/25/2010 11:22 AM, Brian Granger wrote:
> [...]
>
>
>>    What are the non-hook methods you have in mind?  Maybe this option
>> makes
>>    my proposed, or hoped-for, simplification impossible.
>>
>>
>> The two process kernel/frontend will simply start the event loop in the
>> kernel in the traditional way (non inputhook).  It has to do this
>> because the entire kernel will be based on that event loop.  We have
>> thought about if we could reuse the inputhook stuff there and it won't
>> work.
>>
>
> I suspect this will require major changes in mpl's gui event code.
> What is your time scale for switching to the two-process version?  Is there
> a document outlining how it will work?  Or a prototype?
>
>
Here is a sketch of the design:

http://github.com/ipython/ipython/commit/e21b32e89a634cb1393fd54c1a5657f63f40b1ff

This development is happening right now as part of two GSoC projects and
some Enthought funded work.  There are 5 of us working off of
ipython/ipython master right now in our own branches.  Should be ready for
testing in the next month.  The actual 0.11 release is probably a bit
further out than that though.



>
>
>>     > While we are focused on other things right now (the
>>    kernel/frontend) we
>>     > would love to hear your thoughts on these issues.  Implementing a
>>     > solution shouldn't be too difficult.
>>
>>    Another vague thought:  If we really need a more flexible environment,
>>    then maybe the way to achieve it is with a separate package or module
>>    that provides the API for collaboration between, e.g., ipython and mpl.
>>      Perhaps all the toolkit-specific event loop code could be factored
>> out
>>    and wrapped in a toolkit-neutral API.  Then, an mpl interactive backend
>>    would use this API regardless of whether mpl is running in a script, or
>>    inside ipython.  In the latter case, ipython would be using the same
>>    API, providing centralized knowledge of, and control over, the app
>>    object and the loop.  I think that such a refactoring, largely
>> combining
>>    existing functionality in ipython and mpl, might not be terribly
>>    difficult, and might make future improvements in functionality much
>>    easier.  It would also make it easier for other libraries to plug into
>>    ipython, collaborate with mpl, etc.
>>
>>
>> This might make sense, and as we move forward we should see if it does.
>> My first thought, though, is that I don't want to track yet another
>> project.
>>
>
> I certainly sympathize with that. It could live in ipython as a single
> module or subpackage.  Maybe ipython would end up being an mpl dependency.
>
>
IPython is already almost an mpl dep.  But I guess some people run mpl on
servers where IPython is not present.


>
>
>>    Even if the idea above is sound--and it may be completely
>>    impractical--the devil is undoubtedly in the details.
>>
>>
>> And there are many in this case.  Thanks for participating in the
>> discussion.
>>
>
> Everything you said in your response to my post points in the direction of
> really needing a clean central API to coordinate the gui activities of all
> the potential players.
>
>
Yes, definitely.  We will keep you in the loop.

Cheers,

Brian



> Eric
>
>>
>> Brian
>>
>


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From efiring at hawaii.edu  Sun Jul 25 18:04:25 2010
From: efiring at hawaii.edu (Eric Firing)
Date: Sun, 25 Jul 2010 12:04:25 -1000
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <20100725213229.GC8338@phare.normalesup.org>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com>
	<20100725203500.GA2587@phare.normalesup.org>
	<AANLkTinZwqVqn9j9qt_T-kgmnofa94Jm_cON-6bsw0yx@mail.gmail.com>
	<20100725211216.GA8338@phare.normalesup.org>
	<4C4CA9C0.50805@hawaii.edu>
	<20100725213229.GC8338@phare.normalesup.org>
Message-ID: <4C4CB4E9.2020104@hawaii.edu>

On 07/25/2010 11:32 AM, Gael Varoquaux wrote:
> On Sun, Jul 25, 2010 at 11:16:48AM -1000, Eric Firing wrote:
>> I haven't looked at mlab.show(), but if it is derived from earlier
>> matplotlib show(), then you might want to take a look at how show() is
>> now implemented in mpl.  It works well with ipython 0.10 and 0.11.
>
> Thanks Eric. mlab's show was not derived from any matplotlib code.
> Nonetheless, I had a quick look at SVN matplotlib to figure out how it
> was done.
>
> It seems to me that it is done in 'backend_bases.py', line 81, by
> checking if IPython added a special attribute to the ShowBase instance.

Also, we don't start the mainloop in show() if mpl is in interactive 
mode.  Given that the input hook can take care of events, the remaining 
function of starting the mainloop is to block until all figures are 
closed.  So, show in non-interactive mode blocks; in interactive mode, 
it does not.
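In other words, roughly this pattern (a paraphrase of the idea, not mpl's
actual source):

from matplotlib import is_interactive

class ShowBase(object):
    # Toolkit-specific backends subclass this and supply mainloop().
    def mainloop(self):
        raise NotImplementedError

    def __call__(self):
        # ... raise/draw all open figure windows here ...
        if is_interactive():
            return          # the input hook keeps the GUI responsive
        self.mainloop()     # block until every figure window is closed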

> Thus, it seems to rely on a collaboration between IPython and matplotlib.

Yes, in the sense that it takes advantage of that attribute if it is 
there; but no, in the sense that it works fine without IPython, or with 
plain IPython when the inputhook is in use.  It is not a clean solution 
to a general problem; it is an ad hoc solution to the specific problem 
of making mpl work under a reasonable range of current circumstances, 
including IPython 0.10 and 0.11 (assuming it doesn't change too much).

Eric

>
> Can anyone confirm or refute this?
>
> Cheers,
>
> Gaël



From fperez.net at gmail.com  Sun Jul 25 18:55:16 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 15:55:16 -0700
Subject: [IPython-dev] about ipython-zmq
In-Reply-To: <AANLkTimCAG+mLt=iV2=TqqAxEnxKYq4q+uJ+VSnNHbxX@mail.gmail.com>
References: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com> 
	<AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com> 
	<AANLkTimCAG+mLt=iV2=TqqAxEnxKYq4q+uJ+VSnNHbxX@mail.gmail.com>
Message-ID: <AANLkTikCF77Y=J6qoRSRfHZM5A8LavPUDnq6ebFc-kmK@mail.gmail.com>

On Sun, Jul 25, 2010 at 2:38 PM, Brian Granger <ellisonbg at gmail.com> wrote:
>
> I think it is a little dangerous to forward Ctrl-C.  When there are two
> processes like this I think it is very ambiguous as to what it means.  I
> would rather go with a frontend magic:
> :kernel 0 kill

I really think we do need Ctrl-C.  It would be pretty awful to have an
interactive environment (especially for the line-based, blocking ones
like the plain terminal Omar is working on and Evan's) where Ctrl-C
doesn't just stop the kernel, at least on platforms where we can send
processes signals.  Given that the frontend does no real computation,
what other semantics should Ctrl-C have?

>> In order to do this, you'll need to know the PID of the kernel
>> process, but Evan has already been making progress in this direction
>> so you can benefit from his work.  This code:
>>
>>
>> http://github.com/epatters/ipython/blob/qtfrontend/IPython/zmq/kernel.py#L316
>>
>> already has a kernel launcher prototype with the necessary PID
>> information.
>>
>
> Let's start to use the Popen interface of Python 2.6.  It has terminate
> and kill methods that get around the PID stuff in a cross-platform manner.

subprocess kill only sends SIGKILL, while os.kill allows the sending
of any signal, so I'm not sure it completely replaces os.kill for us.
But for subprocess cleanup, yes, I'm all for using it (especially
if it works reliably on Windows).
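Concretely, the *nix side of that is just:

import os, signal

# kernel_pid is a stand-in here: in the prototypes it would come from the
# kernel launcher, or from a pid_request reply like the one Omar posts
# later in this thread.
os.kill(kernel_pid, signal.SIGINT)    # interrupt, i.e. what Ctrl-C means
os.kill(kernel_pid, signal.SIGTERM)   # polite shutdown request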

Cheers,

f


From fperez.net at gmail.com  Sun Jul 25 19:00:56 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 16:00:56 -0700
Subject: [IPython-dev] Detecting GUI mainloop running in IPython
In-Reply-To: <4C4CB1A7.9050009@hawaii.edu>
References: <20100725181042.GB16987@phare.normalesup.org>
	<AANLkTi=08LvTScvFBZ_PJ78PE+asr__4uB5fsP+d6_+_@mail.gmail.com> 
	<4C4C9FF6.8090503@hawaii.edu>
	<AANLkTi=6zgwTKzCfbvUOocMccDa93+iK-Yq-ausmzVxz@mail.gmail.com> 
	<4C4CB1A7.9050009@hawaii.edu>
Message-ID: <AANLkTimun3_J+U0jo7wAz0Vh8qL-43qKcPY7VA1ibNzS@mail.gmail.com>

Hi Eric,

On Sun, Jul 25, 2010 at 2:50 PM, Eric Firing <efiring at hawaii.edu> wrote:
>
>
> I suspect this will require major changes in mpl's gui event code.
> What is your time scale for switching to the two-process version?  Is
> there a document outlining how it will work?  Or a prototype?
>

it's worth noting that we'll continue to ship a single-process version
of IPython, much like today's, because that works with nothing but the
stdlib.  There are many places that use IPython and rely on being able
to install it without any dependencies, so we want to keep that
constituency happy.

The two-process design, of which these in-progress branches already
have prototypes (one running in a terminal, two in Qt):

http://github.com/omazapa/ipython
http://github.com/epatters/ipython
http://github.com/muzgash/ipython

will allow the user session to continue operating even if the user's
code (or some other library) segfaults the python interpreter; it will
also give us reconnect/disconnect abilities, simultaneous
collaboration on a single kernel, frontends with different feature
sets, etc.  But since the kernels won't be listening for input on
stdin, the InputHook tricks won't quite work, so we'll need to find a
solution for that...

Cheers,

f


From fperez.net at gmail.com  Sun Jul 25 19:08:59 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 16:08:59 -0700
Subject: [IPython-dev] [IPython-User] How to build ipython documentation
In-Reply-To: <4C49F9DC.3000909@gmail.com>
References: <AANLkTikwpiu6r6nAhijPjHVgj8E4WByax6r3WKpy7liC@mail.gmail.com> 
	<AANLkTilol1CGLZPBkwTDeSJnenkiLitxJR4vmTciZCk5@mail.gmail.com> 
	<4C487A32.4000200@gmail.com>
	<AANLkTilSv0riao-WGLABhmw5OksP10wg-zZyqHrnf3p8@mail.gmail.com> 
	<4C49F9DC.3000909@gmail.com>
Message-ID: <AANLkTimDx6m3dj9nxEUt0n_FZtr9P2uuChnah1frU_m6@mail.gmail.com>

Hi Wendell,

On Fri, Jul 23, 2010 at 1:21 PM, Wendell Smith <wackywendell at gmail.com> wrote:
>
>
> However, in order to get it to continue all the way through, I had to
> install twisted, foolscap, and wxpython - none of which are necessary for
> basic ipython. Is it supposed to be that way?

no, it shouldn't.  The problem is that sphinx, in order to build the
docs, needs to import the modules, as it does not parse its inputs.
So if you want to build a *complete* set of IPython docs where every
docstring is included, you'd need to have every dependency installed.
And since some are mutually incompatible (say cocoa and win32 stuff,
which by definition run on different platforms), it will never be
possible to have 100% coverage with this approach.

In practice we can make the docs build with only the stdlib by fixing
the scripts to avoid documenting certain subpackages when their
dependencies aren't met.  But packagers/distributors would still need
to have the full dependencies installed if they want to generate
complete docs, and this approach still strikes me as somewhat ugly.
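The "skip what you can't import" idea might look something like this
inside the doc-generation script (the package/dependency names below are
purely illustrative):

def can_import(name):
    try:
        __import__(name)
        return True
    except ImportError:
        return False

# Only document optional subpackages whose dependencies are present.
optional = {'IPython.kernel': 'twisted',
            'IPython.gui.wx': 'wx',
            'IPython.frontend.cocoa': 'objc'}
exclude = [pkg for pkg, dep in optional.items() if not can_import(dep)]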

I don't have a solid solution to this though, given how the need to
import modules is a constraint coming from sphinx.  Anyone with a good
idea on how to proceed here, I'm all ears...

Cheers,

f


From andresete.chaos at gmail.com  Sun Jul 25 19:21:54 2010
From: andresete.chaos at gmail.com (Omar Andrés Zapata Mesa)
Date: Sun, 25 Jul 2010 18:21:54 -0500
Subject: [IPython-dev] about ipython-zmq
In-Reply-To: <AANLkTikCF77Y=J6qoRSRfHZM5A8LavPUDnq6ebFc-kmK@mail.gmail.com>
References: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com> 
	<AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com> 
	<AANLkTimCAG+mLt=iV2=TqqAxEnxKYq4q+uJ+VSnNHbxX@mail.gmail.com> 
	<AANLkTikCF77Y=J6qoRSRfHZM5A8LavPUDnq6ebFc-kmK@mail.gmail.com>
Message-ID: <AANLkTimu7Xz9DzVucDZjr8F5z+PPHu4nu9PP=8kSH20i@mail.gmail.com>

Hi all.

Ctrl-C is done! You can see the code in
http://github.com/omazapa/ipython/tree/master/IPython/zmq/

I wrote a new message type; this is the frontend's method:

def get_kernel_pid(self):
    # Ask the kernel for its PID and block until the reply arrives.
    omsg = self.session.send(self.request_socket, 'pid_request')
    while True:
        rep = self.session.recv(self.request_socket)
        if rep is not None:
            self.kernel_pid = rep['content']['pid']
            break
        time.sleep(0.05)
    return self.kernel_pid

and this one in the kernel's class:

def pid_request(self, ident, parent):
    # Reply with the kernel process's PID so the frontend can signal it.
    pid_msg = {u'pid': self.kernel_pid, u'status': u'ok'}
    self.session.send(self.reply_socket, 'pid_reply', pid_msg, parent, ident)

Then we have a new set of request types: 'execute_request',
'complete_request', 'pid_request'.

The frontend class has the attribute kernel_pid, and I call
get_kernel_pid() in the constructor. Then, when I have captured
KeyboardInterrupt, I send the SIGINT signal with kill.

It is working.
O

2010/7/25 Fernando Perez <fperez.net at gmail.com>

> On Sun, Jul 25, 2010 at 2:38 PM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >
> > I think it is a little dangerous to forward Ctrl-C.  When there are two
> > processes like this I think it is very ambiguous as to what it means.  I
> > would rather go with a frontend magic:
> > :kernel 0 kill
>
> I really think we do need Ctrl-C.  It would be pretty awful to have an
> interactive environment (especially for the line-based, blocking ones
> like the plain terminal Omar is working on and Evan's) where Ctrl-C
> doesn't just stop the kernel, at least on platforms where we can send
> processes signals.  Given that the frontend does no real computation,
> what other semantics should Ctrl-C have?
>
> >> In order to do this, you'll  need to know the PID of the kernel
> >> process, but Evan has already been making progress in this direction
> >> so you can benefit from his work.  This code:
> >>
> >>
> >>
> http://github.com/epatters/ipython/blob/qtfrontend/IPython/zmq/kernel.py#L316
> >>
> >> already has a kernel launcher prototype with the necessary PID
> >> information.
> >>
> >
> > Let's start to use the Popen interface of Python 2.6.  It has terminate
> > and kill methods that get around the PID stuff in a cross-platform manner.
>
> subprocess kill only sends SIGKILL, while os.kill allows the sending
> of any signal, so I'm not sure it completely replaces os.kill for us.
> But for subprocess cleanup then yes, I'm all for using it (especially
> if it works reliably in Windows).
>
> Cheers,
>
> f
>

From ellisonbg at gmail.com  Sun Jul 25 20:57:51 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 17:57:51 -0700
Subject: [IPython-dev] about ipython-zmq
In-Reply-To: <AANLkTikCF77Y=J6qoRSRfHZM5A8LavPUDnq6ebFc-kmK@mail.gmail.com>
References: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com>
	<AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com>
	<AANLkTimCAG+mLt=iV2=TqqAxEnxKYq4q+uJ+VSnNHbxX@mail.gmail.com>
	<AANLkTikCF77Y=J6qoRSRfHZM5A8LavPUDnq6ebFc-kmK@mail.gmail.com>
Message-ID: <AANLkTimk1BhOkGqiGow7qZ=btjNfvDZjFcs+Zc1NVZdr@mail.gmail.com>

On Sun, Jul 25, 2010 at 3:55 PM, Fernando Perez <fperez.net at gmail.com>wrote:

> On Sun, Jul 25, 2010 at 2:38 PM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >
> > I think it is a little dangerous to forward Ctrl-C.  When there are two
> > processes like this I think it is very ambiguous as to what it means.  I
> > would rather go with a frontend magic:
> > :kernel 0 kill
>
> I really think we do need Ctrl-C.  It would be pretty awful to have an
> interactive environment (especially for the line-based, blocking ones
> like the plain terminal Omar is working on and Evan's) where Ctrl-C
> doesn't just stop the kernel, at least on platforms where we can send
> processes signals.  Given that the frontend does no real computation,
> what other semantics should Ctrl-C have?
>
>
The case that I am worried about is if a frontend hangs.  Almost *Everyone*
will try Ctrl-C to try to kill the frontend, but if the frontend is enough
alive to trap Ctrl-C and send it to the kernel, the kernel will get it
instead.  If the kernel is running code, it is likely that someone will be
unhappy.  This is especially true because of the possibility of multiple
frontends running the same kernel.

Like most GUI applications (and Mathematica for example), I think Ctrl-C
should be disabled and the frontend should provide a different interface
(possibly using a kernel magic) to signal the kernel.  But let's talk more
about this.


> >> In order to do this, you'll  need to know the PID of the kernel
> >> process, but Evan has already been making progress in this direction
> >> so you can benefit from his work.  This code:
> >>
> >>
> >>
> http://github.com/epatters/ipython/blob/qtfrontend/IPython/zmq/kernel.py#L316
> >>
> >> already has a kernel launcher prototype with the necessary PID
> >> information.
> >>
> >
> > Let's start to use the Popen interface of Python 2.6.  It has terminate
> > and kill methods that get around the PID stuff in a cross-platform manner.
>
> subprocess kill only sends SIGKILL, while os.kill allows the sending
> of any signal, so I'm not sure it completely replaces os.kill for us.
> But for subprocess cleanup then yes, I'm all for using it (especially
> if it works reliably in Windows).
>
>
True, and we do probably want to allow Linux and Mac to send the other
signals.

Brian


> Cheers,
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From fperez.net at gmail.com  Sun Jul 25 21:29:05 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 18:29:05 -0700
Subject: [IPython-dev] about ipython-zmq
In-Reply-To: <AANLkTimk1BhOkGqiGow7qZ=btjNfvDZjFcs+Zc1NVZdr@mail.gmail.com>
References: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com> 
	<AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com> 
	<AANLkTimCAG+mLt=iV2=TqqAxEnxKYq4q+uJ+VSnNHbxX@mail.gmail.com> 
	<AANLkTikCF77Y=J6qoRSRfHZM5A8LavPUDnq6ebFc-kmK@mail.gmail.com> 
	<AANLkTimk1BhOkGqiGow7qZ=btjNfvDZjFcs+Zc1NVZdr@mail.gmail.com>
Message-ID: <AANLkTi=R8wORfD-H84VqQ=S8FPfh2LyTViiddphoLU1y@mail.gmail.com>

On Sun, Jul 25, 2010 at 5:57 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> The case that I am worried about is if a frontend hangs.  Almost *Everyone*
> will try Ctrl-C to try to kill the frontend, but if the frontend is enough
> alive to trap Ctrl-C and send it to the kernel, the kernel will get it
> instead.  If the kernel is running code, it is likely that someone will be
> unhappy.  This is especially true because of the possibility of multiple
> frontends running the same kernel.
> Like most GUI applications (and Mathematica for example), I think Ctrl-C
> should be disabled and the frontend should provide a different interface
> (possibly using a kernel magic) to signal the kernel.  But let's talk more
> about this.

A terminal is a good example of a gui application that forwards Ctrl-C
to the underlying process it exposes.  When you type Ctrl-C in a
terminal, it's not the terminal itself (say xterm or gnome-terminal)
that gets it, but instead it's sent to whatever you were running at
the time.

It makes perfect sense to me for IPython frontends to forward that
signal to the kernel, since frontends are thin 'handles' on the kernel
itself, and interrupting a long-running computation is one of the most
common things in everyday practice.

I know it would drive me positively insane if I had to type a full
command to send a simple interrupt to a running kernel.  In full GUI
frontends we can certainly expose a little 'interrupt kernel' button
somewhere, but I suspect I wouldn't be the only one driven mad by
Ctrl-C not doing the most intuitive thing...

The case of a hung frontend should be handled like other apps: a close
button, a 'force quit' from the OS, etc.  Killing a hung gui in
general is done like that, and it should indeed be a special 'kill'
operation because in general, the front ends should not be hung under
normal conditions: they run very little code, so there's no reason for
them to block other than when they are waiting for a kernel to return.

Now, for *asynchronous* frontends, then we certainly want an
'interrupt kernel' command/button, so Gerardo probably should
implement something like that.  But a blocking, line-based frontend
that 'feels like a terminal' should 'act like a terminal', I think...

Cheers,

f


From ellisonbg at gmail.com  Sun Jul 25 21:59:25 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Sun, 25 Jul 2010 18:59:25 -0700
Subject: [IPython-dev] about ipython-zmq
In-Reply-To: <AANLkTi=R8wORfD-H84VqQ=S8FPfh2LyTViiddphoLU1y@mail.gmail.com>
References: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com>
	<AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com>
	<AANLkTimCAG+mLt=iV2=TqqAxEnxKYq4q+uJ+VSnNHbxX@mail.gmail.com>
	<AANLkTikCF77Y=J6qoRSRfHZM5A8LavPUDnq6ebFc-kmK@mail.gmail.com>
	<AANLkTimk1BhOkGqiGow7qZ=btjNfvDZjFcs+Zc1NVZdr@mail.gmail.com>
	<AANLkTi=R8wORfD-H84VqQ=S8FPfh2LyTViiddphoLU1y@mail.gmail.com>
Message-ID: <AANLkTin1Hn03CVa2FCV+q1TTUHeCPRxUJ2OYk7JoQuCR@mail.gmail.com>

On Sun, Jul 25, 2010 at 6:29 PM, Fernando Perez <fperez.net at gmail.com>wrote:

> On Sun, Jul 25, 2010 at 5:57 PM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> > The case that I am worried about is if a frontend hangs.
>  Almost *Everyone*
> > will try Ctrl-C to try to kill the frontend, but if the frontend is
> enough
> > alive to trap Ctrl-C and send it to the kernel, the kernel will get it
> > instead.  If the kernel is running code, it is likely that someone will
> be
> > unhappy.  This is especially true because of the possibility of multiple
> > frontends running the same kernel.
> > Like most GUI applications (and Mathematica for example), I think Ctrl-C
> > should be disabled and the frontend should provide a different interface
> > (possibly using a kernel magic) to signal the kernel.  But let's talk
> more
> > about this.
>
> A terminal is a good example of a gui application that forwards Ctrl-C
> to the underlying process it exposes.  When you type Ctrl-C in a
> terminal, it's not the terminal itself (say xterm or gnome-terminal)
> that gets it, but instead it's sent to whatever you were running at
> the time.
>
>
Yes, definitely.  On Mac OS X, as far as I know the terminal is the only
application that maps Ctrl-C to SIGINT.


> It makes perfect sense to me for IPython frontends to forward that
> signal to the kernel, since frontends are thin 'handles' on the kernel
> itself, and interrupting a long-running computation is one of the most
> common things in everyday practice.
>
> I know it would drive me positively insane if I had to type a full
> command to send a simple interrupt to a running kernel.  In full GUI
> frontends we can certainly expose a little 'interrupt kernel' button
> somewhere, but I suspect I wouldn't be the only one driven mad by
> Ctrl-C not doing the most intuitive thing...
>
>
Good points.  I do agree that if a frontend looks like a terminal and acts
like a terminal, then it should *really* act like a terminal and use Ctrl-C
in the same way as a terminal.  For frontends that are less terminal-like, I
am less convinced, but this is partly because I haven't really interacted
with Python in this way.  I think this will become more clear as we move
forward.  My only hesitation about Ctrl-C in a GUI app is the ambiguity of
what Ctrl-C does in a distributed application.  But, I do think we want to
err in the direction of making it easy to interrupt things, so Ctrl-C makes
the most sense for the default.  There is nothing worse than starting up an
app and having Ctrl-C disabled when it seems like it should be active.  But,
I do think it would be nice to have this configurable.


> The case of a hung frontend should be handled like other apps: a close
> button, a 'force quit' from the OS, etc.  Killing a hung gui in
> general is done like that, and it should be indeed a special 'kill'
> operation because in general, the front ends should not be hung under
> normal conditions: they run very little code, so there's no reason for
> them to block other than when they are waiting for a kernel to return.
>
>
True, a hung frontend should be exceptional whereas interrupting code in the
kernel is common.  And you are right that a hung application should be
handled like other hung applications.


> Now, for *asynchronous* frontends, then we certainly want an
> 'interrupt kernel' command/button, so Gerardo probably should
> implement something like that.  But a blocking, line-based frontend
> that 'feels like a terminal' should 'act like a terminal', I think...
>
>
Yes I agree with that, definitely.


> Cheers,
>
> f
>

Cheers,

Brian

-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From fperez.net at gmail.com  Sun Jul 25 22:21:27 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 19:21:27 -0700
Subject: [IPython-dev] about ipython-zmq
In-Reply-To: <AANLkTin1Hn03CVa2FCV+q1TTUHeCPRxUJ2OYk7JoQuCR@mail.gmail.com>
References: <AANLkTimR4uLJvsCubDomSHP3N+2_2i2r0GpssTmxLbWn@mail.gmail.com> 
	<AANLkTim5ODQUWPEzoAOZb+jQ75Ao=rVm1f58idf5YDbH@mail.gmail.com> 
	<AANLkTimCAG+mLt=iV2=TqqAxEnxKYq4q+uJ+VSnNHbxX@mail.gmail.com> 
	<AANLkTikCF77Y=J6qoRSRfHZM5A8LavPUDnq6ebFc-kmK@mail.gmail.com> 
	<AANLkTimk1BhOkGqiGow7qZ=btjNfvDZjFcs+Zc1NVZdr@mail.gmail.com> 
	<AANLkTi=R8wORfD-H84VqQ=S8FPfh2LyTViiddphoLU1y@mail.gmail.com> 
	<AANLkTin1Hn03CVa2FCV+q1TTUHeCPRxUJ2OYk7JoQuCR@mail.gmail.com>
Message-ID: <AANLkTinEpTLusveFU03xXExeLR55W9n04nrdu4ohLkrH@mail.gmail.com>

On Sun, Jul 25, 2010 at 6:59 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> For frontends that are less terminal-like, I am less convinced, but this is
> partly because I haven't really interacted with Python in this way.  I think
> this will become more clear as we move forward.  My only hesitation about
> Ctrl-C in a GUI app is the ambiguity of what Ctrl-C does in a distributed
> application.

Completely agreed, less 'terminal-y' frontends will probably want to
expose this differently, especially if they are dealing possibly with
multiple kernels.  At that point some kind of gui widget with little
'stop' icons is probably a more sensible interface than blind Ctrl-C
forwarding.

Thanks for the feedback and good thinking on this though!

Cheers,

f


From fperez.net at gmail.com  Sun Jul 25 22:49:18 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 19:49:18 -0700
Subject: [IPython-dev] correct test-suite
In-Reply-To: <20100718171412.42f4e970@earth>
References: <20100718171412.42f4e970@earth>
Message-ID: <AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com>

Hi Tom,

On Sun, Jul 18, 2010 at 8:14 AM, Thomas Spura <tomspur at fedoraproject.org> wrote:
> There is now a Makefile, so it's nicer to run repetitive tasks in the
> repository, but currently there is only 'make test-suite', which should
> run the test suite.
> (now = in branch my_fix_test_suite at github:
> http://github.com/tomspur/ipython/commits/my_fix_test_suite)

Unfortunately this approach does not work if IPython isn't installed
system-wide, because it breaks the twisted tests:

===============================================================================
[ERROR]: IPython.kernel

Traceback (most recent call last):
  File "/usr/lib/python2.6/dist-packages/twisted/trial/runner.py",
line 651, in loadByNames
    things.append(self.findByName(name))
  File "/usr/lib/python2.6/dist-packages/twisted/trial/runner.py",
line 461, in findByName
    return reflect.namedAny(name)
  File "/usr/lib/python2.6/dist-packages/twisted/python/reflect.py",
line 471, in namedAny
    raise ObjectNotFound('%r does not name an object' % (name,))
twisted.python.reflect.ObjectNotFound: 'IPython.kernel' does not name an object
-------------------------------------------------------------------------------

as well as some others.  And the manual sys.path manipulations done here:

http://github.com/tomspur/ipython/commit/7abd52e0933aa57a082db4623c791630bc0671ea#L0R51

should be avoided in production code, which iptest.py is.

I'm not opposed to having a top-level makefile, but the one in your
commit can't be merged for this reason.  Additionally, it makes
targets for things like tar/zip generation that should be done as
distutils commands instead (and they are done by some of the scripts
in tools/).

In practice, I find that simply having the current working IPython/
directory symlinked in my PYTHONPATH lets me run the iptest script at
any time without further PYTHONPATH manipulations.

I'm happy to find a solution to running the test suite without
installing, but this one doesn't seem to work robustly (and I'd
already backed off a while ago some more hackish things I'd tried,
precisely for being too brittle).

Until we have a really clean solution, we'll have a test suite that
can only be run from an installed IPython (or equivalently, a setup
that finds the source tree transparently, which is what I use).

> One failing test pointed out, that there is a programming error in
> IPython/Shell.py and is now corrected in this commit:
> http://github.com/tomspur/ipython/commit/7e7988ee9e7c35b2e5302725ebdf6c22135f334e

This one I did just cherry-pick and push, as it was really a clean
bug, thanks for the fix:

http://github.com/ipython/ipython/commit/a469f3d77cf794b33ac20cf9d3f2246387423808

> But now, there is a problem with test: "Test that object's __del__
> methods are called on exit." in IPython/core/tests/test_run.py:146.

I think that's all caused by the problems you are seeing from your
method of running the tests.  On my system, all tests do pass cleanly
right now:

**********************************************************************
Test suite completed for system with the following information:
IPython version: 0.11.alpha1.git
BZR revision   : 0
Platform info  : os.name -> posix, sys.platform -> linux2
               : Linux-2.6.32-24-generic-i686-with-Ubuntu-10.04-lucid
Python info    : 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3]

Tools and libraries available at test time:
   curses foolscap gobject gtk pexpect twisted wx wx.aui zope.interface

Tools and libraries NOT available at test time:
   objc

Ran 10 test groups in 46.446s

Status:
OK
####

I also commented on your bundled_libs branch: it can't be merged
because it also breaks most of the Twisted tests.  Until the test
suite passes 100% we can't merge those changes, though I do very much
like the idea of better organizing externals, so I hope you can sort
out the issue.  Do let us know as soon as you can fix those and we can
try again.

So I think right now we have merged everything that is mergeable from
you, right?  Please go ahead and file pull requests again if you do
update these (since those trigger an email and that makes it easier to
keep track of what's been done).

Thanks a lot for your interest and help!

Cheers,

f


From fperez.net at gmail.com  Sun Jul 25 23:27:23 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 20:27:23 -0700
Subject: [IPython-dev] Trunk in 100% test compliance
Message-ID: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com>

Hi folks,

we had a few lingering test errors here and there, and with all the
renewed activity in the project, that seemed like a fairly unsafe way
to proceed.  We really want everyone to be able to *always* run the
*full* test suite and only make pull requests when the suite passes
completely.  Having failing tests in the way makes it much more likely
that new code will be added with more failures, so hopefully this is a
useful checkpoint to start from.

I've only run the tests on linux for now, but the only major component
missing is cocoa/objc:

**********************************************************************
Test suite completed for system with the following information:
IPython version: 0.11.alpha1.git
BZR revision   : 0
Platform info  : os.name -> posix, sys.platform -> linux2
               : Linux-2.6.32-24-generic-i686-with-Ubuntu-10.04-lucid
Python info    : 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3]

Tools and libraries available at test time:
   curses foolscap gobject gtk pexpect twisted wx wx.aui zope.interface

Tools and libraries NOT available at test time:
   objc

Ran 10 test groups in 46.446s

Status:
OK
###

If anyone sees a different result on their system, please do let us
know and we'll hopefully be able to fix it.

Cheers,

f


From benjaminrk at gmail.com  Sun Jul 25 23:49:36 2010
From: benjaminrk at gmail.com (MinRK)
Date: Sun, 25 Jul 2010 20:49:36 -0700
Subject: [IPython-dev] First Performance Result
In-Reply-To: <AANLkTimgHcbzBpk+D7uS-ucY7FC3R_SiQXiZ-5+g8HQf@mail.gmail.com>
References: <AANLkTim1IBZFxorqe0AY19iOxxtd1hbwxUxGV77yLufM@mail.gmail.com> 
	<AANLkTimgHcbzBpk+D7uS-ucY7FC3R_SiQXiZ-5+g8HQf@mail.gmail.com>
Message-ID: <AANLkTin_yef45wnPXKkzsRjUwr-sgKK2SXcV98Didpm0@mail.gmail.com>

On Sun, Jul 25, 2010 at 14:49, Brian Granger <ellisonbg at gmail.com> wrote:

> Min,
>
> Thanks for this!  Sorry I have been so quiet, I have been sick for the last
> few days.
>
> On Thu, Jul 22, 2010 at 2:22 AM, MinRK <benjaminrk at gmail.com> wrote:
>
>> I have the basic queue built into the controller, and a kernel embedded
>> into the Engine, enough to make a simple performance test.
>>
>> I submitted 32k simple execute requests in a row (round robin to engines,
>> explicit multiplexing), then timed the receipt of the results (tic each 1k).
>> I did it once with 2 engines, once with 32. (still on a 2-core machine, all
>> over tcp on loopback).
>>
>> Messages went out at an average of 5400 msgs/s, and the results came back
>> at ~900 msgs/s.
>> So that's 32k jobs submitted in 5.85s, and the last job completed and
>> returned its result 43.24s  after the submission of the first one (37.30s
>> for 32 engines). On average, a message is sent and received every 1.25 ms.
>> When sending a very small number of requests (1-10) in this way to just one
>> engine, it gets closer to 1.75 ms round trip.
>>
>>
> This is great!  For reference, what is your ping time on localhost?
>

ping on localhost is 50-100 us


>
>
>> In all, it seems to be a good order of magnitude quicker than the Twisted
>> implementation for these small messages.
>>
>>
> That is what I would expect.
>
>
>> Identifying the cost of json for small messages:
>>
>> Outgoing messages go at 9500/s if I use cPickle for serialization instead
>> of json. Round trip to 1 engine for 32k messages: 35s. Round trip to 1
>> engine for 32k messages with json: 53s.
>>
>> It would appear that json is contributing 50% to the overall run time.
>>
>>
> Seems like we know what to do about json now, right?
>

I believe we do: 1. cjson, 2. cPickle, 3. json/simplejson, 4. pickle.
Also: never use integer keys in message internals, and never use json for
user data.
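
A minimal sketch of that preference order (illustrative only, not the
actual IPython code; cjson is an optional third-party package whose API
uses encode/decode rather than dumps/loads):

    def best_serializer():
        # Return (dumps, loads) from the fastest available library, in
        # the order suggested above: cjson, cPickle, json/simplejson,
        # and finally plain pickle.
        try:
            import cjson
            return cjson.encode, cjson.decode
        except ImportError:
            pass
        try:
            import cPickle
            return cPickle.dumps, cPickle.loads
        except ImportError:
            pass
        try:
            import json                    # stdlib in Python 2.6+
            return json.dumps, json.loads
        except ImportError:
            pass
        try:
            import simplejson
            return simplejson.dumps, simplejson.loads
        except ImportError:
            pass
        import pickle                      # always available, slowest
        return pickle.dumps, pickle.loads

Note that json also silently turns integer keys into strings
({1: 'a'} comes back as {u'1': u'a'}), which is one more reason to keep
them out of message internals.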


>
>
>> With %timeit x.loads(x.dumps(msg))
>> on a basic message, I find that json is ~15x slower than cPickle.
>> And by these crude estimates, with json, we spend about 35% of our time
>> serializing, as opposed to just 2.5% with pickle.
>>
>> I attached a bar plot of the average replies per second over each 1000 msg
>> block, overlaying numbers for 2 engines and for 32. I did the same comparing
>> pickle and json for 1 and 2 engines.
>>
>> The messages are small, but a tiny amount of work is done in the kernel.
>> The jobs were submitted like this:
>>         for i in xrange(32e3/len(engines)):
>>           for eid,key in engines.iteritems():
>>             thesession.send(queue, "execute_request",
>>                             dict(code='id=%i'%(int(eid)+i)), ident=str(key))
>>
>>
>>
>
> One thing that is *really* significant is that the requests per second go
> up with 2 engines connected!  Not sure why this is the case, but my guess is
> that 0MQ does the queuing/networking in a separate thread and it is able to
> overlap logic and communication.  This is wonderful and bodes well for us.
>

Yes, I only ran it for 1, 2, and 32 engines, but it's still a little
faster at 32 than at 2, even on a 2-core machine.


> Cheers,
>
> Brian
>
>
>
>
> --
> Brian E. Granger, Ph.D.
> Assistant Professor of Physics
> Cal Poly State University, San Luis Obispo
> bgranger at calpoly.edu
> ellisonbg at gmail.com
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100725/c44dff6d/attachment.html>

From fperez.net at gmail.com  Sun Jul 25 23:56:18 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sun, 25 Jul 2010 20:56:18 -0700
Subject: [IPython-dev] First Performance Result
In-Reply-To: <AANLkTin_yef45wnPXKkzsRjUwr-sgKK2SXcV98Didpm0@mail.gmail.com>
References: <AANLkTim1IBZFxorqe0AY19iOxxtd1hbwxUxGV77yLufM@mail.gmail.com> 
	<AANLkTimgHcbzBpk+D7uS-ucY7FC3R_SiQXiZ-5+g8HQf@mail.gmail.com> 
	<AANLkTin_yef45wnPXKkzsRjUwr-sgKK2SXcV98Didpm0@mail.gmail.com>
Message-ID: <AANLkTi=MoeBePs4g25aAQ+aCEsyS0=ORnQeY0UisAGtm@mail.gmail.com>

On Sun, Jul 25, 2010 at 8:49 PM, MinRK <benjaminrk at gmail.com> wrote:
>
> I believe we do: 1. cjson, 2. cPickle, 3. json/simplejson, 4. pickle.
> Also: never use integer keys in message internals, and never use json for
> user data.

That sounds good to me.  Min, it would be great if you could drop some
of these nuggets into the docs as you go, so we have a record of these
design decisions in the documentation (in addition to the mailing list
archives).

Thanks for pushing on this!

Cheers,

f


From tomspur at fedoraproject.org  Mon Jul 26 03:18:50 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Mon, 26 Jul 2010 09:18:50 +0200
Subject: [IPython-dev] correct test-suite
In-Reply-To: <AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com>
Message-ID: <20100726091850.218c47ad@earth>

On Sun, 25 Jul 2010 19:49:18 -0700,
Fernando Perez <fperez.net at gmail.com> wrote:

> Hi Tom,
> 
> On Sun, Jul 18, 2010 at 8:14 AM, Thomas Spura
> <tomspur at fedoraproject.org> wrote:
> > There is now a Makefile, so it's nicer to run repetitive tasks in the
> > repository, but currently there is only 'make test-suite', which
> > should run the test suite.
> > (now = in branch my_fix_test_suite at github:
> > http://github.com/tomspur/ipython/commits/my_fix_test_suite)
> 
> Unfortunately this approach does not work if IPython isn't installed
> system-wide, because it breaks the twisted tests:
> 
> ===============================================================================
> [ERROR]: IPython.kernel
> 
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/dist-packages/twisted/trial/runner.py", line 651, in loadByNames
>     things.append(self.findByName(name))
>   File "/usr/lib/python2.6/dist-packages/twisted/trial/runner.py", line 461, in findByName
>     return reflect.namedAny(name)
>   File "/usr/lib/python2.6/dist-packages/twisted/python/reflect.py", line 471, in namedAny
>     raise ObjectNotFound('%r does not name an object' % (name,))
> twisted.python.reflect.ObjectNotFound: 'IPython.kernel' does not name an object
> -------------------------------------------------------------------------------
> 
> as well as some others.  And the manual sys.path manipulations done
> here:
> 
> http://github.com/tomspur/ipython/commit/7abd52e0933aa57a082db4623c791630bc0671ea#L0R51
> 
> should be avoided in production code, which iptest.py is.
> 
> I'm not opposed to having a top-level makefile, but the one in your
> commit can't be merged for this reason.  Additionally, it makes
> targets for things like tar/zip generation that should be done as
> distutils commands instead (and they are done by some of the scripts
> in tools/).
> 
> In practice, I find that simply having the current working IPython/
> directory symlinked in my PYTHONPATH lets me run the iptest script at
> any time without further PYTHONPATH manipulations.

I have now done something similar and linked
$(pythondir)/site-packages/IPython -> git repository. It seems to work
pretty well.

> 
> I'm happy to find a solution to running the test suite without
> installing, but this one doesn't seem to work robustly (and I'd
> already backed off from some more hackish things I'd tried a while
> ago, precisely for being too brittle).
> 
> Until we have a really clean solution, we'll have a test suite that
> can only be run from an installed IPython (or equivalently, a setup
> that finds the source tree transparently, which is what I use).
> 
> > One failing test pointed out that there is a programming error in
> > IPython/Shell.py, which is now corrected in this commit:
> > http://github.com/tomspur/ipython/commit/7e7988ee9e7c35b2e5302725ebdf6c22135f334e
> 
> This one I did just cherry-pick and push, as it was really a clean
> bug, thanks for the fix:
> 
> http://github.com/ipython/ipython/commit/a469f3d77cf794b33ac20cf9d3f2246387423808
> 
> > But now, there is a problem with test: "Test that object's __del__
> > methods are called on exit." in IPython/core/tests/test_run.py:146.
> 
> I think that's all caused by the problems you are seeing from your
> method of running the tests.  On my system, all tests do pass cleanly
> right now:
> 
> **********************************************************************
> Test suite completed for system with the following information:
> IPython version: 0.11.alpha1.git
> BZR revision   : 0
> Platform info  : os.name -> posix, sys.platform -> linux2
>                : Linux-2.6.32-24-generic-i686-with-Ubuntu-10.04-lucid
> Python info    : 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
> [GCC 4.4.3]
> 
> Tools and libraries available at test time:
>    curses foolscap gobject gtk pexpect twisted wx wx.aui
> zope.interface
> 
> Tools and libraries NOT available at test time:
>    objc
> 
> Ran 10 test groups in 46.446s
> 
> Status:
> OK
> ####

Here, they don't... That's why I didn't look too closely at the
failing tests in my branches. I'll try to fix the failures in current
master on my side first, because it seems some other dependencies are
doing something wrong.

**********************************************************************
Test suite completed for system with the following information:
IPython version: 0.11.alpha1.git
BZR revision   : 0
Platform info  : os.name -> posix, sys.platform -> linux2
               : Linux-2.6.33.6-147.fc13.x86_64-x86_64-with-fedora-13-Goddard
Python info    : 2.6.4 (r264:75706, Jun  4 2010, 18:20:31)
[GCC 4.4.4 20100503 (Red Hat 4.4.4-2)]

Tools and libraries available at test time:
   curses foolscap gobject gtk pexpect twisted wx wx.aui zope.interface

Tools and libraries NOT available at test time:
   objc

Ran 10 test groups in 68.690s

Status:
ERROR - 2 out of 10 test groups failed.
----------------------------------------
Runner failed: IPython.core
You may wish to rerun this one individually, with:
/usr/bin/python /usr/lib/python2.6/site-packages/IPython/testing/iptest.py
IPython.core

----------------------------------------
Runner failed: IPython.extensions
You may wish to rerun this one individually, with:
/usr/bin/python /usr/lib/python2.6/site-packages/IPython/testing/iptest.py
IPython.extensions


> 
> I also commented on your bundled_libs branch: it can't be merged
> because it also breaks most of the Twisted tests.  Until the test
> suite passes 100% we can't merge those changes, though I do very much
> like the idea of better organizing externals, so I hope you can sort
> out the issue.  Do let us know as soon as you can fix those and we can
> try again.

Will do. :)
 
> So I think right now we have merged everything that is mergeable from
> you, right?  Please go ahead and file pull requests again if you do
> update these (since those trigger an email and that makes it easier to
> keep track of what's been done).

I think it's quite handy to see which commit you are running ipython
from, e.g. when receiving bug reports.
So this commit could be merged:
http://github.com/tomspur/ipython/commit/936858ba3e6648a8bc0031cb76a94643dcdb080a

(This also works when the IPython directory is symlinked somewhere
else.)

I'll rework all the other commits and file a pull request again.

Thanks.
   Tom


From fperez.net at gmail.com  Mon Jul 26 19:03:05 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 26 Jul 2010 16:03:05 -0700
Subject: [IPython-dev] Unicode howto
Message-ID: <AANLkTikraMrHDthL1FBdRRX5MPqxakiF3AUaNAUP-J8w@mail.gmail.com>

Hi all,

in trying to reply to a query from Min about unicode in zmq, I found
this document:

http://docs.python.org/howto/unicode.html

Somehow I'd managed to miss it before, but it's a very nice and
concise introduction to unicode in Python 2.x.  Since we need to start
seriously thinking about unicode if we're going to push ipython to
3.x, I thought others might find it useful.

Cheers,

f

ps - Python now ships a nice collection of howto's that I'd never
noticed.  In case someone else is as distracted as I am:

http://docs.python.org/howto/index.html


From fperez.net at gmail.com  Mon Jul 26 21:12:35 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 26 Jul 2010 18:12:35 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com> 
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com> 
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com> 
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com> 
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com>
Message-ID: <AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>

[ I'm cc'ing the list on this, which may be of general interest ]

On Mon, Jul 26, 2010 at 2:14 PM, MinRK <benjaminrk at gmail.com> wrote:
> Basically, the question revolves around what we should do with non-ascii
> unicode messages in this situation:
> msg=u'?'
> a.send(msg)
> s = b.recv()

Shouldn't send/receive *always* work with bytes and never with
unicode?  Unicode requires knowing the encoding, and that is a
dangerous proposition on two sides of the wire.

If a message is unicode, it should be encoded first (to utf-8) and
decoded on the other side back to unicode.
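
A minimal sketch of that convention with pyzmq (socket setup omitted;
'sender' and 'receiver' are hypothetical, already-connected sockets):

    # -*- coding: utf-8 -*-
    # Only bytes ever touch the wire; unicode lives at the edges.
    msg = u'caf\xe9'                   # unicode inside the application
    sender.send(msg.encode('utf-8'))   # encode: unicode -> bytes
    raw = receiver.recv()              # bytes off the wire
    text = raw.decode('utf-8')         # decode: bytes -> unicode
    assert text == msg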

There is then the question of the receiving side: should it always
decode? If not, should a flag about bytes/unicode be sent along?

Not sure...

Cheers,

f


From benjaminrk at gmail.com  Mon Jul 26 21:43:45 2010
From: benjaminrk at gmail.com (Min RK)
Date: Mon, 26 Jul 2010 18:43:45 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com>
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com>
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com>
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com>
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com>
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>
Message-ID: <A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>

After chatting with Brian a little bit, I think what should happen is that the actual buffer gets sent, since zmq itself should not be aware of encoding. The main reason I started looking at handling unicode is that json returns unicode objects instead of strings, so I was encountering errors without ever having created unicode strings myself; things like
send(u'a message') would fail, as would sock.connect(u'tcp://127.0.0.1:123'), and I think that should definitely not happen.

I solved these problems easily enough by changing all the isinstance(s,str) calls to isinstance(s,(str,unicode)). This works because the PyString_As... methods that all these tests were screening for actually accept unicode as well as str, as long as the unicode object is ascii (or the default encoding?).

With buffer support implemented, I can also send (without copying) any object that provides the buffer interface, including arbitrary unicode strings. But I think it was a mistake to conflate these two things and attempt to reconstruct unicode objects on both sides within zmq.

Here is where my code currently stands:
a unicode object either a) contains an ascii string and is sent as a string, or b) is not a basic string, in which case its buffer is sent and reconstruction is left up to the user, just like all other buffered objects.
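
In code, that policy amounts to roughly the following (a sketch of the
behavior, not the literal pyzmq implementation; 'socket' is any pyzmq
socket):

    def send_maybe_unicode(socket, u):
        try:
            # Case a): ascii-only unicode can be sent as an ordinary str.
            socket.send(str(u))
        except UnicodeEncodeError:
            # Case b): send the raw buffer; reconstruction is left
            # entirely up to the receiving code.
            socket.send(buffer(u))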

-MinRK

On Jul 26, 2010, at 18:12, Fernando Perez <fperez.net at gmail.com> wrote:

> [ I'm cc'ing the list on this, which may be of general interest ]
> 
> On Mon, Jul 26, 2010 at 2:14 PM, MinRK <benjaminrk at gmail.com> wrote:
>> Basically, the question revolves around what we should do with non-ascii
>> unicode messages in this situation:
>> msg=u'?'
>> a.send(msg)
>> s = b.recv()
> 
> Shouldn't send/receive *always* work with bytes and never with
> unicode?  Unicode requires knowing the encoding, and that is a
> dangerous proposition on two sides of the wire.
> 
> If a message is unicode, it should be encoded first (to utf-8) and
> decoded on the other side back to unicode.
> 
> There is then the question of the receiving side: should it always
> decode? If not, should a flag about bytes/unicode be sent along?
> 
> Not sure...
> 
> Cheers,
> 
> f


From ellisonbg at gmail.com  Mon Jul 26 22:25:37 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Mon, 26 Jul 2010 19:25:37 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com>
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com>
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com>
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com>
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com>
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>
Message-ID: <AANLkTi=5rm=qCVWP+fhkVt9Njc87wvMJeVs=2KMTbTrn@mail.gmail.com>

On Mon, Jul 26, 2010 at 6:12 PM, Fernando Perez <fperez.net at gmail.com>wrote:

> [ I'm cc'ing the list on this, which may be of general interest ]
>
> On Mon, Jul 26, 2010 at 2:14 PM, MinRK <benjaminrk at gmail.com> wrote:
> > Basically, the question revolves around what we should do with non-ascii
> > unicode messages in this situation:
> > msg=u'?'
> > a.send(msg)
> > s = b.recv()
>
> Shouldn't send/receive *always* work with bytes and never with
> unicode?  Unicode requires knowing the encoding, and that is a
> dangerous proposition on two sides of the wire.
>
>
Yes, 0MQ and pyzmq should always deal with bytes.


> If a message is unicode, it should be encoded first (to utf-8) and
> decoded on the other side back to unicode.
>
>
Yep


> There is then the question of the receiving side: should it always
> decode? If not, should a flag about bytes/unicode be sent along?
>
>
That is really for the application to handle on a per-message basis.  The
most reasonable options are:

1. Put encoding/decoding info in the message content.
2. Always encode and decode in the application.

Brian


> Not sure...
>
> Cheers,
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100726/a5d29253/attachment.html>

From fperez.net at gmail.com  Mon Jul 26 22:33:03 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 26 Jul 2010 19:33:03 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com> 
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com> 
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com> 
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com> 
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com> 
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com> 
	<A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
Message-ID: <AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com>

Glad you worked it out, but I'm worried about one thing: I don't
believe you can send unicode strings as a buffer over the wire.  The
reason is that the two interpreters at both ends of the connection
could have been compiled with different internal unicode encodings.
Python can be compiled to store unicode internally either as UCS-2 or
UCS-4, you can check sys.maxunicode to find out how your particular
build was made:

http://www.python.org/dev/peps/pep-0100/
http://www.python.org/dev/peps/pep-0261/

If you send a unicode string as a buffer from a ucs2 python to a ucs4
one, you'll get  a mess at the other end, I think.
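
A quick way to check which kind of build you have (sys.maxunicode is
the stdlib constant mentioned above):

    import sys
    # 0xFFFF (65535) on a narrow/UCS-2 build,
    # 0x10FFFF (1114111) on a wide/UCS-4 build.
    if sys.maxunicode > 0xFFFF:
        print 'wide (UCS-4) unicode build'
    else:
        print 'narrow (UCS-2) unicode build'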

Minor note:
On Mon, Jul 26, 2010 at 6:43 PM, Min RK <benjaminrk at gmail.com> wrote:
> isinstance(s,(str,unicode))

this is equiv. to: isinstance(s, basestring)



Cheers,

f


From fperez.net at gmail.com  Mon Jul 26 22:38:21 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 26 Jul 2010 19:38:21 -0700
Subject: [IPython-dev] correct test-suite
In-Reply-To: <20100726091850.218c47ad@earth>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com> 
	<20100726091850.218c47ad@earth>
Message-ID: <AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com>

On Mon, Jul 26, 2010 at 12:18 AM, Thomas Spura
<tomspur at fedoraproject.org> wrote:
> Here, they don't... That's why I didn't look too closely at the
> failing tests in my branches. I'll try to fix the failures in current
> master on my side first, because it seems some other dependencies are
> doing something wrong.
>

If you can't find it, show me the tracebacks and I may be able to help
out.  We want the test suite to degrade gracefully by skipping if
optional dependencies aren't met, not to fail.

Cheers,

f


From benjaminrk at gmail.com  Tue Jul 27 00:13:15 2010
From: benjaminrk at gmail.com (Min RK)
Date: Mon, 26 Jul 2010 21:13:15 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com>
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com>
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com>
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com>
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com>
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>
	<A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
	<AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com>
Message-ID: <C105F145-CEF1-43C9-B359-CDE8BF578CD8@gmail.com>



On Jul 26, 2010, at 19:33, Fernando Perez <fperez.net at gmail.com> wrote:

> Glad you worked it out, but I'm worried about one thing: I don't
> believe you can send unicode strings as a buffer over the wire.  The
> reason is that the two interpreters at both ends of the connection
> could have been compiled with different internal unicode encodings.
> Python can be compiled to store unicode internally either as UCS-2 or
> UCS-4, you can check sys.maxunicode to find out how your particular
> build was made:
> 
> http://www.python.org/dev/peps/pep-0100/
> http://www.python.org/dev/peps/pep-0261/
> 
> If you send a unicode string as a buffer from a ucs2 python to a ucs4
> one, you'll get  a mess at the other end, I think.

I'm not sure that is our concern. If buffers are being sent, then it's not zmq's job to interpret that buffer; that's up to the user's receiving code.

zmq only sends bytes, and for most objects, unicode included, that's what the buffer interface is. But sometimes a unicode object is really just a simple str in a unicode package, and when that's the case we interpret it as a string. Otherwise it's treated like all other objects - a black box that provides a buffer interface. It's up to the user sending the data to send it in a form that they can understand on the other side.

> Minor note:
> On Mon, Jul 26, 2010 at 6:43 PM, Min RK <benjaminrk at gmail.com> wrote:
>> isinstance(s,(str,unicode))
> 
> this is equiv. to: isinstance(s, basestring)
> 

Ah, thanks, I hadn't seen that one. I'll use it.

> 
> 
> Cheers,
> 
> f

Your points have further clarified that I was mistaken to attempt to support unicode strings. We support basic strings and raw buffers. When faced with a unicode object, we effectively (but not literally) do:
try: send(str(u))
except: send(buffer(u))

From tomspur at fedoraproject.org  Tue Jul 27 02:25:12 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Tue, 27 Jul 2010 08:25:12 +0200
Subject: [IPython-dev] correct test-suite
In-Reply-To: <AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com>
	<20100726091850.218c47ad@earth>
	<AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com>
Message-ID: <20100727082512.62f54006@earth>

On Mon, 26 Jul 2010 19:38:21 -0700,
Fernando Perez <fperez.net at gmail.com> wrote:

> On Mon, Jul 26, 2010 at 12:18 AM, Thomas Spura
> <tomspur at fedoraproject.org> wrote:
> > Here, they don't... That's why I didn't look too closely at the
> > failing tests in my branches. I'll try to fix the failures in
> > current master on my side first, because it seems some other
> > dependencies are doing something wrong.
> >
> 
> If you can't find it, show me the tracebacks and I may be able to help
> out.  We want the test suite to degrade gracefully by skipping if
> optional dependencies aren't met, not to fail.

I can't find it right now...

Some are failing because of a deprecation warning in argparse (it
seems ipython imports my locally installed one, and not the bundled
one). Updating the bundled one first and then fixing the deprecation
warning would help quite a lot.

Here is the output:

$ /usr/bin/python /usr/lib/python2.6/site-packages/IPython/testing/iptest.py
IPython.core IPython.extensions
EEEEE.EEEEEEEE.................EE........................E.E.........E...SE....EEEE...FEFE.E.
======================================================================
ERROR: Failure: ImportError (cannot import name release)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/application.py", line 35, in <module>
    from IPython.core import release, crashhandler
ImportError: cannot import name release

======================================================================
ERROR: Failure: AttributeError ('module' object has no attribute 'utils')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/completer.py", line 85, in <module>
    import IPython.utils.rlineimpl as readline
AttributeError: 'module' object has no attribute 'utils'

======================================================================
ERROR: Failure: ImportError (cannot import name ultratb)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/crashhandler.py", line 26, in <module>
    from IPython.core import ultratb
ImportError: cannot import name ultratb

======================================================================
ERROR: Failure: ImportError (cannot import name ipapi)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/debugger.py", line 33, in <module>
    from IPython.core import ipapi
ImportError: cannot import name ipapi

======================================================================
ERROR: Failure: ImportError (cannot import name ultratb)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/embed.py", line 32, in <module>
    from IPython.core import ultratb
ImportError: cannot import name ultratb

======================================================================
ERROR: Failure: ImportError (cannot import name ipapi)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/history.py", line 10, in <module>
    from IPython.core import ipapi
ImportError: cannot import name ipapi

======================================================================
ERROR: Failure: ImportError (cannot import name release)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/ipapp.py", line 31, in <module>
    from IPython.core import release
ImportError: cannot import name release

======================================================================
ERROR: Failure: ImportError (cannot import name oinspect)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/iplib.py", line 34, in <module>
    from IPython.core import debugger, oinspect
ImportError: cannot import name oinspect

======================================================================
ERROR: Failure: ImportError (cannot import name oinspect)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/magic.py", line 50, in <module>
    from IPython.core import debugger, oinspect
ImportError: cannot import name oinspect

======================================================================
ERROR: Failure: ImportError (cannot import name release)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/prompts.py", line 22, in <module>
    from IPython.core import release
ImportError: cannot import name release

======================================================================
ERROR: Failure: AttributeError ('NoneType' object has no attribute 'user_ns')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 86, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/refbug.py", line 28, in <module>
    if not '_refbug_cache' in ip.user_ns:
AttributeError: 'NoneType' object has no attribute 'user_ns'

======================================================================
ERROR: Failure: IndexError (list index out of range)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/plugins/manager.py", line 148, in generate
    for r in result:
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 668, in loadTestsFromModule
    extraglobs=self.extraglobs)
  File "/usr/lib64/python2.6/doctest.py", line 852, in find
    self._find(tests, obj, name, module, source_lines, globs, {})
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 128, in _find
    source_lines, globs, seen)
  File "/usr/lib64/python2.6/doctest.py", line 906, in _find
    globs, seen)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 128, in _find
    source_lines, globs, seen)
  File "/usr/lib64/python2.6/doctest.py", line 894, in _find
    test = self._get_test(obj, name, module, globs, source_lines)
  File "/usr/lib64/python2.6/doctest.py", line 978, in _get_test
    filename, lineno)
  File "/usr/lib64/python2.6/doctest.py", line 597, in get_doctest
    return DocTest(self.get_examples(string, name), globs,
  File "/usr/lib64/python2.6/doctest.py", line 611, in get_examples
    return [x for x in self.parse(string, name)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 450, in parse
    self._parse_example(m, name, lineno,ip2py)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 510, in _parse_example
    source = self.ip2py(source)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 382, in ip2py
    newline(_ip.prefilter(line,lnum>0))
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: Failure: AttributeError ('module' object has no attribute 'utils')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python2.6/site-packages/nose/importer.py", line 86, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_completer.py", line 14, in <module>
    from IPython.core import completer
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/completer.py", line 85, in <module>
    import IPython.utils.rlineimpl as readline
AttributeError: 'module' object has no attribute 'utils'

======================================================================
ERROR: IPython.core.tests.test_handlers.test_handlers
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_handlers.py", line 74, in test_handlers
    ("top", 'get_ipython().system("d:/cygwin/top ")'),
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_handlers.py", line 46, in run
    ip.runlines(pre)
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 2114, in runlines
    self.push_line('\n')
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib64/python2.6/contextlib.py", line 113, in nested
    yield vars
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 2102, in runlines
    prefiltered = self.prefilter_manager.prefilter_lines(line,more)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: IPython.core.tests.test_imports.test_import_completer
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_imports.py", line 5, in test_import_completer
    from IPython.core import completer
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/completer.py", line 85, in <module>
    import IPython.utils.rlineimpl as readline
AttributeError: 'module' object has no attribute 'utils'

======================================================================
ERROR: IPython.core.tests.test_iplib.test_runlines
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_iplib.py", line 250, in test_runlines
    ip.runlines(['a = 10', 'a+=1'])
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 2114, in runlines
    self.push_line('\n')
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib64/python2.6/contextlib.py", line 113, in nested
    yield vars
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 2102, in runlines
    prefiltered = self.prefilter_manager.prefilter_lines(line,more)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: Failure: IndexError (list index out of range)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/plugins/manager.py", line 148, in generate
    for r in result:
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 668, in loadTestsFromModule
    extraglobs=self.extraglobs)
  File "/usr/lib64/python2.6/doctest.py", line 852, in find
    self._find(tests, obj, name, module, source_lines, globs, {})
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 128, in _find
    source_lines, globs, seen)
  File "/usr/lib64/python2.6/doctest.py", line 906, in _find
    globs, seen)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 128, in _find
    source_lines, globs, seen)
  File "/usr/lib64/python2.6/doctest.py", line 894, in _find
    test = self._get_test(obj, name, module, globs, source_lines)
  File "/usr/lib64/python2.6/doctest.py", line 978, in _get_test
    filename, lineno)
  File "/usr/lib64/python2.6/doctest.py", line 597, in get_doctest
    return DocTest(self.get_examples(string, name), globs,
  File "/usr/lib64/python2.6/doctest.py", line 611, in get_examples
    return [x for x in self.parse(string, name)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 450, in parse
    self._parse_example(m, name, lineno,ip2py)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 510, in _parse_example
    source = self.ip2py(source)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 382, in ip2py
    newline(_ip.prefilter(line,lnum>0))
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: Failure: InvalidAliasError (The name sum can't be aliased because it is a keyword or builtin.)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/loader.py", line 224, in generate
    for test in g():
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_magic.py", line 32, in test_rehashx
    _ip.magic('rehashx')
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 1688, in magic
    result = fn(magic_args)
  File "/usr/lib/python2.6/site-packages/IPython/core/magic.py", line 2773, in magic_rehashx
    ff.replace('.',''), ff)
  File "/usr/lib/python2.6/site-packages/IPython/core/alias.py", line 161, in define_alias
    nargs = self.validate_alias(name, cmd)
  File "/usr/lib/python2.6/site-packages/IPython/core/alias.py", line 172, in validate_alias
    "because it is a keyword or builtin." % name)
InvalidAliasError: The name sum can't be aliased because it is a
keyword or builtin.

======================================================================
ERROR: IPython.core.tests.test_magic.test_time
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_magic.py", line 260, in test_time
    _ip.magic('time None')
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 1689, in magic
    return result
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib64/python2.6/contextlib.py", line 113, in nested
    yield vars
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 1688, in magic
    result = fn(magic_args)
  File "/usr/lib/python2.6/site-packages/IPython/core/magic.py", line 2026, in magic_time
    expr = self.shell.prefilter(parameter_s,False)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: Failure: IndexError (list index out of range)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/plugins/manager.py", line 148, in generate
    for r in result:
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 668, in loadTestsFromModule
    extraglobs=self.extraglobs)
  File "/usr/lib64/python2.6/doctest.py", line 852, in find
    self._find(tests, obj, name, module, source_lines, globs, {})
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 128, in _find
    source_lines, globs, seen)
  File "/usr/lib64/python2.6/doctest.py", line 906, in _find
    globs, seen)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 128, in _find
    source_lines, globs, seen)
  File "/usr/lib64/python2.6/doctest.py", line 894, in _find
    test = self._get_test(obj, name, module, globs, source_lines)
  File "/usr/lib64/python2.6/doctest.py", line 978, in _get_test
    filename, lineno)
  File "/usr/lib64/python2.6/doctest.py", line 597, in get_doctest
    return DocTest(self.get_examples(string, name), globs,
  File "/usr/lib64/python2.6/doctest.py", line 611, in get_examples
    return [x for x in self.parse(string, name)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 450, in parse
    self._parse_example(m, name, lineno,ip2py)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 510, in _parse_example
    source = self.ip2py(source)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 382, in ip2py
    newline(_ip.prefilter(line,lnum>0))
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: See https://bugs.launchpad.net/ipython/+bug/315706
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/IPython/testing/_paramtestpy2.py", line 53, in run_parametric
    testgen.next()
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_prefilter.py", line 45, in test_autocall_binops
    yield nt.assert_equals(ip.prefilter('f 1'),'f(1)')
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range
-------------------- >> begin captured stdout << ---------------------
Automatic calling is: Full
Automatic calling is: OFF

--------------------- >> end captured stdout << ----------------------

======================================================================
ERROR: Check that multiline string literals don't expand as magic
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/IPython/testing/_paramtestpy2.py", line 53, in run_parametric
    testgen.next()
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_prefilter.py", line 66, in test_issue114
    yield nt.assert_equals(ip.prefilter(raw), raw)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 437, in prefilter_lines
    for lnum, line in enumerate(llines) ])
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: Test user input conversions
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/IPython/testing/_paramtestpy2.py", line 53, in run_parametric
    testgen.next()
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_prefilter.py", line 36, in test_prefilter
    yield nt.assert_equals(ip.prefilter(raw), correct)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: Test that simple class definitions work.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_run.py", line 137, in test_simpledef
    _ip.runlines('t = isinstance(f(), foo)')
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 2114, in runlines
    self.push_line('\n')
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib64/python2.6/contextlib.py", line 113, in nested
    yield vars
  File "/usr/lib/python2.6/site-packages/IPython/core/iplib.py", line 2102, in runlines
    prefiltered = self.prefilter_manager.prefilter_lines(line,more)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: Failure: IndexError (list index out of range)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/plugins/manager.py", line 148, in generate
    for r in result:
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 668, in loadTestsFromModule
    extraglobs=self.extraglobs)
  File "/usr/lib64/python2.6/doctest.py", line 852, in find
    self._find(tests, obj, name, module, source_lines, globs, {})
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 128, in _find
    source_lines, globs, seen)
  File "/usr/lib64/python2.6/doctest.py", line 906, in _find
    globs, seen)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 128, in _find
    source_lines, globs, seen)
  File "/usr/lib64/python2.6/doctest.py", line 894, in _find
    test = self._get_test(obj, name, module, globs, source_lines)
  File "/usr/lib64/python2.6/doctest.py", line 978, in _get_test
    filename, lineno)
  File "/usr/lib64/python2.6/doctest.py", line 597, in get_doctest
    return DocTest(self.get_examples(string, name), globs,
  File "/usr/lib64/python2.6/doctest.py", line 611, in get_examples
    return [x for x in self.parse(string, name)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 450, in parse
    self._parse_example(m, name, lineno,ip2py)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 510, in _parse_example
    source = self.ip2py(source)
  File "/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py", line 382, in ip2py
    newline(_ip.prefilter(line,lnum>0))
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 439, in prefilter_lines
    out = self.prefilter_line(llines[0], continue_prompt)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 380, in prefilter_line
    self.shell._last_input_line = line
  File "/usr/lib/python2.6/site-packages/IPython/utils/autoattr.py", line 129, in __get__
    val = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/IPython/core/prefilter.py", line 224, in shell
    klass='IPython.core.iplib.InteractiveShell')[0]
IndexError: list index out of range

======================================================================
ERROR: IPython.extensions.tests.test_pretty.TestPrettyInteractively.test_printers
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.6/site-packages/IPython/testing/decorators.py", line 225, in skipper_func
    return f(*args, **kwargs)
  File "/home/tom/programming/repositories/github/ipython.git/IPython/extensions/tests/test_pretty.py", line 101, in test_printers
    tt.ipexec_validate(self.fname, ipy_out)
  File "/usr/lib/python2.6/site-packages/IPython/testing/tools.py", line 250, in ipexec_validate
    (fname, err))
ValueError: Running file
'/tmp/tmpv7BI33.ipy' produced error:
"---------------------------------------------------------------------------\nAttributeError
Traceback (most recent call
last)\n\n/home/tom/programming/repositories/github/ipython.git/<ipython
console> in <module>()\n\nAttributeError: 'NoneType' object has no
attribute 'for_type'"

======================================================================
FAIL: Test that object's __del__ methods are called on exit.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in
runTest self.test(*self.arg)
  File
"/usr/lib/python2.6/site-packages/IPython/testing/decorators.py", line
225, in skipper_func return f(*args, **kwargs) File
"/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_run.py",
line 155, in test_obj_del tt.ipexec_validate(self.fname, 'object A
deleted') File
"/usr/lib/python2.6/site-packages/IPython/testing/tools.py", line 252,
in ipexec_validate nt.assert_equals(out.strip(), expected_out.strip())
AssertionError: '\x1b[?1034hobject A deleted' != 'object A deleted'
>>  raise self.failureException, \
          (None or '%r != %r' % ('\x1b[?1034hobject A deleted', 'object
A deleted')) 

======================================================================
FAIL: IPython.core.tests.test_run.TestMagicRunSimple.test_tclass
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in
runTest self.test(*self.arg)
  File
"/usr/lib/python2.6/site-packages/IPython/testing/decorators.py", line
225, in skipper_func return f(*args, **kwargs) File
"/home/tom/programming/repositories/github/ipython.git/IPython/core/tests/test_run.py",
line 169, in test_tclass tt.ipexec_validate(self.fname, out) File
"/usr/lib/python2.6/site-packages/IPython/testing/tools.py", line 252,
in ipexec_validate nt.assert_equals(out.strip(), expected_out.strip())
AssertionError: "\x1b[?1034hARGV 1-: ['C-first']\nARGV 1-:
['C-second']\ntclass.py: deleting object: C-first" != "ARGV 1-:
['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first"
>>  raise self.failureException, \
          (None or '%r != %r' % ("\x1b[?1034hARGV 1-: ['C-first']\nARGV
1-: ['C-second']\ntclass.py: deleting object: C-first", "ARGV 1-:
['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object:
C-first")) 

----------------------------------------------------------------------
Ran 98 tests in 1.636s

FAILED (SKIP=1, errors=26, failures=2)

Do you see some quick fixes?

	Thomas


From fperez.net at gmail.com  Tue Jul 27 02:40:16 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 26 Jul 2010 23:40:16 -0700
Subject: [IPython-dev] correct test-suite
In-Reply-To: <20100727082512.62f54006@earth>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com> 
	<20100726091850.218c47ad@earth>
	<AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com> 
	<20100727082512.62f54006@earth>
Message-ID: <AANLkTikaEDvzX8m9-809e3bxcRRGTnTMt1+wMrsLQGfq@mail.gmail.com>

Hi,

On Mon, Jul 26, 2010 at 11:25 PM, Thomas Spura
<tomspur at fedoraproject.org> wrote:
> Some are failing because of a deprecation warning in argparse (it
> seems ipython imports my locally installed one, not the bundled one).
> Updating the bundled one first and then fixing the deprecation warning
> would help quite a lot.

Yes, I'll try to update argparse to the version in 2.7 official, as it
will move us in the direction of less bundled utilities.

> Here is the output:
>
> $ /usr/bin/python /usr/lib/python2.6/site-packages/IPython/testing/iptest.py
> IPython.core IPython.extensions
> EEEEE.EEEEEEEE.................EE........................E.E.........E...SE....EEEE...FEFE.E.
> ======================================================================
> ERROR: Failure: ImportError (cannot import name release)
> ----------------------------------------------------------------------
> Traceback (most recent call last): File
> "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in
> loadTestsFromName addr.filename, addr.module) File
> "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in
> importFromPath return self.importFromDir(dir_path, fqname) File
> "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in
> importFromDir mod = load_module(part_fqname, fh, filename, desc) File
> "/home/tom/programming/repositories/github/ipython.git/IPython/core/application.py",
> line 35, in <module> from IPython.core import release, crashhandler
> ImportError: cannot import name release
>
> ======================================================================
> ERROR: Failure: AttributeError ('module' object has no attribute
> 'utils')
> ----------------------------------------------------------------------
> Traceback (most recent call last): File
> "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in
> loadTestsFromName addr.filename, addr.module) File
> "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in
> importFromPath return self.importFromDir(dir_path, fqname) File
> "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in
> importFromDir mod = load_module(part_fqname, fh, filename, desc) File
> "/home/tom/programming/repositories/github/ipython.git/IPython/core/completer.py",
> line 85, in <module> import IPython.utils.rlineimpl as readline
> AttributeError: 'module' object has no attribute 'utils'
>
> ======================================================================
> ERROR: Failure: ImportError (cannot import name ultratb)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
> File "/usr/lib/python2.6/site-packages/nose/loader.py", line 382, in
> loadTestsFromName addr.filename, addr.module)
> File "/usr/lib/python2.6/site-packages/nose/importer.py", line 39, in
> importFromPath return self.importFromDir(dir_path, fqname)
> File "/usr/lib/python2.6/site-packages/nose/importer.py", line 84, in
> importFromDir mod = load_module(part_fqname, fh, filename, desc)
> File
> "/home/tom/programming/repositories/github/ipython.git/IPython/core/crashhandler.py",
> line 26, in <module> from IPython.core import ultratb ImportError:
> cannot import name ultratb
>

...

All of this seems to indicate that you are somehow mixing an old,
0.10.x tree with the tests for the current code.  In 0.11.x we
reorganized the big code dump from the original ipython into a more
rational structure, but your tests can't seem to find *any* imports.
That seems to indicate that you are finding the old code first on your
PYTHONPATH.

Try this and let us know what you get (from a plain python shell):

>>> import IPython
>>> print IPython.__version__
0.11.alpha1.git
>>> print IPython.__file__
/home/fperez/usr/lib/python2.6/site-packages/IPython/__init__.pyc


Also, looking at which one of these imports work will be a good
tell-tale sign (from a normal python shell).  If you have the real
0.11 tree, you should get:

>>> import IPython.core.ultratb
>>> import IPython.ultraTB
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ultraTB


whereas a 0.10.x tree gives the opposite:

>>> import IPython.ultraTB
>>> import IPython.core.ultratb
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named core.ultratb
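
And if both imports somehow work, here is a little loop (just a sketch
using the stdlib imp module) that prints every copy of IPython your
sys.path can reach, in search order:

    import imp, sys

    for d in sys.path:
        try:
            f, path, desc = imp.find_module('IPython', [d])
        except ImportError:
            continue
        if f:
            f.close()
        print '%s -> %s' % (d or '.', path)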


Cheers,

f


From tomspur at fedoraproject.org  Tue Jul 27 02:49:40 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Tue, 27 Jul 2010 08:49:40 +0200
Subject: [IPython-dev] correct test-suite
In-Reply-To: <AANLkTikaEDvzX8m9-809e3bxcRRGTnTMt1+wMrsLQGfq@mail.gmail.com>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com>
	<20100726091850.218c47ad@earth>
	<AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com>
	<20100727082512.62f54006@earth>
	<AANLkTikaEDvzX8m9-809e3bxcRRGTnTMt1+wMrsLQGfq@mail.gmail.com>
Message-ID: <20100727084940.32d8849d@earth>

Am Mon, 26 Jul 2010 23:40:16 -0700
schrieb Fernando Perez <fperez.net at gmail.com>:
> ...
> 
> All of this seems to indicate that you are somehow mixing an old,
> 0.10.x tree with the tests for the current code.  In 0.11.x we
> reorganized the big code dump from the original ipython into a more
> rational structure, but your tests can't seem to find *any* imports.
> That seems to indicate that you are finding the old code first on your
> PYTHONPATH.

Sorry... Nope...

> 
> Try this and let us know what you get (from a plain python shell):
> 
> >>> import IPython
> >>> print IPython.__version__
> 0.11.alpha1.git
> >>> print IPython.__file__
> /home/fperez/usr/lib/python2.6/site-packages/IPython/__init__.pyc

My output (when checking out my_random_stuff branch with the SHA1
commit):
$ ipython
Python 2.6.4 (r264:75706, Jun  4 2010, 18:20:31) 
Type "copyright", "credits" or "license" for more information.

IPython 0.11.alpha1.git.125afa252a525213df44926912a5a68643434ebf -- An
enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object'. ?object also works, ?? prints more.

In [1]: import IPython

In [2]: IPython.__version__ 
Out[2]: '0.11.alpha1.git.125afa252a525213df44926912a5a68643434ebf'

In [3]: IPython.__file__ 
Out[3]: '/usr/lib/python2.6/site-packages/IPython/__init__.pyc'


> 
> 
> Also, looking at which one of these imports work will be a good
> tell-tale sign (from a normal python shell).  If you have the real
> 0.11 tree, you should get:
> 
> >>> import IPython.core.ultratb
> >>> import IPython.ultraTB
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ImportError: No module named ultraTB
> 
> 
> whereas a 0.10.x tree gives the opposite:
> 
> >>> import IPython.ultraTB
> >>> import IPython.core.ultratb
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ImportError: No module named core.ultratb

$ python2.6
Python 2.6.4 (r264:75706, Jun  4 2010, 18:20:31) 
[GCC 4.4.4 20100503 (Red Hat 4.4.4-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import IPython.core.ultratb
>>> import IPython.ultraTB
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ultraTB

-> 0.11 tree

	Thomas


From erik.tollerud at gmail.com  Tue Jul 27 05:31:11 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Tue, 27 Jul 2010 02:31:11 -0700
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTikONNAObHwS=LTAyj5u+bZgp4SZ+9=m224zkvCM@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com> 
	<AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com> 
	<AANLkTi=omUaObeZ=c_eQ8qeTnBCSsj20c0bGZ02TcTYD@mail.gmail.com> 
	<AANLkTikONNAObHwS=LTAyj5u+bZgp4SZ+9=m224zkvCM@mail.gmail.com>
Message-ID: <AANLkTinsdenT9fBDjKOv=HXW9XVd9nKEpznoZMf0VkjC@mail.gmail.com>

Hi Fernando,

> Barring any unforeseen problems, we expect the 0.11 system for
> profiles to remain compatible from now on.

Good to know - the .11 system is a lot nicer than the previous ones anyway.

>We have a plan to make it
> easier for new projects to provide IPython profiles in *their own
> tree*, but the syntax would be backwards-compatible.  Whereas now you
> say
>
> ipython -p profname
>
> we'd like to allow also (optionally, of course):
>
> ipython -p project:profname

Ooh, that's a neat idea - for now my plan was to include a script in
my project that would just bootstrap ipython (similar to how sympy
does it) depending on which (if any) version of IPython is found.  But
the scheme you have in mind would be much more elegant.
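
For reference, the bootstrap I have in mind is roughly this (only a
sketch; the version cutoff between the two profile formats is
illustrative, not something I've tested):

    import sys

    try:
        import IPython
    except ImportError:
        sys.exit('this tool needs IPython')

    # Crude check: the 0.11 profile system differs from the
    # 0.10 ipythonrc-based one.
    major, minor = IPython.__version__.split('.')[:2]
    if (int(major), int(minor)) >= (0, 11):
        print 'configure via the 0.11 profile system'
    else:
        print 'fall back to a 0.10-style ipythonrc profile'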


> That's a bug, plain and simple, sorry :)  For actual code, instead of
> exec_lines, I use this:
>
> c.Global.exec_files = ['extras.py']

I didn't realize that didn't increment the In[#] counter. Definitely
good to know that option is available, but I decided that if it was a
bug I should go hunting...

Trouble is, despite spending quite a bit of time rooting around in the
IPython.core, I can't seem to figure out where the input and output
caches get populated and their counters incremented... It would be
possible, presumably, to run it like exec_files does for regular py
files and not use the ipython filtering and such, but that really
limits the usefulness of the profile... So is there some option
somewhere that can temporarily turn off the in/out caching (and presumably
that will also prevent the counter from incrementing)? And if not, is
there some obvious spot I missed where they get incremented that I
could try to figure out how it could be patched to prevent this
behavior?


-- 
Erik Tollerud


From dsdale24 at gmail.com  Tue Jul 27 08:05:06 2010
From: dsdale24 at gmail.com (Darren Dale)
Date: Tue, 27 Jul 2010 08:05:06 -0400
Subject: [IPython-dev] Trunk in 100% test compliance
In-Reply-To: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com>
References: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com>
Message-ID: <AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com>

On Sun, Jul 25, 2010 at 11:27 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hi folks,
>
> we had a few lingering test errors here and there, and with all the
> renewed activity in the project, that seemed like a fairly unsafe way
> to proceed.  We really want everyone to be able to *always* run the
> *full* test suite and only make pull requests when the suite passes
> completely.  Having failing tests in the way makes it much more likely
> that new code will be added with more failures, so hopefully this is a
> useful checkpoint to start from.
[...]
> If anyone sees a different result on their system, please do let us
> know and we'll hopefully be able to fix it.

I just fetched the master branch, and when I try to run "python
setup.py install" I get:

error: package directory 'IPython/frontend/tests' does not exist

IPython/frontend contains only an empty __init__.py, but setupbase.py
is still doing:

    add_package(packages, 'frontend', tests=True)
    # Don't include the cocoa frontend for now as it is not stable
    if sys.platform == 'darwin' and False:
        add_package(packages, 'frontend.cocoa', tests=True, others=['plugin'])
        add_package(packages, 'frontend.cocoa.examples')
        add_package(packages, 'frontend.cocoa.examples.IPython1Sandbox')
        add_package(packages, 'frontend.cocoa.examples.IPython1Sandbox.English.$
    add_package(packages, 'frontend.process')
    add_package(packages, 'frontend.wx')
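
Until that's fixed, a guard along these lines (add_package_if_exists is
a hypothetical helper wrapping the existing add_package, not actual
setupbase code) would keep setup.py from tripping over directories that
no longer exist:

    import os

    def add_package_if_exists(packages, pname, **kw):
        # Only register subpackages whose directory is still in the tree.
        pkgdir = os.path.join('IPython', *pname.split('.'))
        if os.path.isdir(pkgdir):
            add_package(packages, pname, **kw)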

Cheers,
Darren


From ellisonbg at gmail.com  Tue Jul 27 11:07:33 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 27 Jul 2010 08:07:33 -0700
Subject: [IPython-dev] Trunk in 100% test compliance
In-Reply-To: <AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com>
References: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com>
	<AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com>
Message-ID: <AANLkTinPBCe=648f1=o0yBTU-A24pTP-5LyqFqs4T0Yo@mail.gmail.com>

Oops,

When we fix this we will need to remove the IPython/gui and IPython/frontend
references in setup.py and MANIFEST as well.

Brian

On Tue, Jul 27, 2010 at 5:05 AM, Darren Dale <dsdale24 at gmail.com> wrote:

> On Sun, Jul 25, 2010 at 11:27 PM, Fernando Perez <fperez.net at gmail.com>
> wrote:
> > Hi folks,
> >
> > we had a few lingering test errors here and there, and with all the
> > renewed activity in the project, that seemed like a fairly unsafe way
> > to proceed.  We really want everyone to be able to *always* run the
> > *full* test suite and only make pull requests when the suite passes
> > completely.  Having failing tests in the way makes it much more likely
> > that new code will be added with more failures, so hopefully this is a
> > useful checkpoint to start from.
> [...]
> > If anyone sees a different result on their system, please do let us
> > know and we'll hopefully be able to fix it.
>
> I just fetched the master branch, and when I try to run "python
> setup.py install" I get:
>
> error: package directory 'IPython/frontend/tests' does not exist
>
> IPython/frontend contains only an empty __init__.py, but setupbase.py
> is still doing:
>
>    add_package(packages, 'frontend', tests=True)
>    # Don't include the cocoa frontend for now as it is not stable
>    if sys.platform == 'darwin' and False:
>        add_package(packages, 'frontend.cocoa', tests=True,
> others=['plugin'])
>        add_package(packages, 'frontend.cocoa.examples')
>        add_package(packages, 'frontend.cocoa.examples.IPython1Sandbox')
>        add_package(packages,
> 'frontend.cocoa.examples.IPython1Sandbox.English.$
>    add_package(packages, 'frontend.process')
>    add_package(packages, 'frontend.wx')
>
> Cheers,
> Darren



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From ellisonbg at gmail.com  Tue Jul 27 14:14:55 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 27 Jul 2010 11:14:55 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <C105F145-CEF1-43C9-B359-CDE8BF578CD8@gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com>
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com>
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com>
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com>
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com>
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>
	<A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
	<AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com>
	<C105F145-CEF1-43C9-B359-CDE8BF578CD8@gmail.com>
Message-ID: <AANLkTinxoEHdgv17nqu9MXbX2_k7GLCBR6zZhqkwnU3-@mail.gmail.com>

On Mon, Jul 26, 2010 at 9:13 PM, Min RK <benjaminrk at gmail.com> wrote:

>
>
> On Jul 26, 2010, at 19:33, Fernando Perez <fperez.net at gmail.com> wrote:
>
> > Glad you worked it out, but I'm worried about one thing: I don't
> > believe you can send unicode strings as a buffer over the wire.  The
> > reason is that the two interpreters at both ends of the connection
> > could have been compiled with different internal unicode encodings.
> > Python can be compiled to store unicode internally either as UCS-2 or
> > UCS-4, you can check sys.maxunicode to find out how your particular
> > build was made:
> >
> > http://www.python.org/dev/peps/pep-0100/
> > http://www.python.org/dev/peps/pep-0261/
> >
> > If you send a unicode string as a buffer from a ucs2 python to a ucs4
> > one, you'll get  a mess at the other end, I think.
>
> I'm not sure that is our concern. If buffers are being sent, then it's not
> zmq whose job it is to interpret that buffer, it's the user's receiving
> code.
>
> zmq only sends bytes, and for most objects, unicode included, that's what
> the buffer interface is. But sometimes a unicode object is really just a
> simple str in a unicode package, and when that's the case we interpret it as
> a string. Otherwise it's treated like all other objects - a black box that
> provides a buffer interface. It's up to the user sending the data to send it
> in a form that they can understand on the other side.
>
>
Yes, I hadn't thought about the fact that unicode objects are buffers as
well.  But we could raise a TypeError when a user tries to send a unicode
object (str in python 3).  IOW, don't treat unicode as buffers and force
them to encode/decode.  Does this make sense, or should we allow unicode to
be sent as buffers?

Brian


> > Minor note:
> > On Mon, Jul 26, 2010 at 6:43 PM, Min RK <benjaminrk at gmail.com> wrote:
> >> isinstance(s,(str,unicode))
> >
> > this is equiv. to: isinstance(s, basestring)
> >
>
> ah, thanks, I hadn't seen that one. I'll use it.
>
> >
> >
> > Cheers,
> >
> > f
>
> your points have further clarified that I was mistaken to attempt to
> support unicode strings. We support basic strings and raw buffers. When
> faced with a unicode object, we effectively (but not literally) do:
> try: send(str(u))
> except: send(buffer(u))




-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From fperez.net at gmail.com  Tue Jul 27 14:25:36 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 27 Jul 2010 11:25:36 -0700
Subject: [IPython-dev] Trunk in 100% test compliance
In-Reply-To: <AANLkTinPBCe=648f1=o0yBTU-A24pTP-5LyqFqs4T0Yo@mail.gmail.com>
References: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com> 
	<AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com> 
	<AANLkTinPBCe=648f1=o0yBTU-A24pTP-5LyqFqs4T0Yo@mail.gmail.com>
Message-ID: <AANLkTimfA9zcy5ahooMbsCbWKyenJF+7DWE2cae4GkCd@mail.gmail.com>

Hey Brian,

On Tue, Jul 27, 2010 at 8:07 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>
> When we fix this we will need to remove the IPython/gui and IPython/frontend
> references in setup.py and MANIFEST as well.

when you did the cleanup of dead code in trunk, where did you plan to
have the new frontends live?  I figured we might have kept the
top-level frontend/ directory, or did you have a new location for
that?  There was some unmaintained code there, but the new work by
Evan, Gerardo and Omar was also going there, so we should find a good
plan for that, so they can merge from trunk if they need to...

Cheers,

f


From fperez.net at gmail.com  Tue Jul 27 14:34:17 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 27 Jul 2010 11:34:17 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTinxoEHdgv17nqu9MXbX2_k7GLCBR6zZhqkwnU3-@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com> 
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com> 
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com> 
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com> 
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com> 
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com> 
	<A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
	<AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com> 
	<C105F145-CEF1-43C9-B359-CDE8BF578CD8@gmail.com>
	<AANLkTinxoEHdgv17nqu9MXbX2_k7GLCBR6zZhqkwnU3-@mail.gmail.com>
Message-ID: <AANLkTik04o36kEm0UE_fY8Soe9em07-bYHSf4M1FO_A3@mail.gmail.com>

On Tue, Jul 27, 2010 at 11:14 AM, Brian Granger <ellisonbg at gmail.com> wrote:
>
> Yes, I hadn't thought about the fact that unicode objects are buffers as
> well.  But we could raise a TypeError when a user tries to send a unicode
> object (str in python 3).  IOW, don't treat unicode as buffers and force
> them to encode/decode.  Does this make sense, or should we allow unicode to
> be sent as buffers?

Well, the problem I explained about a possible mismatch in internal
unicode storage format rears its ugly head if we allow
unicode-as-buffer.  I was precisely worried about sending 3.x strings
as buffers, since the two ends may not agree on what the buffer means.
I may be worrying about a non-problem, but at some point it might be
worth verifying this.  The test is a bit cumbersome to set up, because
you have to build two versions of Python, one with ucs-2 and one with
ucs-4, and see what happens if they try to send each other stuff.  But
I think it's a test worth making, so we know for sure whether this is
a problem or not, as it will dictate design decisions for 3.x on all
string handling.

If it is a problem, then there are some options:

- disallow communication between ucs 2/4 pythons.
- detect a mismatch and encode/decode all unicode strings to utf-8 on
send/receive, but allow raw buffer sending if there's no mismatch.
- *always* encode/decode.

The middle option seems appealing because it avoids the overhead of
encoding/decoding on all sends, but I'm worried it may be too brittle.
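
As a sketch of the always-encode option (the helper names are mine, not
pyzmq API), the idea is simply:

    def send_unicode(sock, u, flags=0):
        # Put an explicit encoding on the wire, never the raw internal
        # buffer, whose layout depends on the sender's ucs2/ucs4 build.
        sock.send(u.encode('utf-8'), flags)

    def recv_unicode(sock, flags=0):
        return sock.recv(flags).decode('utf-8')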

Cheers,


f


From fperez.net at gmail.com  Tue Jul 27 14:37:33 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 27 Jul 2010 11:37:33 -0700
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTinsdenT9fBDjKOv=HXW9XVd9nKEpznoZMf0VkjC@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com> 
	<AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com> 
	<AANLkTi=omUaObeZ=c_eQ8qeTnBCSsj20c0bGZ02TcTYD@mail.gmail.com> 
	<AANLkTikONNAObHwS=LTAyj5u+bZgp4SZ+9=m224zkvCM@mail.gmail.com> 
	<AANLkTinsdenT9fBDjKOv=HXW9XVd9nKEpznoZMf0VkjC@mail.gmail.com>
Message-ID: <AANLkTimggqudVsR9x87D-tf3n3X0e3a4wXutc32tW9p8@mail.gmail.com>

Hi Erik,

On Tue, Jul 27, 2010 at 2:31 AM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
>
>> That's a bug, plain and simple, sorry :)  For actual code, instead of
>> exec_lines, I use this:
>>
>> c.Global.exec_files = ['extras.py']
>
> I didn't realize that didn't increment the In[#] counter. Definitely
> good to know that option is available, but I decided that if it was a
> bug I should go hunting...
>
> Trouble is, despite spending quite a bit of time rooting around in the
> IPython.core, I can't seem to figure out where the input and output
> caches get populated and their counters incremented... It would be
> possible, presumably, to run it like exec_files does for regular py
> files and not use the ipython filtering and such, but that really
> limits the usefulness of the profile... So is there some option
> somewhere that can temporarily turn off the in/out caching (and presumably
> that will also prevent the counter from incrementing)? And if not, is
> there some obvious spot I missed where they get incremented that I
> could try to figure out how it could be patched to prevent this
> behavior?

I wouldn't bother if I were you: that code is a horrible mess, and the
re-work that we're doing right now will clean a lot of that up.  The
old code has coupling all over the map for prompt handling, and we're
trying to clean that as well.  If you're really curious, the code is
in core/prompts.py, and the object in the main ipython that handles it
is get_ipython().outputcache.  So grepping around for that guy may
help, but as I said, I'd let it go for now and live with using
exec_files, until we finish up the housecleaning :)
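
If you do want to poke at it in the meantime, the counter lives roughly
here (attribute names from memory, and likely to change in the cleanup):

    ip = get_ipython()
    oc = ip.outputcache      # the object mentioned above
    print oc.prompt_count    # the number behind the In[#]/Out[#] prompts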

Cheers,

f


From ellisonbg at gmail.com  Tue Jul 27 15:21:16 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 27 Jul 2010 12:21:16 -0700
Subject: [IPython-dev] Trunk in 100% test compliance
In-Reply-To: <AANLkTimfA9zcy5ahooMbsCbWKyenJF+7DWE2cae4GkCd@mail.gmail.com>
References: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com>
	<AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com>
	<AANLkTinPBCe=648f1=o0yBTU-A24pTP-5LyqFqs4T0Yo@mail.gmail.com>
	<AANLkTimfA9zcy5ahooMbsCbWKyenJF+7DWE2cae4GkCd@mail.gmail.com>
Message-ID: <AANLkTim6L4Tbwg_W717u7N29iwEZ8-mKnsOtTx68T90x@mail.gmail.com>

On Tue, Jul 27, 2010 at 11:25 AM, Fernando Perez <fperez.net at gmail.com>wrote:

> Hey Brian,
>
> On Tue, Jul 27, 2010 at 8:07 AM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >
> > When we fix this we will need to remove the IPython/gui and
> IPython/frontend
> > references in setup.py and MANIFEST as well.
>
> when you did the cleanup of dead code in trunk, where did you plan to
> have the new frontends live?  I figured we might have kept the
> top-level frontend/ directory, or did you have a new location for
> that?  There was some unmaintaned code there, but the new work by
> Evan, Gerardo and Omar was also going there, so we should find a good
> plan for that, so they can merge from trunk if they need to...
>
>
I did this yesterday and it is in trunk now:

http://github.com/ipython/ipython/commit/595fc3b996f891ecc1a1996c598d15e47e6aac67

But I did leave the top-level frontend directory with the qt subdirectory in
place.  Basically, it is organized like you expect.  In my previous email,
when I said IPython/frontend, I really meant "the appropriate things that used
to be in IPython/frontend".  But yes, all the new stuff should still go
into frontend as expected.

Cheers,

Brian


> Cheers,
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From ellisonbg at gmail.com  Tue Jul 27 15:23:37 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 27 Jul 2010 12:23:37 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTik04o36kEm0UE_fY8Soe9em07-bYHSf4M1FO_A3@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com>
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com>
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com>
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com>
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com>
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>
	<A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
	<AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com>
	<C105F145-CEF1-43C9-B359-CDE8BF578CD8@gmail.com>
	<AANLkTinxoEHdgv17nqu9MXbX2_k7GLCBR6zZhqkwnU3-@mail.gmail.com>
	<AANLkTik04o36kEm0UE_fY8Soe9em07-bYHSf4M1FO_A3@mail.gmail.com>
Message-ID: <AANLkTikH3GjfdywBw=8uG+eRbWU-Borse9s4b100kEys@mail.gmail.com>

On Tue, Jul 27, 2010 at 11:34 AM, Fernando Perez <fperez.net at gmail.com>wrote:

> On Tue, Jul 27, 2010 at 11:14 AM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> >
> > Yes, I hadn't thought about the fact that unicode objects are buffers as
> > well.  But we could raise a TypeError when a user tries to send a
> unicode
> > object (str in python 3).  IOW, don't treat unicode as buffers and force
> > them to encode/decode.  Does this make sense, or should we allow unicode
> to
> > be sent as buffers?
>
> Well, the problem I explained about a possible mismatch in internal
> unicode storage format rears its ugly head if we allow
> unicode-as-buffer.  I was precisely worried about sending 3.x strings
> as buffers, since the two ends may not agree on what the buffer means.
> I may be worrying about a non-problem, but at some point it might be
> worth verifying this.  The test is a bit cumbersome to set up, because
> you have to build two versions of Python, one with ucs-2 and one with
> ucs-4, and see what happens if they try to send each other stuff.  But
> I think it's a test worth making, so we know for sure whether this is
> a problem or not, as it will dictate design decisions for 3.x on all
> string handling.
>
>
This is definitely an issue.  Also, someone could set their own custom
unicode encoding by hand and that would mess this up as well.


> If it is a problem, then there are some options:
>
> - disallow communication between ucs 2/4 pythons.
>

But this doesn't account for other encoding/decoding setups.


> - detect a mismatch and encode/decode all unicode strings to utf-8 on
> send/receive, but allow raw buffer sending if there's no mismatch.
>

This will be tough though if users set their own encoding.


> - *always* encode/decode.
>
>
I think this is the option that I prefer (having users do this in their
application code).


> The middle option seems appealing because it avoids the overhead of
> encoding/decoding on all sends, but I'm worried it may be too brittle.
>
>
Brian


> Cheers,
>
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From tomspur at fedoraproject.org  Tue Jul 27 15:27:26 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Tue, 27 Jul 2010 21:27:26 +0200
Subject: [IPython-dev] correct test-suite
In-Reply-To: <AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com>
	<20100726091850.218c47ad@earth>
	<AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com>
Message-ID: <20100727212726.6a988639@earth>

Am Mon, 26 Jul 2010 19:38:21 -0700
schrieb Fernando Perez <fperez.net at gmail.com>:

> On Mon, Jul 26, 2010 at 12:18 AM, Thomas Spura
> <tomspur at fedoraproject.org> wrote:
> > Here, they don't... That's why, I didn't look too closely to the
> > failing tests in my branches. I'll try to fix the failures in
> > current master on my side first, because it seems some other
> > dependencies are doing something wrong I guess...
> >
> 
> If you can't find it, show me the tracebacks and I may be able to help
> out.  We want the test suite to degrade gracefully by skipping if
> optional dependencies aren't met, not to fail.

Now there are fewer failures than before:

$ /usr/bin/python /usr/lib/python2.6/site-packages/IPython/testing/iptest.py
IPython.core
IPython.extensions ......F.................................................................S......>f(1) ...F........F.F...E.
======================================================================
ERROR:
IPython.extensions.tests.test_pretty.TestPrettyInteractively.test_printers
----------------------------------------------------------------------
Traceback (most recent call last): File
"/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
self.test(*self.arg) File
"/usr/lib/python2.6/site-packages/IPython/testing/decorators.py", line
225, in skipper_func return f(*args, **kwargs) File
"/usr/lib/python2.6/site-packages/IPython/extensions/tests/test_pretty.py",
line 101, in test_printers tt.ipexec_validate(self.fname, ipy_out) File
"/usr/lib/python2.6/site-packages/IPython/testing/tools.py", line 250,
in ipexec_validate (fname, err)) ValueError: Running file
'/tmp/tmpEtePU6.ipy' produced error:
"---------------------------------------------------------------------------\nAttributeError
Traceback (most recent call last)\n\n/home/tom/bin/<ipython console> in
<module>()\n\nAttributeError: 'NoneType' object has no attribute
'for_type'"

======================================================================
FAIL: Doctest: IPython.core.magic.Magic.magic_reset_selective
----------------------------------------------------------------------
Traceback (most recent call last):
  File
"/usr/lib/python2.6/site-packages/IPython/testing/plugin/ipdoctest.py",
line 265, in runTest raise
self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for
IPython.core.magic.Magic.magic_reset_selective File
"/usr/lib/python2.6/site-packages/IPython/core/magic.py", line 1115, in
magic_reset_selective

----------------------------------------------------------------------
File "/usr/lib/python2.6/site-packages/IPython/core/magic.py", line
1132, in IPython.core.magic.Magic.magic_reset_selective Failed example:
    get_ipython().magic("who_ls ")
Expected:
    ['a', 'b', 'b1', 'b1m', 'b2m', 'b2s', 'b3m', 'b4m', 'c']
Got:
    ['Bunch', 'ESC_MAGIC', 'FakeModule', 'GetoptError', 'IPython',
'LSString', 'Macro', 'Magic', 'SList', 'StringIO', 'StringTypes',
'Struct', 'Term', 'TryNext', 'UsageError', 'a', 'abbrev_cwd',
'arg_split', 'b', 'b1m', 'b2m', 'b2s', 'b3m', 'b4m', 'bdb', 'c',
'clock', 'clock2', 'compress_dhist', 'debugger', 'enable_gui', 'error',
'file_read', 'get_py_filename', 'getopt', 'inspect', 'itpl',
'mpl_runner', 'nlprint', 'oinspect', 'on_off', 'os', 'page', 'pformat',
'printpl', 'profile', 'pstats', 're', 'set_term_title', 'shutil',
'sys', 'testdec', 'textwrap', 'time', 'types', 'warn']
----------------------------------------------------------------------
File "/usr/lib/python2.6/site-packages/IPython/core/magic.py", line
1137, in IPython.core.magic.Magic.magic_reset_selective Failed example:
get_ipython().magic("who_ls ") Expected: ['a', 'b', 'b1', 'b1m', 'b2s',
'c'] Got: ['Bunch', 'ESC_MAGIC', 'FakeModule', 'GetoptError',
'IPython', 'LSString', 'Macro', 'Magic', 'SList', 'StringIO',
'StringTypes', 'Struct', 'Term', 'TryNext', 'UsageError', 'a',
'abbrev_cwd', 'arg_split', 'b', 'b1m', 'b2s', 'b4m', 'bdb', 'c',
'clock', 'clock2', 'compress_dhist', 'debugger', 'enable_gui', 'error',
'file_read', 'get_py_filename', 'getopt', 'inspect', 'itpl',
'mpl_runner', 'nlprint', 'oinspect', 'on_off', 'os', 'page', 'pformat',
'printpl', 'profile', 'pstats', 're', 'set_term_title', 'shutil',
'sys', 'testdec', 'textwrap', 'time', 'types', 'warn']
----------------------------------------------------------------------
File "/usr/lib/python2.6/site-packages/IPython/core/magic.py", line
1142, in IPython.core.magic.Magic.magic_reset_selective Failed example:
get_ipython().magic("who_ls ") Expected: ['a', 'b', 'b1', 'b1m', 'b2s',
'c'] Got: ['Bunch', 'ESC_MAGIC', 'GetoptError', 'IPython', 'LSString',
'Macro', 'Magic', 'SList', 'StringIO', 'StringTypes', 'Struct', 'Term',
'TryNext', 'UsageError', 'a', 'arg_split', 'b', 'b1m', 'b2s', 'b4m',
'c', 'clock', 'clock2', 'enable_gui', 'error', 'get_py_filename',
'getopt', 'inspect', 'itpl', 'mpl_runner', 'nlprint', 'oinspect',
'on_off', 'os', 'page', 'pformat', 'printpl', 'profile', 'pstats',
're', 'set_term_title', 'shutil', 'sys', 'textwrap', 'time', 'types',
'warn']
----------------------------------------------------------------------
File "/usr/lib/python2.6/site-packages/IPython/core/magic.py", line
1147, in IPython.core.magic.Magic.magic_reset_selective Failed example:
get_ipython().magic("who_ls ") Expected: Out[8]:['a', 'b', 'b1', 'b1m',
'b2s'] Got: ['ESC_MAGIC', 'GetoptError', 'IPython', 'LSString',
'SList', 'StringIO', 'StringTypes', 'Term', 'TryNext', 'UsageError',
'a', 'arg_split', 'b', 'b1m', 'b2s', 'b4m', 'enable_gui', 'error',
'get_py_filename', 'getopt', 'itpl', 'mpl_runner', 'nlprint', 'on_off',
'os', 'page', 'pformat', 'printpl', 'profile', 'pstats', 're',
'set_term_title', 'shutil', 'sys', 'textwrap', 'time', 'types', 'warn']
----------------------------------------------------------------------
File "/usr/lib/python2.6/site-packages/IPython/core/magic.py", line
1152, in IPython.core.magic.Magic.magic_reset_selective Failed example:
get_ipython().magic("who_ls ") Expected: ['a'] Got: ['ESC_MAGIC',
'GetoptError', 'IPython', 'LSString', 'SList', 'StringIO',
'StringTypes', 'Term', 'TryNext', 'UsageError', 'a', 'arg_split',
'error', 'get_py_filename', 'getopt', 'itpl', 'mpl_runner', 'nlprint',
'on_off', 'os', 'page', 'pformat', 'printpl', 'profile', 'pstats',
're', 'set_term_title', 'shutil', 'sys', 'textwrap', 'time', 'types',
'warn']

>>  raise self.failureException(self.format_failure(<StringIO.StringIO
>> instance at 0x48941b8>.getvalue()))
    

======================================================================
FAIL: Check that multiline string literals don't expand as magic
----------------------------------------------------------------------
Traceback (most recent call last):
  File
"/usr/lib/python2.6/site-packages/IPython/testing/_paramtestpy2.py",
line 53, in run_parametric testgen.next() File
"/usr/lib/python2.6/site-packages/IPython/core/tests/test_prefilter.py",
line 59, in test_issue114 yield nt.assert_equals(ip.prefilter(raw),
raw) AssertionError: '"""\nget_ipython().magic("Exit ")\n"""' !=
'"""\nExit\n"""'
>>  raise self.failureException, \
          (None or '%r != %r' % ('"""\nget_ipython().magic("Exit
")\n"""', '"""\nExit\n"""')) 

======================================================================
FAIL: Test that object's __del__ methods are called on exit.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in
runTest self.test(*self.arg)
  File
"/usr/lib/python2.6/site-packages/IPython/testing/decorators.py", line
225, in skipper_func return f(*args, **kwargs) File
"/usr/lib/python2.6/site-packages/IPython/core/tests/test_run.py", line
155, in test_obj_del tt.ipexec_validate(self.fname, 'object A deleted')
File "/usr/lib/python2.6/site-packages/IPython/testing/tools.py", line
252, in ipexec_validate nt.assert_equals(out.strip(),
expected_out.strip()) AssertionError: '\x1b[?1034hobject A deleted' !=
'object A deleted'
>>  raise self.failureException, \
          (None or '%r != %r' % ('\x1b[?1034hobject A deleted', 'object
A deleted')) 

======================================================================
FAIL: IPython.core.tests.test_run.TestMagicRunSimple.test_tclass
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in
runTest self.test(*self.arg)
  File
"/usr/lib/python2.6/site-packages/IPython/testing/decorators.py", line
225, in skipper_func return f(*args, **kwargs) File
"/usr/lib/python2.6/site-packages/IPython/core/tests/test_run.py", line
169, in test_tclass tt.ipexec_validate(self.fname, out) File
"/usr/lib/python2.6/site-packages/IPython/testing/tools.py", line 252,
in ipexec_validate nt.assert_equals(out.strip(), expected_out.strip())
AssertionError: "\x1b[?1034hARGV 1-: ['C-first']\nARGV 1-:
['C-second']\ntclass.py: deleting object: C-first" != "ARGV 1-:
['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first"
>>  raise self.failureException, \
          (None or '%r != %r' % ("\x1b[?1034hARGV 1-: ['C-first']\nARGV
1-: ['C-second']\ntclass.py: deleting object: C-first", "ARGV 1-:
['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object:
C-first")) 

----------------------------------------------------------------------
Ran 104 tests in 1.916s

FAILED (SKIP=1, errors=1, failures=4)


But I don't know what is really expected here right now...

	Thomas


From fperez.net at gmail.com  Tue Jul 27 16:13:36 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 27 Jul 2010 13:13:36 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTikH3GjfdywBw=8uG+eRbWU-Borse9s4b100kEys@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com> 
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com> 
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com> 
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com> 
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com> 
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com> 
	<A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
	<AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com> 
	<C105F145-CEF1-43C9-B359-CDE8BF578CD8@gmail.com>
	<AANLkTinxoEHdgv17nqu9MXbX2_k7GLCBR6zZhqkwnU3-@mail.gmail.com> 
	<AANLkTik04o36kEm0UE_fY8Soe9em07-bYHSf4M1FO_A3@mail.gmail.com> 
	<AANLkTikH3GjfdywBw=8uG+eRbWU-Borse9s4b100kEys@mail.gmail.com>
Message-ID: <AANLkTim540qGHOsBRbWc8JdA5idB3=tBeY-v+HhmrO_E@mail.gmail.com>

On Tue, Jul 27, 2010 at 12:23 PM, Brian Granger <ellisonbg at gmail.com> wrote:
> This is definitely an issue.  Also, someone could set their own custom
> unicode encoding by hand and that would mess this up as well.
>
>>
>> If it is a problem, then there are some options:
>>
>> - disallow communication between ucs 2/4 pythons.
>
> But this doesn't account for other encoding/decoding setups.

Note that when I mention ucs2/4, that refers to the *internal* python
storage of all unicode objects.  That is: ucs2/4 is how the buffer,
under the hood for a unicode string, is written in memory.  There are
no other encoding/decoding setups for Python, this is strictly a
compile-time flag and can only be either ucs2 or ucs4.

You can see the value by typing:

In [1]: sys.maxunicode
Out[1]: 1114111

That's ucs-4, and that number is the whole of the current unicode
standard.  If you get instead 2^16, it means you have a ucs2 build,
and python can only encode strings in the BMP (basic multilingual
plane, where all living languages are stored but not math symbols,
musical symbols and some extended Asian characters).
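
In other words, the check is just:

    import sys

    if sys.maxunicode > 0xffff:
        print 'ucs4 build: the full Unicode range'
    else:
        print 'ucs2 build: only the BMP'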

Does that make sense?

Note that additionally, it's exceedingly rare for anyone to set up a
custom encoding for unicode.  It's hard to do right, requires plumbing
in the codecs module, and I think Python supports out of the box
enough encodings that I can't imagine why anyone would write a new
encoding.  But regardless, if a string has been encoded then it's OK:
now it's bytes, and there's no problem.

>> - detect a mismatch and encode/decode all unicode strings to utf-8 on
>> send/receive, but allow raw buffer sending if there's no mismatch.
>
> This will be tough though if users set their own encoding.

No, the issue with users having something other than utf-8 is
orthogonal to this.  The idea would be: if both ends of the
transmission have conflicting ucs internals, then all unicode strings
are sent as utf-8.  If a user sends an encoded string, then that's
just a bunch of bytes and it doesn't matter how they encoded it, since
they will be responsible for decoding it on the other end.

But I still don't like this approach because the ucs2/4 mismatch is a
pair-wise problem, and for a multi-node setup managing this pair-wise
switching of protocols can be a nightmare.  And let's not even get
started on what pub/sub sockets would do with this...

>> - *always* encode/decode.
>>
>
> I think this is the option that I prefer (having users to this in their
> application code).

Yes, now that I think of pub/sub sockets, I don't think we have a
choice.  It's a bit unfortunate that Python recently decided *not* to
standardize on a storage scheme:

http://mail.python.org/pipermail/python-dev/2008-July/080886.html

because it means forever paying the price of encoding/decoding in this context.

Cheers,

f

ps - as you can tell, I've been finally doing my homework on unicode,
in preparation for an eventual 3.x transition :)


From fperez.net at gmail.com  Tue Jul 27 16:21:15 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 27 Jul 2010 13:21:15 -0700
Subject: [IPython-dev] Trunk in 100% test compliance
In-Reply-To: <AANLkTim6L4Tbwg_W717u7N29iwEZ8-mKnsOtTx68T90x@mail.gmail.com>
References: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com> 
	<AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com> 
	<AANLkTinPBCe=648f1=o0yBTU-A24pTP-5LyqFqs4T0Yo@mail.gmail.com> 
	<AANLkTimfA9zcy5ahooMbsCbWKyenJF+7DWE2cae4GkCd@mail.gmail.com> 
	<AANLkTim6L4Tbwg_W717u7N29iwEZ8-mKnsOtTx68T90x@mail.gmail.com>
Message-ID: <AANLkTim2==3jg82msLJraUKFrmnbMFv5EUxfW-fnoJ+y@mail.gmail.com>

On Tue, Jul 27, 2010 at 12:21 PM, Brian Granger <ellisonbg at gmail.com> wrote:

> I did this yesterday and it is in trunk now:
> http://github.com/ipython/ipython/commit/595fc3b996f891ecc1a1996c598d15e47e6aac67
> But I did leave the top-level frontend directory with the qt subdirectory in
> place.  Basically, it is organized like you expect.  In my previous email,
> when I said IPython/frontend, I really meant "the appropriate things that used
> to be in IPython/frontend".  But yes, all the new stuff should still go
> into frontend as expected.

Ah, in fact the qt subdir is not there, because git won't keep empty
directories.  So if you do a full clean when you switch to trunk

git clean -dfx

You'll see qt/ gone from frontend.  Since I saw qt/ gone, I got
confused.  But that's OK, when Evan/Gerardo add qt back it will be
fine.

Sorry for the misunderstanding.

Cheers,

f


From ellisonbg at gmail.com  Tue Jul 27 16:49:12 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 27 Jul 2010 13:49:12 -0700
Subject: [IPython-dev] Trunk in 100% test compliance
In-Reply-To: <AANLkTim2==3jg82msLJraUKFrmnbMFv5EUxfW-fnoJ+y@mail.gmail.com>
References: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com>
	<AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com>
	<AANLkTinPBCe=648f1=o0yBTU-A24pTP-5LyqFqs4T0Yo@mail.gmail.com>
	<AANLkTimfA9zcy5ahooMbsCbWKyenJF+7DWE2cae4GkCd@mail.gmail.com>
	<AANLkTim6L4Tbwg_W717u7N29iwEZ8-mKnsOtTx68T90x@mail.gmail.com>
	<AANLkTim2==3jg82msLJraUKFrmnbMFv5EUxfW-fnoJ+y@mail.gmail.com>
Message-ID: <AANLkTimtg9W01VoeuMf7w1gqQY6_n=4c-6jCHEFEhBbM@mail.gmail.com>

On Tue, Jul 27, 2010 at 1:21 PM, Fernando Perez <fperez.net at gmail.com>wrote:

> On Tue, Jul 27, 2010 at 12:21 PM, Brian Granger <ellisonbg at gmail.com>
> wrote:
>
> > I did this yesterday and it is in trunk now:
> >
> http://github.com/ipython/ipython/commit/595fc3b996f891ecc1a1996c598d15e47e6aac67
> > But I did leave the top-level frontend directory with the qt subdirectory
> in
> > place.  Basically, it is organized like you expect.  In my previous
> email,
> > when I said IPython/frontend, I really meant "the appropriate things that
> used
> > to be in IPython/frontend".  But yes, all the new stuff should still
> go
> > into frontend as expected.
>
> Ah, in fact the qt subdir is not there, because git won't keep empty
> directories.  So if you do a full clean when you switch to trunk
>
>
OK, that makes more sense.


> git clean -dfx
>
> You'll see qt/ gone from frontend.  Since I saw qt/ gone, I got
> confused.  But that's OK, when Evan/Gerardo add qt back it will be
> fine.
>
>
Great.


> Sorry for the misunderstanding.
>
>
No problem.


> Cheers,
>
> f
>



-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com

From fperez.net at gmail.com  Tue Jul 27 16:55:56 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 27 Jul 2010 13:55:56 -0700
Subject: [IPython-dev] Trunk in 100% test compliance
In-Reply-To: <AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com>
References: <AANLkTikT4QoN+BaU3A3J6nMryCf+zwV5-5AkaFqE1T91@mail.gmail.com> 
	<AANLkTim++OUejGG9GjNP1LKauzVwiQjM06v-fDdWEsu_@mail.gmail.com>
Message-ID: <AANLkTinv5guawRKuu8VprhYU6VW-WVMGofD-gOsbRuAo@mail.gmail.com>

Hey Darren,

On Tue, Jul 27, 2010 at 5:05 AM, Darren Dale <dsdale24 at gmail.com> wrote:
>
> I just fetched the master branch, and when I try to run "python
> setup.py install" I get:
>
> error: package directory 'IPython/frontend/tests' does not exist
>

I've just fixed those problems, sorry about that.  There was  a big
cleanup of dead code (kept in deathrow so it's easy to get anything
from there back at any point, without needing to fish in git history)
and some things were accidentally still referred to from the
setup/tests.  It should be good now; at least it works on my box from
a real install.  Please let us know if you still see a problem.

Cheers,

f


From fperez.net at gmail.com  Tue Jul 27 16:56:30 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 27 Jul 2010 13:56:30 -0700
Subject: [IPython-dev] correct test-suite
In-Reply-To: <20100727212726.6a988639@earth>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com> 
	<20100726091850.218c47ad@earth>
	<AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com> 
	<20100727212726.6a988639@earth>
Message-ID: <AANLkTinj9cbfZfhjHp7YdmFtKKq8RStDExdHjYuv3SsE@mail.gmail.com>

On Tue, Jul 27, 2010 at 12:27 PM, Thomas Spura
<tomspur at fedoraproject.org> wrote:
>> On Mon, Jul 26, 2010 at 12:18 AM, Thomas Spura
>> <tomspur at fedoraproject.org> wrote:
>> > Here, they don't... That's why, I didn't look too closely to the
>> > failing tests in my branches. I'll try to fix the failures in
>> > current master on my side first, because it seems some other
>> > dependencies are doing something wrong I guess...
>> >
>>
>> If you can't find it, show me the tracebacks and I may be able to help
>> out.  We want the test suite to degrade gracefully by skipping if
>> optional dependencies aren't met, not to fail.
>
> Now there are fewer failures than before:

Just in case anyone is seeing similar problems, we're tracking this on
IRC; we'll report back with any solution.

f


From benjaminrk at gmail.com  Tue Jul 27 18:16:04 2010
From: benjaminrk at gmail.com (MinRK)
Date: Tue, 27 Jul 2010 15:16:04 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTim540qGHOsBRbWc8JdA5idB3=tBeY-v+HhmrO_E@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com> 
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com> 
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com> 
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com> 
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com> 
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com> 
	<A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
	<AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com> 
	<C105F145-CEF1-43C9-B359-CDE8BF578CD8@gmail.com>
	<AANLkTinxoEHdgv17nqu9MXbX2_k7GLCBR6zZhqkwnU3-@mail.gmail.com> 
	<AANLkTik04o36kEm0UE_fY8Soe9em07-bYHSf4M1FO_A3@mail.gmail.com> 
	<AANLkTikH3GjfdywBw=8uG+eRbWU-Borse9s4b100kEys@mail.gmail.com> 
	<AANLkTim540qGHOsBRbWc8JdA5idB3=tBeY-v+HhmrO_E@mail.gmail.com>
Message-ID: <AANLkTinkOM+Oqvkay3qQRpN3Q0bxvX6iWgsG14gxny88@mail.gmail.com>

Okay, so it sounds like we should never interpret unicode objects as simple
strings, if I am understanding the arguments correctly.

I certainly don't think that sending anything providing the buffer interface
should raise an exception, though. It should be up to the user to know
whether the buffer will be legible on the other side.

The situation I'm concerned about is that json gives you unicode strings,
whether or not that was the input:

import json
s1 = 'word'
j = json.dumps(s1)
s2 = json.loads(j)
# s2 == u'word', not 'word'

Now, if you have that logic internally, and you are sending messages based
on messages you received, then unless you wrap _every single thing_ you
pass to send in str(), you are calling things like send(u'word').  I really
don't think that should raise an error, but trunk surely does.

The other option is to always interpret unicode objects like everything
else, sending the raw buffer and trusting that the receiving end will call
decode (which may require that the message be copied at least one extra
time). This would also mean that if A sends something packed by json to B,
and B unpacks it, and it included a str to be sent to C, then B has a
unicode-wrapped version of it (not a str). If B then sends it on to C, C
will get a string that is _not_ the same as the one A packed and sent to B.
I think this is terrible, since it seems like such an obvious (already
done) fix in zmq.

I think that the vast majority of the time you are faced with unicode
strings, they are in fact simple str instances that got wrapped, and we
should expect that and deal with it.
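
To make that concrete, a coercion along these lines at the send boundary
would cover it -- a sketch only, not the actual trunk code, assuming utf-8
as the wire encoding:

def coerce_to_bytes(obj):
    # json.loads hands back unicode even for plain-ascii input, so
    # re-encode here instead of wrapping every caller's data in str()
    if isinstance(obj, unicode):
        return obj.encode('utf-8')
    return obj

and then every send goes through it, e.g. sock.send(coerce_to_bytes(msg)).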

I decided to run some tests, since I currently have a UCS2 (OSX 10.6.4)
and a UCS4 (ubuntu 10.04) machine. They are running my 'patches' zmq
branch right now, and I'm having no problems.

case 1: sys.defaultencoding = utf8 on mac, ascii on ubuntu.
a.send(u'who') # valid ascii, valid utf-8, ascii string sent
b.recv()
# 'who'

u = u'whoπ'
u
# u'who\xcf\x80'

a.send(u'whoπ') # invalid ascii, valid utf-8, utf-8 string sent
b.recv().decode('utf-8')
# u'who\xcf\x80'

case 2: sys.defaultencoding = ascii,ascii
a.send(u'who') # valid ascii, string sent
b.recv()
# 'who'

u = u'whoπ'
u
# u'who\xcf\x80'

a.send(u'whoπ') # invalid ascii, buffer sent
s = b.recv()
# 'w\x00h\x00o\x00\xcf\x00\x80\x00'
s.decode('utf-8')
# UnicodeError (invalid utf-8)
s.decode('utf16')
# u'who\xcf\x80'


It seems that the _buffer_ of a unicode object is always utf16.
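
A quick way to see exactly which bytes a zero-copy send puts on the wire
(this is the interpreter's internal UCS2/UCS4 buffer, so the utf16 look
above is really a property of a UCS2 build; a UCS4 build gives four bytes
per character):

import sys
u = u'who\xcf\x80'
print sys.maxunicode       # 65535 on a UCS2 build, 1114111 on UCS4
print repr(buffer(u)[:])   # UCS2: 'w\x00h\x00o\x00\xcf\x00\x80\x00'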

I also did it with utf-8 on both sides, and threw in some latin-1, and there
was no difference between those and case 1.

I can't find the problem here.

As far as I can tell, a unicode object is:
a) a valid string for the sender, and the string is sent in the sender's
default encoding
on the receiver:
    sock.recv().decode(sender.defaultcodec)
    gets the object back
b) not a valid string for the sender, and the utf16 buffer is sent
on the receiver:
    sock.recv().decode('utf16')
    always seems to work
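
As a sketch, the receiving logic that summary implies looks like this;
note the receiver has to know out of band which case applies, since e.g.
ascii will happily decode the NUL-padded utf16 bytes:

def recv_unicode(sock, sender_codec, raw_buffer_sent):
    s = sock.recv()
    if raw_buffer_sent:
        # case b: str() would have raised on the sender
        return s.decode('utf16')
    # case a: the message arrived in the sender's default encoding
    return s.decode(sender_codec)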

I even tried various instances of specifying the encoding as latin, etc. and
sending math symbols (?,?) in various directions, and invariably the only
thing I needed to know on the receiver was the default encoding on the
sender. Everything was reconstructed properly with either
s.decode(sender.defaultcodec) or s.decode(utf16), depending solely on
whether str(u) would raise on the sender.

Are there specific symbols and/or directions where I should see a problem?
Based on reading, I figured that math symbols would break if anything
would, but they certainly don't in either direction.

-MinRK


On Tue, Jul 27, 2010 at 13:13, Fernando Perez <fperez.net at gmail.com> wrote:

> On Tue, Jul 27, 2010 at 12:23 PM, Brian Granger <ellisonbg at gmail.com>
> wrote:
> > This is definitely an issue.  Also, someone could set their own custom
> > unicode encoding by hand and that would mess this up as well.
> >
> >>
> >> If it is a problem, then there are some options:
> >>
> >> - disallow communication between ucs 2/4 pythons.
> >
> > But this doesn't account for other encoding/decoding setups.
>
> Note that when I mention ucs2/4, that refers to the *internal* python
> storage of all unicode objects.  That is: ucs2/4 is how the buffer,
> under the hood for a unicode string, is written in memory.  There are
> no other encoding/decoding setups for Python, this is strictly a
> compile-time flag and can only be either ucs2 or ucs4.
>
> You can see the value by typing:
>
> In [1]: sys.maxunicode
> Out[1]: 1114111
>
> That's ucs-4, and that number is the whole of the current unicode
> standard.  If you get instead 2^16, it means you have a ucs2 build,
> and python can only encode strings in the BMP (basic multilingual
> plane, where all living languages are stored but not math symbols,
> musical symbols and some extended Asian characters).
>
> Does that make sense?
>
> Note that additionally, it's exceedingly rare for anyone to set up a
> custom encoding for unicode.  It's hard to do right, requires plumbing
> in the codecs module, and I think Python supports out of the box
> enough encodings that I can't imagine why anyone would write a new
> encoding.  But regardless, if a string has been encoded then it's OK:
> now it's bytes, and there's no problem.
>
> >> - detect a mismatch and encode/decode all unicode strings to utf-8 on
> >> send/receive, but allow raw buffer sending if there's no mismatch.
> >
> > This will be tough though if users set their own encoding.
>
> No, the issue with users having something other than utf-8 is
> orthogonal to this.  The idea would be: - if both ends of the
> transmission have conflicting ucs internals, then all unicode strings
> are sent as utf-8.  If a user sends an encoded string, then that's
> just a bunch of bytes and it doesn't matter how they encoded it, since
> they will be responsible for decoding it on the other end.
>
> But I still don't like this approach because the ucs2/4 mismatch is a
> pair-wise problem, and for a multi-node setup managing this pair-wise
> switching of protocols can be a nightmare.  And let's not even get
> started on what pub/sub sockets would do with this...
>
> >> - *always* encode/decode.
> >>
> >
> > I think this is the option that I prefer (having users to this in their
> > application code).
>
> Yes, now that I think of pub/sub sockets, I don't think we have a
> choice.  It's a bit unfortunate that Python recently decided *not* to
> standardize on a storage scheme:
>
> http://mail.python.org/pipermail/python-dev/2008-July/080886.html
>
> because it means forever paying the price of encoding/decoding in this
> context.
>
> Cheers,
>
> f
>
> ps - as you can tell, I've been finally doing my homework on unicode,
> in preparation for an eventual 3.x transition :)
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100727/0296131f/attachment.html>

From ellisonbg at gmail.com  Tue Jul 27 18:22:44 2010
From: ellisonbg at gmail.com (Brian Granger)
Date: Tue, 27 Jul 2010 15:22:44 -0700
Subject: [IPython-dev] Buffers
In-Reply-To: <AANLkTinkOM+Oqvkay3qQRpN3Q0bxvX6iWgsG14gxny88@mail.gmail.com>
References: <AANLkTinzjyz1Bt7EqobUSmm8MEGktW_Y5P62JEWJkfK3@mail.gmail.com>
	<AANLkTik3_NPYXwLx=uZYKTMEveUa0AZ-+Nid1+xEypZL@mail.gmail.com>
	<AANLkTimPeZQNhJHFrphzXd8g+xQhVfUYT30RWbobHLG+@mail.gmail.com>
	<AANLkTimiJwaHM=mvnHWSq9ppkKR6HmVB6aKkLtyRtwi2@mail.gmail.com>
	<AANLkTimJVJQ1mZOy25j4-CQ7aiSG81UQH-N80TaLvT9q@mail.gmail.com>
	<AANLkTikGqhvxb0wtX1=OWQRO=PoxBTquma2QmjjuFKfA@mail.gmail.com>
	<A9C08D9A-6BFC-463C-93F3-F57076BCEBD2@gmail.com>
	<AANLkTinuT1fhA1giQdp+DPrz+xsnv0ETDj7-pS9WM7=-@mail.gmail.com>
	<C105F145-CEF1-43C9-B359-CDE8BF578CD8@gmail.com>
	<AANLkTinxoEHdgv17nqu9MXbX2_k7GLCBR6zZhqkwnU3-@mail.gmail.com>
	<AANLkTik04o36kEm0UE_fY8Soe9em07-bYHSf4M1FO_A3@mail.gmail.com>
	<AANLkTikH3GjfdywBw=8uG+eRbWU-Borse9s4b100kEys@mail.gmail.com>
	<AANLkTim540qGHOsBRbWc8JdA5idB3=tBeY-v+HhmrO_E@mail.gmail.com>
	<AANLkTinkOM+Oqvkay3qQRpN3Q0bxvX6iWgsG14gxny88@mail.gmail.com>
Message-ID: <AANLkTikQrfK8ZQxkFFmBvj1LqTFQLxGF9v7g=itu1KNu@mail.gmail.com>

Do you guys want to chat about this on IRC?


On Tue, Jul 27, 2010 at 3:16 PM, MinRK <benjaminrk at gmail.com> wrote:

> Okay, so it sounds like we should never interpret unicode objects as simple
> strings, if I am understanding the arguments correctly.
>
> I certainly don't think that sending anything providing the buffer
> interface should raise an exception, though. It should be up to the user to
> know whether the buffer will be legible on the other side.
>
> The situation I'm concerned about is that json gives you unicode strings,
> whether or not that was the input:
>
> import json
> s1 = 'word'
> j = json.dumps(s1)
> s2 = json.loads(j)
> # s2 == u'word', not 'word'
>
> Now, if you have that logic internally, and you are sending messages based
> on messages you received, then unless you wrap _every single thing_ you
> pass to send in str(), you are calling things like send(u'word').  I really
> don't think that should raise an error, but trunk surely does.
>
> The other option is to always interpret unicode objects like everything
> else, sending the raw buffer and trusting that the receiving end will call
> decode (which may require that the message be copied at least one extra
> time). This would also mean that if A sends something packed by json to B,
> and B unpacks it, and it included a str to be sent to C, then B has a
> unicode-wrapped version of it (not a str). If B then sends it on to C, C
> will get a string that is _not_ the same as the one A packed and sent to
> B. I think this is terrible, since it seems like such an obvious (already
> done) fix in zmq.
>
> I think that the vast majority of the time you are faced with unicode
> strings, they are in fact simple str instances that got wrapped, and we
> should expect that and deal with it.
>
> I decided to run some tests, since I currently have a UCS2 (OSX 10.6.4)
> and a UCS4 (ubuntu 10.04) machine. They are running my 'patches' zmq
> branch right now, and I'm having no problems.
>
> case 1: sys.defaultencoding = utf8 on mac, ascii on ubuntu.
> a.send(u'who') # valid ascii, valid utf-8, ascii string sent
> b.recv()
> # 'who'
>
> u = u'whoπ'
> u
> # u'who\xcf\x80'
>
> a.send(u'whoπ') # invalid ascii, valid utf-8, utf-8 string sent
> b.recv().decode('utf-8')
> # u'who\xcf\x80'
>
> case 2: sys.defaultencoding = ascii,ascii
> a.send(u'who') # valid ascii, string sent
> b.recv()
> # 'who'
>
> u = u'whoπ'
> u
> # u'who\xcf\x80'
>
> a.send(u'whoπ') # invalid ascii, buffer sent
> s = b.recv()
> # 'w\x00h\x00o\x00\xcf\x00\x80\x00'
> s.decode('utf-8')
> # UnicodeError (invalid utf-8)
> s.decode('utf16')
> # u'who\xcf\x80'
>
>
> It seems that the _buffer_ of a unicode object is always utf16.
>
> I also did it with utf-8 on both sides, and threw in some latin-1, and
> there was no difference between those and case 1.
>
> I can't find the problem here.
>
> As far as I can tell, a unicode object is:
> a) a valid string for the sender, and the string is sent in the sender's
> default encoding
> on the receiver:
>     sock.recv().decode(sender.defaultcodec)
>     gets the object back
> b) not a valid string for the sender, and the utf16 buffer is sent
> on the receiver:
>     sock.recv().decode('utf16')
>     always seems to work
>
> I even tried various instances of specifying the encoding as latin, etc.
> and sending math symbols (?,?) in various directions, and invariably the
> only thing I needed to know on the receiver was the default encoding on the
> sender. Everything was reconstructed properly with either
> s.decode(sender.defaultcodec) or s.decode(utf16), depending solely on
> whether str(u) would raise on the sender.
>
> Are there specific symbols and/or directions where I should see a problem?
> Based on reading, I figured that math symbols would break if anything
> would, but they certainly don't in either direction.
>
> -MinRK
>
>
> On Tue, Jul 27, 2010 at 13:13, Fernando Perez <fperez.net at gmail.com> wrote:
>
>> On Tue, Jul 27, 2010 at 12:23 PM, Brian Granger <ellisonbg at gmail.com>
>> wrote:
>> > This is definitely an issue.  Also, someone could set their own custom
>> > unicode encoding by hand and that would mess this up as well.
>> >
>> >>
>> >> If it is a problem, then there are some options:
>> >>
>> >> - disallow communication between ucs 2/4 pythons.
>> >
>> > But this doesn't account for other encoding/decoding setups.
>>
>> Note that when I mention ucs2/4, that refers to the *internal* python
>> storage of all unicode objects.  That is: ucs2/4 is how the buffer,
>> under the hood for a unicode string, is written in memory.  There are
>> no other encoding/decoding setups for Python, this is strictly a
>> compile-time flag and can only be either ucs2 or ucs4.
>>
>> You can see the value by typing:
>>
>> In [1]: sys.maxunicode
>> Out[1]: 1114111
>>
>> That's ucs-4, and that number is the whole of the current unicode
>> standard.  If you get instead 2^16, it means you have a ucs2 build,
>> and python can only encode strings in the BMP (basic multilingual
>> plane, where all living languages are stored but not math symbols,
>> musical symbols and some extended Asian characters).
>>
>> Does that make sense?
>>
>> Note that additionally, it's exceedingly rare for anyone to set up a
>> custom encoding for unicode.  It's hard to do right, requires plumbing
>> in the codecs module, and I think Python supports out of the box
>> enough encodings that I can't imagine why anyone would write a new
>> encoding.  But regardless, if a string has been encoded then it's OK:
>> now it's bytes, and there's no problem.
>>
>> >> - detect a mismatch and encode/decode all unicode strings to utf-8 on
>> >> send/receive, but allow raw buffer sending if there's no mismatch.
>> >
>> > This will be tough though if users set their own encoding.
>>
>> No, the issue with users having something other than utf-8 is
>> orthogonal to this.  The idea would be: - if both ends of the
>> transmission have conflicting ucs internals, then all unicode strings
>> are sent as utf-8.  If a user sends an encoded string, then that's
>> just a bunch of bytes and it doesn't matter how they encoded it, since
>> they will be responsible for decoding it on the other end.
>>
>> But I still don't like this approach because the ucs2/4 mismatch is a
>> pair-wise problem, and for a multi-node setup managing this pair-wise
>> switching of protocols can be a nightmare.  And let's not even get
>> started on what pub/sub sockets would do with this...
>>
>> >> - *always* encode/decode.
>> >>
>> >
>> > I think this is the option that I prefer (having users to this in their
>> > application code).
>>
>> Yes, now that I think of pub/sub sockets, I don't think we have a
>> choice.  It's a bit unfortunate that Python recently decided *not* to
>> standardize on a storage scheme:
>>
>> http://mail.python.org/pipermail/python-dev/2008-July/080886.html
>>
>> because it means forever paying the price of encoding/decoding in this
>> context.
>>
>> Cheers,
>>
>> f
>>
>> ps - as you can tell, I've been finally doing my homework on unicode,
>> in preparation for an eventual 3.x transition :)
>>
>
>


-- 
Brian E. Granger, Ph.D.
Assistant Professor of Physics
Cal Poly State University, San Luis Obispo
bgranger at calpoly.edu
ellisonbg at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100727/4ddd8584/attachment.html>

From andresete.chaos at gmail.com  Wed Jul 28 00:03:35 2010
From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=)
Date: Tue, 27 Jul 2010 23:03:35 -0500
Subject: [IPython-dev] kernel proxy
Message-ID: <AANLkTi=oFaESeernumndhtJzC-tc0rLPvXqOg6rB-AYb@mail.gmail.com>

Hi guys!!
I want to know: where is the newest kernelproxy code written?
Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100727/f8d5c4a8/attachment.html>

From fperez.net at gmail.com  Wed Jul 28 00:11:38 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 27 Jul 2010 21:11:38 -0700
Subject: [IPython-dev] kernel proxy
In-Reply-To: <AANLkTi=oFaESeernumndhtJzC-tc0rLPvXqOg6rB-AYb@mail.gmail.com>
References: <AANLkTi=oFaESeernumndhtJzC-tc0rLPvXqOg6rB-AYb@mail.gmail.com>
Message-ID: <AANLkTikayks3CifbCgQLeMrVs29-yV-Qc3V7-b3GHhx7@mail.gmail.com>

Hi Omar,

2010/7/27 Omar Andrés Zapata Mesa <andresete.chaos at gmail.com>:
> I want to know: where is the newest kernelproxy code written?
> Thanks!

Brian is on IRC, you can ask him there what the status of his kernel
work is so you know where best to start from.

Cheers,

f


From erik.tollerud at gmail.com  Wed Jul 28 01:58:03 2010
From: erik.tollerud at gmail.com (Erik Tollerud)
Date: Tue, 27 Jul 2010 22:58:03 -0700
Subject: [IPython-dev] Practices for .10 or .11 profile formats
In-Reply-To: <AANLkTimggqudVsR9x87D-tf3n3X0e3a4wXutc32tW9p8@mail.gmail.com>
References: <AANLkTikCHeyITV-Y4yJjFczmMbbfU2cE7xBj1B8GgFBt@mail.gmail.com> 
	<AANLkTin50LOzW0dUwi5PzwepWKRRgk5I97Gd-Zcaz-Tt@mail.gmail.com> 
	<AANLkTi=omUaObeZ=c_eQ8qeTnBCSsj20c0bGZ02TcTYD@mail.gmail.com> 
	<AANLkTikONNAObHwS=LTAyj5u+bZgp4SZ+9=m224zkvCM@mail.gmail.com> 
	<AANLkTinsdenT9fBDjKOv=HXW9XVd9nKEpznoZMf0VkjC@mail.gmail.com> 
	<AANLkTimggqudVsR9x87D-tf3n3X0e3a4wXutc32tW9p8@mail.gmail.com>
Message-ID: <AANLkTi=SBQwxgdy+B36wEXVkih_tAMxhkkCdBE=M7g-m@mail.gmail.com>

Ah, I didn't realize the current overhaul was digging into the core as
well.  Having tried (and failed, due to confusion) to hack on that
code a couple other times, I'm happy to hear that.

I'll just satisfy myself with voting for the issue on the tracker.
Thanks for the detailed responses!

On Tue, Jul 27, 2010 at 11:37 AM, Fernando Perez <fperez.net at gmail.com> wrote:
> Hi Erik,
>
> On Tue, Jul 27, 2010 at 2:31 AM, Erik Tollerud <erik.tollerud at gmail.com> wrote:
>>
>>> That's a bug, plain and simple, sorry :)  For actual code, instead of
>>> exec_lines, I use this:
>>>
>>> c.Global.exec_files = ['extras.py']
>>
>> I didn't realize that didn't increment the In[#] counter. Definitely
>> good to know that option is available, but I decided that if it was a
>> bug I should go hunting...
>>
>> Trouble is, despite spending quite a bit of time rooting around in the
>> IPython.core, I can't seem to figure out where the input and output
>> cache's get populated and their counters incremented... It would be
>> possible, presumably, to run it like exec_files does for regular py
>> files and not use the ipython filtering and such, but that really
>> limits the usefulness of the profile... So is there some option some
>> where that can temporarily turn off the in/out caching (and presumably
>> that will also prevent the counter from incrementing)? And if not, is
>> there some obvious spot I missed where they get incremented that I
>> could try to figure out how it could be patched to prevent this
>> behavior?
>
> I wouldn't bother if I were you: that code is a horrible mess, and the
> re-work that we're doing right now will clean a lot of that up.  The
> old code has coupling all over the map for prompt handling, and we're
> trying to clean that as well.  If you're really curious, the code is
> in core/prompts.py, and the object in the main ipython that handles it
> is get_ipython().outputcache.  So grepping around for that guy may
> help, but as I said, I'd let it go for now and live with using
> exec_files, until we finish up the housecleaning :)
>
> Cheers,
>
> f
>



-- 
Erik Tollerud


From andresete.chaos at gmail.com  Wed Jul 28 13:48:18 2010
From: andresete.chaos at gmail.com (=?UTF-8?Q?Omar_Andr=C3=A9s_Zapata_Mesa?=)
Date: Wed, 28 Jul 2010 12:48:18 -0500
Subject: [IPython-dev] pyzmq and kernelmanager.
Message-ID: <AANLkTinqjNp3UXf8f1tt-WmH5zqJ8+uQ8gxB72aqBp+Z@mail.gmail.com>

Hi all.
I am trying to work with the kernelmanager, but it needs
"from zmq.eventloop import ioloop", so I pulled pyzmq's code and tried
to compile it again, but it shows me this message:

zmq/_zmq.c: In function 'init_zmq':
zmq/_zmq.c:10242: error: 'EMTHREAD'


O.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100728/d70be08b/attachment.html>

From benjaminrk at gmail.com  Wed Jul 28 14:35:21 2010
From: benjaminrk at gmail.com (MinRK)
Date: Wed, 28 Jul 2010 11:35:21 -0700
Subject: [IPython-dev] pyzmq and kernelmanager.
In-Reply-To: <AANLkTinqjNp3UXf8f1tt-WmH5zqJ8+uQ8gxB72aqBp+Z@mail.gmail.com>
References: <AANLkTinqjNp3UXf8f1tt-WmH5zqJ8+uQ8gxB72aqBp+Z@mail.gmail.com>
Message-ID: <AANLkTinYsD60KoMYLttkROHsiA5SBfe+8XO+HYQFD=Bx@mail.gmail.com>

You probably have zeromq trunk, which I think doesn't work yet.

You need 2.0.7 from here: http://www.zeromq.org/area:download

(you are far from the first to have this problem)
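
A quick way to check which libzmq your pyzmq was built against (assuming
the zmq_version helper that pyzmq exposes):

import zmq
print zmq.zmq_version()   # you want 2.0.7 here, not a trunk snapshot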

-MinRK

2010/7/28 Omar Andrés Zapata Mesa <andresete.chaos at gmail.com>

> Hi all.
> I am trying to work with the kernelmanager, but it needs
> "from zmq.eventloop import ioloop", so I pulled pyzmq's code and tried
> to compile it again, but it shows me this message:
>
> zmq/_zmq.c: In function 'init_zmq':
> zmq/_zmq.c:10242: error: 'EMTHREAD'
>
>
> O.
>
> _______________________________________________
> IPython-dev mailing list
> IPython-dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100728/62d511a6/attachment.html>

From tomspur at fedoraproject.org  Thu Jul 29 04:54:46 2010
From: tomspur at fedoraproject.org (Thomas Spura)
Date: Thu, 29 Jul 2010 10:54:46 +0200
Subject: [IPython-dev] correct test-suite
In-Reply-To: <AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com>
	<20100726091850.218c47ad@earth>
	<AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com>
Message-ID: <20100729105446.522921bc@earth>

On Mon, 26 Jul 2010 19:38:21 -0700,
Fernando Perez <fperez.net at gmail.com> wrote:

> On Mon, Jul 26, 2010 at 12:18 AM, Thomas Spura
> <tomspur at fedoraproject.org> wrote:
> > Here, they don't... That's why I didn't look too closely at the
> > failing tests in my branches. I'll try to fix the failures in
> > current master on my side first, because it seems some other
> > dependencies are doing something wrong, I guess...
> >
> 
> If you can't find it, show me the tracebacks and I may be able to help
> out.  We want the test suite to degrade gracefully by skipping if
> optional dependencies aren't met, not to fail.

I wrote a little script that creates an ipython*.xz, so I can build a
random snapshot of ipython as an rpm package, install it properly,
and run iptest on it.

Now the failures are down to 2; hopefully you can see why they are
failing:

$ /usr/bin/python /usr/lib/python2.6/site-packages/IPython/testing/iptest.py
IPython.core
>f2("a b c")
>f1("a", "b", "c")
>f1(1,2,3)
>f2(4)
..............................Out[85]: 'get_ipython().system("true ")\n'
Out[87]: 'get_ipython().system("d:/cygwin/top ")\n'
Out[88]: 'no change'
Out[89]: '"no change"\n'
Out[91]: 'get_ipython().system("true")\n'
Out[92]: Out[93]: 'get_ipython().magic("sx  true")\n'
Out[94]: Out[95]: 'get_ipython().magic("sx true")\n'
Out[97]: 'get_ipython().magic("lsmagic ")\n'
Out[99]: 'get_ipython().magic("lsmagic ")\n'
Out[101]: 'get_ipython().system(" true")\n'
Out[103]: 'x=1 # what?\n'
   File "<ipython console>", line 2
     !true
     ^
SyntaxError: invalid syntax

Out[105]: 'if 1:\n    !true\n'
Out[107]: 'if 1:\n    lsmagic\n'
Out[109]: 'if 1:\n    an_alias\n'
Out[111]: 'if 1:\n    get_ipython().system("true")\n'
Out[113]: 'if 2:\n    get_ipython().magic("lsmagic ")\n'
Out[115]: 'if 1:\n    get_ipython().system("true ")\n'
Out[116]: Out[117]: 'if 1:\n    get_ipython().magic("sx true")\n'
   File "<ipython console>", line 2
     /fun 1 2
     ^
SyntaxError: invalid syntax

Out[119]: 'if 1:\n    /fun 1 2\n'
   File "<ipython console>", line 2
     ;fun 1 2
     ^
SyntaxError: invalid syntax

Out[121]: 'if 1:\n    ;fun 1 2\n'
   File "<ipython console>", line 2
     ,fun 1 2
     ^
SyntaxError: invalid syntax

Out[123]: 'if 1:\n    ,fun 1 2\n'
   File "<ipython console>", line 2
     ?fun 1 2
     ^
SyntaxError: invalid syntax

Out[125]: 'if 1:\n    ?fun 1 2\n'
   File "<ipython console>", line 1
     len "abc"
             ^
SyntaxError: invalid syntax

Out[127]: 'len "abc"\n'
>autocallable()
Out[128]: 'called'
Out[129]: 'autocallable()\n'
>list("1", "2", "3")
Out[131]: 'list("1", "2", "3")\n'
>list("1 2 3")
Out[132]: ['1', ' ', '2', ' ', '3']
Out[133]: 'list("1 2 3")\n'
>len(range(1,4))
Out[134]: 3
Out[135]: 'len(range(1,4))\n'
>list("1", "2", "3")
Out[137]: 'list("1", "2", "3")\n'
>list("1 2 3")
Out[138]: ['1', ' ', '2', ' ', '3']
Out[139]: 'list("1 2 3")\n'
>len(range(1,4))
Out[140]: 3
Out[141]: 'len(range(1,4))\n'
>len("abc")
Out[142]: 3
Out[143]: 'len("abc")\n'
>len("abc");
Out[145]: 'len("abc");\n'
>len([1,2])
Out[146]: 2
Out[147]: 'len([1,2])\n'
Out[148]: True
Out[149]: 'call_idx [1]\n'
>call_idx(1)
Out[150]: True
Out[151]: 'call_idx(1)\n'
Out[152]: <built-in function len>
Out[153]: 'len \n'
>list("1", "2", "3")
Out[155]: 'list("1", "2", "3")\n'
>list("1 2 3")
Out[156]: ['1', ' ', '2', ' ', '3']
Out[157]: 'list("1 2 3")\n'
>len(range(1,4))
Out[158]: 3
Out[159]: 'len(range(1,4))\n'
>len("abc")
Out[160]: 3
Out[161]: 'len("abc")\n'
>len("abc");
Out[163]: 'len("abc");\n'
>len([1,2])
Out[164]: 2
Out[165]: 'len([1,2])\n'
Out[166]: True
Out[167]: 'call_idx [1]\n'
>call_idx(1)
Out[168]: True
Out[169]: 'call_idx(1)\n'
>len()
Out[171]: 'len()\n'
..............................................S.......#     print "bar"
# 
..>f(1)
....................................................................................F.F..
======================================================================
FAIL: Test that object's __del__ methods are called on exit.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.6/site-packages/IPython/testing/decorators.py", line 225, in skipper_func
    return f(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/IPython/core/tests/test_run.py", line 155, in test_obj_del
    tt.ipexec_validate(self.fname, 'object A deleted')
  File "/usr/lib/python2.6/site-packages/IPython/testing/tools.py", line 252, in ipexec_validate
    nt.assert_equals(out.strip(), expected_out.strip())
AssertionError: '\x1b[?1034hobject A deleted' != 'object A deleted'
>>  raise self.failureException, \
          (None or '%r != %r' % ('\x1b[?1034hobject A deleted', 'object A deleted'))

======================================================================
FAIL: IPython.core.tests.test_run.TestMagicRunSimple.test_tclass
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nose/case.py", line 186, in runTest
    self.test(*self.arg)
  File "/usr/lib/python2.6/site-packages/IPython/testing/decorators.py", line 225, in skipper_func
    return f(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/IPython/core/tests/test_run.py", line 169, in test_tclass
    tt.ipexec_validate(self.fname, out)
  File "/usr/lib/python2.6/site-packages/IPython/testing/tools.py", line 252, in ipexec_validate
    nt.assert_equals(out.strip(), expected_out.strip())
AssertionError: "\x1b[?1034hARGV 1-: ['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first" != "ARGV 1-: ['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first"
>>  raise self.failureException, \
          (None or '%r != %r' % ("\x1b[?1034hARGV 1-: ['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first", "ARGV 1-: ['C-first']\nARGV 1-: ['C-second']\ntclass.py: deleting object: C-first"))

----------------------------------------------------------------------
Ran 180 tests in 1.735s

FAILED (SKIP=1, failures=2)


	Thomas


From vano at mail.mipt.ru  Fri Jul 30 12:48:50 2010
From: vano at mail.mipt.ru (Ivan Pozdeev)
Date: Fri, 30 Jul 2010 20:48:50 +0400
Subject: [IPython-dev] %run -d is broken in Python 2.7
In-Reply-To: <AANLkTinIVMq8qokXEPSzdj6zcCUK4xIJwNY6gYLShMcP@mail.gmail.com>
References: <1213230248.20100713052612@mail.mipt.ru>
	<AANLkTinIVMq8qokXEPSzdj6zcCUK4xIJwNY6gYLShMcP@mail.gmail.com>
Message-ID: <149512827.20100730204850@mail.mipt.ru>

Good news: the bug in pdb is fixed!

http://bugs.python.org/issue9230

> 2010/7/12 vano <vano at mail.mipt.ru>:
>> After thorough investigation, it turned out to be a pdb issue (details are
>> on the link), so i filed a bug there (http://bugs.python.org/issue9230) as
>> well as a bugfix.
>>
>> If any of you have write access to python source, you can help me to get
>> it fixed quickly.

> Ouch, thanks for finding this and providing the pdb patch.
> Unfortunately I don't have write access to Python itself (I have
> 2-year old patches lingering in the python tracker, I'm afraid).

> If you can make a (most likely ugly) monkeypatch at runtime to fix
> this from the IPython side, we'll include that.  There's a good chance
> this will take forever to fix in Python itself, so carrying our own
> version-checked ugly fix is better than having broken functionality
> for 2.7 users.

> I imagine that grabbing the pdb instance and injecting a frame object
> into it will do the trick, from looking at your traceback.

> If you make such a fix, just post a pull request for us or a patch,
> as you prefer:

> http://ipython.scipy.org/doc/nightly/html/development/gitwash/index.html

> and we'll be happy to include it.

> Cheers,

> f
-- 
Cheers,
 Ivan                          mailto:vano at mail.mipt.ru



From fperez.net at gmail.com  Sat Jul 31 19:40:28 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Sat, 31 Jul 2010 16:40:28 -0700
Subject: [IPython-dev] correct test-suite
In-Reply-To: <20100729105446.522921bc@earth>
References: <20100718171412.42f4e970@earth>
	<AANLkTikQzO6ATLoV2hQOwHu4cs7VyEfFkf7pJ7JP1Wnc@mail.gmail.com> 
	<20100726091850.218c47ad@earth>
	<AANLkTikm3QxDNYX-JjCV-jOv43M9qDWAnrgBzrxpnhR1@mail.gmail.com> 
	<20100729105446.522921bc@earth>
Message-ID: <AANLkTinqtCzx_izSOwVG_JMV3BTUbm9XXFw06429E2_A@mail.gmail.com>

Great, this is much better.

On Thu, Jul 29, 2010 at 1:54 AM, Thomas Spura <tomspur at fedoraproject.org> wrote:
> I wrote a little script that creates an ipython*.xz, so I can build a
> random snapshot of ipython as an rpm package, install it properly,
> and run iptest on it.
>
> Now the failures are down to 2; hopefully you can see why they are
> failing:

These are 'trivial' failures: for some reason the output is garbled on
your system, but in fact the tests are running OK.  Could you please
open a ticket with your OS details and this failure?  Though it's not
serious, I'd like to have it fixed nonetheless so we don't get these
false positives.

But in practice you are OK, as all tests are really passing; these are
failures of the test detection, not of the underlying condition being
tested.

Cheers,

f


From benjaminrk at gmail.com  Thu Jul 22 05:22:46 2010
From: benjaminrk at gmail.com (MinRK)
Date: Thu, 22 Jul 2010 02:22:46 -0700
Subject: [IPython-dev] First Performance Result
Message-ID: <AANLkTim1IBZFxorqe0AY19iOxxtd1hbwxUxGV77yLufM@mail.gmail.com>

I have the basic queue built into the controller, and a kernel embedded into
the Engine, enough to make a simple performance test.

I submitted 32k simple execute requests in a row (round robin to engines,
explicit multiplexing), then timed the receipt of the results (tic each 1k).
I did it once with 2 engines, once with 32. (still on a 2-core machine, all
over tcp on loopback).

Messages went out at an average of 5400 msgs/s, and the results came back
at ~900 msgs/s. So that's 32k jobs submitted in 5.85s, with the last job
completing and returning its result 43.24s after the submission of the
first one (37.30s for 32 engines). On average, a message is sent and
received every 1.25 ms. When sending a very small number of requests
(1-10) in this way to just one engine, it gets closer to 1.75 ms round
trip.
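
The tic-each-1k receipt timing can be reproduced with a loop of roughly
this shape (a sketch; results_socket stands in for whatever socket the
replies arrive on):

import time
tic = time.time()
for n in xrange(1, 32001):
    results_socket.recv_json()
    if n % 1000 == 0:
        print '%5i replies in %6.2fs' % (n, time.time() - tic)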

In all, it seems to be a good order of magnitude quicker than the Twisted
implementation for these small messages.

Identifying the cost of json for small messages:

Outgoing messages go at 9500/s if I use cPickle for serialization instead of
json. Round trip to 1 engine for 32k messages: 35s. Round trip to 1 engine
for 32k messages with json: 53s.

It would appear that json is contributing 50% to the overall run time.

With %timeit x.loads(x.dumps(msg))
on a basic message, I find that json is ~15x slower than cPickle.
And by these crude estimates, with json, we spend about 35% of our time
serializing, as opposed to just 2.5% with pickle.
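
That crude estimate is easy to check in isolation; the message dict here
is made up for illustration:

import json, cPickle, timeit

msg = {'header': {'msg_id': 0, 'session': 'abc'},
       'msg_type': 'execute_request',
       'parent_header': {},
       'content': {'code': 'id=0'}}

for name, mod in [('json', json), ('cPickle', cPickle)]:
    t = timeit.timeit(lambda: mod.loads(mod.dumps(msg)), number=10000)
    print '%s: %.3fs for 10k round trips' % (name, t)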

I attached a bar plot of the average replies per second over each 1000 msg
block, overlaying numbers for 2 engines and for 32. I did the same comparing
pickle and json for 1 and 2 engines.

The messages are small, and only a tiny amount of work is done in the
kernel. The jobs were submitted like this:

for i in xrange(int(32e3) / len(engines)):
    for eid, key in engines.iteritems():
        thesession.send(queue, "execute_request",
                        dict(code='id=%i' % (int(eid) + i)),
                        ident=str(key))
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100722/ebaf3b9f/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: bar.png
Type: image/png
Size: 29718 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100722/ebaf3b9f/attachment.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pickle.png
Type: image/png
Size: 51352 bytes
Desc: not available
URL: <http://mail.python.org/pipermail/ipython-dev/attachments/20100722/ebaf3b9f/attachment-0001.png>