PEP 324 (process module)

I believe ActiveState's Trent Mick has released something along these lines. Does your proposed API match theirs? --Guido van Rossum (home page: http://www.python.org/~guido/)

I believe ActiveState's Trent Mick has released something along these lines. Does your proposed API match theirs?
No, not at all. There are some issues with this module, regarding both the API and the implementation, which I'm not happy with. I tried to establish a discussion with Trent about this, but if I remember correctly, I never received a response. Here are some things I'm not happy with:

* There are three different "factory" classes: Process, ProcessOpen, ProcessProxy. I have only one, covering all cases. I think three different classes are confusing to users. Even I have trouble understanding the difference between them.

* The code size is very large. The code is complex.

* Trent's module always executes things through the shell, and deals with every ugly corner case that comes from this.

* The module uses destructors, which I usually avoid.

So, the chances of getting our modules API-compatible are very small.

/Peter Åstrand <astrand@lysator.liu.se>

Peter Astrand wrote:
I believe ActiveState's Trent Mick has released something along these lines. Does your proposed API match theirs?
[...]
So, the chances of getting our modules API-compatible are very small.
I really don't want to get into a religious war about the relative benefits of the two modules (I don't know Peter's well enough to comment), but I think it would be nice for users if there weren't two modules with the same goal, different interfaces and the same _name_. I understand from Trent that Twisted has started to use his process.py, so having another process.py in the stdlib at some point would cause at least some users pain. --david

So, the chances of getting our modules API-compatible are very small.
I really don't want to get into a religious war about the relative benefits of the two modules (I don't know Peter's well enough to comment), but I think it would be nice for users if there weren't two modules with the same goal, different interfaces and the same _name_. I
Yes, this could be a problem, but no-one has been able to come up with a solution: after 6 months, "process" and "popen" were the only suggested names. How about "subprocess"? /Peter Åstrand <astrand@lysator.liu.se>

[Peter Astrand wrote]
I believe ActiveState's Trent Mick has released something along these lines. Does your proposed API match theirs?
No, not at all. There are some issues with this module, regarding both the API and the implementation, which I'm not happy with. I tried to establish a discussion with Trent about this, but if I remember correctly, I never received a response.
Yes, I am an ass. I am sorry about that, Peter. I have had my blinders on for quite a while with work stuff. I have just started looking at your module, so I can't be sure that we could find a happy common ground, but I hope so. I realize that my process.py has some warts (including some you have pointed out below) and I am willing to work on those.
Here are some things I'm not happy with:
* There are three different "factory" classes: Process, ProcessOpen, ProcessProxy. I have only one, covering all cases. I think three different classes are confusing to users. Even I have trouble understanding the difference between them.
Part of the confusion, I think, is that my process.py's docstrings have lagged a little behind development, and there is one glaring problem: ProcessOpen.__init__.__doc__ mistakenly has the description from ProcessProxy.__init__.

Process: Create a process without doing anything with stdin/stdout/stderr. Kind of like os.system in that regard.

ProcessOpen: ...add stdin/stdout/stderr attributes (file-like objects). Kind of like os.popen3 in that regard. Separating the two allows Process' implementation to be simpler: no dup'ing (and inheritability handling, etc.) of the std handles is required. If the general opinion is that this separation is not useful, then I am fine with merging the two. (Note that my vague memory is that this separation may be necessary to allow for new console handling with subsystem:windows apps on Windows. I can't remember exactly, though.)

ProcessProxy: ...a behemoth using a thread for each of stdin/stdout/stderr to allow the user to get an event-like IO interface. This is something that was required for Komodo (for my job) to allow interactive running (and stdin handling) of processes in a GUI terminal (i.e., if you know Komodo, to run commands in the "Command Output" tab). If this is deemed too controversial for the Python core, then I think it would be possible to move it out to a separate module -- though I know that some people outside of Komodo have found it useful.
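For readers mapping these onto today's stdlib: the Process/ProcessOpen distinction Trent describes corresponds roughly to running a child with inherited std handles versus wiring up pipes. A hedged sketch using the modern subprocess module (the class names above are Trent's; this is not code from his module):

```python
import subprocess

# "Process"-style: run the child and let it inherit our std handles,
# much like os.system.
rc = subprocess.call(["echo", "inherited"])

# "ProcessOpen"-style: give the child its own pipes and talk to it,
# much like os.popen3.
p = subprocess.Popen(["echo", "piped"], stdout=subprocess.PIPE)
out, _ = p.communicate()
```

The first form needs no dup'ing of std handles at all, which is exactly the implementation simplification Trent mentions.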
* The code size is very large. The code is complex.
* Trent's module always executes things through the shell, and deals with every ugly corner case that comes from this.
Some of the large code size is _for_ execution through the shell. :) I think that execution via the shell should be a feature of a module like this (so that users can use some shell features), and I even think that it should be the default (so that, for example, Windows users don't have to learn what cmd.exe or command.com are to run "dir"). However, I absolutely agree that one should be able to run withOUT the shell (i.e. have an option for this).

Other reasons for the size/complexity of my process.py over yours:

- My process objects have a .kill() method -- which is actually quite a pain on Windows.

- My module contains a work-around (113 lines) for a known bug in LinuxThreads where one cannot .wait() on a created process from a subthread of the thread that created the process. (1) This feature was a requirement for Komodo. Pulling it out to a separate module and just documenting the limitation (which really isn't a big deal for most common uses) is probably an option for me. In fact, the workaround is probably not generally acceptable and should, at the very least, be made optional. (2) I haven't had a chance to check, but I _think_ that recent Linuxes may have switched to the newer threading libraries out there that fix this. At least I remember reading about the new threading libraries (there was more than one competitor) a couple of years ago.

- The ProcessProxy "behemoth" is responsible for about half of my process.py module.

- My module includes some handling for subsystem:windows vs. subsystem:console apps on Windows that I don't think yours does.

(That isn't totally fair: I think yours has a few features that mine doesn't.)
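The shell/no-shell trade-off Trent describes is easy to see with the modern subprocess API (an illustrative aside, not code from either module): without the shell, the argument list goes straight to exec and nothing is expanded; with the shell, the user gets variable expansion, pipes and globbing, plus all the quoting corner cases.

```python
import subprocess

# No shell: "$HOME" is passed to echo verbatim, nothing is expanded.
direct = subprocess.run(["echo", "$HOME"], capture_output=True, text=True)

# Through the shell: the shell expands $HOME before echo ever sees it.
expanded = subprocess.run("echo $HOME", shell=True,
                          capture_output=True, text=True)
```

`direct.stdout` is the literal string `$HOME`, while `expanded.stdout` is the user's home directory.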
* The module uses destructors, which I usually avoid.
Is that evil? I have gotten mixed signals from the occasional lurking that I have done on the Python lists. Currently, refcounts will ensure that my __del__'s get called. And I *do* have .close() methods for proper usage -- though those seem anathema to some Python programmers.
So, the chances of getting our modules API-compatible are very small.
Presuming that I don't go dark again, I hope that maybe we can reach some consensus. In the meantime, if you'd be willing to change your module's name to something other than process.py, I think that would help discussions. (I don't know if that is a pain for you at this point. You mentioned "subprocess". Alternatively, how about "posixprocess"? Though, despite PEP 324's title, I don't know if that is completely accurate anymore.) Trent -- Trent Mick TrentM@ActiveState.com

ProcessProxy: ...a behemoth using a thread for each of stdin/stdout/stderr to allow the user to get an event-like IO interface.
Why not avoid threads on POSIX systems, and use select instead? My module does, although it does not provide an event-like IO interface. If you get rid of the threads, then you don't need the workaround code for Linux.
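A minimal sketch of the select()-based approach Peter describes (the `drain` helper is hypothetical, not code from either module): watch several pipe fds at once and treat an empty read as EOF.

```python
import os
import select

def drain(fds):
    """Read from several pipe fds until all of them have reached EOF."""
    out = {fd: b"" for fd in fds}
    open_fds = set(fds)
    while open_fds:
        ready, _, _ = select.select(list(open_fds), [], [])
        for fd in ready:
            chunk = os.read(fd, 1024)
            if chunk:
                out[fd] += chunk
            else:                      # empty read == EOF: stop watching it
                open_fds.discard(fd)
    return out

# demo with plain pipes standing in for a child's stdout/stderr
r1, w1 = os.pipe()
r2, w2 = os.pipe()
os.write(w1, b"out"); os.close(w1)
os.write(w2, b"err"); os.close(w2)
result = drain([r1, r2])
```

No threads, and therefore no need for the LinuxThreads .wait() workaround mentioned earlier in the thread.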
Some of the large code size is _for_ execution through the shell. :) I think that execution via the shell should be a feature of a module like this (so that users can use some shell features) and I even think that it should be the default (so that, for example, Windows users don't have to learn what cmd.exe or command.com are to run "dir"). However, I absolutely agree that one should be able to run withOUT the shell (i.e. have an option for this).
You're right. My module should probably have an option for invoking through the shell, or at least document how to do it. I really don't want it as default, though.
Other reasons for the size/complexity of my process.py over yours: - My process objects have a .kill() method -- which is actually quite a pain on Windows.
True. I guess my module would benefit from such a method as well.
- My module includes some handling that for subsystem:windows vs. subsystem:console apps on Windows that I don't think yours does.
Can you describe why this is needed/useful?
* The module uses destructors, which I usually avoid.
Is that evil?
Destructors interfere with the GC. Someone else can probably fill in the details.
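The detail being alluded to: in the CPython of that era, an object with a __del__ method caught in a reference cycle was never collected -- it was parked in gc.garbage instead (this was only fixed much later, by PEP 442 in Python 3.4). The usual alternative, then and now, is an explicit close() plus context-manager support, so cleanup never depends on the collector. A toy sketch (hypothetical class, not from either module):

```python
class Child(object):
    """Toy process wrapper: cleanup via an explicit close(), not __del__."""
    def __init__(self):
        self.closed = False

    def close(self):
        if not self.closed:
            self.closed = True   # real code would close pipes and wait()

    # context-manager support makes correct usage the easy path
    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.close()

with Child() as c:
    pass          # use the child here
# c.close() has now run deterministically, whatever the GC is doing
```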
consensus. In the meantime, if you'd be willing to change your module's name to something other than process.py I think that would help discussions. (I don't know if that is a pain for you at this point. You mentioned "subprocess".
I can change, but I'd like more feedback before that. No-one has told me their opinion on the name "subprocess", for example, not even you :-)
Alternatively, how about "posixprocess"? Though, despite PEP 324's title, I don't know if that is completely accurate anymore.)
Oh, I've forgotten to change the title. Yes, this is wrong, because the module certainly aims to work on non-POSIX systems as well (read: Windows). /Peter Åstrand <astrand@lysator.liu.se>

On Wed, 2004-08-04 at 10:05, Peter Astrand wrote:
ProcessProxy: ...a behemoth using a thread for each of stdin/stdout/stderr to allow the user to get an event-like IO interface.
Why not avoid threads on POSIX systems, and use select instead? My module does, although it does not provide an event-like IO interface. If you get rid of the threads, then you don't need the workaround code for Linux.
Doesn't select() effectively busy-wait on "real" files (pipes and file descriptors obtained via open(), as opposed to network sockets) on most (all?) UNIXen? At least this has been my finding under Linux. (See http://www.plope.com/Members/chrism/News_Item.2004-07-29.4755190458/talkback... for more info). - C

Doesn't select() effectively busy-wait on "real" files (pipes and file descriptors obtained via open(), as opposed to network sockets) on most (all?) UNIXen? At least this has been my finding under Linux.
select() doesn't make sense for regular files, so tail -f can't use it. For Unix pipes, it works as advertised. On Windows, it *only* applies to sockets. --Guido van Rossum (home page: http://www.python.org/~guido/)
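Guido's point is easy to demonstrate on a POSIX system: select() reports a regular file as ready even when there is nothing new to read, while an empty pipe correctly times out. (A small illustrative snippet, not from the thread's programs.)

```python
import os
import select
import tempfile

# An empty pipe: select() with a zero timeout reports nothing ready.
r, w = os.pipe()
ready_pipe, _, _ = select.select([r], [], [], 0)

# A regular file: always reported ready, even when positioned at EOF,
# which is why tail -f can't use select().
f = tempfile.TemporaryFile()
ready_file, _, _ = select.select([f], [], [], 0)
```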

On Wed, 2004-08-04 at 11:20, Guido van Rossum wrote:
Doesn't select() effectively busy-wait on "real" files (pipes and file descriptors obtained via open(), as opposed to network sockets) on most (all?) UNIXen? At least this has been my finding under Linux.
select() doesn't make sense for regular files, so tail -f can't use it. For Unix pipes, it works as advertised. On Windows, it *only* applies to sockets.
Aha. I think I understand now. Sorry for going off-topic, but this program (UNIX-only):

    import select
    import errno
    import fcntl
    import os

    def go(r, timeout=1):
        while 1:
            try:
                r, w, x = select.select(r, [], [], timeout)
                print r
            except select.error, err:
                if err[0] != errno.EINTR:
                    raise

    r = w = x = []
    p_in, p_out = os.pipe()
    for fd in p_in, p_out:
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NDELAY)
    os.close(p_in)
    go([p_out])

... will busywait in select (p_out is always in the ready state; the select timeout is never reached). But if you comment out the "os.close(p_in)" line, it no longer busywaits (the select timeout is reached on every iteration). At least this is the behavior under Linux.

This is a little unfortunate because the normal dance when communicating between parent and child in order to capture the output of a child process seems to be:

1) In the parent process, create a set of pipes that will represent stdin/stdout/stderr of the child.

2) fork

3) In the parent process, close the "unused" fds representing the ends of the pipes that the child will use as fds for stdin/stderr/stdout.

4) In the child process, dup these same fds to stdin, stdout, and stderr.

... so select seems to be of no use here, unless step 3 isn't actually necessary. Still a bit confused and researching... - C

Sorry for going off-topic, but this program (UNIX-only):
... will busywait in select (p_out is always in the ready state; the select timeout is never reached).
But if you comment out the "os.close(p_in)" line, it no longer busywaits (the select timeout is reached on every iteration). At least this is the behavior under Linux.
This isn't strange. You are closing the (only) read-end of the pipe. When you do this, the pipe is broken. Consider this:
    >>> import os
    >>> r, w = os.pipe()
    >>> os.close(r)
    >>> os.write(w, "a")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    OSError: [Errno 32] Broken pipe
select() only indicates that something has happened on this filedescriptor.
This is a little unfortunate because the normal dance when communicating between parent and child in order to capture the output of a child process seems to be:
1) In the parent process, create a set of pipes that will represent stdin/stdout/stderr of the child.
2) fork
The problem with your example was that it didn't fork... So, there is no problem with using select() on pipes when communicating with a subprocess. It works great. Take a look at (my) process.py's communicate() method for some inspiration. /Peter Åstrand <astrand@lysator.liu.se>
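A hedged sketch of what such a communicate()-style loop looks like with the modern subprocess module (the real popen5 code differs in detail): select() on the child's stdout/stderr pipes and retire each fd once it reads zero bytes.

```python
import os
import select
import subprocess

# Start a child that writes to both stdout and stderr.
p = subprocess.Popen(
    ["sh", "-c", "echo out; echo err >&2"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)

buf = {p.stdout.fileno(): b"", p.stderr.fileno(): b""}
open_fds = set(buf)
while open_fds:
    ready, _, _ = select.select(list(open_fds), [], [])
    for fd in ready:
        data = os.read(fd, 4096)
        if data:
            buf[fd] += data
        else:                          # zero bytes: EOF on this pipe
            open_fds.discard(fd)
p.wait()

stdout_data = buf[p.stdout.fileno()]
stderr_data = buf[p.stderr.fileno()]
```

Because the parent never closes the read ends until EOF, there is no busy-wait: select() blocks until the child actually writes or exits.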

On Wed, 2004-08-04 at 15:01, Peter Astrand wrote:
But if you comment out the "os.close(p_in)" line, it no longer busywaits (the select timeout is reached on every iteration). At least this is the behavior under Linux.
This isn't strange. You are closing the (only) read-end of the pipe. When you do this, the pipe is broken. Consider this:
    >>> import os
    >>> r, w = os.pipe()
    >>> os.close(r)
    >>> os.write(w, "a")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    OSError: [Errno 32] Broken pipe
select() only indicates that something has happened on this filedescriptor.
Right. I understand that returning EOF on the reader side of the pipe is the inverse of the "broken pipe" behavior you demonstrate above.
This is a little unfortunate because the normal dance when communicating between parent and child in order to capture the output of a child process seems to be:
1) In the parent process, create a set of pipes that will represent stdin/stdout/stderr of the child.
2) fork
The problem with your example was that it didn't fork...
I was all set to try to refute this, but after writing a minimal test program to do what I want to do, I find that you're right. That's good news! I'll need to revisit my workarounds in the program that caused me to need to do this. Thanks for the schooling.
So, there is no problem with using select() on pipes when communicating with a subprocess. It works great. Take a look at (my) process.py's communicate() method for some inspiration.
I've actually looked at it and it's quite nice, but it doesn't do one thing that I'd like to see as part of a process stdlib library. The use case I'm thinking of is one where a long-running program needs to monitor the output of many other potentially long-running processes, doing other things in the meantime. This kind of program tends to use select as part of a mainloop where there might be other things going on (like handling network communications to/from sockets, updating a GUI, etc). Also, the output from child stderr and stdout potentially never ends, because the child process(es) may never end. In popen5, "communicate" is terminal. It calls select until there's no more data to get back and then unconditionally waits for the subprocess to finish, blocking the entire time. This isn't useful for the type of program I describe above. Of course, it wasn't meant to be, but having an API that could help with this sort of thing would be nice as well, although probably out of scope for PEP 324. - C
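For the record, the mainloop style Chris describes is straightforward to build on top of a pipe-based API; a sketch using today's selectors module (the names here are illustrative, not from popen5):

```python
import selectors
import subprocess

# Monitor several children from one mainloop without ever blocking
# for longer than the poll timeout; other work could happen each pass.
sel = selectors.DefaultSelector()
procs = [subprocess.Popen(["echo", str(i)], stdout=subprocess.PIPE)
         for i in range(3)]
output = {}
for p in procs:
    sel.register(p.stdout, selectors.EVENT_READ, p)
    output[p] = b""

while sel.get_map():                    # until every child's pipe hits EOF
    for key, _ in sel.select(timeout=0.1):
        data = key.fileobj.read1(4096)
        if data:
            output[key.data] += data
        else:                           # EOF: this child is done writing
            sel.unregister(key.fileobj)
    # ... handle sockets, update the GUI, etc., here ...

for p in procs:
    p.wait()
```

Unlike a terminal communicate(), this loop returns control to the caller on every iteration, which is what a GUI or network mainloop needs.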

On Wed, 2004-08-04 at 16:03, Chris McDonough wrote:
I was all set to try to refute this, but after writing a minimal test program to do what I want to do, I find that you're right. That's good news! I'll need to revisit my workaroudns in the program that caused me to need to do this. Thanks for the schooling.
Ugh. I spoke a bit too soon. The following program demonstrates that a particular usage of select (under Linux at least) always returns the output side of a pipe connected to a child process' stdout as "ready" after it gets any output from that child process, even if the child process has no further data to provide after it has provided a bit of data to the parent. This is what causes the "busywait" behavior I've experienced in the past (note that as this program runs, your CPU utilization will likely be near 100%). Or am I doing something silly?

    import select
    import errno
    import fcntl
    import os
    import stat
    import sys
    from fcntl import F_SETFL, F_GETFL

    def get_path():
        """Return a list corresponding to $PATH, or a default."""
        path = ["/bin", "/usr/bin", "/usr/local/bin"]
        if os.environ.has_key("PATH"):
            p = os.environ["PATH"]
            if p:
                path = p.split(os.pathsep)
        return path

    def get_execv_args(command):
        """Internal: turn a program name into a file name, using $PATH."""
        commandargs = command.split()
        program = commandargs[0]
        if "/" in program:
            filename = program
            try:
                st = os.stat(filename)
            except os.error:
                return None, None
        else:
            path = get_path()
            for dir in path:
                filename = os.path.join(dir, program)
                try:
                    st = os.stat(filename)
                except os.error:
                    continue
                mode = st[stat.ST_MODE]
                if mode & 0111:
                    break
            else:
                return None, None
        if not os.access(filename, os.X_OK):
            return None, None
        return filename, commandargs

    def spawn(command):
        """Start the subprocess."""
        filename, argv = get_execv_args(command)
        if filename is None:
            raise RuntimeError, '%s is an invalid command' % command
        child_stdin, stdin = os.pipe()
        stdout, child_stdout = os.pipe()
        stderr, child_stderr = os.pipe()
        # open stderr, stdout in nonblocking mode so we can tail them
        # in the mainloop without blocking
        for fd in stdout, stderr:
            flags = fcntl.fcntl(fd, F_GETFL)
            fcntl.fcntl(fd, F_SETFL, flags | os.O_NDELAY)
        pid = os.fork()
        if pid != 0:
            # Parent
            os.close(child_stdin)
            os.close(child_stdout)
            os.close(child_stderr)
            return stdin, stdout, stderr
        else:
            # Child
            try:
                os.dup2(child_stdin, 0)
                os.dup2(child_stdout, 1)
                os.dup2(child_stderr, 2)
                for i in range(3, 256):
                    try:
                        os.close(i)
                    except:
                        pass
                os.execv(filename, argv)
            finally:
                os._exit(127)

    def go(out_fds):
        while 1:
            try:
                r, w, x = select.select(out_fds, [], [], 1)
                if not r:
                    print "timed out"
            except select.error, err:
                if err[0] != errno.EINTR:
                    raise
            for fd in r:
                sys.stdout.write(os.read(fd, 1024))

    stdin, stderr, stdout = spawn('echo "foo"')
    go([stderr, stdout])

Chris McDonough <chrism@plope.com>:
The following program demonstrates that a particular usage of select (under Linux at least) always returns the output side of a pipe connected to a child process' stdout as "ready" after it gets any output from that child process, even if the child process has no further data to provide after it has provided a bit of data to the parent.
Or am I doing something silly?
for fd in r: sys.stdout.write(os.read(fd, 1024))
You're not taking account of the possibility of EOF. When the child process finishes and closes its end of the pipe, the select will always report the pipe as ready for reading, because it is -- subsequent reads will return immediately with 0 bytes. You need to check whether the read returned 0 bytes, and take that as meaning that the child process has finished. Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | A citizen of NewZealandCorp, a | Christchurch, New Zealand | wholly-owned subsidiary of USA Inc. | greg@cosc.canterbury.ac.nz +--------------------------------------+
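Greg's rule in miniature (a hypothetical snippet, not from the posted program): treat a zero-byte read as the termination condition for the select loop, rather than reading forever.

```python
import os
import select

r, w = os.pipe()
os.write(w, b"data")
os.close(w)                # writer finished: the reader will see EOF

chunks = []
while True:
    ready, _, _ = select.select([r], [], [], 1)
    data = os.read(ready[0], 1024)
    if not data:           # 0 bytes means EOF, not "try again later"
        break
    chunks.append(data)
os.close(r)
```

Without the `if not data: break`, select() keeps reporting the fd as ready and the loop spins at 100% CPU, which is exactly the "busywait" seen above.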

On Thu, 2004-08-05 at 01:50, Greg Ewing wrote:
Chris McDonough <chrism@plope.com>:
The following program demonstrates that a particular usage of select (under Linux at least) always returns the output side of a pipe connected to a child process' stdout as "ready" after it gets any output from that child process, even if the child process has no further data to provide after it has provided a bit of data to the parent.
Or am I doing something silly?
for fd in r: sys.stdout.write(os.read(fd, 1024))
You're not taking account of the possibility of EOF. When the child process finishes and closes its end of the pipe, the select will always report the pipe as ready for reading, because it is -- subsequent reads will return immediately with 0 bytes.
You need to check whether the read returned 0 bytes, and take that as meaning that the child process has finished.
Of course! Thanks. Let us speak no more of this. ;-) - C

On Aug 4, 2004, at 4:03 PM, Chris McDonough wrote:
On Wed, 2004-08-04 at 15:01, Peter Astrand wrote:
So, there is no problem with using select() on pipes when communicating with a subprocess. It works great. Take a look at (my) process.py's communicate() method for some inspiration.
I've actually looked at it and it's quite nice, but it doesn't do one thing that I'd like to see as part of a process stdlib library. The use case I'm thinking of is one where a long-running program needs to monitor the output of many other potentially long-running processes, doing other things in the meantime. This kind of program tends to use select as a part of a mainloop where there might be other things going on (like handling network communications to/from sockets, updating a GUI, etc). Also, the output from child stderr and stdout potentially never end because the child process(es) may never end.
Twisted handles this quite well already. The stdlib doesn't really do much to support this style of programming. -bob

In popen5, "communicate" is terminal. It calls select until there's no more data to get back and then unconditionally waits for the subprocess to finish, blocking the entire time. This isn't useful for the type of
Yes, I agree. Other people have requested this as well. But even though this could be useful, I do not consider it a showstopper which needs to be solved before the module can be included in the stdlib. Windows support without the need for win32all is a more major issue, I think. A non-terminal "communicate" can be added later. /Peter Åstrand <astrand@lysator.liu.se>

Why not avoid threads on POSIX systems, and use select instead? My module does, although it does not provide an event-like IO interface. If you get rid of the threads, then you don't need the workaround code for Linux.
Doesn't select() effectively busy-wait on "real" files (pipes and file descriptors obtained via open(), as opposed to network sockets) on most (all?) UNIXen? At least this has been my finding under Linux.
Yes, but a solution with threads will have problems as well, since read() and friends will return EOF rather than block when you reach the end. /Peter Åstrand <astrand@lysator.liu.se>

On Wed, 2004-08-04 at 12:25, Peter Astrand wrote:
Why not avoid threads on POSIX systems, and use select instead? My module does, although it does not provide an event-like IO interface. If you get rid of the threads, then you don't need the workaround code for Linux.
Doesn't select() effectively busy-wait on "real" files (pipes and file descriptors obtained via open(), as opposed to network sockets) on most (all?) UNIXen? At least this has been my finding under Linux.
Yes, but a solution with threads will have problems as well, since read() and friends will return EOF rather than block when you reach the end.
As I've found, if the ends of the pipes that represent the child's stderr/stdout fds are closed in the parent, a select() reading the "other" ends of these pipes will busywait (at least on Linux). Other than that, I think this select() would be the absolute right thing on POSIX, rather than using threads or polling. Is there a way around this problem, or is it just a fact of life? - C

As I've found, if the ends of the pipes that represent the child's stderr/stdout fds are closed in the parent, a select() reading the "other" ends of these pipes will busywait (at least on Linux). Other than that, I think this select() would be the absolute right thing on POSIX, rather than using threads or polling. Is there a way around this problem, or is it just a fact of life?
The reading code needs to recognize the EOF and then conclude that it's not going to get anything else from that pipe. No different than sockets. --Guido van Rossum (home page: http://www.python.org/~guido/)

[Peter Astrand wrote]
ProcessProxy: ...a behemoth using a thread for each of stdin/stdout/stderr to allow the user to get an event-like IO interface.
Why not avoid threads on POSIX systems, and use select instead? My module does, although it does not provide an event-like IO interface.
Because that wouldn't be cross-platform... perhaps it would be possible though. I am not that experienced with select() -- I have generally eschewed it because I can't use it on Windows as well.
If you get rid of the threads, then you don't need the workaround code for Linux.
Slight misunderstanding there: the separate thread from which you cannot kill a subprocess on Linux is not one of these ProcessProxy threads. I.e. ignoring ProcessProxy the LinuxThreads-bug workaround is still necessary for Process and ProcessOpen for the user that starts a subprocess on their thread-A and wants to kill it on their thread-B.
You're right. My module should probably have an option for invoking through the shell, or at least document how to do it. I really don't want it as default, though.
I have code to find the shell adequately, so I don't see why we couldn't use it. As to making it the default or not: perhaps clear documentation could help avoid the expected "I can't run 'dir'!" newbie complaint, but I don't think it would get all of them. The current popen*() functions execute via the shell. The lower-level guys, exec*() and spawn*(), do not. Still up for debate, I guess. :)
- My module includes some handling for subsystem:windows vs. subsystem:console apps on Windows that I don't think yours does.
Can you describe why this is needed/useful?
A subsystem:windows app (like Komodo, or Idle, or pythonw.exe, or any GUI app) doesn't have a console and hence doesn't have std handles. By default, executing any subsystem:console app (like 'ls' or 'echo' or 'python' or 'perl') from a subsystem:windows app will result in an AllocConsole call (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dllproc/bas...) that will result in a console dialog popping up. You can see this in some Windows apps that haven't dealt with this issue. In particular, some installers out there will pop up an unprofessional console dialog when running some sub-processes. To avoid this, one has to muck around with CreateProcess options. Or, kind of the reverse: if a seeming GUI app *does* have std handles (often redirected to log files; Komodo does this) and it wants to offer the ability to pop up a console (e.g. Komodo's feature to debug in a separate console), then one needs the ability to specify CREATE_NEW_CONSOLE.
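With today's subprocess module, this mucking about is done through the creationflags argument; a hedged sketch (the numeric values are from the Win32 CreateProcess documentation; on POSIX creationflags must be 0, so the helper below degrades to a plain spawn there):

```python
import subprocess
import sys

# Win32 process-creation flags (values from the CreateProcess docs).
CREATE_NEW_CONSOLE = 0x00000010   # give the child its own console window
CREATE_NO_WINDOW = 0x08000000     # console subprocess, but no window appears

def spawn_quietly(argv):
    """Run a console app from a GUI app without a console flashing up."""
    flags = CREATE_NO_WINDOW if sys.platform == "win32" else 0
    return subprocess.Popen(argv, creationflags=flags,
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)

cmd = ["cmd", "/c", "echo hi"] if sys.platform == "win32" else ["echo", "hi"]
p = spawn_quietly(cmd)
p.wait()
```

(Modern Python exposes these constants itself, as subprocess.CREATE_NEW_CONSOLE and subprocess.CREATE_NO_WINDOW, on Windows.)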
I can change, but I'd like more feedback before that. No-one has told me their opinion on the name "subprocess", for example, not even you :-)
Sure, I don't mind "subprocess". I am hoping, though, that we can eventually merge our two modules and just call that one "process". Trent -- Trent Mick TrentM@ActiveState.com

Why not avoid threads on POSIX systems, and use select instead? My module does, although it does not provide an event-like IO interface.
Because that wouldn't be cross-platform... perhaps it would be possible though. I am not that experienced with select() -- I have generally eschewed it because I can't use it on Windows as well.
The best way, IMHO, is to use select() on POSIX systems and threads on Windows. This is what my module does.
Can you describe why this is needed/useful?
A subsystem:windows app (like Komodo, or Idle, or pythonw.exe, or any GUI app) doesn't have a console and hence doesn't have std handles. By default, executing any subsystem:console app (like 'ls' or 'echo' or 'python' or 'perl') from a subsystem:windows app will result in an AllocConsole call (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dllproc/bas...) that will result in a console dialog popping up. You can see this in some Windows apps that haven't dealt with this issue. In particular, some installers out there will pop up an unprofessional console dialog when running some sub-processes. To avoid this, one has to muck around with CreateProcess options. Or, kind of the reverse: if a seeming GUI app *does* have std handles (often redirected to log files; Komodo does this) and it wants to offer the ability to pop up a console (e.g. Komodo's feature to debug in a separate console), then one needs the ability to specify CREATE_NEW_CONSOLE.
I see what you mean. My module actually supports passing all types of flags to CreateProcess, including CREATE_NEW_CONSOLE, so I would say that my module deals with these issues. Well, I don't have any special boolean flag or something like that for this specific case, but I see no reason to.
I can change, but I'd like more feedback before that. No-one has told me their opinion on the name "subprocess", for example, not even you :-)
Sure, I don't mind "subprocess". I am hoping, though, that we can eventually merge our two modules and just call that one "process".
I don't really see what we could gain from merging our modules. What we have now is two different modules with two different APIs, and applications which use these. If we were to merge our modules, then the API of either your module, mine, or both would have to change, which means that the applications using them would not work with the merged module. I'm positive about cooperating with you, though, and perhaps borrowing code and ideas from your module (and you can of course borrow from mine). /Peter Åstrand <astrand@lysator.liu.se>

[Peter Astrand wrote]
Why not avoid threads on POSIX systems, and use select instead? My module does, although it does not provide an event-like IO interface.
Because that wouldn't be cross-platform... perhaps it would be possible though. I am not that experienced with select() -- I have generally eschewed it because I can't use it on Windows as well.
The best way, IMHO, is to use select() on POSIX systems and threads on Windows. This is what my module does.
I'm willing to check that out for ProcessProxy. However, as I said, I would be willing to remove ProcessProxy to a separate module (perhaps just used by Komodo -- which was the original motivation for me) if it is deemed not necessary/worthy for the core.
I don't really see what we could gain from merging our modules.
One process control module to rule them all. Muwahahaha! Seriously, I'd like to see "one obvious way to do it" for process control in the core at some point (perhaps for Python 2.5) and my guess is that a something between our two can provide it.
What we have now is two different modules with two different APIs, and applications which uses these. If we were to merge our modules, then the API of either your, mine or both modules would have to change, which means that the applications using these would not work with the merged module.
I can change Komodo. And I don't use Twisted, so I can break them. (Muwahaha!) ... More seriously, I think a good final API (even if it breaks backwards compat a little bit) is fine if the end result is a better module for the core -- and as long as there is a __version__ attribute or something that users can key on. Trent -- Trent Mick TrentM@ActiveState.com
participants (7)
- Bob Ippolito
- Chris McDonough
- David Ascher
- Greg Ewing
- Guido van Rossum
- Peter Astrand
- Trent Mick