PEP 3145 (With Contents)
Alright, I will re-submit with the contents pasted. I never use double backquotes as I think them rather ugly; that is the work of an editor or some automated program in the chain. Plus, it also messed up my line formatting and now I have lines with one word on them... Anyway, the contents of PEP 3145:

PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2

Abstract:

    In its present form, the subprocess.Popen implementation is prone to dead-locking and blocking of the parent Python script while waiting on data from the child process.

Motivation:

    A search for "python asynchronous subprocess" will turn up numerous accounts of people wanting to execute a child process and communicate with it from time to time, reading only the data that is available instead of blocking to wait for the program to produce data [1] [2] [3]. The current behavior of the subprocess module is that when a user sends or receives data via the stdin, stderr and stdout file objects, deadlocks are common and documented [4] [5]. While communicate can be used to alleviate some of the buffering issues, it will still cause the parent process to block while attempting to read data when none is available to be read from the child process.

Rationale:

    There is a documented need for asynchronous, non-blocking functionality in subprocess.Popen [6] [7] [2] [3]. Inclusion of the code would improve the utility of the Python standard library, and it can be used on Unix-based and Windows builds of Python. Practically every I/O object in Python has a file-like wrapper of some sort. Sockets already act as such, and for strings there is StringIO. Popen can be made to act like a file by simply using the methods attached to the subprocess.Popen.stderr, stdout and stdin file-like objects. But when using the read and write methods of those objects, you do not have the benefit of asynchronous I/O. In the proposed solution, the wrapper wraps the asynchronous methods to mimic a file object.

Reference Implementation:

    I have been maintaining a Google Code repository that contains all of my changes, including tests and documentation [9], as well as a blog detailing the problems I have come across in the development process [10].

    I have been working on implementing non-blocking asynchronous I/O in the subprocess.Popen module, as well as a wrapper class for subprocess.Popen that makes it so that an executed process can take the place of a file by duplicating all of the methods and attributes that file objects have.

    There are two base functions that have been added to the subprocess.Popen class: Popen.send and Popen._recv, each with two separate implementations, one for Windows and one for Unix based systems. The Windows implementation uses ctypes to access the functions needed to control pipes in the kernel 32 DLL in an asynchronous manner. On Unix based systems, the Python interface for file control serves the same purpose. The different implementations of Popen.send and Popen._recv have identical arguments to make code that uses these functions work across multiple platforms.

    Because the Popen._recv function requires the pipe name to be passed as an argument, there exists the Popen.recv function, which selects stdout as the pipe for Popen._recv by default. Popen.recv_err selects stderr as the pipe by default. "Popen.recv" and "Popen.recv_err" are much easier to read and understand than "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..." respectively.

    Since the Popen._recv function does not wait on data to be produced before returning a value, it may return empty bytes. Popen.asyncread handles this issue by returning all data read over a given time interval.

    The ProcessIOWrapper class uses the asyncread and asyncwrite functions to allow a process to act like a file so that there are no blocking issues that can arise from using the stdout and stdin file objects produced from a subprocess.Popen call.

References:

    [1] [ python-Feature Requests-1191964 ] asynchronous Subprocess
        http://mail.python.org/pipermail/python-bugs-list/2006-December/036524.html
    [2] Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess
        http://ivory.idyll.org/blog/feb-07/problems-with-subprocess
    [3] How can I run an external command asynchronously from Python? - Stack Overflow
        http://stackoverflow.com/questions/636561/how-can-i-run-an-external-command-asynchronously-from-python
    [4] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
        http://docs.python.org/library/subprocess.html#subprocess.Popen.wait
    [5] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation
        http://docs.python.org/library/subprocess.html#subprocess.Popen.kill
    [6] Issue 1191964: asynchronous Subprocess - Python tracker
        http://bugs.python.org/issue1191964
    [7] Module to allow Asynchronous subprocess use on Windows and Posix platforms - ActiveState Code
        http://code.activestate.com/recipes/440554/
    [8] subprocess.rst - subprocdev - Project Hosting on Google Code
        http://code.google.com/p/subprocdev/source/browse/doc/subprocess.rst?spec=svn2c925e935cad0166d5da85e37c742d8e7f609de5&r=2c925e935cad0166d5da85e37c742d8e7f609de5#437
    [9] subprocdev - Project Hosting on Google Code
        http://code.google.com/p/subprocdev
    [10] Python Subprocess Dev
        http://subdev.blogspot.com/

Copyright:

    This PEP is licensed under the Open Publication License; http://www.opencontent.org/openpub/.

On Tue, Sep 8, 2009 at 22:56, Benjamin Peterson <benjamin@python.org> wrote:
2009/9/7 Eric Pruitt <eric.pruitt@gmail.com>:
Hello all,
I have been working on adding asynchronous I/O to the Python subprocess module as part of my Google Summer of Code project. Now that I have finished documenting and pruning the code, I present PEP 3145 for its inclusion into the Python core code. Any and all feedback on the PEP (http://www.python.org/dev/peps/pep-3145/) is appreciated.
Hi Eric, One of the reasons you're not getting many responses is that you've not pasted the contents of the PEP in this message. Pasting them makes it really easy for people to comment on various sections.
BTW, it seems like you were trying to use reST formatting with the text PEP layout. Double backquotes only mean something in reST.
-- Regards, Benjamin
I'm bumping this PEP again in hopes of getting some feedback.

Thanks, Eric

On Tue, Sep 8, 2009 at 23:52, Eric Pruitt <eric.pruitt@gmail.com> wrote:
PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2
Abstract:
In its present form, the subprocess.Popen implementation is prone to dead-locking and blocking of the parent Python script while waiting on data from the child process.
Motivation:
A search for "python asynchronous subprocess" will turn up numerous accounts of people wanting to execute a child process and communicate with it from time to time, reading only the data that is available instead of blocking to wait for the program to produce data [1] [2] [3]. The current behavior of the subprocess module is that when a user sends or receives data via the stdin, stderr and stdout file objects, deadlocks are common and documented [4] [5]. While communicate can be used to alleviate some of the buffering issues, it will still cause the parent process to block while attempting to read data when none is available to be read from the child process.
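The blocking behavior described above is easy to reproduce. The sketch below is ours, not part of the reference implementation: the child writes a line immediately, but the parent's read() cannot return "only the data that is available"; it blocks until the pipe reaches EOF.

```python
import subprocess
import sys
import time

# The child writes 'early' at once, sleeps, then writes 'late' and exits.
child = ("import sys, time;"
         "sys.stdout.write('early\\n'); sys.stdout.flush();"
         "time.sleep(1.5);"
         "sys.stdout.write('late\\n')")
p = subprocess.Popen([sys.executable, "-c", child], stdout=subprocess.PIPE)

start = time.time()
data = p.stdout.read()       # blocks ~1.5 s even though 'early' arrived immediately
elapsed = time.time() - start
p.wait()

print(data)                  # b'early\nlate\n' on POSIX
print(elapsed > 1.0)         # True: the parent was blocked the whole time
```

The parent never gets a chance to see 'early' on its own; that is precisely the gap the PEP aims to close.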
Rationale:
There is a documented need for asynchronous, non-blocking functionality in subprocess.Popen [6] [7] [2] [3]. Inclusion of the code would improve the utility of the Python standard library, and it can be used on Unix-based and Windows builds of Python. Practically every I/O object in Python has a file-like wrapper of some sort. Sockets already act as such, and for strings there is StringIO. Popen can be made to act like a file by simply using the methods attached to the subprocess.Popen.stderr, stdout and stdin file-like objects. But when using the read and write methods of those objects, you do not have the benefit of asynchronous I/O. In the proposed solution, the wrapper wraps the asynchronous methods to mimic a file object.
Reference Implementation:
I have been maintaining a Google Code repository that contains all of my changes, including tests and documentation [9], as well as a blog detailing the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the subprocess.Popen module as well as a wrapper class for subprocess.Popen that makes it so that an executed process can take the place of a file by duplicating all of the methods and attributes that file objects have.
There are two base functions that have been added to the subprocess.Popen class: Popen.send and Popen._recv, each with two separate implementations, one for Windows and one for Unix based systems. The Windows implementation uses ctypes to access the functions needed to control pipes in the kernel 32 DLL in an asynchronous manner. On Unix based systems, the Python interface for file control serves the same purpose. The different implementations of Popen.send and Popen._recv have identical arguments to make code that uses these functions work across multiple platforms.
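On the Unix side, the "Python interface for file control" is the fcntl module. A minimal sketch of that mechanism (an illustration only, not the PEP's actual code): setting O_NONBLOCK on the pipe makes os.read() return immediately instead of waiting on the child.

```python
import fcntl
import os
import subprocess
import sys

# Child sleeps before producing output, so nothing is buffered at first.
p = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(1); print('done')"],
    stdout=subprocess.PIPE)

# Flip the pipe into non-blocking mode via fcntl (Unix only).
fd = p.stdout.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    chunk = os.read(fd, 4096)    # child has produced nothing yet
except BlockingIOError:          # EAGAIN: no data, but no blocking either
    chunk = b""

print(chunk)                     # b'' -- we did not wait for the child
p.wait()
```

This is the essence of what a non-blocking recv must do on Unix: return whatever is buffered (possibly nothing) and come back later.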
Because the Popen._recv function requires the pipe name to be passed as an argument, there exists the Popen.recv function, which selects stdout as the pipe for Popen._recv by default. Popen.recv_err selects stderr as the pipe by default. "Popen.recv" and "Popen.recv_err" are much easier to read and understand than "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..." respectively.
Since the Popen._recv function does not wait on data to be produced before returning a value, it may return empty bytes. Popen.asyncread handles this issue by returning all data read over a given time interval.
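An asyncread-style call might behave like the helper below. This is a hedged sketch under our own assumptions (asyncread itself is only proposed by the PEP; the helper name here is ours): gather everything the child produces within a fixed time window, never blocking longer than the window.

```python
import os
import select
import subprocess
import sys
import time

def read_for_interval(pipe, interval):
    """Collect all data the pipe yields within `interval` seconds."""
    deadline = time.monotonic() + interval
    chunks = []
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                         # time window exhausted
        ready, _, _ = select.select([pipe], [], [], remaining)
        if not ready:
            continue                      # no data yet; loop re-checks deadline
        data = os.read(pipe.fileno(), 4096)
        if not data:                      # EOF: child closed its end
            break
        chunks.append(data)
    return b"".join(chunks)

p = subprocess.Popen([sys.executable, "-c", "print('ping')"],
                     stdout=subprocess.PIPE)
out = read_for_interval(p.stdout, 5.0)    # returns early at EOF
p.wait()
print(out)
```

Note how an empty result is a legitimate answer here, matching the PEP's observation that a non-blocking recv "may return empty bytes".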
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to allow a process to act like a file so that there are no blocking issues that can arise from using the stdout and stdin file objects produced from a subprocess.Popen call.
References:
[1] [ python-Feature Requests-1191964 ] asynchronous Subprocess http://mail.python.org/pipermail/python-bugs-list/2006-December/036524.html
[2] Daily Life in an Ivory Basement : /feb-07/problems-with-subprocess http://ivory.idyll.org/blog/feb-07/problems-with-subprocess
[3] How can I run an external command asynchronously from Python? - Stack Overflow http://stackoverflow.com/questions/636561/how-can-i-run-an-external-command-asynchronously-from-python
[4] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation http://docs.python.org/library/subprocess.html#subprocess.Popen.wait
[5] 18.1. subprocess - Subprocess management - Python v2.6.2 documentation http://docs.python.org/library/subprocess.html#subprocess.Popen.kill
[6] Issue 1191964: asynchronous Subprocess - Python tracker http://bugs.python.org/issue1191964
[7] Module to allow Asynchronous subprocess use on Windows and Posix platforms - ActiveState Code http://code.activestate.com/recipes/440554/
[8] subprocess.rst - subprocdev - Project Hosting on Google Code http://code.google.com/p/subprocdev/source/browse/doc/subprocess.rst?spec=svn2c925e935cad0166d5da85e37c742d8e7f609de5&r=2c925e935cad0166d5da85e37c742d8e7f609de5#437
[9] subprocdev - Project Hosting on Google Code http://code.google.com/p/subprocdev
[10] Python Subprocess Dev http://subdev.blogspot.com/
Copyright:
This PEP is licensed under the Open Publication License; http://www.opencontent.org/openpub/.
On Tue, Sep 15, 2009 at 12:25:35PM -0400, Eric Pruitt wrote:
A search for "python asynchronous subprocess" will turn up numerous accounts of people
IMHO there is no need to refer to a search. It'd be enough to say "There are many people...".
kernel 32 DLL
Why not just name it kernel32.dll? Oleg. -- Oleg Broytmann http://phd.pp.ru/ phd@phd.pp.ru Programmers don't die, they just GOSUB without RETURN.
Hello, I would like to know if your approach is based on Python 2.x or 3.x. Python 3.x has new API provisions, in the I/O layer, for non-blocking I/O and it would be nice if your work could fit in that framework.
Popen can be made to act like a file by simply using the methods attached to the subprocess.Popen.stderr, stdout and stdin file-like objects. But when using the read and write methods of those objects, you do not have the benefit of asynchronous I/O.
I'm not sure I understand the latter sentence. Do you imply that, with your work, read() and write() do allow you to benefit from async I/O? If so, how? Another question: what mechanism does it use internally? Is this mechanism accessible from the outside, so that people can e.g. integrate this inside a third-party event loop (Twisted, asyncore or whatever else)? The PEP should probably outline the additional APIs a bit more precisely and formally than it currently does.
The Windows implementation uses ctypes to access the functions needed to control pipes in the kernel 32 DLL in an asynchronous manner.
Sorry for the naive question (I'm not a Windows specialist), but does the allusion to "kernel32.dll" mean that it doesn't function on 64-bit variants? Thanks for your work, Regards Antoine.
On 04:25 pm, eric.pruitt@gmail.com wrote:
I'm bumping this PEP again in hopes of getting some feedback.
Thanks, Eric
On Tue, Sep 8, 2009 at 23:52, Eric Pruitt <eric.pruitt@gmail.com> wrote:
PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2
Abstract:
In its present form, the subprocess.Popen implementation is prone to dead-locking and blocking of the parent Python script while waiting on data from the child process.
Motivation:
A search for "python asynchronous subprocess" will turn up numerous accounts of people wanting to execute a child process and communicate with it from time to time, reading only the data that is available instead of blocking to wait for the program to produce data [1] [2] [3]. The current behavior of the subprocess module is that when a user sends or receives data via the stdin, stderr and stdout file objects, deadlocks are common and documented [4] [5]. While communicate can be used to alleviate some of the buffering issues, it will still cause the parent process to block while attempting to read data when none is available to be read from the child process.
Rationale:
There is a documented need for asynchronous, non-blocking functionality in subprocess.Popen [6] [7] [2] [3]. Inclusion of the code would improve the utility of the Python standard library, and it can be used on Unix-based and Windows builds of Python. Practically every I/O object in Python has a file-like wrapper of some sort. Sockets already act as such, and for strings there is StringIO. Popen can be made to act like a file by simply using the methods attached to the subprocess.Popen.stderr, stdout and stdin file-like objects. But when using the read and write methods of those objects, you do not have the benefit of asynchronous I/O. In the proposed solution, the wrapper wraps the asynchronous methods to mimic a file object.
Reference Implementation:
I have been maintaining a Google Code repository that contains all of my changes, including tests and documentation [9], as well as a blog detailing the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the subprocess.Popen module, as well as a wrapper class for subprocess.Popen that makes it so that an executed process can take the place of a file by duplicating all of the methods and attributes that file objects have.
"Non-blocking" and "asynchronous" are actually two different things. From the rest of this PEP, I think only a non-blocking API is being introduced. I haven't looked beyond the PEP, though, so I might be missing something.
There are two base functions that have been added to the subprocess.Popen class: Popen.send and Popen._recv, each with two separate implementations, one for Windows and one for Unix based systems. The Windows implementation uses ctypes to access the functions needed to control pipes in the kernel 32 DLL in an asynchronous manner. On Unix based systems, the Python interface for file control serves the same purpose. The different implementations of Popen.send and Popen._recv have identical arguments to make code that uses these functions work across multiple platforms.
Why does the method for non-blocking read from a pipe start with an "_"? This is the convention (widely used) for a private API. The name also doesn't suggest that this is the non-blocking version of reading. Similarly, the name "send" doesn't suggest that this is the non-blocking version of writing.
Because the Popen._recv function requires the pipe name to be passed as an argument, there exists the Popen.recv function, which selects stdout as the pipe for Popen._recv by default. Popen.recv_err selects stderr as the pipe by default. "Popen.recv" and "Popen.recv_err" are much easier to read and understand than "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..." respectively.
What about reading from other file descriptors? subprocess.Popen allows arbitrary file descriptors to be used. Is there any provision here for reading and writing non-blocking from or to those?
Since the Popen._recv function does not wait on data to be produced before returning a value, it may return empty bytes. Popen.asyncread handles this issue by returning all data read over a given time interval.
Oh. Popen.asyncread? What's that? This is the first time the PEP mentions it.
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to allow a process to act like a file so that there are no blocking issues that can arise from using the stdout and stdin file objects produced from a subprocess.Popen call.
What's the ProcessIOWrapper class? And what's the asyncwrite function? Again, this is the first time it's mentioned.

So, to sum up, I think my main comment is that the PEP seems to be missing a significant portion of the details of what it's actually proposing. I suspect that this information is present in the implementation, which I have not looked at, but it probably belongs in the PEP.

Jean-Paul
On Tue, Sep 15, 2009 at 9:24 PM, <exarkun@twistedmatrix.com> wrote:
On 04:25 pm, eric.pruitt@gmail.com wrote:
I'm bumping this PEP again in hopes of getting some feedback.
This is useful, indeed. The ActiveState recipe for this has 10 votes, which is high for ActiveState (and for such a hardcore topic, FWIW).
On Tue, Sep 8, 2009 at 23:52, Eric Pruitt <eric.pruitt@gmail.com> wrote:
PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2
Abstract:
In its present form, the subprocess.Popen implementation is prone to dead-locking and blocking of the parent Python script while waiting on data from the child process.
Motivation:
A search for "python asynchronous subprocess" will turn up numerous accounts of people wanting to execute a child process and communicate with it from time to time, reading only the data that is available instead of blocking to wait for the program to produce data [1] [2] [3]. The current behavior of the subprocess module is that when a user sends or receives data via the stdin, stderr and stdout file objects, deadlocks are common and documented [4] [5]. While communicate can be used to alleviate some of the buffering issues, it will still cause the parent process to block while attempting to read data when none is available to be read from the child process.
Rationale:
There is a documented need for asynchronous, non-blocking functionality in subprocess.Popen [6] [7] [2] [3]. Inclusion of the code would improve the utility of the Python standard library, and it can be used on Unix-based and Windows builds of Python. Practically every I/O object in Python has a file-like wrapper of some sort. Sockets already act as such, and for strings there is StringIO. Popen can be made to act like a file by simply using the methods attached to the subprocess.Popen.stderr, stdout and stdin file-like objects. But when using the read and write methods of those objects, you do not have the benefit of asynchronous I/O. In the proposed solution, the wrapper wraps the asynchronous methods to mimic a file object.
Reference Implementation:
I have been maintaining a Google Code repository that contains all of my changes, including tests and documentation [9], as well as a blog detailing the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the subprocess.Popen module as well as a wrapper class for subprocess.Popen that makes it so that an executed process can take the place of a file by duplicating all of the methods and attributes that file objects have.
"Non-blocking" and "asynchronous" are actually two different things. From the rest of this PEP, I think only a non-blocking API is being introduced. I haven't looked beyond the PEP, though, so I might be missing something.
I suggest renaming http://www.python.org/dev/peps/pep-3145/ to 'Non-blocking I/O for subprocess' and continuing. IMHO this is the stage where examples of the deadlocks that occur with the current subprocess implementation are badly needed.

There are two base functions that have been added to the subprocess.Popen class: Popen.send and Popen._recv, each with two separate implementations, one for Windows and one for Unix based systems. The Windows implementation uses ctypes to access the functions needed to control pipes in the kernel 32 DLL in an asynchronous manner. On Unix based systems, the Python interface for file control serves the same purpose. The different implementations of Popen.send and Popen._recv have identical arguments to make code that uses these functions work across multiple platforms.
Why does the method for non-blocking read from a pipe start with an "_"? This is the convention (widely used) for a private API. The name also doesn't suggest that this is the non-blocking version of reading. Similarly, the name "send" doesn't suggest that this is the non-blocking version of writing.
The implementation is based on http://code.activestate.com/recipes/440554/ which illustrates the integrated functionality more clearly. _recv() is a private base function that takes stdout or stderr as a parameter. The corresponding user-level functions to read from stdout and stderr are .recv() and .recv_err(). I thought about renaming the API to .asyncread() and .asyncwrite(), but that may imply that you call the method and then the result asynchronously starts to fill some buffer, which is not the case here. Then I thought about .check_read() and .check_write(), literally meaning 'check and read' or 'check and return' for non-blocking calls if there is nothing. But then again, the poor naming convention of subprocess uses .check_output() for a blocking read until the command completes. Currently, subprocess doesn't have .read and .write methods. That may be the best option: .write(what) to pipe more stuff into the input buffer of the child process, and .read(from) where `from` is either subprocess.STDOUT or STDERR. Both functions should be marked as non-blocking in the docs and should return None if the pipe is closed.

When calling the Popen._recv function, it requires the pipe name be passed as an argument, so there exists the Popen.recv function that selects stdout as the pipe for Popen._recv by default. Popen.recv_err selects stderr as the pipe by default. "Popen.recv" and "Popen.recv_err" are much easier to read and understand than "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..." respectively.
What about reading from other file descriptors? subprocess.Popen allows arbitrary file descriptors to be used. Is there any provision here for reading and writing non-blocking from or to those?
On Windows it is WriteFile/ReadFile and PeekNamedPipe. On Linux it is select. Of course a test is needed, but why shouldn't it just work?
Since the Popen._recv function does not wait on data to be produced before returning a value, it may return empty bytes. Popen.asyncread handles this issue by returning all data read over a given time interval.
Oh. Popen.asyncread? What's that? This is the first time the PEP mentions it.
I guess that's for a blocking read with a timeout. It is roughly question number ~500 among the most popular Python questions. http://stackoverflow.com/questions/1191374/subprocess-with-timeout
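For the record, the "read with a timeout" half of that question was eventually answered in the stdlib itself: Python 3.3 added a timeout= parameter to Popen.communicate() and wait(), raising subprocess.TimeoutExpired when the child outlives the deadline. A minimal sketch:

```python
import subprocess
import sys

# Child that would run far longer than we are willing to wait.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
try:
    p.communicate(timeout=0.5)
    timed_out = False
except subprocess.TimeoutExpired:
    p.kill()            # the docs recommend kill() then communicate() to clean up
    p.communicate()
    timed_out = True

print(timed_out)        # True
```

This covers the timeout case but still gives no way to read "whatever is available so far" from a live child, which is what the PEP proposes.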
The ProcessIOWrapper class uses the asyncread and asyncwrite functions to allow a process to act like a file so that there are no blocking issues that can arise from using the stdout and stdin file objects produced from a subprocess.Popen call.
What's the ProcessIOWrapper class? And what's the asyncwrite function? Again, this is the first time it's mentioned.
Oh. That's a wrapper to access subprocess pipes with a familiar file API. It is interesting: http://code.google.com/p/subprocdev/source/browse/subprocess.py?name=python3...
So, to sum up, I think my main comment is that the PEP seems to be missing a significant portion of the details of what it's actually proposing. I suspect that this information is present in the implementation, which I have not looked at, but it probably belongs in the PEP.
Jean-Paul
Writing PEPs is definitely a job, and a hard one for developers. Too bad a good idea *and* implementation (tests needed) is put on hold because there is nobody who can help with that part. IMHO the PEP needs to expand on user stories; even though there is a significant number of cited sources, a practical summary and an illustration of the problem by examples are missing.
On Dec 7, 2012, at 5:10 PM, anatoly techtonik <techtonik@gmail.com> wrote:
What about reading from other file descriptors? subprocess.Popen allows arbitrary file descriptors to be used. Is there any provision here for reading and writing non-blocking from or to those?
On Windows it is WriteFile/ReadFile and PeekNamedPipe. On Linux it is select. Of course a test is needed, but why it should not just work?
This is exactly why the provision needs to be made explicitly. On Windows it is WriteFile and ReadFile and PeekNamedPipe - unless the handle is a socket in which case it needs to be WSARecv. Or maybe it's some other weird thing - like, maybe a mailslot - and you need to call a different API. On *nix it really shouldn't be select. select cannot wait upon a file descriptor whose value is greater than FD_SETSIZE, which means it sets a hard (and small) limit on the number of things that a process which wants to use this facility can be doing. On the other hand, if you hard-code another arbitrary limit like this into the stdlib subprocess module, it will just be another great reason why Twisted's spawnProcess is the best and everyone should use it instead, so be my guest ;-). -glyph
On Sat, Dec 8, 2012 at 8:14 PM, Glyph <glyph@twistedmatrix.com> wrote:
On Dec 7, 2012, at 5:10 PM, anatoly techtonik <techtonik@gmail.com> wrote:
What about reading from other file descriptors? subprocess.Popen allows
arbitrary file descriptors to be used. Is there any provision here for reading and writing non-blocking from or to those?
On Windows it is WriteFile/ReadFile and PeekNamedPipe. On Linux it is select. Of course a test is needed, but why it should not just work?
This is exactly why the provision needs to be made explicitly.
On Windows it is WriteFile and ReadFile and PeekNamedPipe - unless the handle is a socket in which case it needs to be WSARecv. Or maybe it's some other weird thing - like, maybe a mailslot - and you need to call a different API.
On *nix it really shouldn't be select. select cannot wait upon a file descriptor whose *value* is greater than FD_SETSIZE, which means it sets a hard (and small) limit on the number of things that a process which wants to use this facility can be doing.
Nobody should ever touch select() this decade. poll() exists everywhere that matters.
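A sketch of the poll()-based approach being advocated here (POSIX only; select.poll is not available on Windows). Unlike select(), poll() does not care how large the descriptor's numeric value is, so FD_SETSIZE never comes into play:

```python
import os
import select
import subprocess
import sys

p = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                     stdout=subprocess.PIPE)

poller = select.poll()
poller.register(p.stdout.fileno(), select.POLLIN)

out = b""
while True:
    events = poller.poll(5000)          # timeout in milliseconds
    if not events:
        break                           # timed out with no data
    data = os.read(p.stdout.fileno(), 4096)
    if not data:                        # EOF: child closed the pipe
        break
    out += data

p.wait()
print(out)
```

The same loop works unchanged no matter how many descriptors the process already has open, which is exactly the property select() lacks.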
On the other hand, if you hard-code another arbitrary limit like this into the stdlib subprocess module, it will just be another great reason why Twisted's spawnProcess is the best and everyone should use it instead, so be my guest ;-).
Is twisted's spawnProcess thread safe and async signal safe by using restricted C code for everything between the fork() and exec()? I'm not familiar enough with the twisted codebase to find things easily in it, but I'm not seeing such an extension module within twisted, and the code in http://twistedmatrix.com/trac/browser/trunk/twisted/internet/process.py certainly is not safe. Just sayin'. :) Python >= 3.2, along with the http://pypi.python.org/pypi/subprocess32/ backport for use on 2.x, get this right. -gps
On Dec 8, 2012, at 8:37 PM, Gregory P. Smith <greg@krypto.org> wrote:
Is twisted's spawnProcess thread safe and async signal safe by using restricted C code for everything between the fork() and exec()? I'm not familiar enough with the twisted codebase to find things easily in it but I'm not seeing such an extension module within twisted and the code in http://twistedmatrix.com/trac/browser/trunk/twisted/internet/process.py certainly is not safe. Just sayin'. :)
It's on the agenda: <http://twistedmatrix.com/trac/ticket/5710>. -glyph
On Sun, Dec 9, 2012 at 7:14 AM, Glyph <glyph@twistedmatrix.com> wrote:
On Dec 7, 2012, at 5:10 PM, anatoly techtonik <techtonik@gmail.com> wrote:
What about reading from other file descriptors? subprocess.Popen allows
arbitrary file descriptors to be used. Is there any provision here for reading and writing non-blocking from or to those?
On Windows it is WriteFile/ReadFile and PeekNamedPipe. On Linux it is select. Of course a test is needed, but why it should not just work?
This is exactly why the provision needs to be made explicitly.
On Windows it is WriteFile and ReadFile and PeekNamedPipe - unless the handle is a socket in which case it needs to be WSARecv. Or maybe it's some other weird thing - like, maybe a mailslot - and you need to call a different API.
IIRC on Windows there is no socket descriptor that can be used as a file descriptor. It seems reasonable to limit the implementation to standard file descriptors on this platform.

On *nix it really shouldn't be select. select cannot wait upon a file descriptor whose *value* is greater than FD_SETSIZE, which means it sets a hard (and small) limit on the number of things that a process which wants to use this facility can be doing.
I didn't know that. Should a note be added to http://docs.python.org/2/library/select ? I also thought that poll acts like, well, a polling function - eating 100% CPU while looping over inputs over and over checking if there is something to react to.
On the other hand, if you hard-code another arbitrary limit like this into the stdlib subprocess module, it will just be another great reason why Twisted's spawnProcess is the best and everyone should use it instead, so be my guest ;-).
spawnProcess requires a reactor. This PEP is an alternative for the proponents of green energy. =)
On Dec 19, 2012, at 2:14 PM, anatoly techtonik <techtonik@gmail.com> wrote:
IIRC on Windows there is no socket descriptor that can be used as a file descriptor. Seems reasonable to limit the implementation to standard file descriptors in this platform.
Via the documentation of ReadFile: <http://msdn.microsoft.com/en-us/library/windows/desktop/aa365467(v=vs.85).aspx>
hFile [in]
A handle to the device (for example, a file, file stream, physical disk, volume, console buffer, tape drive, *socket*, communications resource, mailslot, or pipe). (...) For asynchronous read operations, hFile can be any handle that is opened with the FILE_FLAG_OVERLAPPED flag by the CreateFile function, or a *socket handle returned by the socket or accept function*.
(emphasis mine).
So, you can treat sockets as regular files in some contexts, and not in others. Of course there are other reasons to use WSARecv instead of ReadFile sometimes, which is why there are multiple functions.
I didn't know that. Should a note be added to http://docs.python.org/2/library/select ?
The note that should be added there is simply "you should know how the select system call works in C if you want to use this module".
I also thought that poll acts like, well, a polling function, eating 100% CPU while looping over its inputs again and again, checking whether there is something to react to.
Nope. Admittedly, the naming is slightly misleading.
spawnProcess requires a reactor. This PEP is an alternative for the proponents of green energy. =)
Do you know what happens when you take something that is supposed to be happening inside a reactor, and then move it outside a reactor? It's not called "green energy", it's called "a bomb" ;-). -glyph
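As noted above, poll() does not busy-wait: it sleeps in the kernel until a descriptor becomes ready or the timeout expires, and it has no FD_SETSIZE ceiling on descriptor *values*. A small POSIX-only demonstration (select.poll is unavailable on Windows):

```python
import os
import select

r, w = os.pipe()
poller = select.poll()
poller.register(r, select.POLLIN)

# No data yet: poll() sleeps for the 50 ms timeout (CPU-idle) and then
# returns an empty event list -- it does not spin.
assert poller.poll(50) == []

os.write(w, b"x")
# Data available: poll() returns immediately with a POLLIN event for r.
events = poller.poll(50)
os.close(w)
```

The timeout argument is in milliseconds, one of the naming/unit quirks that makes the interface feel "slightly misleading".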
On Thu, Dec 20, 2012 at 3:47 AM, Glyph <glyph@twistedmatrix.com> wrote:
So, you can treat sockets as regular files in some contexts, and not in others. Of course there are other reasons to use WSARecv instead of ReadFile sometimes, which is why there are multiple functions.
handle != descriptor, and the Python documentation explicitly says that a socket's file descriptor is of limited use on Windows, so it's OK to continue not supporting socket descriptors for pipes. http://docs.python.org/2/library/socket.html#socket.socket.fileno
The note that should be added there is simply "you should know how the select system call works in C if you want to use this module".
Why spread FUD if it is possible to define a good entry point for those who want to learn but don't have enough time? Why not say directly that the select interface is outdated?
Do you know what happens when you take something that is supposed to be happening *inside* a reactor, and then move it *outside* a reactor? It's not called "green energy", it's called "a bomb" ;-).
The biggest complaint about nuclear physics is that it should have gone 3D long ago if anyone was to understand what's going on. =) I think Twisted needs to organize a competition for the best visualization of its underlying concepts. It would help people grasp the concepts and the different problems much faster (as well as gain the ability to compare different reactors).
On Dec 19, 2012, at 7:46 PM, anatoly techtonik <techtonik@gmail.com> wrote:
Why spread FUD if it is possible to define a good entry point for those who want to learn but don't have enough time? Why not say directly that the select interface is outdated?
It's not FUD. If you know how select() works in C, you may well want to call it. It's the most portable multiplexing API, although it has a number of limitations. Really, what most users in this situation ought to be using is Twisted, but it seems there is not sufficient interest to bundle Twisted's core in the stdlib. However, the thing Guido is working on lately may be interoperable enough with Twisted that you can upgrade to it more easily in future versions of Python, so one day it may be reasonable to say select is outdated. (Maybe not though. It's a good thing nobody told me that select was deprecated in favor of asyncore.)
The biggest complaint about nuclear physics is that it should have gone 3D long ago if anyone was to understand what's going on. =) I think Twisted needs to organize a competition for the best visualization of its underlying concepts. It would help people grasp the concepts and the different problems much faster (as well as gain the ability to compare different reactors).
I would love for someone to do this, of course, but now we're _really_ off topic. -glyph
I'm really not sure what this PEP is trying to get at, given that it contains no examples and sounds from the descriptions to be adding a complicated API on top of something that already, IMNSHO, has too much in it (subprocess.Popen).

Regardless, any user can use the stdout/err/in file objects with their own code that handles them asynchronously (yes, that can be painful, but that is what is required for _any_ socket or pipe I/O you don't want to block on).

It *sounds* to me like this entire PEP could be written and released as a third-party module on PyPI that offers a subprocess.Popen subclass adding some more convenient non-blocking APIs. That's where I'd start if I were interested in this as a future feature. -gps

On Fri, Dec 7, 2012 at 5:10 PM, anatoly techtonik <techtonik@gmail.com> wrote:
On Tue, Sep 15, 2009 at 9:24 PM, <exarkun@twistedmatrix.com> wrote:
On 04:25 pm, eric.pruitt@gmail.com wrote:
I'm bumping this PEP again in hopes of getting some feedback.
This is useful, indeed. The ActiveState recipe for this has 10 votes, which is high for ActiveState (and such a hardcore topic, FWIW).
On Tue, Sep 8, 2009 at 23:52, Eric Pruitt <eric.pruitt@gmail.com> wrote:
PEP: 3145
Title: Asynchronous I/O For subprocess.Popen
Author: (James) Eric Pruitt, Charles R. McCreary, Josiah Carlson
Type: Standards Track
Content-Type: text/plain
Created: 04-Aug-2009
Python-Version: 3.2
Abstract:
In its present form, the subprocess.Popen implementation is prone to dead-locking and blocking of the parent Python script while waiting on data from the child process.
Motivation:
A search for "python asynchronous subprocess" will turn up numerous accounts of people wanting to execute a child process and communicate with it from time to time, reading only the data that is available instead of blocking to wait for the program to produce data [1] [2] [3]. The current behavior of the subprocess module is that when a user sends or receives data via the stdin, stderr and stdout file objects, deadlocks are common and documented [4] [5]. While communicate can be used to alleviate some of the buffering issues, it will still cause the parent process to block while attempting to read data when none is available to be read from the child process.
Rationale:
There is a documented need for asynchronous, non-blocking functionality in subprocess.Popen [6] [7] [2] [3]. Inclusion of the code would improve the utility of the Python standard library that can be used on Unix based and Windows builds of Python. Practically every I/O object in Python has a file-like wrapper of some sort. Sockets already act as such and for strings there is StringIO. Popen can be made to act like a file by simply using the methods attached to the subprocess.Popen.stderr, stdout and stdin file-like objects. But when using the read and write methods of those objects, you do not have the benefit of asynchronous I/O. In the proposed solution the wrapper wraps the asynchronous methods to mimic a file object.
Reference Implementation:
I have been maintaining a Google Code repository that contains all of my changes including tests and documentation [9] as well as a blog detailing the problems I have come across in the development process [10].
I have been working on implementing non-blocking asynchronous I/O in the subprocess.Popen module as well as a wrapper class for subprocess.Popen that makes it so that an executed process can take the place of a file by duplicating all of the methods and attributes that file objects have.
"Non-blocking" and "asynchronous" are actually two different things. From the rest of this PEP, I think only a non-blocking API is being introduced. I haven't looked beyond the PEP, though, so I might be missing something.
I suggest renaming http://www.python.org/dev/peps/pep-3145/ to 'Non-blocking I/O for subprocess' and continuing. IMHO this is the stage where examples of the deadlocks that occur with the current subprocess implementation are badly needed.
There are two base functions that have been added to the
subprocess.Popen class: Popen.send and Popen._recv, each with two separate implementations, one for Windows and one for Unix based systems. The Windows implementation uses ctypes to access the functions needed to control pipes in the kernel 32 DLL in an asynchronous manner. On Unix based systems, the Python interface for file control serves the same purpose. The different implementations of Popen.send and Popen._recv have identical arguments to make code that uses these functions work across multiple platforms.
Why does the method for non-blocking read from a pipe start with an "_"? This is the convention (widely used) for a private API. The name also doesn't suggest that this is the non-blocking version of reading. Similarly, the name "send" doesn't suggest that this is the non-blocking version of writing.
The implementation is based on http://code.activestate.com/recipes/440554/ which more clearly illustrates the integrated functionality.
_recv() is a private base function, which takes stdout or stderr as a parameter. The corresponding user-level functions to read from stdout and stderr are .recv() and .recv_err().
I thought about renaming the API to .asyncread() and .asyncwrite(), but that may suggest that you call the method and the result then asynchronously starts to fill some buffer, which is not the case here.
Then I thought about .check_read() and .check_write(), literally meaning 'check and read' (or 'check and return' for non-blocking calls when there is nothing). But then again, the subprocess module's poor naming convention already uses .check_output() for a blocking read until the command completes.
Currently, subprocess doesn't have .read and .write methods. That may be the best option: .write(what) to pipe more stuff into the input buffer of the child process, and .read(from) where `from` is either subprocess.STDOUT or STDERR. Both functions should be marked as non-blocking in the docs, returning None if the pipe is closed.
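The API shape proposed here could look roughly like the following sketch. Everything in it (the class name, the `which` parameter taking plain strings, returning b"" for "nothing yet" versus None for "pipe closed") is a hypothetical illustration, not the PEP's code, and it is POSIX-only (fcntl):

```python
import errno
import fcntl
import os
import subprocess
import sys

class NonBlockingPopen(subprocess.Popen):
    """Hypothetical sketch: subprocess.Popen with a non-blocking read().
    b"" means no data is available yet; None means the pipe is closed."""

    @staticmethod
    def _set_nonblocking(fileobj):
        # Flip the O_NONBLOCK flag on the pipe's file descriptor.
        fd = fileobj.fileno()
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

    def read(self, which="stdout", size=4096):
        pipe = self.stdout if which == "stdout" else self.stderr
        self._set_nonblocking(pipe)
        try:
            data = os.read(pipe.fileno(), size)
        except OSError as exc:
            if exc.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                return b""  # nothing available right now
            raise
        return data if data else None  # b"" from os.read means EOF
```

A Windows version of the same interface would need PeekNamedPipe/ReadFile via ctypes, as the PEP describes.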
When calling the Popen._recv function, it requires the pipe name be passed as an argument, so there exists the Popen.recv function that selects stdout as the pipe for Popen._recv by default. Popen.recv_err selects stderr as the pipe by default. "Popen.recv" and "Popen.recv_err" are much easier to read and understand than "Popen._recv('stdout' ..." and "Popen._recv('stderr' ..." respectively.
What about reading from other file descriptors? subprocess.Popen allows arbitrary file descriptors to be used. Is there any provision here for reading and writing non-blocking from or to those?
On Windows it is WriteFile/ReadFile and PeekNamedPipe. On Linux it is select. Of course a test is needed, but why shouldn't it just work?
Since the Popen._recv function does not wait on data to be produced
before returning a value, it may return empty bytes. Popen.asyncread handles this issue by returning all data read over a given time interval.
Oh. Popen.asyncread? What's that? This is the first time the PEP mentions it.
I guess that's for a blocking read with a timeout. Among the most popular questions about Python, it is question number ~500: http://stackoverflow.com/questions/1191374/subprocess-with-timeout
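As described, asyncread amounts to "collect whatever arrives within a time window". A hedged sketch of such a helper; the name `read_for` is invented here, and this uses POSIX select rather than the PEP's actual code:

```python
import os
import select
import subprocess
import sys
import time

def read_for(pipe, duration, chunk=4096):
    """Accumulate everything readable on `pipe` within `duration` seconds,
    stopping early on EOF."""
    deadline = time.monotonic() + duration
    buf = b""
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        ready, _, _ = select.select([pipe], [], [], remaining)
        if not ready:
            break  # window elapsed with no new data
        data = os.read(pipe.fileno(), chunk)
        if not data:
            break  # EOF: the child closed its end of the pipe
        buf += data
    return buf

# Demo: a child that emits output in two bursts.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys, time; print('a'); sys.stdout.flush(); "
     "time.sleep(0.2); print('b')"],
    stdout=subprocess.PIPE,
)
out = read_for(proc.stdout, duration=5.0)
proc.wait()
```

Unlike communicate(), this returns control to the caller when the window closes even if the child never exits.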
The ProcessIOWrapper class uses the asyncread and asyncwrite
functions to allow a process to act like a file so that there are no blocking issues that can arise from using the stdout and stdin file objects produced from a subprocess.Popen call.
What's the ProcessIOWrapper class? And what's the asyncwrite function? Again, this is the first time it's mentioned.
Oh. That's a wrapper to access subprocess pipes with a familiar file API. It is interesting:
http://code.google.com/p/subprocdev/source/browse/subprocess.py?name=python3...
So, to sum up, I think my main comment is that the PEP seems to be missing a significant portion of the details of what it's actually proposing. I suspect that this information is present in the implementation, which I have not looked at, but it probably belongs in the PEP.
Jean-Paul
Writing PEPs is definitely a job, and a hard one for developers. Too bad a good idea *and* implementation (tests needed) are put on hold because there is nobody who can help with that part.
IMHO the PEP needs to expand on user stories: even though there is a significant amount of cited sources, a practical summary and an illustration of the problem by examples are missing.
_______________________________________________ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/greg%40krypto.org
On Sun, Dec 9, 2012 at 7:17 AM, Gregory P. Smith <greg@krypto.org> wrote:
Regardless, any user can use the stdout/err/in file objects with their own code that handles them asynchronously (yes that can be painful but that is what is required for _any_ socket or pipe I/O you don't want to block on).
And how does one use stdout/stderr/stdin asynchronously in a cross-platform manner? IIUC the problem is that every read is blocking.
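One portable answer, and the approach the thread keeps hinting at: a background thread per pipe performs the blocking reads and feeds a queue that the parent can poll without blocking. A sketch with invented names (`start_reader` is not a stdlib API) that works on both Windows and POSIX:

```python
import queue
import subprocess
import sys
import threading

def start_reader(pipe):
    """Drain `pipe` on a daemon thread; returns a queue that yields lines,
    then None once the pipe closes."""
    q = queue.Queue()

    def pump():
        for line in iter(pipe.readline, b""):
            q.put(line)
        q.put(None)  # sentinel: EOF

    threading.Thread(target=pump, daemon=True).start()
    return q

proc = subprocess.Popen(
    [sys.executable, "-c", "print('line1')"],
    stdout=subprocess.PIPE,
)
lines = start_reader(proc.stdout)
```

The parent then calls lines.get(timeout=...) or lines.get_nowait() whenever it wants to check for output, never blocking on the pipe itself.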
It *sounds* to me like this entire PEP could be written and released as a third party module on PyPI that offers a subprocess.Popen subclass adding some more convenient non-blocking APIs. That's where I'd start if I were interested in this as a future feature.
I've rewritten the PEP based on how I understand the code. I don't know how to update it or how to comply with the open documentation license, so I just attach it and add the PEPs list to CC. I too have a feeling that the PEP should be stripped of the additional high-level API until the low-level functionality is well understood and accepted. -- anatoly t.
You cannot rewrite an existing PEP if you are not one of the original owners, nor can you add yourself as an author to a PEP without permission from the original authors.

And please do not CC the peps mailing list on discussions. It should only be used to mail in new PEPs or acceptable patches to PEPs.

On Wed, Dec 19, 2012 at 5:20 PM, anatoly techtonik <techtonik@gmail.com> wrote:
On Thu, Dec 20, 2012 at 12:18 PM, Brett Cannon <brett@python.org> wrote:
And please do not CC the peps mailing list on discussions. It should only be used to mail in new PEPs or acceptable patches to PEPs.
PEP 1 should perhaps be clarified if the above is the case. Currently, PEP 1 says all PEP-related e-mail should be sent there:

"The PEP editors assign PEP numbers and change their status. Please send all PEP-related email to <peps@python.org> (no cross-posting please). Also see PEP Editor Responsibilities & Workflow below."

as well as:

"A PEP editor must subscribe to the <peps@python.org> list. All PEP-related correspondence should be sent (or CC'd) to <peps@python.org> (but please do not cross-post!)."

(Incidentally, the statement not to cross-post seems contradictory if a PEP-related e-mail is also sent to python-dev, for example.) --Chris
On Thu, Dec 20, 2012 at 3:55 PM, Chris Jerdonek <chris.jerdonek@gmail.com> wrote:
(Incidentally, the statement not to cross-post seems contradictory if a PEP-related e-mail is also sent to python-dev, for example.)
But it very clearly states to NOT cross-post which is exactly what Anatoly did and that is what I take issue with the most. I personally don't see any confusion with the wording. It clearly states that if you are a PEP author you should mail the peps editors and NOT cross-post. If you are an editor, make sure any emailing you do with an individual CCs the list but do NOT cross-post. -Brett
On Thu, Dec 20, 2012 at 1:12 PM, Brett Cannon <brett@python.org> wrote:
But it very clearly states to NOT cross-post which is exactly what Anatoly did and that is what I take issue with the most. I personally don't see any confusion with the wording. It clearly states that if you are a PEP author you should mail the peps editors and NOT cross-post. If you are an editor, make sure any emailing you do with an individual CCs the list but do NOT cross-post.
I don't disagree that he shouldn't have cross-posted. I was just pointing out that the language should be clarified. What's confusing is that the current language implies that one shouldn't send any PEP-related e-mails to any mailing list other than peps@. In particular, how can one discuss PEPs on python-dev or python-ideas without violating that language (e.g. this e-mail which is related to PEP 1)? It is probably just a matter of clarifying what "PEP-related" means. --Chris
On Thu, Dec 20, 2012 at 7:35 PM, Chris Jerdonek <chris.jerdonek@gmail.com> wrote:
I don't disagree that he shouldn't have cross-posted. I was just pointing out that the language should be clarified. What's confusing is that the current language implies that one shouldn't send any PEP-related e-mails to any mailing list other than peps@. In particular, how can one discuss PEPs on python-dev or python-ideas without violating that language (e.g. this e-mail which is related to PEP 1)? It is probably just a matter of clarifying what "PEP-related" means.
I'm just not seeing the confusion, sorry. And we have never really had any confusion over this wording before. If you want to send me a patch to make the wording more clear then please go ahead and I will consider it, but I'm not worried enough about it to try to come up with some rewording myself.
On Fri, Dec 21, 2012 at 6:46 AM, Brett Cannon <brett@python.org> wrote:
I'm just not seeing the confusion, sorry. And we have never really had any confusion over this wording before. If you want to send me a patch to make the wording more clear then please go ahead and I will consider it, but I'm not worried enough about it to try to come up with some rewording myself.
I uploaded a proposed patch to this issue: http://bugs.python.org/issue16746 --Chris
What should I do, given that Eric lost interest after his GSoC project for the PSF appeared useless to the python-dev community? Should I rewrite the proposal from scratch?

On Thu, Dec 20, 2012 at 11:18 PM, Brett Cannon <brett@python.org> wrote:
You cannot rewrite an existing PEP if you are not one of the original owners, nor can you add yourself as an author to a PEP without permission from the original authors.
And please do not CC the peps mailing list on discussions. It should only be used to mail in new PEPs or acceptable patches to PEPs.
On Wed, Dec 19, 2012 at 5:20 PM, anatoly techtonik <techtonik@gmail.com>wrote:
On Sun, Dec 9, 2012 at 7:17 AM, Gregory P. Smith <greg@krypto.org> wrote:
I'm really not sure what this PEP is trying to get at given that it contains no examples and sounds from the descriptions to be adding a complicated api on top of something that already, IMNSHO, has too much it (subprocess.Popen).
Regardless, any user can use the stdout/err/in file objects with their own code that handles them asynchronously (yes that can be painful but that is what is required for _any_ socket or pipe I/O you don't want to block on).
And how does one use stdout/stderr/stdin asynchronously in a cross-platform manner? IIUC the problem is that every read is blocking.
It *sounds* to me like this entire PEP could be written and released as a third party module on PyPI that offers a subprocess.Popen subclass adding some more convenient non-blocking APIs. That's where I'd start if I were interested in this as a future feature.
I've rewritten the PEP based on how I understand the code. I don't know how to update it or how to comply with the open documentation license, so I just attach it and add the PEPs list to CC. I also have a feeling that the PEP should be stripped of additional high-level API until the low-level functionality is well understood and accepted.
-- anatoly t.
Hey, Anatoly, you are free to modify the PEP and code. I do not have any plans to work on this right now. Eric
On Mon, Dec 24, 2012 at 04:42:20PM +0300, anatoly techtonik wrote:
On Mon, Dec 24, 2012 at 7:42 AM, anatoly techtonik <techtonik@gmail.com> wrote:
Before you attempt that, start by trying to have a better attitude towards people's contributions around here.
On Dec 24, 2012 11:44 PM, "Brian Curtin" <brian@python.org> wrote:
Ignoring the extremely negative and counter-productive attitude (which if not changed could quite easily lead to no PEP editors wanting to work with you, Anatoly, and thus blocking your changes from being accepted), you are also ignoring the two other authors on that PEP, who also need to agree to adding you to the PEP as an author and your general direction/approach.
participants (10)
- anatoly techtonik
- Antoine Pitrou
- Brett Cannon
- Brian Curtin
- Chris Jerdonek
- Eric Pruitt
- exarkun@twistedmatrix.com
- Glyph
- Gregory P. Smith
- Oleg Broytmann