Adding a subprocess.CompletedProcess class
The subprocess module provides some nice tools to control the details of running a process, but it's still rather awkward for common use cases where you want to execute a command in one go.

* There are three high-level functions: call, check_call and check_output, which all do very similar things with different return/raise behaviours
* Their naming is not very clear (check_output doesn't check the output, it checks the return code and captures the output)
* You can't use any of them if you want stdout and stderr separately.
* You can get stdout and returncode from check_output, but it's not exactly obvious:

    try:
        stdout = check_output(...)
        returncode = 0
    except CalledProcessError as e:
        stdout = e.output
        returncode = e.returncode

I think that what these are lacking is a good way to represent a process that has already finished (as opposed to Popen, which is mostly designed to handle a running process). So I would:

1. Add a CompletedProcess class:
   * Attributes stdout and stderr are bytes if the relevant stream was piped, None otherwise, like the return value of Popen.communicate()
   * Attribute returncode is the exit status
   * ? Attribute cmd is the list of arguments the process ran with (not sure if this should be there or not)
   * Method cp.check_returncode() raises CalledProcessError if returncode != 0, inspired by requests' Response.raise_for_status()
2. Add a run() function - like call/check_call/check_output, but returns a CompletedProcess instance
3. Deprecate call/check_call/check_output, but leave them around indefinitely, since lots of existing code relies on them.

Thanks,
Thomas
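Sketched as code, the proposal amounts to something like the following. Neither CompletedProcess nor run() exists in subprocess at this point, so every name here is hypothetical, taken from the proposal above:

```python
import subprocess

class CompletedProcess:
    """Sketch of the proposed class; nothing here is in subprocess yet."""
    def __init__(self, args, returncode, stdout=None, stderr=None):
        self.args = args              # open question: 'args' or 'cmd'?
        self.returncode = returncode  # the exit status
        self.stdout = stdout          # bytes if the stream was piped, else None
        self.stderr = stderr

    def check_returncode(self):
        # Modelled on requests' Response.raise_for_status()
        if self.returncode != 0:
            raise subprocess.CalledProcessError(
                self.returncode, self.args, output=self.stdout)

def run(args, **kwargs):
    """Like call/check_call/check_output, but returns a CompletedProcess."""
    proc = subprocess.Popen(args, **kwargs)
    stdout, stderr = proc.communicate()
    return CompletedProcess(args, proc.returncode, stdout, stderr)
```

With this, the awkward try/except dance above collapses to `cp = run(...); cp.stdout, cp.returncode`.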
Hi! On Tue, Jan 27, 2015 at 04:43:31PM -0800, Thomas Kluyver <thomas@kluyver.me.uk> wrote:
The subprocess module provides some nice tools to control the details of running a process, but it's still rather awkward for common use cases where you want to execute a command in one go.
Have you ever looked at wrappers like these?

https://executor.readthedocs.org/en/latest/
https://sarge.readthedocs.org/en/latest/
https://github.com/kennethreitz/envoy
https://amoffat.github.io/sh/

Oleg.
--
Oleg Broytman http://phdru.name/ phd@phdru.name
Programmers don't die, they just GOSUB without RETURN.
On 27 January 2015 at 16:57, Oleg Broytman <phd@phdru.name> wrote:
Have you ever looked at wrappers like these:
https://executor.readthedocs.org/en/latest/
https://sarge.readthedocs.org/en/latest/
https://github.com/kennethreitz/envoy
https://amoffat.github.io/sh/
I was aware of three of them (sarge, envoy, sh). But the friction isn't enough to push me to add another dependency: I invariably use subprocess and put up with writing somewhat awkward code.

Envoy and sarge appear to implement something very similar to my proposal, but as part of a more ambitious goal, namely shell-style pipelines handled by Python.

Thomas
On Tue, Jan 27, 2015 at 05:13:45PM -0800, Thomas Kluyver <thomas@kluyver.me.uk> wrote:
On 27 January 2015 at 16:57, Oleg Broytman <phd@phdru.name> wrote:
Have you ever looked at wrappers like these:
https://executor.readthedocs.org/en/latest/
https://sarge.readthedocs.org/en/latest/
https://github.com/kennethreitz/envoy
https://amoffat.github.io/sh/
I was aware of three of them (sarge, envoy, sh).
But the friction isn't enough to push me to add another dependency: I invariably use subprocess and put up with writing somewhat awkward code.
The worst outcome of included batteries: people are always trying to push all possible batteries into stdlib. :-(
Envoy and sarge appear to implement something very similar to my proposal, but as part of a more ambitious goal, namely shell-style pipelines handled by Python.
Something like these?

https://pypi.python.org/pypi/iterpipes
https://github.com/kelleyk/py3k-iterpipes
https://github.com/airekans/Pypiep
http://plumbum.readthedocs.org/en/latest/
https://pypi.python.org/pypi/popen
https://pypi.python.org/pypi/shell.py
https://seveas.github.io/whelk/
Thomas
Oleg.
--
Oleg Broytman http://phdru.name/ phd@phdru.name
Programmers don't die, they just GOSUB without RETURN.
On 27 January 2015 at 17:34, Oleg Broytman <phd@phdru.name> wrote:
The worst outcome of included batteries: people are always trying to push all possible batteries into stdlib. :-(
Or perhaps the problem is that people like me keep using lower quality batteries simply because those are the ones included. subprocess is at that point where it's good enough that I don't want to learn and depend on something else, but bad enough that I don't enjoy using it. None of the wrappers you linked to before have gained the kind of traction that requests has for HTTP, and I'm sceptical that they ever would.
Something like these?
https://pypi.python.org/pypi/iterpipes
https://github.com/kelleyk/py3k-iterpipes
https://github.com/airekans/Pypiep
http://plumbum.readthedocs.org/en/latest/
https://pypi.python.org/pypi/popen
https://pypi.python.org/pypi/shell.py
https://seveas.github.io/whelk/
To be clear, this is *not* what I'm suggesting go into subprocess. That would make a much bigger, more complex battery.

Thomas
On Tue Jan 27 2015 at 4:44:32 PM Thomas Kluyver <thomas@kluyver.me.uk> wrote:
The subprocess module provides some nice tools to control the details of running a process, but it's still rather awkward for common use cases where you want to execute a command in one go.
* There are three high-level functions: call, check_call and check_output, which all do very similar things with different return/raise behaviours
* Their naming is not very clear (check_output doesn't check the output, it checks the return code and captures the output)
* You can't use any of them if you want stdout and stderr separately.
* You can get stdout and returncode from check_output, but it's not exactly obvious:

    try:
        stdout = check_output(...)
        returncode = 0
    except CalledProcessError as e:
        stdout = e.output
        returncode = e.returncode
I think that what these are lacking is a good way to represent a process that has already finished (as opposed to Popen, which is mostly designed to handle a running process). So I would:
1. Add a CompletedProcess class:
   * Attributes stdout and stderr are bytes if the relevant stream was piped, None otherwise, like the return value of Popen.communicate()
   * Attribute returncode is the exit status
   * ? Attribute cmd is the list of arguments the process ran with (not sure if this should be there or not)
   * Method cp.check_returncode() raises CalledProcessError if returncode != 0, inspired by requests' Response.raise_for_status()
2. Add a run() function - like call/check_call/check_output, but returns a CompletedProcess instance
I like #1 and #2 here.

3. Deprecate call/check_call/check_output, but leave them around indefinitely, since lots of existing code relies on them.
We need to keep those. They are too widely used and are the long term stable API for 2.7. They are useful for many simple cases which they were designed for.

-gps
On 28 January 2015 at 17:34, Gregory P. Smith <greg@krypto.org> wrote:
On Tue Jan 27 2015 at 4:44:32 PM Thomas Kluyver <thomas@kluyver.me.uk> wrote:
The subprocess module provides some nice tools to control the details of running a process, but it's still rather awkward for common use cases where you want to execute a command in one go.
* There are three high-level functions: call, check_call and check_output, which all do very similar things with different return/raise behaviours
* Their naming is not very clear (check_output doesn't check the output, it checks the return code and captures the output)
* You can't use any of them if you want stdout and stderr separately.
* You can get stdout and returncode from check_output, but it's not exactly obvious:

    try:
        stdout = check_output(...)
        returncode = 0
    except CalledProcessError as e:
        stdout = e.output
        returncode = e.returncode
I think that what these are lacking is a good way to represent a process that has already finished (as opposed to Popen, which is mostly designed to handle a running process). So I would:
1. Add a CompletedProcess class:
   * Attributes stdout and stderr are bytes if the relevant stream was piped, None otherwise, like the return value of Popen.communicate()
   * Attribute returncode is the exit status
   * ? Attribute cmd is the list of arguments the process ran with (not sure if this should be there or not)
   * Method cp.check_returncode() raises CalledProcessError if returncode != 0, inspired by requests' Response.raise_for_status()
2. Add a run() function - like call/check_call/check_output, but returns a CompletedProcess instance
I like #1 and #2 here.
3. Deprecate call/check_call/check_output, but leave them around indefinitely, since lots of existing code relies on them.
We need to keep those. They are too widely used and are the long term stable API for 2.7. They are useful for many simple cases which they were designed for.
I'm with Greg here - the proposed "run()" API sounds like a nice improvement to me, but I don't think it makes sense to deprecate the existing use case specific variants. In particular, check_call() and check_output() aren't easily replaced with equivalent expressions based on run() since you need to make a separate call to cp.check_returncode().

Cheers,
Nick.
--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
On 28 January 2015 at 00:36, Nick Coghlan <ncoghlan@gmail.com> wrote:
I'm with Greg here - the proposed "run()" API sounds like a nice improvement to me, but I don't think it makes sense to deprecate the existing use case specific variants. In particular, check_call() and check_output() aren't easily replaced with equivalent expressions based on run() since you need to make a separate call to cp.check_returncode().
Good point. Perhaps it would be more useful for run() to take a kwarg check_returncode, which if True would cause it to raise on a non-zero exit code? That way, any of the function trio could be replaced by use of run().

Barry:

Note that in Python 3, if universal_newlines=True is given, the output will be a str, not bytes.

Ugh, yes. I would follow that for consistency, though I'd love to come up with a way to clean that up too. The fact that 'decode the output' is spelled 'universal_newlines' is one of my least favourite things about subprocess.

We need to keep those. They are too widely used and are the long term stable API for 2.7. They are useful for many simple cases which they were designed for.

Agreed that this seems like a nice improvement, but that we need to keep the old APIs around probably forever.

Agreed. I would however like to de-emphasise them in the docs.

It seems like there's some level of interest in this; shall I work on a patch?

Thanks,
Thomas
On 28 January 2015 at 10:02, Ethan Furman <ethan@stoneleaf.us> wrote:
It seems like there's some level of interest in this; shall I work on a patch?
That would be great!
First pass at a patch attached. I also ended up adding stderr attributes to CalledProcessError and TimeoutExpired, so information is not thrown away.

Things I wasn't sure about:

1. Should stdout/stderr be captured by default? I opted not to, for consistency with Popen and with shells.
2. I gave run() a check_returncode parameter, but it feels quite a long name for a parameter. Is 'check' clear enough to use as the parameter name?
3. Popen has an 'args' attribute, while CalledProcessError and TimeoutExpired have 'cmd'. CompletedProcess sits between those cases, so which name should it use? For now, it's args.

Thomas
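For illustration, here is how a check_returncode flag lets one function subsume the existing trio. This is a sketch, not the patch itself: the function body and the parameter name (still under discussion above) are assumptions:

```python
import subprocess

def run(args, check_returncode=False, **kwargs):
    # Sketch of the patched run() with the proposed check_returncode
    # flag (the final name may end up shorter, e.g. 'check').
    proc = subprocess.Popen(args, **kwargs)
    stdout, stderr = proc.communicate()
    if check_returncode and proc.returncode != 0:
        raise subprocess.CalledProcessError(
            proc.returncode, args, output=stdout)
    return proc.returncode, stdout, stderr

# Rough equivalents of the existing trio:
# call(args)         -> run(args)[0]
# check_call(args)   -> run(args, check_returncode=True)[0]
# check_output(args) -> run(args, check_returncode=True,
#                           stdout=subprocess.PIPE)[1]
```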
On 01/28/2015 12:27 PM, Thomas Kluyver wrote:
On 28 January 2015 at 10:02, Ethan Furman <ethan@stoneleaf.us> wrote:
It seems like there's some level of interest in this; shall I work on a patch?
That would be great!
First pass at a patch attached. I also ended up adding stderr attributes to CalledProcessError and TimeoutExpired, so information is not thrown away.
Things I wasn't sure about:
1. Should stdout/stderr be captured by default? I opted not to, for consistency with Popen and with shells.
2. I gave run() a check_returncode parameter, but it feels quite a long name for a parameter. Is 'check' clear enough to use as the parameter name?
3. Popen has an 'args' attribute, while CalledProcessError and TimeoutExpired have 'cmd'. CompletedProcess sits between those cases, so which name should it use? For now, it's args.
At this point, create an issue on the bug tracker and attach your patch there. Much easier to keep track of that way.

--
~Ethan~
On 28 January 2015 at 12:35, Ethan Furman <ethan@stoneleaf.us> wrote:
At this point, create an issue on the bug tracker and attach your patch there. Much easier to keep track of that way.
Done, thanks: http://bugs.python.org/issue23342
On Jan 28, 2015, at 09:43 AM, Thomas Kluyver wrote:
Ugh, yes. I would follow that for consistency, though I'd love to come up with a way to clean that up too. The fact that 'decode the output' is spelled 'universal_newlines' is one of my least favourite things about subprocess.
Not me; it's one of those sekrits that ensures Python 3 porters job security. ;)

Cheers,
-Barry
On 29 January 2015 at 08:13, Barry Warsaw <barry@python.org> wrote:
On Jan 28, 2015, at 09:43 AM, Thomas Kluyver wrote:
Ugh, yes. I would follow that for consistency, though I'd love to come up with a way to clean that up too. The fact that 'decode the output' is spelled 'universal_newlines' is one of my least favourite things about subprocess.
Not me; it's one of those sekrits that ensures Python 3 porters job security. ;)
I still suspect we should be offering a simpler way to decouple the creation of the pipes from the subprocess call, but I have no idea what that API should look like, and as ugly and unintuitive as it is, the existing universal_newlines trick does address all of my *actual* use cases :P

Cheers,
Nick.
--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Maybe subprocess could take a page out of pathlib and use | (__or__) for piping.

On 29 January 2015 at 12:47, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 29 January 2015 at 08:13, Barry Warsaw <barry@python.org> wrote:
On Jan 28, 2015, at 09:43 AM, Thomas Kluyver wrote:
Ugh, yes. I would follow that for consistency, though I'd love to come up with a way to clean that up too. The fact that 'decode the output' is spelled 'universal_newlines' is one of my least favourite things about subprocess.
Not me; it's one of those sekrits that ensures Python 3 porters job security. ;)
I still suspect we should be offering a simpler way to decouple the creation of the pipes from the subprocess call, but I have no idea what that API should look like, and as ugly and unintuitive as it is, the existing universal_newlines trick does address all of my *actual* use cases :P
Cheers, Nick.
--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan@gmail.com> wrote:
I still suspect we should be offering a simpler way to decouple the creation of the pipes from the subprocess call, but I have no idea what that API should look like,
Presumably that would need some kind of object representing a not-yet-started process. Technically, that could be Popen, but for backwards compatibility the Popen constructor needs to start the process, and p = Popen(..., start=False) seems inelegant.

Let's imagine it's a new class called Command. Then you could start coming up with interfaces like:

    c = subprocess.Command(...)
    c.stdout = fileobj
    c.stderr = fileobj2
    # Or
    c.capture('combined')  # sets stdout=PIPE and stderr=STDOUT

    # Maybe get into operator overloading?
    pipeline = c | c2

    # Could this work? Would probably require threading
    c.stdout = BytesIO()
    c.stderr_decoded = StringIO()

    # When you've finished setting things up
    c.run()  # returns a Popen instance

N.B. This is 'thinking aloud', not any kind of proposal - I'm not convinced by any of that API myself.

Thomas
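A minimal runnable version of that thinking-aloud sketch might look like this. All names are hypothetical, and the pipeline and decoding ideas are deliberately left out:

```python
import subprocess

class Command:
    """Hypothetical deferred-start process: configure streams first,
    launch only when run() is called. Not a real subprocess API."""
    def __init__(self, args, **kwargs):
        self.args = args
        self.kwargs = kwargs
        self.stdout = None
        self.stderr = None

    def capture(self, mode='separate'):
        # 'combined' merges stderr into stdout, as in the sketch above.
        self.stdout = subprocess.PIPE
        self.stderr = (subprocess.STDOUT if mode == 'combined'
                       else subprocess.PIPE)

    def run(self):
        # The process starts only here; an ordinary Popen is returned.
        return subprocess.Popen(self.args, stdout=self.stdout,
                                stderr=self.stderr, **self.kwargs)
```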
On 30 Jan 2015 07:11, "Thomas Kluyver" <thomas@kluyver.me.uk> wrote:
On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan@gmail.com> wrote:
I still suspect we should be offering a simpler way to decouple the creation of the pipes from the subprocess call, but I have no idea what that API should look like,
Presumably that would need some kind of object representing a
not-yet-started process. Technically, that could be Popen, but for backwards compatibility the Popen constructor needs to start the process, and p = Popen(..., start=False) seems inelegant.
Let's imagine it's a new class called Command. Then you could start
coming up with interfaces like:
    c = subprocess.Command(...)
    c.stdout = fileobj
    c.stderr = fileobj2
    # Or
    c.capture('combined')  # sets stdout=PIPE and stderr=STDOUT

    # Maybe get into operator overloading?
    pipeline = c | c2

    # Could this work? Would probably require threading
    c.stdout = BytesIO()
    c.stderr_decoded = StringIO()

    # When you've finished setting things up
    c.run()  # returns a Popen instance
N.B. This is 'thinking aloud', not any kind of proposal - I'm not convinced by any of that API myself.

I'm personally waiting for someone to get annoyed enough to try porting Julia's Cmd objects to Python and publish the results on PyPI :)

Cheers,
Nick.
Thomas
On Jan 29, 2015 3:11 PM, "Thomas Kluyver" <thomas@kluyver.me.uk> wrote:
On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan@gmail.com> wrote:
I still suspect we should be offering a simpler way to decouple the creation of the pipes from the subprocess call, but I have no idea what that API should look like,
Presumably that would need some kind of object representing a
not-yet-started process. Technically, that could be Popen, but for backwards compatibility the Popen constructor needs to start the process, and p = Popen(..., start=False) seems inelegant.
Let's imagine it's a new class called Command. Then you could start
coming up with interfaces like:
    c = subprocess.Command(...)
    c.stdout = fileobj
    c.stderr = fileobj2
    # Or
    c.capture('combined')  # sets stdout=PIPE and stderr=STDOUT

    # Maybe get into operator overloading?
    pipeline = c | c2

    # Could this work? Would probably require threading
    c.stdout = BytesIO()
    c.stderr_decoded = StringIO()

    # When you've finished setting things up
    c.run()  # returns a Popen instance
N.B. This is 'thinking aloud', not any kind of proposal - I'm not convinced by any of that API myself.

Here's a start at adding an 'expect_returncode' to sarge. I really like the sarge .run() and Capture APIs:

* http://sarge.readthedocs.org/en/latest/overview.html#why-not-just-use-subpro...
* http://sarge.readthedocs.org/en/latest/tutorial.html#use-as-context-managers
On Jan 31, 2015 9:28 PM, "Wes Turner" <wes.turner@gmail.com> wrote:
Here's a start at adding an 'expect_returncode' to sarge.
https://bitbucket.org/westurner/sarge/commits/6cc7780f00ccba2a231c2b7c51cb2a...
https://bitbucket.org/vinay.sajip/sarge/issue/26/enh-equivalent-to-subproces...
29 January 2015 at 21:09, Thomas Kluyver <thomas@kluyver.me.uk> wrote:
On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan@gmail.com> wrote:
I still suspect we should be offering a simpler way to decouple the creation of the pipes from the subprocess call, but I have no idea what that API should look like,
Presumably that would need some kind of object representing a not-yet-started process. Technically, that could be Popen, but for backwards compatibility the Popen constructor needs to start the process, and p = Popen(..., start=False) seems inelegant.
The thing I've ended up needing to do once or twice, which is unreasonably hard with subprocess, is to run the command and capture the output, but *still* write it to stdout/stderr. So the user gets to see the command's progress, but you can introspect the results afterwards.

The hard bit is to do this while still displaying the output as it arrives. You basically need to manage the stdout and stderr pipes yourself, do nasty multi-stream interleaving, and deal with the encoding/universal_newlines stuff on the captured data. In a cross-platform way :-(

Paul
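A sketch of what Paul describes, using one reader thread per pipe. The name tee_run is made up for illustration, and a real solution still has to worry harder about encodings and platform quirks:

```python
import subprocess
import sys
import threading

def tee_run(args):
    """Run a command, capturing stdout and stderr while also echoing
    them as they arrive. One reader thread per pipe avoids the
    deadlock you risk when reading two full pipes sequentially."""
    proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    captured = {'out': b'', 'err': b''}

    def pump(pipe, echo, key):
        # Keep the raw bytes for later introspection; decode only
        # for live display.
        for line in iter(pipe.readline, b''):
            captured[key] += line
            echo.write(line.decode(errors='replace'))
            echo.flush()
        pipe.close()

    threads = [
        threading.Thread(target=pump, args=(proc.stdout, sys.stdout, 'out')),
        threading.Thread(target=pump, args=(proc.stderr, sys.stderr, 'err')),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return proc.wait(), captured['out'], captured['err']
```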
On Sun, Feb 1, 2015 at 7:48 AM, Paul Moore <p.f.moore@gmail.com> wrote:
29 January 2015 at 21:09, Thomas Kluyver <thomas@kluyver.me.uk> wrote:
On 29 January 2015 at 03:47, Nick Coghlan <ncoghlan@gmail.com> wrote:
I still suspect we should be offering a simpler way to decouple the creation of the pipes from the subprocess call, but I have no idea what that API should look like,
Presumably that would need some kind of object representing a not-yet-started process.
http://sarge.readthedocs.org/en/latest/reference.html#Pipeline

    p = sarge.run('sleep 60', async=True)
    assert type(p) == sarge.Pipeline
Technically, that could be Popen, but for backwards
compatibility the Popen constructor needs to start the process, and p = Popen(..., start=False) seems inelegant.
The thing I've ended up needing to do once or twice, which is unreasonably hard with subprocess, is to run the command and capture the output, but *still* write it to stdout/stderr. So the user gets to see the command's progress, but you can introspect the results afterwards.
The hard bit is to do this while still displaying the output as it arrives. You basically need to manage the stdout and stderr pipes yourself, do nasty multi-stream interleaving, and deal with the encoding/universal_newlines stuff on the captured data. In a cross-platform way :-(
http://sarge.readthedocs.org/en/latest/tutorial.html#buffering-issues

'| tee filename.log'
On 1 February 2015 at 16:29, Wes Turner <wes.turner@gmail.com> wrote:
The thing I've ended up needing to do once or twice, which is unreasonably hard with subprocess, is to run the command and capture the output, but *still* write it to stdout/stderr. So the user gets to see the command's progress, but you can introspect the results afterwards.
The hard bit is to do this while still displaying the output as it arrives. You basically need to manage the stdout and stderr pipes yourself, do nasty multi-stream interleaving, and deal with the encoding/universal_newlines stuff on the captured data. In a cross-platform way :-(
http://sarge.readthedocs.org/en/latest/tutorial.html#buffering-issues
I'm not sure how that's relevant. Sure, if the child buffers output there's a delay in seeing it, but that's no more an issue than it would be for display on the console.
'| tee filename.log'
There's a number of problems with this:

1. Doesn't handle stderr
2. Uses a temporary file, which I'd then need to read in the parent process, and delete afterwards (and tidy up if there's an exception)
3. tee may not be available (specifically on Windows)
4. Uses an extra process, which is a non-trivial overhead

Paul
On Feb 1, 2015 4:06 PM, "Paul Moore" <p.f.moore@gmail.com> wrote:
On 1 February 2015 at 16:29, Wes Turner <wes.turner@gmail.com> wrote:
The thing I've ended up needing to do once or twice, which is unreasonably hard with subprocess, is to run the command and capture the output, but *still* write it to stdout/stderr. So the user gets to see the command's progress, but you can introspect the results afterwards.
The hard bit is to do this while still displaying the output as it arrives. You basically need to manage the stdout and stderr pipes yourself, do nasty multi-stream interleaving, and deal with the encoding/universal_newlines stuff on the captured data. In a cross-platform way :-(
http://sarge.readthedocs.org/en/latest/tutorial.html#buffering-issues
I'm not sure how that's relevant. Sure, if the child buffers output there's a delay in seeing it, but that's no more an issue than it would be for display on the console.
Ah. The link on that page that I was looking for was probably "Using Redirection" #using-redirection and/or "Capturing stdout and stderr from commands" #capturing-stdout-and-stderr-from-commands
On 2 February 2015 at 01:50, Wes Turner <wes.turner@gmail.com> wrote:
I'm not sure how that's relevant. Sure, if the child buffers output there's a delay in seeing it, but that's no more an issue than it would be for display on the console.
Ah. The link on that page that I was looking for was probably "Using Redirection" #using-redirection and/or "Capturing stdout and stderr from commands" #capturing-stdout-and-stderr-from-commands
Yes, but again the issue here isn't capturing stdout/stderr, but capturing them *while having them still displayed on the terminal*. Anyway, this is starting to hijack the thread, so I'll go and have a more detailed look at sarge to see if it does provide what I need somehow.

Thanks,
Paul
For the capture and forward of output you probably want to base a solution on https://docs.python.org/3/library/asyncio-subprocess.html these days rather than rolling your own poll + read + echo loop.

On Mon, Feb 2, 2015, 12:08 AM Paul Moore <p.f.moore@gmail.com> wrote:
On 2 February 2015 at 01:50, Wes Turner <wes.turner@gmail.com> wrote:
I'm not sure how that's relevant. Sure, if the child buffers output there's a delay in seeing it, but that's no more an issue than it would be for display on the console.
Ah. The link on that page that I was looking for was probably "Using Redirection" #using-redirection and/or " Capturing stdout and stderr from commands" #capturing-stdout-and-stderr-from-commands
Yes, but again the issue here isn't capturing stdout/stderr, but capturing them *while having them still displayed on the terminal*. Anyway, this is starting to hijack the thread, so I'll go and have a more detailed look at sarge to see if it does provide what I need somehow.
Thanks,
Paul
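The capture-and-forward Greg suggests might look roughly like this with asyncio's subprocess support, which handles the poll + read loop itself. A sketch only: run_and_tee is a made-up name, and the asyncio.run() entry point is assumed for brevity:

```python
import asyncio
import sys

async def _tee(stream, echo, chunks):
    # Echo each line as it arrives while keeping a raw copy.
    while True:
        line = await stream.readline()
        if not line:
            break
        chunks.append(line)
        echo.write(line.decode(errors='replace'))
        echo.flush()

async def run_and_tee(*args):
    proc = await asyncio.create_subprocess_exec(
        *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, err = [], []
    # Drain both pipes concurrently -- no manual interleaving needed.
    await asyncio.gather(
        _tee(proc.stdout, sys.stdout, out),
        _tee(proc.stderr, sys.stderr, err),
    )
    rc = await proc.wait()
    return rc, b''.join(out), b''.join(err)

rc, out, err = asyncio.run(run_and_tee('echo', 'hello'))
```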
On 2 February 2015 at 16:00, Gregory P. Smith <greg@krypto.org> wrote:
For the capture and forward of output you probably want to base a solution on https://docs.python.org/3/library/asyncio-subprocess.html these days rather than rolling your own poll + read + echo loop.
For integrating with existing code, can asyncio interoperate with subprocess.Popen objects? I couldn't see how from my reading of the docs. It feels to me from the docs as if asyncio is an architecture choice - if you're not using asyncio throughout your program, it's of no help to you. I may well have misunderstood something, though, as I thought one of the aims of asyncio was that it could work in conjunction with more traditional code.

Paul

PS Are there any good asyncio tutorials? Particularly ones *not* about implementing network protocols. It feels like you need a degree to understand the docs :-(
On Jan 28, 2015, at 07:34 AM, Gregory P. Smith wrote:
On Tue Jan 27 2015 at 4:44:32 PM Thomas Kluyver <thomas@kluyver.me.uk> wrote:
1. Add a CompletedProcess class: * Attributes stdout and stderr are bytes if the relevant stream was piped, None otherwise, like the return value of Popen.communicate()
Note that in Python 3, if universal_newlines=True is given, the output will be a str, not bytes. (The input is also switched from bytes to str, but that seems less common, at least in my use cases.) You'll probably build the new APIs on the existing ones, so I don't expect that to change. I just wanted to point that out.
* Attribute returncode is the exit status * ? Attribute cmd is the list of arguments the process ran with (not sure if this should be there or not) * Method cp.check_returncode() raises CalledProcessError if returncode != 0, inspired by requests' Response.raise_for_status()
2. Add a run() function - like call/check_call/check_output, but returns a CompletedProcess instance
I like #1 and #2 here.
3. Deprecate call/check_call/check_output, but leave them around
indefinitely, since lots of existing code relies on them.
We need to keep those. They are too widely used and are the long term stable API for 2.7. They are useful for many simple cases which they were designed for.
Agreed that this seems like a nice improvement, but that we need to keep the old APIs around probably forever.

Cheers,
-Barry
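Barry's point about universal_newlines can be seen directly by capturing the same command twice, using the existing check_output:

```python
import subprocess
import sys

# The same command, captured twice. With universal_newlines=True the
# result is str (decoded, newlines translated); without it, bytes.
cmd = [sys.executable, '-c', 'print("hi")']
raw = subprocess.check_output(cmd)
text = subprocess.check_output(cmd, universal_newlines=True)
```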
participants (9)

- Barry Warsaw
- Ethan Furman
- Gregory P. Smith
- João Santos
- Nick Coghlan
- Oleg Broytman
- Paul Moore
- Thomas Kluyver
- Wes Turner