On Tue Jan 27 2015 at 4:44:32 PM Thomas Kluyver <thomas@kluyver.me.uk> wrote:
The subprocess module provides some nice tools to control the details of running a process, but it's still rather awkward for common use cases where you want to execute a command in one go.

* There are three high-level functions: call, check_call and check_output, which all do very similar things with different return/raise behaviours (contrasted in a sketch after the example below)
* Their naming is not very clear (check_output doesn't check the output; it checks the return code and captures the output)
* You can't use any of them if you want stdout and stderr separately.
* You can get stdout and returncode from check_output, but it's not exactly obvious:

from subprocess import CalledProcessError, check_output

try:
    stdout = check_output(...)
    returncode = 0
except CalledProcessError as e:
    stdout = e.output
    returncode = e.returncode
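
For reference, a minimal sketch of how the three functions differ in their return/raise behaviour; the commands used here are only illustrative and assume a Unix-like environment:

import subprocess

# call() always returns the exit status, success or failure -- no exception.
status = subprocess.call(['ls', '/nonexistent'])    # e.g. 2

# check_call() returns 0 on success and raises CalledProcessError otherwise.
subprocess.check_call(['true'])

# check_output() captures stdout and raises CalledProcessError on a nonzero
# exit status; despite the name, it checks the return code, not the output.
out = subprocess.check_output(['echo', 'hello'])    # b'hello\n'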

I think that what these are lacking is a good way to represent a process that has already finished (as opposed to Popen, which is mostly designed to handle a running process). So I would:

1. Add a CompletedProcess class:
* Attributes stdout and stderr are bytes if the relevant stream was piped, None otherwise, like the return value of Popen.communicate()
* Attribute returncode is the exit status
* ? Attribute cmd is the list of arguments the process ran with (not sure if this should be there or not)
* Method cp.check_returncode() raises CalledProcessError if returncode != 0, inspired by requests' Response.raise_for_status()

2. Add a run() function - like call/check_call/check_output, but one that returns a CompletedProcess instance (both are sketched below)

I like #1 and #2 here.
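
For illustration, a rough sketch of what #1 and #2 could look like. The names follow the proposal above (cmd is kept even though the proposal marks it as tentative); the run() signature and the usage at the end are assumptions rather than a final API, and the example command assumes a Unix-like environment:

import subprocess

class CompletedProcess:
    """A process that has already finished (sketch of proposal #1)."""
    def __init__(self, cmd, returncode, stdout=None, stderr=None):
        self.cmd = cmd                  # tentative in the proposal
        self.returncode = returncode    # exit status of the process
        self.stdout = stdout            # bytes if the stream was piped, else None
        self.stderr = stderr            # likewise

    def check_returncode(self):
        # Inspired by requests' Response.raise_for_status()
        if self.returncode != 0:
            raise subprocess.CalledProcessError(self.returncode, self.cmd,
                                                output=self.stdout)

def run(cmd, **popen_kwargs):
    """Run cmd to completion and return a CompletedProcess (sketch of #2)."""
    with subprocess.Popen(cmd, **popen_kwargs) as proc:
        stdout, stderr = proc.communicate()
    return CompletedProcess(cmd, proc.returncode, stdout, stderr)

cp = run(['echo', 'hello'], stdout=subprocess.PIPE)
cp.check_returncode()               # raises CalledProcessError on nonzero exit
print(cp.returncode, cp.stdout)     # 0 b'hello\n'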

3. Deprecate call/check_call/check_output, but leave them around indefinitely, since lots of existing code relies on them.

We need to keep those. They are too widely used and are the long-term stable API for 2.7. They are useful for the many simple cases they were designed for.

-gps