[Python-Dev] Usage of the multiprocessing API and object lifetime

Victor Stinner vstinner at redhat.com
Tue Dec 11 10:33:54 EST 2018


On Tue, Dec 11, 2018 at 4:14 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
> What you are proposing here starts to smell like an anti-pattern to
> me.  Python _is_ a garbage-collected language, so by definition, there
> _are_ going to be resources that are automatically collected when an
> object disappears.  If I'm allocating a 2GB bytes object, then PyPy may
> delay the deallocation much longer than CPython.  Do you propose we add
> a release() method to bytes objects to avoid this issue (and emit a
> warning for people who don't call release() on bytes objects)?

We are not talking about simple strings, but processes and threads.

> You can't change the language's philosophy.  We warn about open files
> because those have user-visible consequences (such as unflushed
> buffers, or not being able to delete the file on Windows).  If there is
> no user-visible consequence to not calling join() on a Pool, then we
> shouldn't warn about it.

"user-visible consequences" are that resources are kept alive longer
than I would expect. When I use a context manager, I expect that
Python will magically releases everything for me.

For example, "with subprocess.Popen() as popen: ..." ensures that all
pipes are closed and the process completes, before we exit the block.

Another example, "with open() as fp: ..." ensures that the file
descriptor is closed before we exit the block.
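A minimal sketch of what those two context managers guarantee (the
child command and temporary file here are just illustrations):

```python
import os
import subprocess
import sys
import tempfile

# Popen.__exit__ closes the pipes and waits for the child, so by the
# time the block exits, the process is reaped and returncode is set.
with subprocess.Popen([sys.executable, "-c", "print('hi')"],
                      stdout=subprocess.PIPE) as proc:
    out = proc.stdout.read()

# Same idea for files: the descriptor is closed when the block exits.
tmp = tempfile.NamedTemporaryFile("w", delete=False)
tmp.write("data")
tmp.close()
with open(tmp.name) as fp:
    data = fp.read()
os.unlink(tmp.name)
```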

I modified subprocess.Popen.__del__() in Python 3.6 to emit a
ResourceWarning if the subprocess is still running, to suggest that
the developer explicitly manage the resource (e.g. by calling
.wait()).
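To see that warning in action, here is a hypothetical demonstration
(not taken from the CPython test suite): drop a Popen object without
ever calling wait(), and the destructor emits a ResourceWarning
because returncode is still None.

```python
import gc
import subprocess
import sys
import warnings

# Spawn a trivial child process and never read its exit status.
proc = subprocess.Popen([sys.executable, "-c", "pass"])

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")   # don't let the default filter hide it
    del proc        # CPython runs __del__ here thanks to refcounting
    gc.collect()    # nudge other implementations along
```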

I prefer to explicitly manage resources like processes and threads
since they can exit with an error: killed by a signal, waitpid()
failure (exit status already read by a different function), etc. I
prefer to control where the error occurs. I hate it when Python logs
strange errors during shutdown. Logging errors during shutdown is too
late: for example, logging itself can trigger a new error because a
stdlib module has already been cleared. That's why we need hacks like
"_warn=warnings.warn" below:

    class Popen(object):
        ...
        def __del__(self, _maxsize=sys.maxsize, _warn=warnings.warn):
            ...
            if self.returncode is None:
                _warn("subprocess %s is still running" % self.pid,
                      ResourceWarning, source=self)
            ...
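The trick works because default argument values are evaluated and
bound when the def statement executes, not when __del__ is finally
called, so the bound function survives even after module globals have
been cleared. A minimal sketch, using a made-up Resource class rather
than the subprocess code itself:

```python
import warnings

class Resource:
    # warnings.warn is captured as a default value at class-definition
    # time, so __del__ still works when the global name is gone.
    def __del__(self, _warn=warnings.warn):
        _warn("Resource was never closed", ResourceWarning, source=self)

_w = warnings   # keep a private handle for the demonstration below
del warnings    # simulate shutdown: the module-level name is cleared

with _w.catch_warnings(record=True) as caught:
    _w.simplefilter("always")
    r = Resource()
    del r       # __del__ still finds _warn despite "del warnings"
```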

Victor

