On Tue, 11 Dec 2018 at 15:13, Antoine Pitrou wrote:
> On Tue, 11 Dec 2018 15:21:31 +0100 Victor Stinner wrote:
> > Pablo's issue35378 evolved to add a weak reference in iterators to try to detect when the Pool is destroyed: raise an exception from the iterator, if possible.
> That's an ok fix for me.
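A minimal sketch of that idea — the class name `_PoolResultIterator` and its plumbing are hypothetical illustrations, not the actual issue35378 patch:

```python
import gc
import weakref

class _PoolResultIterator:
    """Hypothetical sketch: the iterator holds only a weak reference
    to the pool and raises if the pool has been garbage-collected,
    instead of hanging forever waiting for results."""

    def __init__(self, pool, results):
        self._pool_ref = weakref.ref(pool)
        self._results = iter(results)

    def __iter__(self):
        return self

    def __next__(self):
        if self._pool_ref() is None:
            raise RuntimeError("the Pool was deallocated before the "
                               "result iterator was consumed")
        return next(self._results)

class DummyPool:           # stand-in for multiprocessing.Pool
    pass

pool = DummyPool()
it = _PoolResultIterator(pool, [1, 2, 3])
first = next(it)           # works while the pool is still alive
del pool
gc.collect()               # the weak reference now returns None
# next(it) now raises RuntimeError instead of hanging
```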
> > By the way, I'm surprised that "with pool:" doesn't release all resources.
> That's not a problem, as long as the destructor _does_ release resources.
> > From a technical point of view, I would prefer that Python become stricter.
Using "with pool:" is fine, we shouldn't start raising a warning for it.
> What you are proposing here starts to smell like an anti-pattern to me. Python _is_ a garbage-collected language, so by definition, there _are_ going to be resources that are automatically collected when an object disappears. If I'm allocating a 2GB bytes object, then PyPy may delay the deallocation much longer than CPython. Do you propose we add a release() method to bytes objects to avoid this issue (and emit a warning for people who don't call release() on bytes objects)?
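The timing difference can be observed with a weak reference — `BigBuffer` is just an illustrative stand-in for a large allocation; `bytes` itself cannot be weakly referenced:

```python
import weakref

class BigBuffer:                  # stand-in for a large allocation
    def __init__(self, size):
        self.data = bytearray(size)

buf = BigBuffer(2 ** 20)
ref = weakref.ref(buf)
del buf                           # CPython's refcounting frees it immediately;
                                  # PyPy may keep it alive until a GC cycle runs
collected = ref() is None         # True on CPython
```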
> You can't change the language's philosophy. We warn about open files because those have user-visible consequences (such as unflushed buffers, or not being able to delete the file on Windows). If there is no user-visible consequence to not calling join() on a Pool, then we shouldn't warn about it.
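The existing file warning can be demonstrated on CPython, where dropping an unclosed file object emits a ResourceWarning at finalization:

```python
import gc
import os
import tempfile
import warnings

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello\n")
tmp.close()

def open_without_closing(path):
    f = open(path)        # never closed explicitly
    return f.readline()   # the file object becomes garbage on return

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    open_without_closing(tmp.name)
    gc.collect()          # ensure the abandoned file object is finalized

# on CPython, a ResourceWarning about the unclosed file is recorded
leaked = any(issubclass(w.category, ResourceWarning) for w in caught)
os.unlink(tmp.name)
```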
I agree with Antoine here.
On Tue, 11 Dec 2018 15:21:31 +0100 Victor Stinner wrote:
> The API user has to *explicitly* release resources.
That's definitely not Python's philosophy. In Python, users should not have to worry about resource management themselves, that's the job of the language runtime. We provide the "with" construct so that when users *want* to manage resources explicitly (because there is an impact outside of the Python runtime's control, for example) then they can do so. But leaving resource management to the runtime is completely fine.
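A toy context manager illustrates that opt-in style — the resource here is a hypothetical one, tracked through a plain list for visibility:

```python
from contextlib import contextmanager

@contextmanager
def tracked_resource(log):
    # hypothetical resource whose release has effects outside the
    # runtime's control, so the user chooses to manage it explicitly
    log.append("acquired")
    try:
        yield log
    finally:
        log.append("released")   # deterministic, even on exceptions

events = []
with tracked_resource(events):
    events.append("used")
# events is now ['acquired', 'used', 'released']
```

Outside a `with` block the same object would still be cleaned up eventually — just at a time of the runtime's choosing rather than the user's.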
> I propose to start emitting ResourceWarning in Python 3.8 when objects are not released explicitly.
Strong -1 on this.
> I don't know the multiprocessing API well
Nor do I, but I'm against making fundamental design changes such as you propose *without* a deep understanding of the multiprocessing API. If you feel sufficiently strongly that the current design is wrong, then you should understand the principles and reasons behind that design before trying to change it.

Paul