Looking at the implementation in concurrent/futures/thread.py, it looks like each worker thread repeatedly takes one item from the work queue, runs it, and then checks whether the executor is shutting down. Worker threads are added dynamically until the executor's max thread count is reached. New futures cannot be submitted once shutdown has begun, and shutdown() waits until all workers have exited cleanly.
So it looks like the only time submitted futures are neither executed nor cancelled is when shutdown begins while there are more items in the work queue than there are worker threads. In that situation the worker threads simply exit, and the unprocessed items stay pending forever.
If I analyzed this correctly, perhaps we can add some functionality where leftover work items are explicitly cancelled? I think that would satisfy the OP's requirement. I *think* it would be safe to do this in shutdown() after it has set self._shutdown but before it waits for the worker threads.
On Fri, Jan 3, 2020 at 10:10 AM Miguel Ángel Prosper <firstname.lastname@example.org> wrote:
> Having a way to clear the queue and then shutdown once existing jobs are done is a lot

> So the only clean way to do this is cooperative: flush the queue, send some kind of message to all children telling them to finish as quickly as possible, then wait for them
I was personally thinking of an implementation like that: cancel everything still pending and, if wait is true, wait for the ones currently running, for both implementations. I didn't actually mean "terminate" literally; I just called it that because that's what multiprocessing.dummy.Pool.terminate (plus a join afterwards) does.
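For reference, a small sketch of that terminate-plus-join behavior using multiprocessing.dummy (the thread-backed Pool): terminate() discards tasks still queued, and join() then waits for the pool's threads to wind down. The timings here are arbitrary, chosen only so some tasks finish and some remain queued:

```python
from multiprocessing.dummy import Pool  # thread-based Pool
import time

pool = Pool(1)
results = [pool.apply_async(time.sleep, (0.05,)) for _ in range(10)]
time.sleep(0.12)  # let the single worker get through a couple of tasks
pool.terminate()  # discard everything still queued
pool.join()       # wait for the pool's threads to exit
# Tasks that were still queued never ran; their AsyncResults stay not-ready.
dropped = [r for r in results if not r.ready()]
```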