rrr at ronadam.com
Tue Mar 21 22:19:11 CET 2006
Raymond Hettinger wrote:
> I would like to get feedback on an idea I had for simplifying the use
> of queues with daemon consumer threads
> Sometimes, I launch one or more consumer threads that wait for a task
> to enter a queue and then work on the task. A recurring problem is that
> I sometimes need to know if all of the tasks have been completed so I
> can exit or do something with the result.
> If each thread only does a single task, I can use t.join() to wait
> until the task is done. However, if the thread stays alive and waits
> for more Queue entries, then there doesn't seem to be a good way to
> tell when all the processing is done.
> So, the idea is to create a subclass of Queue that increments a counter
> when objects are enqueued, that provides a method for worker threads to
> decrement the counter when the work is done, and that offers a blocking
> join() method that waits until the counter is zero.
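The counting-queue idea quoted above can be sketched roughly like this; the names (CountingQueue, task_done) and the internal Condition are my own choices, not anything from Raymond's message:

```python
import queue
import threading

class CountingQueue(queue.Queue):
    """A Queue that counts enqueued tasks and can block until all are done."""

    def __init__(self, maxsize=0):
        super().__init__(maxsize)
        self._unfinished = 0
        self._all_done = threading.Condition()

    def put(self, item, block=True, timeout=None):
        # Count the task before handing it to the normal Queue machinery.
        with self._all_done:
            self._unfinished += 1
        super().put(item, block, timeout)

    def task_done(self):
        # A worker calls this after finishing one dequeued item.
        with self._all_done:
            self._unfinished -= 1
            if self._unfinished == 0:
                self._all_done.notify_all()

    def join(self):
        # Block until every enqueued item has been marked done.
        with self._all_done:
            while self._unfinished:
                self._all_done.wait()
```

A producer then does `q.put(task)` for each task and `q.join()` at the end, while each daemon consumer loops on `q.get()` and calls `q.task_done()` after processing.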
Hi Raymond, your approach seems like something I would use.
I'm wondering if threads could be implemented as a type of generator object
where the thread runs and the yield waits for next() to be called, instead
of next() waiting for yield. Then you could do...
    while True:
        # do stuff
        yield result    # wait for next() to be called
        if done:
            break
my_thread = Worker(args)
results = list(my_thread) # get all values as they are produced
It would be easy to put these in a list and iterate it.
    # Do something with args
    # Start threads
    active_threads = [Worker(args1), Worker(args2), Worker(args3)]
    for T in active_threads:
        for _ in T:    # make sure they are finished
            pass
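Something in the spirit of that Worker could be emulated with the existing threading and Queue modules; this is only a rough sketch with names of my own invention (Worker, _SENTINEL), and unlike the inverted-yield idea above it lets the thread run ahead and buffer results rather than blocking at each yield:

```python
import queue
import threading

_SENTINEL = object()    # marks the end of one worker's output

class Worker:
    """Run a generator function in a thread; iterate the Worker for results."""

    def __init__(self, func, *args):
        self._results = queue.Queue()
        self._thread = threading.Thread(
            target=self._run, args=(func,) + args, daemon=True)
        self._thread.start()

    def _run(self, func, *args):
        try:
            # func is expected to be a generator function
            for item in func(*args):
                self._results.put(item)
        finally:
            self._results.put(_SENTINEL)

    def __iter__(self):
        while True:
            item = self._results.get()
            if item is _SENTINEL:
                return
            yield item

# Example use, mirroring the list-of-workers pattern above:
def squares(n):
    for i in range(n):
        yield i * i

active_threads = [Worker(squares, 3), Worker(squares, 4)]
all_results = [list(t) for t in active_threads]
# → [[0, 1, 4], [0, 1, 4, 9]]
```

Pacing the worker so it really waits at each yield, as proposed, would take something more like a bounded queue of size zero or one.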
If this can be done, then all the messy parts could possibly be handled
within Python's C code rather than in Python source code. If you know how
to write a generator, then you would know how to use "simple" threads. And
maybe anything needing more than this would be better off as an external
task anyway?
I'm sure there are lots of um... issues. ;-)