Simpler thread synchronization using "Sticky Condition"

Problem: Using Python's Condition class is error-prone and difficult. For example, the following pseudo-code would be quite typical of its use:

    condition = threading.Condition()

    def sender():
        while alive():
            wait_for_my_data_from_hardware()
            with condition:
                send_data_to_receiver()
                condition.raise()

    def receiver():
        while alive():
            with condition:
                condition.wait()
                receive_all_data_from_sender()
                process_data()

Main code will then run sender() and receiver() in separate threads. (I know that in a simple case like this, I could just use a Queue. I've been working with code that needs to service several events, where polling queues would introduce overhead and latency, so a Condition must be used; but I've simplified the above as much as possible to illustrate the issues even in this simple case.)

There are two issues even with the above case:

1. Raising a condition only has any effect if the condition is already being waited upon; a condition does not "remember" that it has been raised. So in the example above, if the receiver starts after the sender, it will wait on the condition even though data is already available for it. This issue can be solved by rearranging the code, waiting at the end of the loop rather than the start; but this is counter-intuitive, and lots of examples online look like the above.

2. In the receiver, the condition has to be kept locked (be inside the "with" statement) all the time. The lock is automatically released while the condition is being waited upon, but otherwise it must be held, otherwise we risk missing the condition being raised. The result is that process_data() is called with the lock held, and so this will prevent the sender going round its loop: the sending thread becomes a slave to the processing thread. This may have a huge performance penalty, losing the advantage of loose coupling that threading should provide.
You might think that using an Event, rather than a Condition, would solve things, since an Event does "remember" that it has been set. But there is no atomic way to wait on an event and clear it afterwards.

Solution: The solution is very simple: create something that might be called a StickyCondition, or an AutoResetEvent. This is a Condition that does remember it has been raised; or, if you prefer, it is an Event that resets itself after it has been waited upon. The example then becomes:

    auto_event = threading.AutoResetEvent()

    def sender():
        while alive():
            wait_for_my_data_from_hardware()
            send_data_to_receiver()
            auto_event.set()

    def receiver():
        while alive():
            auto_event.wait()
            receive_all_data_from_sender()
            process_data()

This solves both of the issues described: the receiver will not wait if data is already available, and the sender is not blocked by a lock that is being held by the receiver. The event doesn't need to be locked at all, because of its internal memory.

Implementation is trivial, involving a boolean and a Condition in the cross-thread case. I presume it would be more involved for the cross-process case, but I can see no reason it would be impossible. Please let me know if you think this would be useful.
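The "boolean and a Condition" implementation mentioned above can be sketched in a few lines. (The class and method names follow the proposal; nothing here is an existing stdlib API.)

```python
import threading

class AutoResetEvent:
    """A sketch of the proposed primitive: an Event that remembers it
    was set, and atomically clears itself when a waiter wakes up."""

    def __init__(self):
        self._cond = threading.Condition()
        self._flag = False

    def set(self):
        # Remember the signal even if nobody is waiting yet;
        # repeated sets before a wait coalesce into one.
        with self._cond:
            self._flag = True
            self._cond.notify()  # wake at most one waiter

    def wait(self, timeout=None):
        # Wait for the flag and clear it, all under one lock,
        # so wait-and-clear is atomic.
        with self._cond:
            signalled = self._cond.wait_for(lambda: self._flag, timeout)
            self._flag = False
            return signalled
```

A set() before the wait() is not lost, and a second wait() blocks again until the next set().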

On Tue, 26 Mar 2019 09:27:18 -0000 "Richard Whitehead" <richard.whitehead@ieee.org> wrote:
Nowadays, I would recommend always using `Condition.wait_for()` rather than `Condition.wait()`. A Condition, unlike what the name suggests, is just a means of synchronization. It doesn't have a boolean state per se. You have to manage your boolean state (or any other kind of state) separately, hence the usefulness of `wait_for()`.

As for auto-reset events, the Windows API has them; here's an insightful writeup about them: https://devblogs.microsoft.com/oldnewthing/?p=30773

But, yes, perhaps auto-reset events would be a good addition regardless. Either through a flag to Event, or as a separate class.

Regards

Antoine.

These kinds of low-level synchronization primitives are notoriously tricky, yeah, and I'm all in favor of having better higher-level tools. But I'm not sure that AutoResetEvent adds enough to be worth it. AFAICT, you can get this behavior with an Event just fine – using your pseudocode:

    def sender():
        while alive():
            wait_for_my_data_from_hardware()
            send_data_to_receiver()
            auto_event.set()

    def receiver():
        while alive():
            auto_event.wait()
            auto_event.clear()  # <-- this line added
            receive_all_data_from_sender()
            process_data()

It's true that if we use a regular Event then the .clear() doesn't happen atomically with the wakeup, but that doesn't matter. If we call auto_event.set() and then have new data arrive, there are two cases:

1) The new data arrives early enough to be seen by the current call to receive_all_data_from_sender(): this is fine, the new data will be processed in this call.

2) The new data arrives too late to be seen by the current call to receive_all_data_from_sender(): that means the new data arrived after the call to receive_all_data_from_sender() started, which means it arrived after auto_event.clear(), which means that the call to auto_event.set() will successfully re-arm the event and another call to receive_all_data_from_sender() will happen immediately.

That said, this is still very tricky. It requires careful analysis, and it's not very general (for example, if we want to support multiple receivers then we need to throw out the whole approach and do something entirely different). In Trio we've actually discussed removing Event.clear(), since it's so difficult to use correctly: https://github.com/python-trio/trio/issues/637

You said your original problem is that you have multiple event sources, and the receiver needs to listen to all of them. And based on your approach, I guess you only have one receiver, and that it's OK to couple all the event sources directly to this receiver (i.e., you're OK with passing them all a Condition object to use).
Under these circumstances, wouldn't it make more sense to use a single Queue, pass it to all the sources, and have them each do queue.put((source_id, event))? That's simple to implement, hard to mess up, and can easily be extended to multiple receivers.

If you want to further decouple the sources from the receiver, then one approach would be to have each source expose its own Queue independently, and then define some kind of 'select' operation (like in Golang/CSP/concurrent ML) to let the receiver read from multiple Queues simultaneously. This is non-trivial to do, but in return you get a very general and powerful construct. There's some more links and discussion here: https://github.com/python-trio/trio/issues/242
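The single-Queue pattern described above can be sketched like this (the source ids and payload strings are invented for illustration):

```python
import queue
import threading

events = queue.Queue()  # one queue shared by every source

def source(source_id, payloads):
    # Each source tags what it puts on the queue, so the single
    # receiver can tell the events apart.
    for payload in payloads:
        events.put((source_id, payload))

def drain(n):
    # The receiver blocks on a single get() no matter how many
    # sources exist -- no polling, no Condition juggling.
    received = []
    for _ in range(n):
        received.append(events.get())
    return received

sources = [threading.Thread(target=source, args=(i, ["data%d" % i]))
           for i in range(3)]
for t in sources:
    t.start()
for t in sources:
    t.join()
```

Adding a second receiver is just a matter of running drain() in another thread against the same queue.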
Regarding the link you sent, I don't entirely agree with the opinion expressed: if you try to use a Semaphore for this purpose, you will soon find that it is "the wrong way round"; it is intended to protect resources from multiple accesses, not to synchronize those multiple accesses.
Semaphores are extremely generic primitives – there are a lot of different ways to use them. I think the blog post is correct that an AutoResetEvent is equivalent to a semaphore whose value is clamped so that it can't exceed 1. Your 'auto_event.set()' would be implemented as 'sem.release()', and 'auto_event.wait()' would be 'sem.acquire()'.

I guess technically the semantics might be slightly different when there are multiple waiters: the semaphore wakes up exactly one waiter, while I'm not sure what your AutoResetEvent would do. But I can't see any way to use AutoResetEvent reliably with multiple waiters anyway.

-n

--
Nathaniel J. Smith -- https://vorpus.org
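The clamped-semaphore equivalence claimed above can be sketched directly. (ClampedSemaphore is my name for it; it is just a Condition plus a counter that is never allowed to exceed 1, which is why release() and acquire() map onto set() and wait().)

```python
import threading

class ClampedSemaphore:
    """A semaphore whose value is clamped at 1 -- behaviorally an
    auto-reset event, per the equivalence discussed above."""

    def __init__(self):
        self._cond = threading.Condition()
        self._value = 0

    def release(self):  # plays the role of auto_event.set()
        with self._cond:
            self._value = min(self._value + 1, 1)  # clamp: extra releases coalesce
            self._cond.notify()  # wakes exactly one waiter

    def acquire(self, timeout=None):  # plays the role of auto_event.wait()
        with self._cond:
            ok = self._cond.wait_for(lambda: self._value > 0, timeout)
            if ok:
                self._value -= 1
            return ok
```

Without the min() clamp this is an ordinary counting semaphore; the clamp is the only thing that turns "N releases" into "one wakeup".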

Nathaniel,

Thanks very much for taking the time to comment.

Clearing the event after waiting for it will introduce a race condition: the sender may have gone around its loop again and set the event after we have woken but before we've cleared it. As you said, this stuff is tricky! The only safe way is to make the wait-and-clear atomic, which can be done with a lock; and this comes essentially back to what I'm proposing. I realise this is not a fundamental new primitive - if it were, I wouldn't be able to build it in pure Python - but I've found it extremely useful in our generic threading and processing library.

You're right about what you say regarding queues; I didn't want to go into the full details of the multi-threading and multi-processing situation at hand, but I will say that we have a pipeline of tasks that can run as either threads or processes, and we want to make it easy to construct this pipeline, "wiring" it as necessary; combining command queues with data queues just gets to be a real mess.

Richard

On Tue, Mar 26, 2019, 09:50 Richard Whitehead <richard.whitehead@ieee.org> wrote:
Sounds fine to me. Why is that a problem? Can you write down an example of how two threads could be interleaved to produce incorrect results? As you said, this stuff is tricky!
But you're effectively implementing a multi-producer single-consumer Queue anyway, so without any details it's hard to guess why using a Queue would be messier. I know you don't want to get into too many details, but if the whole motivation for your proposal is based on some details then it's usually a good idea to explain them :-). -n

Nathaniel,

You’re quite right, I can’t work out the race condition myself now...

Is it possible that an AutoResetEvent is just an Event with a two-line wait() overload?

Richard
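For what it's worth, something very close to a "two-line wait() overload" does work, but only by reaching into CPython's private Event internals (`_cond` and `_flag`), so it is an implementation-detail hack rather than a supported API:

```python
import threading

class AutoResetEvent(threading.Event):
    """Event whose wait() atomically clears the flag on wakeup.
    Relies on CPython internals: Event stores its flag in self._flag,
    guarded by self._cond, and set() flips the flag and notifies
    under that same lock -- so checking and clearing here is race-free."""

    def wait(self, timeout=None):
        with self._cond:
            signalled = self._cond.wait_for(lambda: self._flag, timeout)
            self._flag = False
            return signalled
```

set(), clear(), and is_set() are inherited unchanged; only wait() gains the atomic clear.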

On Tue, Mar 26, 2019 at 2:40 AM Richard Whitehead <richard.whitehead@ieee.org> wrote:
Using Python’s Condition class is error-prone and difficult.
After looking at your example, I think the problem is just that you aren't using condition variables the way they're intended to be used. To write correct condition-variable code, first write code that contains polling loops like

    with lock:
        ...
        while not some_boolean_condition():
            lock.release()
            time.sleep(0)
            lock.acquire()
        ...

Then (1) for each boolean condition that you polled on, introduce a corresponding condition variable; (2) whenever you do something that causes the truth of that condition to flip from false to true, notify the condition variable; and (3) replace the polling loop with

    while not some_boolean_condition():
        cvar_associated_with_that_condition.wait()

You should only call Condition.wait() inside a while loop that tests the associated condition (or use wait_for()). Implementations of condition variables don't necessarily guarantee that the condition is true when wait() returns, even if you do everything else correctly. See https://en.wikipedia.org/wiki/Spurious_wakeup .

If the condition variable is notified when you aren't inside wait(), you will miss the notification. That isn't a problem because as long as the boolean condition itself remains true, you will exit the while loop immediately upon testing it. Or, if the condition has become false again, you will wait and you will get the next false-to-true notification. Condition variables are not meant to substitute for polling the actual condition; they just make it more efficient.
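The three-step recipe above, applied to a minimal producer/consumer: the boolean condition is "the items list is non-empty", and wait_for() performs the while-loop test internally. (All names here are illustrative, not from the thread's pseudocode.)

```python
import threading

lock = threading.Lock()
cond = threading.Condition(lock)
items = []  # the shared state; "items is non-empty" is the boolean condition

def producer(values):
    for v in values:
        with cond:
            items.append(v)  # condition may flip false -> true here,
            cond.notify()    # so notify the condition variable

def consumer(n):
    out = []
    while len(out) < n:
        with cond:
            # Only ever wait inside a loop testing the condition;
            # wait_for() is exactly that loop, so spurious wakeups
            # and missed notifications are both handled.
            cond.wait_for(lambda: items)
            out.extend(items)
            items.clear()
    return out
```

Note that the lock is held only while touching the shared list, never while doing any processing of the received values.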

Ben,

Thanks very much for your reply. Everything you say is true, and the system can certainly be made to work (and without the performance problem I described) using only Condition variables. I think the correct version of my pseudo-code becomes:

    def sender():
        while alive():
            wait_for_my_data_from_hardware()
            with condition:
                send_data_to_receiver()  # I think this has to be done inside the lock on the Condition
                condition.raise()

    def receiver():
        while alive():
            while data_available():
                receive_all_data_from_sender()
                process_data()
            with condition:  # only sleep when we have nothing to do
                condition.wait()

Thank you everyone who has commented on this issue, I've been overwhelmed by the helpfulness and thoughtfulness of the responses.

In summary:

* Condition variables work without problems provided you use them correctly, but using them correctly is quite tricky.
* Using an event to achieve the same thing is trivially easy.
* Having the event auto-resetting isn't even necessary.

My conclusions:

* The very simplest client code, which is almost impossible to get wrong or for someone else to mess up during maintenance, uses an auto-resetting event, and so that's what I'm going to do.
* It doesn't sound like there is any appetite for this auto-reset event to be part of the Python standard library, so I'm not going to push for a PEP to be raised on it, but I will certainly use it in my company's base library code.

Thanks again,
Richard

Participants (4):
- Antoine Pitrou
- Ben Rudiak-Gould
- Nathaniel Smith
- Richard Whitehead