On 10/22/2012 12:10 AM, Guido van Rossum wrote:
On Sun, Oct 21, 2012 at 6:18 PM, Eric V. Smith <eric@trueblade.com> wrote:
On 10/21/2012 8:23 PM, Guido van Rossum wrote:
I don't see it that way. Any time you acquire a lock, you may be blocked for a long time. In a typical event loop that's an absolute no-no. Typically, to wait for another thread, you give the other thread a callback that adds a new event for *this* thread.
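That callback hand-off can be sketched in a few lines of Python. This is a toy, not asyncio's real API (the names `EventLoop`, `call_soon_threadsafe`, and `run_once` are illustrative): the worker thread never makes the loop thread wait on a lock; it just posts a callback onto the loop's thread-safe queue, and the loop runs it as an ordinary event.

```python
import queue
import threading

class EventLoop:
    """Toy single-threaded event loop: runs callbacks from a thread-safe queue."""
    def __init__(self):
        self._ready = queue.Queue()

    def call_soon_threadsafe(self, callback, *args):
        # Safe to call from any thread: just enqueue the work item.
        self._ready.put((callback, args))

    def run_once(self):
        # Runs in the loop's own thread; blocks only on its own queue.
        callback, args = self._ready.get()
        callback(*args)

loop = EventLoop()
results = []

def on_done(value):
    # Executes in the loop's thread, so no locking is needed here.
    results.append(value)

def worker():
    value = 6 * 7  # stand-in for a long-running computation in another thread
    loop.call_soon_threadsafe(on_done, value)

t = threading.Thread(target=worker)
t.start()
loop.run_once()   # processes the callback posted by the worker
t.join()
print(results)    # [42]
```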
Now, it's possible that in Windows, when using IOCP, the philosophy is different -- I think I've read in http://msdn.microsoft.com/en-us/library/aa365198%28VS.85%29.aspx that there can be multiple threads reading events from a single queue.
Correct. The typical usage of an IOCP is that you create as many threads as you have CPUs (or cores, or execution units, or whatever the kids call them these days), and they all wait on the same IOCP. So if you have, say, 4 CPUs and therefore 4 threads, any of them can be woken up to do useful work whenever the IOCP has work items for them.
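The shape of that pattern can be sketched portably with one shared work queue and one worker thread per CPU. Here Python's `queue.Queue` stands in for the IOCP and `get()` for `GetQueuedCompletionStatus`; the real Windows calls (`CreateIoCompletionPort`, `PostQueuedCompletionStatus`) are omitted since this is only an analogy:

```python
import os
import queue
import threading

completion_port = queue.Queue()   # stands in for the IOCP
results = []
results_lock = threading.Lock()

def worker():
    # Every worker blocks on the same queue, like threads parked in
    # GetQueuedCompletionStatus; whichever is free picks up the next packet.
    while True:
        item = completion_port.get()
        if item is None:          # sentinel: shut this worker down
            return
        with results_lock:
            results.append(item * item)

num_threads = os.cpu_count() or 1   # one thread per CPU, per the rule above
threads = [threading.Thread(target=worker) for _ in range(num_threads)]
for t in threads:
    t.start()

for i in range(10):                 # post 10 "completion packets"
    completion_port.put(i)
for _ in threads:
    completion_port.put(None)       # one shutdown sentinel per worker
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```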
So what's the typical way to do locking in such a system? Waiting for a lock seems bad; and you can't assume that no other callbacks may run while you are running. What synchronization primitives are typically used?
When I've done it (admittedly 10 years ago) we just used critical sections, since we weren't blocking for long (mostly memory management). I'm not sure if that's a best practice or not. The IOCP will actually let you block, then it will release another thread. So if you know you're going to block, you should create more threads than you have CPUs.

Here's the relevant paragraph from the IOCP link you posted above:

"The system also allows a thread waiting in GetQueuedCompletionStatus to process a completion packet if another running thread associated with the same I/O completion port enters a wait state for other reasons, for example the SuspendThread function. When the thread in the wait state begins running again, there may be a brief period when the number of active threads exceeds the concurrency value. However, the system quickly reduces this number by not allowing any new active threads until the number of active threads falls below the concurrency value. This is one reason to have your application create more threads in its thread pool than the concurrency value. Thread pool management is beyond the scope of this topic, but a good rule of thumb is to have a minimum of twice as many threads in the thread pool as there are processors on the system. For additional information about thread pooling, see Thread Pools."
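The two practices above, hold locks only briefly (critical-section style) and size the pool at twice the processor count so blocked threads don't starve it, can be sketched together. This is a hedged illustration, not a Windows implementation; `threading.Lock` plays the role of a critical section:

```python
import os
import queue
import threading

work = queue.Queue()
counter = 0
counter_lock = threading.Lock()   # analogue of a critical section

def worker():
    global counter
    while True:
        item = work.get()
        if item is None:
            return
        # Hold the lock only briefly, as with a critical section;
        # doing long blocking work while holding it would stall the pool.
        with counter_lock:
            counter += item

# Rule of thumb from the quoted MSDN paragraph: at least twice as many
# threads as processors, so work continues even while some threads block.
pool_size = 2 * (os.cpu_count() or 1)
threads = [threading.Thread(target=worker) for _ in range(pool_size)]
for t in threads:
    t.start()
for _ in range(100):
    work.put(1)
for _ in threads:
    work.put(None)                # one shutdown sentinel per worker
for t in threads:
    t.join()
print(counter)  # 100
```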