[Marc writes]
The locked section may not be leading in the right direction, but it surely helps in situations where you cannot otherwise enforce usage of an object-specific lock, e.g. for built-in file objects (some APIs insist on getting the real thing, not a thread-safe wrapper).
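To make Marc's point concrete, here is a minimal sketch of the wrapper approach that falls short. `LockedFile` is a hypothetical name invented for illustration; the problem is that any C-level API which type-checks for a real file object will reject such a wrapper, which is exactly the case a locked section would cover.

```python
import io
import threading

class LockedFile:
    """Hypothetical thread-safe wrapper around a file-like object.

    Works fine for Python callers, but an extension that insists on
    a genuine file object (e.g. via a C-level type check) will not
    accept an instance of this class.
    """
    def __init__(self, fileobj):
        self._f = fileobj
        self._lock = threading.Lock()

    def write(self, data):
        with self._lock:
            return self._f.write(data)

    def read(self, size=-1):
        with self._lock:
            return self._f.read(size)

# Python code can use the wrapper transparently:
f = LockedFile(io.StringIO())
f.write("hello")
```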
Really, all this boils down to is that you want a Python-ish critical section - i.e., a light-weight lock. This presumably would be desirable if it could be shown that Python locks are indeed "heavy" - I know that from the C POV they may be considered as such, but I haven't seen many complaints about lock speed from Python.

So in an attempt to get _some_ evidence, I wrote a test program that used the Queue module to append 10000 integers, then remove them all. I then hacked the Queue module to remove all locking and ran the same test. The results were 2.4 seconds for the non-locking version vs 3.8 for the standard version. Without time (or really inclination <wink>) to take this further, it _does_ appear a native Python "critical section" could indeed save a few milliseconds for a few real-world apps.

So if we ignore the implementation details Marc started spelling out, does the idea of a Python "critical section" appeal? It could simply be a built-in way of saying "no other _Python_ threads should run" (and, of course, "allow them again"). The semantics could simply ensure the integrity of the Python program - it need say nothing about the Python internal "state" as such.

Mark.
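A rough re-creation of the experiment described above, for anyone who wants to repeat it: the standard Queue does its work under a lock, while a plain deque stands in for the "hacked" lock-free version. The absolute numbers in the post came from Mark's machine; only the relative cost of the locking is the point here.

```python
import queue
import time
from collections import deque

N = 10000

def timed(fn):
    """Return the wall-clock time taken by fn()."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def with_locking():
    # The standard Queue acquires a lock around every put/get.
    q = queue.Queue()
    for i in range(N):
        q.put(i)
    for i in range(N):
        q.get()

def without_locking():
    # A bare deque plays the role of the Queue with locking removed.
    q = deque()
    for i in range(N):
        q.append(i)
    for i in range(N):
        q.popleft()

locked = timed(with_locking)
unlocked = timed(without_locking)
print(f"locked: {locked:.3f}s  lock-free: {unlocked:.3f}s")
```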
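One possible spelling of the proposed primitive, sketched as a context manager. Neither `critical_section` nor the "no other Python threads should run" behaviour exists; a single process-wide Lock merely emulates the semantics among cooperating threads, which is enough to show the intended usage.

```python
import threading

# A single process-wide lock stands in for the proposed built-in;
# it only excludes threads that also use critical_section, unlike
# the real proposal, which would stop all Python threads.
_critical = threading.Lock()

class critical_section:
    """Hypothetical critical-section primitive (emulated with a Lock)."""
    def __enter__(self):
        _critical.acquire()

    def __exit__(self, *exc_info):
        _critical.release()

counter = 0

def bump():
    global counter
    with critical_section():        # "no other (cooperating) Python threads run"
        counter += 1
                                    # ...and here they are "allowed again"

threads = [threading.Thread(target=bump) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The program-integrity semantics Mark describes fit this shape well: the block guarantees the Python-level invariant (here, a consistent counter) without saying anything about interpreter internals.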