
[Phillip J. Eby]
Well, I'm showing my age here, but in the good ol' days of the 8086 processor, I recall it frequently being used to describe a block of assembly code which ran with interrupts disabled - ensuring that no task switching would occur.
According to Wikipedia's current article on "critical section" (which is pretty good!), that may still be common usage for kernel-level programmers. "The rest of us" don't get to run privileged instructions anymore, and in Wikipedia terms I'm talking about what they call "application level" critical sections:
http://en.wikipedia.org/wiki/Critical_section
A little Googling confirms that almost everyone has the application-level sense in mind these days.
Obviously I haven't been doing a lot of threaded programming *since* those days, except in Python. :)
And for all the whining about it, threaded programming in Python is both much easier than elsewhere, _and_ still freaking hard ;-)
The common meaning is:
a section of code such that, once a thread enters it, all other threads are blocked from entering the section for the duration
That doesn't seem like a very useful definition, since it describes any piece of code that's protected by a statically-determined mutex. But you clearly have more experience in this than I.
As I tried to explain the first time, a mutex is a common implementation technique, but even saying "mutex" doesn't define the semantics (see the original msg for one distinction between cross-thread and cross-process exclusion). There are ways to implement critical sections other than via a mutex. The "common meaning" I gave above tries to describe the semantics (visible behavior), not an implementation. A Python-level lock is an obvious and straightforward way to implement those semantics.
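To make that concrete, here's a toy sketch of those semantics using a plain threading.Lock (the function and variable names below are just made up for illustration, not anything from the earlier messages):

    import threading

    _section_lock = threading.Lock()   # statically-determined lock guarding the section
    shared_total = 0

    def critical_increment():
        # Once one thread is inside the `with` block, every other thread
        # calling this function blocks at acquire() until that thread leaves.
        global shared_total
        with _section_lock:
            shared_total += 1          # only one thread at a time runs this line

    def worker():
        for _ in range(100000):
            critical_increment()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(shared_total)                # always 400000 with the lock in place

Swap in a multiprocessing.Lock and you get the cross-process flavor instead; the visible "one thread (or process) at a time" behavior is the same, which is why the description above talks about behavior rather than about the lock.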