Is there a more efficient threading lock?
Chris Angelico
rosuav at gmail.com
Mon Feb 27 01:37:32 EST 2023
On Mon, 27 Feb 2023 at 17:28, Michael Speer <knomenet at gmail.com> wrote:
>
> https://github.com/python/cpython/commit/4958f5d69dd2bf86866c43491caf72f774ddec97
>
> it's a quirk of implementation. the scheduler currently only checks if it
> needs to release the gil after the POP_JUMP_IF_FALSE, POP_JUMP_IF_TRUE,
> JUMP_ABSOLUTE, CALL_METHOD, CALL_FUNCTION, CALL_FUNCTION_KW, and
> CALL_FUNCTION_EX opcodes.
>
Oh now that is VERY interesting. It's a quirk of implementation, yes,
but there's a reason for it: a bug being solved. The underlying
guarantee about __exit__ should be considered defined behaviour,
meaning the precise quirk might not be relevant even though the
bug has to stay fixed in all future versions. But I'd also note here
that, if it can be absolutely 100% guaranteed that the GIL will be
released and signals checked at a reasonable interval, there's no
particular reason to insist that signals are checked after every single
Python bytecode. (See the removed comment about empty loops, which
would otherwise have been a serious issue and is probably why the
backward-jump rule exists.)
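
To make the empty-loop concern concrete, here's a minimal sketch (mine,
not from the commit): a loop whose body performs no calls still has to
let Ctrl-C through, which is exactly why the eval-breaker check happens
on backward jumps and not only after call opcodes.

    # Run this and press Ctrl-C: the loop compiles to little more than
    # a backward jump, so if signals were never checked on backward
    # jumps, the KeyboardInterrupt could never be delivered.
    try:
        while True:
            pass
    except KeyboardInterrupt:
        print("interrupted cleanly")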
So it wouldn't be too hard for a future release of Python to mandate
atomicity of certain specific operations. Obviously it'd require
buy-in from other implementations, but it would be rather convenient
if, subject to some very tight rules like "only when adding integers
onto core data types" etc., a simple statement like "x.y += 1" could
actually be guaranteed to take place atomically.
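
For context (my illustration, not part of the original discussion), the
reason "x.y += 1" isn't atomic today is that it compiles to several
opcodes (a load, an add, and a store), any of which can in principle be
a switch point. You can see this with the dis module; exact opcode
names vary between CPython versions:

    import dis

    class Box:
        def __init__(self):
            self.y = 0

    def bump(x):
        # Compiles to roughly: load x, load attribute y, load 1,
        # in-place add, store attribute y - several steps, not one.
        x.y += 1

    dis.dis(bump)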
Though it's still probably not as useful as you might hope. In C, if I
could do "int id = counter++;" atomically, it would guarantee me a new
ID that no other thread could ever have. But in Python, that increment
doesn't give you the result back, so all it's really useful for is
keeping statistics on how many operations were done. Still, that in
itself could be of value in quite a few situations.
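
To make that concrete, here's a minimal sketch (my code, not from the
thread) of what you still need in Python to hand out unique IDs, since
even a hypothetically atomic "self._counter += 1" wouldn't return the
new value the way C's "counter++" does:

    import threading

    class IdAllocator:
        def __init__(self):
            self._lock = threading.Lock()
            self._counter = 0

        def next_id(self):
            # The lock makes the increment-and-return sequence one
            # indivisible step, so no two threads can get the same ID.
            with self._lock:
                self._counter += 1
                return self._counter

    alloc = IdAllocator()
    print(alloc.next_id())  # 1
    print(alloc.next_id())  # 2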
In any case, though, this isn't something to depend upon at the moment.
ChrisA