[Python-ideas] New PEP 550: Execution Context

Nick Coghlan ncoghlan at gmail.com
Mon Aug 14 04:10:01 EDT 2017


On 14 August 2017 at 02:33, Yury Selivanov <yselivanov.ml at gmail.com> wrote:
> On Sat, Aug 12, 2017 at 10:09 PM, Nick Coghlan <ncoghlan at gmail.com> wrote:
>> That similarity makes me wonder whether the "isolated or not"
>> behaviour could be moved from the object being executed and directly
>> into the key/value pairs themselves based on whether or not the values
>> were mutable, as that's the way function calls work: if the argument
>> is immutable, the callee *can't* change it, while if it's mutable, the
>> callee can mutate it, but it still can't rebind it to refer to a
>> different object.
>
> I'm afraid that if we design EC context to behave differently for
> mutable/immutable values, it will be an even harder thing for end
> users to understand.

There's nothing to design, as storing a list (or other mutable object)
in an EC will necessarily be the same as storing one in a tuple: the
fact you acquired the reference via an immutable container will do
*nothing* to keep you from mutating the referenced object.

And for use cases like web requests, that's exactly the behaviour we
want - changing the active web request is an EC level operation, but
making changes to the state of the currently active request (e.g. in a
middleware processor) won't require anything special.
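A tiny illustration of that point with plain tuples (no EC machinery
involved, just ordinary Python semantics):

    request_state = {"user": None}
    snapshot = (request_state,)       # immutable container, mutable contents

    snapshot[0]["user"] = "alice"     # mutating the referenced dict works fine
    # snapshot[0] = {"user": "bob"}   # but rebinding the slot raises TypeError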

[I'm going to snip the rest of the post, as it sounds pretty
reasonable to me, and my questions about the interaction between
sys.set_execution_context() and ec_back go away if
sys.set_execution_context() doesn't exist as you're currently
proposing]

> (gi_isolated_execution_context flag is still here for contextmanager).

This hidden flag variable on the types managing suspendable frames is
still the piece of the proposal that strikes me as being the most
potentially problematic, as it at least doubles the number of flows of
control that need to be tested.

Essentially what we're aiming to model is:

1. Performing operations in a way that modifies the active execution context
2. Performing them in a way that saves & restores the execution context

For synchronous calls, this distinction is straightforward:

- plain calls may alter the active execution context via state mutation
- use ec.run() to save/restore the execution context around the operation
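Sketching those two flows with the API names from the current draft
(none of this exists yet, and the .get() accessor is my assumption, so
this is purely illustrative):

    import sys

    request_id = sys.new_context_item()

    def handle_request(value):
        request_id.set(value)    # mutates the active execution context

    # Flow 1: plain call - the state change is visible to the caller afterwards
    handle_request(1)

    # Flow 2: ec.run() - the call runs against a saved context, so under the
    # save/restore semantics the change doesn't leak back to the caller,
    # and request_id.get() would still report 1 here
    ec = sys.get_active_context()
    ec.run(handle_request, 2)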

(The ec_back idea means we may also need an "ec.run()" variant that
sets ec_back appropriately before making the call - for example,
"ec.run()" could set ec_back, while a separate "ec.run_isolated()"
could skip setting it. Alternatively, full isolation could be the
default, and "ec.run_shared()" would set ec_back. If we go with the
latter option, then "ec_shared" might be a better attribute name than
"ec_back")

A function can be marked as always having its own private context
using a decorator like so:

    import functools
    import sys

    def private_context(f):
        @functools.wraps(f)
        def wrapper(*args, **kwds):
            ec = sys.get_active_context()
            return ec.run(f, *args, **kwds)
        return wrapper
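Used with the hypothetical request_id item from the earlier sketch, the
decorator keeps the mutation confined to the decorated call:

    @private_context
    def set_request_id(value):
        request_id.set(value)

    set_request_id(42)   # the change stays inside the wrapped call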

For next/send/throw and anext/asend/athrow, however, the proposal is
to bake the save/restore into the *target objects*, rather than having
to request it explicitly in the way those objects get called.

This means that unless we apply some implicit decorator magic to the
affected slot definitions, there's now going to be a major behavioural
difference between:

    some_state = sys.new_context_item()

    def local_state_changer(x):
        for i in range(x):
            some_state.set(i)
            yield i

    class ParentStateChanger:
        def __init__(self, x):
            self._itr = iter(range(x))
        def __iter__(self):
            return self
        def __next__(self):
            x = next(self._itr)
            some_state.set(x)
            return x

The latter would need the equivalent of `@private_context` on the
`__next__` method definition to get the behaviour that generators
would have by default (and similarly for __anext__ and asynchronous
generators).
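A sketch of what that opt-in might look like, reusing the hypothetical
private_context decorator from above (it passes self through *args, so
it works on methods as written):

    class ParentStateChanger:
        def __init__(self, x):
            self._itr = iter(range(x))
        def __iter__(self):
            return self
        @private_context   # explicit opt-in to what generators would get implicitly
        def __next__(self):
            x = next(self._itr)
            some_state.set(x)
            return x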

I haven't fully thought through the implications of this problem yet,
but some initial unordered thoughts:

- implicit method decorators are always suspicious, but skipping them
in this case feels like we'd be setting up developers of custom
iterators for really subtle context management bugs
- contextlib's own helper classes would be fine, since they define
__enter__ & __exit__, which wouldn't be affected by this
- for lru_cache, we rely on `__wrapped__` to get access to the
underlying function without caching applied. Might it make sense to do
something similar for these implicitly context-restoring methods? If
so, should we use a dedicated name so that additional wrapper layers
don't overwrite it?
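For reference, the existing lru_cache precedent (this part is current,
released functools behaviour): the wrapper exposes the undecorated
function via __wrapped__, so callers can deliberately bypass the
caching layer.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def square(n):
        return n * n

    square(3)              # goes through the cache
    square.__wrapped__(3)  # calls the underlying function, no caching applied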

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

