On 23 Jun 2022, at 08:27, Stephen J. Turnbull firstname.lastname@example.org wrote:
Interesting idea that a ref does not auto-evaluate in all cases. I was wondering what the compiler/runtime can do to avoid the cost of checking for an evaluation.
I think the main thing to do is to put the burden on the deferred object, since it has to be something special in any case.
Now consider a = b + 0. b.__add__ will be invoked in the usual way. Only if b is a deferred will evaluation take place.
But the act of checking if b is deferred is a cost I am concerned about.
That's true in David's proposed semantics, where the runtime does that check. I'm suggesting modified semantics where a deferred can be a proxy object, whose normal reaction to *any* operation (possibly excepting name binding) is:

1. Check for a memoized value; if none is found, evaluate its stored code and memoize the result.
2. Perform the action on the memoized value.
That means that in the statement "a = b + 0", if b is an int, int.__add__(b, 0) gets called with no burden to code that uses no deferreds.
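To make the two-step proxy behavior concrete, here is a minimal sketch (not a proposed implementation; the class name, the `_force` helper, and the choice of forwarded methods are all illustrative): the stored thunk runs at most once, on first use, and special methods forward to the memoized value.

```python
# Minimal memoizing-proxy sketch. A real proxy would forward every
# special method; __add__/__radd__ and __getattr__ stand in for the rest.

_MISSING = object()  # sentinel: "not yet evaluated"

class Deferred:
    def __init__(self, thunk):
        self._thunk = thunk
        self._value = _MISSING

    def _force(self):
        # Step 1: check for a memoized value; evaluate and memoize if absent.
        if self._value is _MISSING:
            self._value = self._thunk()
        return self._value

    # Step 2: perform the action on the memoized value.
    def __add__(self, other):
        return self._force() + other

    def __radd__(self, other):
        return other + self._force()

    def __getattr__(self, name):
        return getattr(self._force(), name)

calls = []
b = Deferred(lambda: calls.append("eval") or 41)
print(b + 1)       # forces evaluation: 42
print(1 + b)       # reuses the memoized value: 42
print(len(calls))  # the thunk ran only once: 1
```

Note that in `a = b + 0` with a plain `int` b, none of this machinery is touched: `int.__add__` runs as usual, which is exactly the "no burden on deferred-free code" point.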
Then the question is, why do we need syntax? Well, there are the PEP 671 rationales for deferring function argument defaults. There is also the question of whether name binding should trigger evaluation. If so,
a = defer b
would (somewhat similarly to iter) defer a normal b (this could be optimized by checking for non-mutables) and "re-defer" a deferred b (i.e., just bind it to a without evaluation). The same consideration would apply to "as" and function arguments (possibly with different resolutions!). I'm probably missing some name-binding syntax here, but it should be clear where this is going.
I would think that it's not that hard to add the expected check into Python's ceval.c and benchmark the impact of the checks. This would not need a full implementation of the deferred mechanism.
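This is not the suggested ceval.c patch, just a rough pure-Python stand-in for the same experiment: timing an addition with and without an extra isinstance check, to get a feel for the per-operation cost under discussion (the `Deferred` marker type and helper names are made up for the benchmark).

```python
import timeit

class Deferred:  # hypothetical marker type the check looks for
    pass

def add_plain(a, b):
    return a + b

def add_checked(a, b):
    # The extra per-operation check whose cost we want to estimate.
    if isinstance(a, Deferred):
        a = a.force()
    return a + b

plain = timeit.timeit("f(1, 2)", globals={"f": add_plain}, number=100_000)
checked = timeit.timeit("f(1, 2)", globals={"f": add_checked}, number=100_000)
print(f"plain:   {plain:.4f}s")
print(f"checked: {checked:.4f}s")
```

A real measurement would of course have to be done at the bytecode-dispatch level, where the relative overhead of a type check is much smaller than in interpreted Python.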
+1 Although I'm not going to spend next weekend on it. ;-)