On Mon, May 2, 2016 at 7:25 AM, Robert van Geel <robert@bign.nl> wrote:
I'm receiving digests and don't seem to have each individual mail, so here's a digested response:

One side of this that hasn't been discussed is the possible optimization. I can imagine that if you have a lot of object-property writes and reads, this could actually make certain use cases a lot faster, such as this one, saving 6 opcodes:

The continued suggestion that this would be a considerable performance boost worries me. Compared to caching the expression into a local variable, the speed-up will be immeasurable. Staring at generated bytecode and counting instructions is not a good way to think about Python performance; unlike the model you may have of the hardware underneath, Python instructions, being object-oriented, are not roughly equivalent in cost. The proposal's savings on local-variable operations are minuscule.
 
def __init__(self, left, top, width, height, content):
    self.left = left
    self.top = top
    self.width = width
    self.height = height
    self.content = content.upper()
    self.decideColors()
    self.draw()

versus:

def __init__(self, left, top, width, height, content):
    with self:
        .left = left
        .top = top
        .width = width
        .height = height
        .content = content.upper()
        .decideColors()
        .draw()
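For comparison, the conventional idiom of caching a dotted expression into a short local name needs no new syntax at all; a minimal sketch, with made-up Canvas/Panel classes purely for illustration:

```python
class Canvas:
    # Hypothetical class, only to make the example runnable.
    def __init__(self):
        self.calls = []

    def clear(self):
        self.calls.append("clear")

    def draw_grid(self):
        self.calls.append("grid")


class Panel:
    def __init__(self):
        self.canvas = Canvas()

    def refresh(self):
        c = self.canvas   # one attribute lookup, cached in a local
        c.clear()         # subsequent uses are cheap local-variable reads
        c.draw_grid()


p = Panel()
p.refresh()
print(p.canvas.calls)
```

This gets essentially the same savings the proposal targets, at the cost of one extra short name.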

The suggestion that you could accomplish this with a (peephole) optimizer does not seem quite correct to me:

x = myobject.b()
...
z = myobject.c()

is not guaranteed to see the same object bound to myobject in both statements, although it would take some acrobatics to rebind it in a way that an optimizer could not detect.
Think of exec(), or even another thread intervening in a generator function; not something I would do, but who knows.
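The rebinding scenario can be sketched like this (hypothetical Widget class; the point is that any call between the two attribute accesses may rebind a module-level name, which a naive bytecode-level optimizer cannot rule out):

```python
class Widget:
    def ping(self):
        return self   # return the receiver so identity can be compared


myobject = Widget()


def swap():
    # Rebinds the module-level name between the two lookups in demo();
    # a peephole pass looking only at demo()'s bytecode cannot exclude this.
    global myobject
    myobject = Widget()


def demo():
    x = myobject.ping()   # global lookup #1
    swap()
    z = myobject.ping()   # global lookup #2: a different object now
    return x, z


x, z = demo()
print(x is z)
```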

If myobject is a local variable it's actually very simple to know just from looking at the bytecodes in between.
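What "looking at the bytecodes" means can be sketched with the standard dis module: for a local variable, every rebinding appears as an explicit store instruction, so its absence between the two loads is easy to verify. An illustrative sketch (names are made up):

```python
import dis
import io

def use_twice(myobject):
    x = myobject.b()
    y = 2 + 2            # nothing here rebinds the local 'myobject'
    z = myobject.c()
    return x, y, z

# Capture the disassembly as text so it can be inspected programmatically.
buf = io.StringIO()
dis.dis(use_twice, file=buf)
listing = buf.getvalue()
print(listing)
```

The listing shows only fast-local loads of myobject and no store to it in between, which is exactly the property an optimizer would check.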
 
The advantage of the construct is that the dotted target is guaranteed to refer to the same object in memory for the duration of the block, with its reference count incremented at the start of the statement and decremented at the dedent.
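In CPython today, the same lifetime guarantee falls out of simply binding the object to a name: the binding owns a reference until it is removed. A sketch using sys.getrefcount, which is a CPython implementation detail shown only to illustrate (the reported count is one higher than the number of bindings because the call's own argument adds a temporary reference):

```python
import sys

obj = object()
before = sys.getrefcount(obj)

alias = obj          # an extra binding: the reference count goes up by one
assert sys.getrefcount(obj) == before + 1

del alias            # binding removed: the count drops back
after = sys.getrefcount(obj)
print(after == before)
```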

I think you're barking up the wrong tree.

If you want this construct, you have to explain how it makes code clearer, more readable, more writable, and less error-prone.
 
Django would break if 'using' became a keyword, but the choice of keyword is arbitrary. Maybe an extended use of the 'with' keyword would be elegant: when the object has no __enter__ and/or __exit__ method, the statement could still run for the purpose of this mechanism. The disadvantage is that you could not use the with construct for this purpose alone without also triggering the __enter__ method.
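The behavior this would relax: today, an object without the context-manager protocol cannot appear in a with statement at all. A sketch (the exact exception type has varied across CPython versions, hence the broad except clause):

```python
class Plain:
    # No __enter__/__exit__: not a context manager.
    pass

try:
    with Plain():
        pass
except (AttributeError, TypeError) as exc:
    # Older CPythons raise AttributeError, newer ones TypeError.
    outcome = type(exc).__name__
else:
    outcome = "no error"

print(outcome)
```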

Mixing this into 'with' would be terrible, because the two constructs have nothing in common.

The choice of keyword is not entirely arbitrary; if we can't come up with a decent keyword, the feature is dead. But the introduction would have to include a `from __future__ import <something>` statement anyway for at least one, maybe two release cycles (we did this with the `with` statement itself, around 2.4/2.5). So Django will have plenty of time to change.

--
--Guido van Rossum (python.org/~guido)