[Python-ideas] New syntax for 'dynamic' attribute access

Ben North ben at redfrontdoor.org
Sun Feb 11 11:07:18 CET 2007

Thanks for the responses on this.

In general:

Guido van Rossum:
 > I think you should submit this to the
 > PEP editor and argue on Python-dev for its inclusion in Python 2.6 --
 > there's no benefit that I see of waiting until 3.0.

Greg Falcon:
 > Wow!  I have to say this is a compelling idea.

Josiah Carlson:
 > My only concern with your proposed change is your draft implementation.

and on the syntax in particular:

Guido van Rossum:
 > I've thought of the same syntax.

Greg Falcon:
 > The syntax is a bit foreign looking, but [...] I feel like I could
 > learn to like it anyway.

Mostly positive, then, as far as the general idea and the syntax goes.

On the two-argument form, Greg Falcon wrote:
 > >         x = y.('foo_%d' % n, None)
 > This is the one bit I really don't like.  y.('foobar') is arguably a
 > natural extension to y.foobar, but y.('foobar', None) isn't analogous
 > to any current syntax, and the multiple arguments make it look even
 > more dangerously like a call.
 > In Python, you already have to be explicit when you're worried if the
 > thing you're accessing might not be there.

I was definitely in two minds about whether the extension to the
two-argument form was a win or a loss.  It does increase the power of
the new syntax, but it is certainly rather clumsy-looking.  I'd be happy
to prepare a patch with just the one-argument version if the consensus
is that the balance is that way.
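For reference, both proposed forms can already be spelled with the built-in getattr, which is exactly what the new syntax would replace. A minimal sketch (the class and attribute names here are illustrative, not from the proposal):

```python
class Config:
    foo_0 = "first"
    foo_1 = "second"

obj = Config()
n = 1

# Proposed one-argument form  x = obj.('foo_%d' % n)  is today:
x = getattr(obj, 'foo_%d' % n)
assert x == "second"

# Proposed two-argument form  x = obj.('missing', None)  maps onto
# getattr's existing optional default argument:
y = getattr(obj, 'missing', None)  # attribute absent, default returned
assert y is None
```

The two-argument form thus adds no new capability over getattr's default argument, which is part of why its clumsier look is harder to justify.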

Josiah Carlson:
 > My only concern with your proposed change is your draft
 > implementation. [...]
 > Specifically, your changes to ceval.c and the compiler may have been
 > easier to implement, but it may negatively affect general Python
 > performance.  Have you run a recent pystone before and after the changes?

I hadn't done so, no, but have now tried some tests.  In fact, there is
some evidence of a slight negative effect on the general performance.
On my laptop, repeated pystone runs with 100,000 loops are quite
variable but there might be a performance penalty of around 1% with the
new code.  I'm a bit puzzled by this because of course the new opcodes
are never invoked in the pystone code --- does a switch statement become
slower the more cases there are?  (I tried grouping the cases with the
new opcodes all together at the end of the switch, but the noisiness of
the pystone results was such that I couldn't tell whether this helped.)
Or is it just that the core bytecode-interpretation loop is bigger so
has worse processor cache performance?

On the other hand, getting/setting dynamic attributes seems to be about
40--45% faster with the new syntax, probably because you avoid the
lookup of 'getattr' and the overhead of the function call.
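The overhead being avoided can be seen with a quick timeit comparison of getattr against direct attribute access; a rough sketch (the class, attribute name, and iteration count are arbitrary choices for illustration, and the absolute numbers will vary by machine):

```python
import timeit

class C:
    attr_0 = 42

obj = C()

# Dynamic access via getattr: a global-name lookup for 'getattr',
# a string formatting step, and a function call per access.
t_getattr = timeit.timeit("getattr(obj, 'attr_%d' % 0)",
                          globals=globals(), number=100_000)

# Direct (static) attribute access for comparison: a single opcode.
t_direct = timeit.timeit("obj.attr_0",
                         globals=globals(), number=100_000)

print("getattr: %.4fs  direct: %.4fs" % (t_getattr, t_direct))
```

A dedicated opcode for dynamic access would sit between these two: it keeps the string formatting but drops the name lookup and call overhead, which is consistent with the 40--45% figure above.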

Anyway, I'll experiment with the other implementation suggestions made
by Josiah, and in the meantime summarise the discussion so far to
python-dev as suggested by Guido.
