Steven,
Sorry for taking a few days to get back to you. Here is the exact code I ran (note the module-level g = 0, which your snippet needed; there is a short explanation just after the code):
def test():
    n = 0
    def inner():
        # A mix of global, nonlocal, and plain local assignments, so any
        # per-assignment overhead should show up here.
        global g
        nonlocal n
        n = g
        x = n
        g = x
        y = x
        z = y
        return x
    for i in range(100):
        a = inner()

from timeit import Timer
g = 0  # must be set in __main__ itself; the g=0 in setup below does not land here (see the note after this snippet)
t = Timer("test()", setup="from __main__ import test; g=0")
res = t.repeat(repeat=20)
from statistics import mean, stdev
print(f"ALl: {res}")
print(f"mean {mean(res)}, std {stdev(res)}")  

For my branch the results were:
All: [8.909581271989737, 8.897892987995874, 9.055693186994176, 9.103679533989634, 9.0795843389933, 9.12056165598915, 9.125767157005612, 9.117257817997597, 9.113885553990258, 9.180963805003557, 9.239156291994732, 9.318854127981467, 9.296847557998262, 9.313092978001805, 9.284125670004869, 9.259817042999202, 9.244616173004033, 9.271513198997127, 9.335984965000534, 9.258596728992416]
mean 9.176373602246167, std 0.12835175852148933

The results for mainline python 3.7 are:
All: [9.005807315988932, 9.081591005000519, 9.12138073798269, 9.174804927984951, 9.233709035004722, 9.267144601995824, 9.323436667007627, 9.314979821007, 9.265707976999693, 9.24289796501398, 9.236994076985866, 9.310381392017007, 9.206289929017657, 9.211337374988943, 9.206687778991181, 9.215082932991209, 9.221178130013868, 9.213595701992745, 9.206646608014125, 9.224334346014075]
mean 9.214199416250631, std 0.07610134120369169

The mean and std vary a little each time the program is run, which I suspect comes down to which core the job gets scheduled on, but these results are typical. I could launch each job a number of times to build meta distributions of the runtimes, but that seemed better left to a proper benchmarking tool. As far as I can tell the two sets of numbers are within statistical uncertainty of each other; a rough check using the summary numbers above follows.
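
This is only a back-of-the-envelope comparison (a Welch-style standard error for the difference of the two means, treating the 20 repeats as independent samples), not a proper significance test:

from math import sqrt

n = 20
mean_branch, sd_branch = 9.176373602246167, 0.12835175852148933
mean_main, sd_main = 9.214199416250631, 0.07610134120369169

diff = mean_main - mean_branch                # ~0.038 s; the branch is nominally faster here
se = sqrt(sd_branch**2 / n + sd_main**2 / n)  # ~0.033 s
print(f"diff {diff:.3f}s, se {se:.3f}s, diff/se {diff/se:.2f}")

The means come out about 1.1 standard errors apart, a relative difference of roughly 0.4%, so I don't think these runs can distinguish the two builds. pyperf, which pyperformance is built on, handles CPU pinning and warmup properly and would give a more defensible answer.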

Nate

On Thu, Jun 27, 2019 at 3:11 PM Steven D'Aprano <steve@pearwood.info> wrote:
On Thu, Jun 27, 2019 at 12:23:24PM -0400, nate lust wrote:

> If you have a
> benchmark you prefer I would be happy to run it against my changes and
> mainline python 3.7 to see how they compare.

Ultimately it will probably need to run against this:

https://github.com/python/pyperformance

for 3.8 and 3.9 alpha but we can start with some quick and dirty tests
using timeit. Let's say:


def test():
    n = 0
    def inner():
        global g
        nonlocal n
        n = g
        x = n
        g = x
        y = x
        z = y
        return x
    for i in range(100):
        a = inner()

from timeit import Timer
t = Timer("test()", setup="from __main__ import test; g=0")
t.repeat(repeat=5)


I'm not speaking officially, but I would say that if this slows down
regular assignments by more than 5%, the idea is dead in the water; if
it slows it down by less than 1%, the performance objection is
satisfied; between 1 and 5 means we get to argue cost versus benefit.

(The above is just my opinion. Others may disagree.)


--
Steven


--
Nate Lust, PhD.
Astrophysics Dept.
Princeton University