Steven,
Sorry about taking a few days to get back to you. Here is the exact code I ran:
def test():
    n = 0
    def inner():
        global g
        nonlocal n
        n = g
        x = n
        g = x
        y = x
        z = y
        return x
    for i in range(100):
        a = inner()

from timeit import Timer
g = 0
t = Timer("test()", setup="from __main__ import test; g=0")
res = t.repeat(repeat=20)
from statistics import mean, stdev
print(f"ALl: {res}")
print(f"mean {mean(res)}, std {stdev(res)}")
For my branch, the results were:
ALl: [8.909581271989737, 8.897892987995874, 9.055693186994176, 9.103679533989634, 9.0795843389933, 9.12056165598915, 9.125767157005612, 9.117257817997597, 9.113885553990258, 9.180963805003557, 9.239156291994732, 9.318854127981467, 9.296847557998262, 9.313092978001805, 9.284125670004869, 9.259817042999202, 9.244616173004033, 9.271513198997127, 9.335984965000534, 9.258596728992416]
mean 9.176373602246167, std 0.12835175852148933
The results for mainline Python 3.7 were:
ALl: [9.005807315988932, 9.081591005000519, 9.12138073798269, 9.174804927984951, 9.233709035004722, 9.267144601995824, 9.323436667007627, 9.314979821007, 9.265707976999693, 9.24289796501398, 9.236994076985866, 9.310381392017007, 9.206289929017657, 9.211337374988943, 9.206687778991181, 9.215082932991209, 9.221178130013868, 9.213595701992745, 9.206646608014125, 9.224334346014075]
mean 9.214199416250631, std 0.07610134120369169
The mean and std vary somewhat from one run of the program to the next, which I suspect is due to which core the job happens to be scheduled on, but these results are typical. I could launch each job a number of times to build up meta distributions of runtimes, but I felt that was better left to proper benchmarking tools. As far as I can tell, the two sets of numbers are within statistical uncertainty of each other.
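To make that last point a bit more concrete, here is a rough sketch (not part of the benchmark itself) that compares the gap between the two means against a Welch-style standard error of that gap, using the numbers printed above:

from statistics import mean, stdev
from math import sqrt

# Timings copied from the output above: 20 repeats each for my branch and mainline 3.7.
branch = [8.909581271989737, 8.897892987995874, 9.055693186994176, 9.103679533989634,
          9.0795843389933, 9.12056165598915, 9.125767157005612, 9.117257817997597,
          9.113885553990258, 9.180963805003557, 9.239156291994732, 9.318854127981467,
          9.296847557998262, 9.313092978001805, 9.284125670004869, 9.259817042999202,
          9.244616173004033, 9.271513198997127, 9.335984965000534, 9.258596728992416]
mainline = [9.005807315988932, 9.081591005000519, 9.12138073798269, 9.174804927984951,
            9.233709035004722, 9.267144601995824, 9.323436667007627, 9.314979821007,
            9.265707976999693, 9.24289796501398, 9.236994076985866, 9.310381392017007,
            9.206289929017657, 9.211337374988943, 9.206687778991181, 9.215082932991209,
            9.221178130013868, 9.213595701992745, 9.206646608014125, 9.224334346014075]

# Difference of the sample means, and the standard error of that difference
# (Welch-style: combine the per-sample variances of the means).
diff = mean(mainline) - mean(branch)
se = sqrt(stdev(branch) ** 2 / len(branch) + stdev(mainline) ** 2 / len(mainline))
print(f"difference of means: {diff:.4f} s, standard error of difference: {se:.4f} s")

A proper t-test would be the more careful way to do this, but even this quick check gives a direct read on whether the gap between the branches is any bigger than the run-to-run noise.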
Nate