[Python-Dev] GIL behaviour under Windows

Sturla Molden sturla at molden.no
Thu Oct 22 23:33:22 CEST 2009

Antoine Pitrou skrev:
> This number lacks the elapsed time. 61 switches in one second is probably
> enough, the same amount of switches in 10 or 20 seconds is too small (at least
> for threads needing good responsivity, e.g. I/O threads).
> Also, "fair" has to take into account the average latency and its relative
> stability, which is why I wrote ccbench.

Since I am a scientist and statistics interests me, let's do this 
properly :-) Here is a suggestion:

_Py_Ticker is a circular variable. Thus, it can be transformed to an 
angle measured in radians, using:

   a = 2 * pi * _Py_Ticker / _Py_CheckInterval

With simultaneous measurements of a, check interval count x, and time y 
(µs), we can fit the multiple regression:

   y = b0 + b1*cos(a) + b2*sin(a) + b3*x + err

using a least-squares solver. (Since a is measured directly, the model is 
linear in the coefficients b0..b3, so ordinary linear least squares 
suffices.) We can then extract all the statistics we need on interpreter 
latencies for "ticks" with and without periodic checks.
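As a sketch of how the fit could go, here is a NumPy example. The data, 
coefficients, and sample size are invented for illustration; real inputs 
would come from the GIL_Battle measurements described below.

```python
import numpy as np

# Synthetic measurements standing in for GIL_Battle output:
# angle a (radians), check-interval count x, elapsed time y (microseconds).
rng = np.random.default_rng(0)
n = 200
ticker = rng.integers(0, 100, n)                 # stand-in for _Py_Ticker
a = 2 * np.pi * ticker / 100.0                   # _Py_CheckInterval = 100
x = rng.integers(1, 50, n).astype(float)         # check-interval count

b_true = np.array([5.0, 1.5, -0.8, 2.0])         # b0, b1, b2, b3 (invented)
y = (b_true[0] + b_true[1] * np.cos(a) + b_true[2] * np.sin(a)
     + b_true[3] * x + rng.normal(0.0, 0.1, n))  # err ~ N(0, 0.1)

# The model y = b0 + b1*cos(a) + b2*sin(a) + b3*x is linear in the
# coefficients once a is known, so ordinary least squares fits it.
X = np.column_stack([np.ones(n), np.cos(a), np.sin(a), x])
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b_hat)  # should be close to b_true
```

The fitted b1 and b2 capture the periodic (check-related) component of the 
latency, while b3 estimates the mean cost per check interval.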

On a Python setup with many missed thread switches (pthreads, according 
to D. Beazley), we could just extend the model to take into account 
successful and unsuccessful check intervals:

   y = b0 + b1*cos(a) + b2*sin(a) + b3*x1 + b4*x2 + err

where x1 is the count of successful thread switches and x2 the count of 
missed thread switches. But at least on Windows we can use the simpler 
model.
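The extended model only adds a column to the design matrix. A brief sketch, 
again on invented synthetic data (x1 and x2 here are illustrative counts, 
not taken from any real trace):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
a = 2 * np.pi * rng.integers(0, 100, n) / 100.0  # ticker angle, as before
x1 = rng.integers(0, 40, n).astype(float)        # successful thread switches
x2 = rng.integers(0, 10, n).astype(float)        # missed thread switches

b_true = np.array([4.0, 1.0, -0.5, 1.8, 3.5])    # b0..b4 (invented)
X = np.column_stack([np.ones(n), np.cos(a), np.sin(a), x1, x2])
y = X @ b_true + rng.normal(0.0, 0.1, n)

# Same ordinary-least-squares fit, now with separate coefficients for
# successful (b3) and missed (b4) switches.
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Comparing the fitted b3 and b4 would quantify how much extra latency a 
missed switch costs relative to a successful one.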

The reason multiple regression is needed is that the record method of my 
GIL_Battle class is not called on every interpreter tick. I thus cannot 
measure each latency precisely, which I could have done with a direct 
hook into ceval.c. So statistics to the rescue. But on the bright side, 
this reduces the overhead of the profiler.

Would that help?

Sturla Molden
