Replying to myself again here, as nobody else said anything:
On Mon, Oct 16, 2017 at 5:42 PM, Koos Zevenhoven wrote:
Indeed. And some more on where the precision loss comes from:
When you measure time starting from a fixed point, like 1970, the timer has by now reached large values, around 10**9 seconds. Tiny fractions of a second are vanishingly small compared to a number like that.
You then need log2(10**9) ~ 30 bits of precision just to get one-second resolution in your timer. A double-precision (64-bit) floating point number has 53 bits of precision in the mantissa, so you are left with 23 bits of precision for fractions of a second. That means a resolution of 1 / 2**23 seconds, which is about 100 ns, well in line with the data that Victor provided (~100 ns + overhead = ~200 ns).
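The rough estimate above can be written out in Python (just an illustrative sketch of the same arithmetic, not code from the original discussion):

```python
import math

# A timer counting seconds since 1970 has reached roughly 10**9 seconds.
elapsed = 10**9

int_bits = math.ceil(math.log2(elapsed))  # ~30 bits for the integer part
frac_bits = 53 - int_bits                 # mantissa bits left for fractions
resolution = 1 / 2**frac_bits             # smallest representable fraction

print(int_bits, frac_bits, resolution)    # 30 23 ~1.19e-07 (about 100 ns)
```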
My calculation is indeed *approximately* correct, but the problem is that I made several decimal rounding errors along the way, which was unfortunate here. The exact expression for the resolution of time.time() today is:
>>> 1 / 2**(53 - math.ceil(math.log2(time.time())))
2.384185791015625e-07
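One can cross-check this against the actual float spacing: math.ulp (added in Python 3.9, so newer than this thread) gives the gap between a float and the next representable one, which is exactly the resolution in question:

```python
import math
import time

t = time.time()

# The spacing between t and the next representable double equals the
# resolution computed by the expression above (valid while t is between
# 2**30 and 2**31 seconds, i.e. roughly 2004 through 2038).
assert math.ulp(t) == 1 / 2**(53 - math.ceil(math.log2(t)))
print(math.ulp(t))  # 2.384185791015625e-07 today
```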
So this is in fact a little over 238 ns. Victor got 239 ns experimentally. The resolution is therefore coarse enough to completely drown out the effects of overhead in Victor's tests, and now that the theory is done correctly, it is fully in line with practice.

––Koos

--
+ Koos Zevenhoven + http://twitter.com/k7hoven +