[Python-ideas] Why not picoseconds?

Koos Zevenhoven k7hoven at gmail.com
Mon Oct 16 10:42:42 EDT 2017


On Mon, Oct 16, 2017 at 4:10 PM, Victor Stinner <victor.stinner at gmail.com>
wrote:

> 2017-10-16 9:46 GMT+02:00 Stephan Houben <stephanh42 at gmail.com>:
> > Hi all,
> >
> > I realize this is a bit of a pet peeve of mine, since
> > in my day job I sometimes get people complaining that
> > numerical data is "off" in the sixteenth significant digit
> > (i.e. it was stored as a double).
> > (...)
>
> Oh. As usual, I suck at explaining the rationale. I'm sorry about that.
>
> The problem is not knowing the number of nanoseconds since 1970. The
> problem is that you lose precision even on short durations, say less
> than 1 minute. The precision loss depends on the reference point of the
> clock, which can be the UNIX epoch, the computer boot time, the process
> startup time, etc.
>
> Let's say that your web server has been running for 105 days, and now
> you want to measure how long it takes to render an HTML template, a
> duration smaller than 1 ms. In that case, the benchmark loses precision
> just because of the float type, not because of the clock resolution.
>
> That's one use case.
>
> Another use case is when you have two applications storing time. A
> program A writes a timestamp, and another program B compares that
> timestamp to the current time. To explain the issue, let's say that the
> storage format and the clock used by program A have a resolution of 1
> nanosecond, whereas the clock used by program B has a resolution of 1
> second. In that case, there is a window of up to 1 second during which
> the stored time appears to have been created in the future. For example,
> the GNU tar program emits a warning in that case ("file created in the
> future", or something like that).
>
> More generally, if you store time with a resolution N and your clock
> has resolution P, it's better to have N >= P to prevent bad surprises.
> More and more databases and filesystems support storing time with
> nanosecond resolution.
>
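
Here is a small sketch of that "created in the future" window in Python,
using the proposed time.time_ns() for program A's nanosecond timestamp and
truncation to whole seconds to stand in for program B's coarse clock:

    import time

    stored_ns = time.time_ns()      # program A: nanosecond resolution
    now_coarse = int(time.time())   # program B: whole seconds only

    # For up to one second, B's truncated clock lags behind A's timestamp,
    # so the stored time looks like it was created in the future.
    if stored_ns > now_coarse * 10**9:
        print("timestamp is %d ns in the future"
              % (stored_ns - now_coarse * 10**9))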

Indeed. And some more on where the precision loss comes from:

When you measure time starting from one fixed point, like 1970, the timer
reaches large numbers today, around 10**9 seconds. Tiny fractions of a
second are minuscule compared to a number that large.

You then need log2(10**9) ~ 30 bits of precision just to get one-second
resolution in your timer. A double-precision (64-bit) floating-point number
has 53 bits of precision in the mantissa, so you end up with 23 bits of
precision left for fractions of a second. That gives a resolution of
1 / 2**23 seconds, which is about 100 ns, well in line with the data that
Victor provided (~100 ns + overhead = ~200 ns).
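
A quick interactive check of that, taking 10**9 seconds as the round figure
from above (the exact step depends on the magnitude of the timestamp):

>>> t = 1e9                  # ~10**9 seconds since 1970, stored as a double
>>> (t + 2**-23) - t         # one float step at this magnitude: ~119 ns
1.1920928955078125e-07
>>> (t + 2**-24) - t         # anything smaller is rounded away entirely
0.0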

—Koos


-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +

