[Python-Dev] PEP 410 (Decimal timestamp): the implementation is ready for a review

Guido van Rossum guido at python.org
Wed Feb 15 18:43:49 CET 2012


On Wed, Feb 15, 2012 at 9:23 AM, Victor Stinner
<victor.stinner at gmail.com> wrote:
> 2012/2/15 Guido van Rossum <guido at python.org>:
>> I just came to this thread. Having read the good arguments on both
>> sides, I keep wondering why anybody would care about nanosecond
>> precision in timestamps.
>
> Python 3.3 exposes C functions that return a timespec structure. This
> structure contains a timestamp with a resolution of 1 nanosecond,
> whereas the timeval structure has only a resolution of 1 microsecond.
> Examples of C functions -> Python functions:
>
>  - timeval: gettimeofday() -> time.time()
>  - timespec: clock_gettime() -> time.clock_gettime()
>  - timespec: stat() -> os.stat()
>  - etc.
>
> If we keep float, Python would have worse precision than C just
> because it uses an inappropriate type (C uses two integers in
> timeval).
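
To put a number on that claim, here is a minimal sketch of the
round-trip loss when an integer nanosecond timestamp is pushed through
a C double; the constant below is an illustrative early-2012 epoch
value, not a measurement:

    ns = 1329327829123456789            # an early-2012 epoch, in nanoseconds
    as_float = ns / 1e9                 # what a float time.time() would carry
    back = int(round(as_float * 1e9))   # recover nanoseconds from the float
    print(ns - back)                    # usually nonzero: a double's step near
                                        # 1.3e9 seconds is 2**-22 s (~240 ns)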
>
> Linux has supported nanosecond timestamps since Linux 2.6, and Windows
> has supported 100 ns resolution since Windows 2000, or maybe earlier.
> That doesn't mean the Windows system clock is accurate: in practice,
> it's hard to get anything better than 1 ms :-) But you can use
> QueryPerformanceCounter() if you need better precision; it is used by
> time.clock(), for example.
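
A quick way to gauge the effective granularity of such a clock from
Python (just a sketch; the numbers vary wildly across platforms):

    import time

    def granularity(clock=time.time):
        """Spin until the clock reading changes and return the step seen."""
        t0 = clock()
        t1 = clock()
        while t1 == t0:
            t1 = clock()
        return t1 - t0

    print(granularity())                # often ~1e-6 s on Linux, but around
                                        # 1e-3 s or coarser on some Windows setups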
>
>> For measuring e.g. file access times, there
>> is no way that the actual time is known with anything like that
>> precision (even if it is *recorded* as a number of milliseconds --
>> that's a different issue).
>
> If you need a real world example, here is an extract of
> http://en.wikipedia.org/wiki/Ext4:
>
> "Improved timestamps
>    As computers become faster in general and as Linux becomes used
> more for mission-critical applications, the granularity of
> second-based timestamps becomes insufficient. To solve this, ext4
> provides timestamps measured in nanoseconds. (...)"
>
> So nanosecond resolution is needed to check whether one file is newer
> than another. Such a test is common in build tools like make or scons.
>
> Filesystems resolution:
>  - ext4: 1 ns
>  - btrfs: 1 ns
>  - NTFS: 100 ns
>  - FAT32: 2 sec (yeah!)
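
To make the make/scons case concrete, here is a rough sketch of such a
newer-than check done both ways; newer_than() is a hypothetical helper,
and st_mtime_ns is assumed to be the integer-nanosecond counterpart of
the float st_mtime on the stat result:

    import os

    def newer_than(source, target):
        """Compare mtimes both ways: float seconds and integer nanoseconds."""
        src, dst = os.stat(source), os.stat(target)
        by_float = src.st_mtime > dst.st_mtime          # float seconds
        by_ns = src.st_mtime_ns > dst.st_mtime_ns       # integer nanoseconds
        # The two can disagree when both mtimes land inside the same
        # ~240 ns rounding step of the float representation.
        return by_float, by_ns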

This does not explain why microseconds aren't good enough. It seems
none of the clocks involved can actually measure even relative time
intervals more accurately than 100 ns, and I expect that kernels don't
actually keep their clocks accurate to better than a millisecond. (They
may increment the clock by 1 microsecond approximately every
microsecond, or even by 1 ns roughly every ns, but that doesn't fool me
into believing all those digits of precision. I betcha that over, say,
an hour even time deltas aren't accurate to better than a microsecond,
due to inevitable fluctuations in clock speed.)

It seems the argument goes simply "because Linux chose to go all the
way to nanoseconds we must support nanoseconds" -- and Linux probably
chose nanoseconds because that's what fits in 32 bits and there wasn't
anything else to do with those bits.

*Apart* from the specific use case of making an exact copy of a
directory tree that can be verified by other tools that simply compare
the nanosecond times for equality, I don't see any reason for
complicating so many APIs to preserve the fake precision. As for
simply comparing whether one file is newer than another in tools like
make/scons, I bet that in practice it's impossible to read a file and
create another in less than a microsecond. (I actually doubt that you
can do it faster than a millisecond, but for my argument I don't need
that.)
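
For what it's worth, a rough way to test that bet is to create two
files back to back and compare their recorded mtimes; this is purely
illustrative, and the outcome depends on the OS, the filesystem and
caching:

    import os
    import tempfile

    d = tempfile.mkdtemp()
    a = os.path.join(d, "a")
    b = os.path.join(d, "b")
    open(a, "w").close()                # create two files back to back
    open(b, "w").close()
    delta = os.stat(b).st_mtime - os.stat(a).st_mtime
    print(delta)                        # often well above 1e-6 s, or exactly
                                        # 0.0 on a coarse-grained filesystem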

-- 
--Guido van Rossum (python.org/~guido)

