[Python-Dev] Rejecting PEP 410 (Use decimal.Decimal type for timestamps)

Guido van Rossum guido at python.org
Sun Feb 26 04:04:41 CET 2012

After an off-list discussion with Victor I have decided to reject PEP
410. Here are my reasons for rejecting the PEP. (Someone please copy
this to the PEP or reference this message in the archives.)
1. I have a general dislike of APIs that take a flag parameter which
modifies the return type. But I also don't like having to add variants
that return Decimal for over a dozen API functions (stat(), fstat(),
etc.). I really think that this PEP would add a lot of complexity that
we don't need.

2. The Decimal type is somewhat awkward to use: it doesn't mix with
floats; there's a context that sets things like precision and
rounding; and it's still a floating-point type that may lose precision
(something which many people don't realize when they first see it).
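That friction is easy to demonstrate with the stdlib decimal module
alone (a quick sketch; the values are arbitrary):

```python
from decimal import Decimal, getcontext

# Arithmetic between Decimal and float raises TypeError: they don't mix.
try:
    Decimal("1.1") + 1.1
    mixes_with_float = True
except TypeError:
    mixes_with_float = False
print(mixes_with_float)          # False

# Results also depend on a (thread-local) context: Decimal is still a
# floating-point type, and arithmetic rounds to the context's precision.
getcontext().prec = 4
print(Decimal("1.234567") + 0)   # 1.235 -- the low digits are gone
```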

3. There are *very* few clocks in existence (if any) that actually
measure time with more than 56 bits of accuracy. Sure, for short time
periods we can measure nanoseconds. But a Python (64-bit) float can
represent quite a large number of nanoseconds exactly (many millions
of seconds), so if you're using some kind of real-time timer that
resets e.g. at the start of the current process, you should be fine
using a float to represent the time with great precision and accuracy.
On the other hand, if you're measuring the time of day expressed in
seconds (and fractions) since 1/1/1970, you should consider yourself
lucky if your clock is accurate within 1 second. (Especially since
POSIX systems aren't allowed to admit the existence of leap seconds in
their timestamps -- around a leap second you must adjust your clock,
either gradually or abruptly.) I'll give you that some people might
have clocks accurate to a microsecond. Such timestamps can be
represented exactly as floats (at least until some point in the very
distant future, when hopefully we'll have 128-bit floats).
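These limits are easy to check numerically (a quick sketch; it relies
only on the fact that a 64-bit float has a 53-bit significand, so it
represents integers exactly up to 2**53):

```python
# 2**53 nanoseconds is about 104 days -- plenty for a timer that
# resets at process start, as described above.
exact_ns = 2**53                 # largest integer a float holds exactly
print(exact_ns / 1e9 / 86400)    # ~104.2 days of exact nanosecond counts

# But a Unix timestamp in seconds (~1.3e9 in 2012) stored as a float
# has only ~2.4e-7 s of resolution left: a nanosecond is lost to
# rounding, while a microsecond still survives.
t = 1330225481.0                 # seconds since 1970, as a float
print(t + 1e-9 == t)             # True: the added nanosecond vanishes
print(t + 1e-6 == t)             # False: a microsecond is representable
```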

4. I don't expect that timestamps with even greater precision than
nanoseconds will ever become fashionable. Given that light travels
about 30 cm in a nanosecond, there's not much use for more accurate
time measurements in daily life. Given relativity theory, at such a
timescale simultaneity of events is iffy at best.

5. I see only one real use case for nanosecond precision: faithful
copying of the mtime/atime recorded by filesystems, in cases where the
filesystem (like e.g. ext4) records these times with nanosecond
precision. Even if such timestamps can't be trusted to be accurate,
converting them to floats and back loses precision, and verification
using tools not written in Python will flag the difference. But for
this specific use case a much simpler set of API changes will suffice;
only os.stat() and os.utime() need to change slightly (and variants of
os.stat() like os.fstat()).
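The round-trip loss is concrete (a sketch using an arbitrary
nanosecond-resolution mtime, stored the way ext4 does, as an integer
count of nanoseconds since the epoch):

```python
mtime_ns = 1330225481123456789   # hypothetical ext4 mtime, in ns

# Converting to a float (seconds) and back loses the low digits, so a
# verification tool comparing raw timestamps would flag a mismatch.
as_float = mtime_ns / 1e9
back = round(as_float * 1e9)
print(back == mtime_ns)          # False: precision was lost

# Keeping the value as a Python int preserves it exactly.
print(int(mtime_ns) == mtime_ns) # True
```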

6. If you worry about systems where float has fewer bits: I don't
think there are any relevant systems that have both a smaller float
type and nanosecond clocks.

So far the rejection note.

As to the changes alluded to in #5: Let os.stat() and friends return
extra fields st_atime_ns (etc.) that give the timestamps in
nanoseconds as a Python (long) integer, such that e.g. (in close
approximation) st_atime == st_atime_ns * 1e-9. Let os.utime() take an
optional keyword argument ns=(atime_ns, mtime_ns). Details of the
actual design of this API, such as the actual field and parameter
names, may change; this is just a suggestion. We don't need a PEP for
this proposal; we can just open a tracker issue and hash out the
details during the code review.
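As a sketch of how the suggested API might look in use (the field and
keyword names follow the proposal above and are explicitly subject to
change; an exact round-trip also requires an OS and filesystem that
store nanosecond timestamps, e.g. ext4):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

# Exact nanosecond timestamps, kept as plain Python ints.
atime_ns = 1330225481123456789
mtime_ns = 1330225481987654321
os.utime(path, ns=(atime_ns, mtime_ns))     # proposed ns= keyword

st = os.stat(path)
# The proposed integer field, alongside the existing float field;
# in close approximation, st_mtime == st_mtime_ns * 1e-9.
print(st.st_mtime_ns == mtime_ns)           # True on ns-capable filesystems
print(abs(st.st_mtime - st.st_mtime_ns * 1e-9) < 1e-6)  # True

os.unlink(path)
```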

I'm also in favor of getting rid of os.stat_float_times(). As proposed
in another thread, we may deprecate it in 3.3 and delete it in 3.5.
I'm not excited about adding more precision to datetime and timedelta.

--Guido van Rossum (python.org/~guido)