[Python-ideas] Why not picoseconds?
ncoghlan at gmail.com
Sun Oct 15 23:40:36 EDT 2017
On 16 October 2017 at 04:28, Victor Stinner <victor.stinner at gmail.com> wrote:
> I proposed to use nanoseconds because UNIX has 1 ns resolution in
> timespec, the most recent API, and Windows has 100 ns.
> Using picoseconds would confuse users who may expect sub-nanosecond
> resolution, whereas no OS currently supports it.
> Moreover, nanoseconds as int already landed in os.stat and os.utime.
This precedent also makes sense to me as the rationale for using an
"_ns" API suffix within the existing module rather than introducing a new
module.
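The precedent is easy to see in practice: since Python 3.3, os.stat() has
exposed integer nanosecond timestamps alongside the float attributes, and
the float version visibly loses precision at current epoch values:

```python
import os
import tempfile

# os.stat() exposes both float-seconds and int-nanosecond timestamps.
with tempfile.NamedTemporaryFile() as f:
    st = os.stat(f.name)
    print(st.st_mtime)     # float seconds since the epoch
    print(st.st_mtime_ns)  # exact int nanoseconds since the epoch

    # The float attribute is only an approximation of the int one:
    # a 64-bit binary float has ~15-16 significant decimal digits,
    # while a current timestamp in ns needs 19.
    assert isinstance(st.st_mtime_ns, int)
    assert abs(st.st_mtime - st.st_mtime_ns / 10**9) < 1e-6
```

The same naming pattern (a plain attribute plus an `_ns` twin) is what the
proposal extends to the time module.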
> Last but not least, I already struggle in pytime.c to prevent integer
> overflow with 1 ns resolution. It can quickly become much more complex if
> there is no native C int type with a range large enough to make 1
> picosecond resolution usable. I really like using int64_t for _PyTime_t:
> it's well supported and very easy to use (ex: "t = t2 - t1"). A 64-bit int
> of nanoseconds supports deltas since 1970 until after the year 2200.
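The range arithmetic behind that argument is worth spelling out; a quick
back-of-the-envelope check shows why int64 nanoseconds work but int64
picoseconds would not:

```python
# Range of a signed 64-bit integer at different time resolutions.
MAX_INT64 = 2**63 - 1
NS_PER_YEAR = 365 * 24 * 3600 * 10**9

years_at_ns = MAX_INT64 / NS_PER_YEAR
print(f"int64 nanoseconds span ~{years_at_ns:.0f} years")
# ~292 years, so deltas since 1970 stay representable until ~2262.

years_at_ps = MAX_INT64 / (NS_PER_YEAR * 1000)
print(f"int64 picoseconds span ~{years_at_ps * 365:.0f} days")
# Only ~107 days -- far too small for absolute timestamps, which is
# why picosecond resolution would force a wider (non-native) int type.
```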
Hopefully by the time we decide it's worth worrying about picoseconds in
"regular" code, compiler support for decimal128 will be sufficiently
ubiquitous that we'll be able to rely on that as our 3rd generation time
representation (where the first gen is seconds as a 64 bit binary float and
the second gen is nanoseconds as a 64 bit integer).
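As an illustration only (not a proposed API), Python's decimal module can
already emulate what a decimal128-style timestamp would buy: decimal128
carries 34 significant digits, enough to hold a current epoch value with
exact picosecond fractions, which no 64-bit binary float or 64-bit integer
of nanoseconds can do:

```python
from decimal import Decimal, getcontext

# Sketch: emulate decimal128 by capping precision at its 34 significant
# digits. The timestamp value below is illustrative, not real data.
getcontext().prec = 34

# Seconds since the epoch with 12 fractional digits (picoseconds),
# represented exactly -- a binary float would round this.
t = Decimal("1508125236.123456789012")
print(t)

# scaleb(12) shifts the exponent to give the exact picosecond count.
print(t.scaleb(12))  # 1508125236123456789012
```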
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia