PEP 564: Add new time functions with nanosecond resolution

Hi, While discussions on this PEP are not over on python-ideas, I proposed this PEP directly on python-dev since I consider that my PEP already summarizes current and past proposed alternatives. python-ideas threads: * Add time.time_ns(): system clock with nanosecond resolution * Why not picoseconds? The PEP 564 will shortly be online at: https://www.python.org/dev/peps/pep-0564/ Victor PEP: 564 Title: Add new time functions with nanosecond resolution Version: $Revision$ Last-Modified: $Date$ Author: Victor Stinner <victor.stinner@gmail.com> Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 16-October-2017 Python-Version: 3.7 Abstract ======== Add five new functions to the ``time`` module: ``time_ns()``, ``perf_counter_ns()``, ``monotonic_ns()``, ``clock_gettime_ns()`` and ``clock_settime_ns()``. They are similar to the functions without the ``_ns`` suffix, but have nanosecond resolution: they use a number of nanoseconds as a Python int. The best ``time.time_ns()`` resolution measured in Python is 3 times better than the ``time.time()`` resolution on Linux and Windows. Rationale ========= Float type limited to 104 days ------------------------------ The clock resolution of desktop and laptop computers is getting closer to nanosecond resolution. More and more clocks have a frequency in MHz, up to GHz for the CPU TSC clock. The Python ``time.time()`` function returns the current time as a floating point number, usually a 64-bit binary floating point number (in the IEEE 754 format). The problem is that the float type starts to lose nanoseconds after 104 days. Conversion from nanoseconds (``int``) to seconds (``float``) and then back to nanoseconds (``int``) to check if conversions lose precision::

    # no precision loss
    >>> x = 2 ** 52 + 1; int(float(x * 1e-9) * 1e9) - x
    0
    # precision loss! (1 nanosecond)
    >>> x = 2 ** 53 + 1; int(float(x * 1e-9) * 1e9) - x
    -1
    >>> print(datetime.timedelta(seconds=2 ** 53 / 1e9))
    104 days, 5:59:59.254741

``time.time()`` returns seconds elapsed since the UNIX epoch: January 1st, 1970. This function has been losing precision since April 1970 (47 years ago)::

    >>> import datetime
    >>> unix_epoch = datetime.datetime(1970, 1, 1)
    >>> print(unix_epoch + datetime.timedelta(seconds=2**53 / 1e9))
    1970-04-15 05:59:59.254741

Previous rejected PEP --------------------- Five years ago, the PEP 410 proposed a large and complex change in all Python functions returning time, to support nanosecond resolution using the ``decimal.Decimal`` type. The PEP was rejected for different reasons: * The idea of adding a new optional parameter to change the result type was rejected. It's an uncommon (and bad?) programming practice in Python. * It was not clear if hardware clocks really had a resolution of 1 nanosecond, especially at the Python level. * The ``decimal.Decimal`` type is uncommon in Python and so requires adapting code to handle it. CPython enhancements of the last 5 years ---------------------------------------- Since the PEP 410 was rejected: * The ``os.stat_result`` structure got 3 new fields for timestamps as nanoseconds (Python ``int``): ``st_atime_ns``, ``st_ctime_ns`` and ``st_mtime_ns``. * The PEP 418 was accepted: Python 3.3 got 3 new clocks: ``time.monotonic()``, ``time.perf_counter()`` and ``time.process_time()``. * The CPython private "pytime" C API handling time now uses a new ``_PyTime_t`` type: a simple 64-bit signed integer (C ``int64_t``). The ``_PyTime_t`` unit is an implementation detail and not part of the API. The unit is currently ``1 nanosecond``.
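
The loss is visible on a live clock: the current epoch timestamp in nanoseconds is already far beyond 2**53, so the float conversion used by ``time.time()`` usually drops the lowest digits. A minimal sketch (assuming a Python that provides ``time.time_ns()``)::

    import time

    ns = time.time_ns()             # ~1.5e18 ns in 2017, far beyond 2**53
    seconds = ns / 1e9              # the float conversion time.time() performs
    lost = ns - int(seconds * 1e9)  # usually non-zero: low digits rounded away
    print("nanoseconds lost in the float round trip:", lost)
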
Existing Python APIs using nanoseconds as int --------------------------------------------- The ``os.stat_result`` structure has 3 fields for timestamps as nanoseconds (``int``): ``st_atime_ns``, ``st_ctime_ns`` and ``st_mtime_ns``. The ``ns`` parameter of the ``os.utime()`` function accepts a ``(atime_ns: int, mtime_ns: int)`` tuple of nanoseconds. Changes ======= New functions ------------- This PEP adds five new functions to the ``time`` module: * ``time.clock_gettime_ns(clock_id)`` * ``time.clock_settime_ns(clock_id, time: int)`` * ``time.perf_counter_ns()`` * ``time.monotonic_ns()`` * ``time.time_ns()`` These functions are similar to the versions without the ``_ns`` suffix, but use nanoseconds as Python ``int``. For example, ``time.monotonic_ns() == int(time.monotonic() * 1e9)`` if the ``monotonic()`` value is small enough not to lose precision. Unchanged functions ------------------- This PEP only proposes to add new functions getting or setting clocks with nanosecond resolution. Clocks are likely to lose precision, especially when their reference is the UNIX epoch. Python has other functions handling time (get time, timeout, etc.), but no nanosecond variant is proposed for them since they are less likely to lose precision. Examples of unchanged functions: * ``os`` module: ``sched_rr_get_interval()``, ``times()``, ``wait3()`` and ``wait4()`` * ``resource`` module: ``ru_utime`` and ``ru_stime`` fields of ``getrusage()`` * ``signal`` module: ``getitimer()``, ``setitimer()`` * ``time`` module: ``clock_getres()`` Since the ``time.clock()`` function was deprecated in Python 3.3, no ``time.clock_ns()`` is added. Alternatives and discussion =========================== Sub-nanosecond resolution ------------------------- The ``time.time_ns()`` API is not "future-proof": if clock resolutions increase, new Python functions may be needed. In practice, a resolution of 1 nanosecond is currently enough for all structures used by all operating system functions. Hardware clocks with a resolution better than 1 nanosecond already exist. For example, the frequency of a CPU TSC clock is the CPU base frequency: the resolution is around 0.3 ns for a CPU running at 3 GHz. Users who have access to such hardware and really need sub-nanosecond resolution can easily extend Python for their needs. Such rare use cases don't justify designing the Python standard library to support sub-nanosecond resolution. For the CPython implementation, nanosecond resolution is convenient: the standard and well supported ``int64_t`` type can be used to store time. It supports a time delta between -292 years and 292 years. Using the UNIX epoch as reference, this type supports time from year 1677 to year 2262::

    >>> 1970 - 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
    1677.728976954687
    >>> 1970 + 2 ** 63 / (10 ** 9 * 3600 * 24 * 365.25)
    2262.271023045313

Different types --------------- It was proposed to modify ``time.time()`` to use a float type with better precision. The PEP 410 proposed to use ``decimal.Decimal``, but it was rejected. Apart from ``decimal.Decimal``, no portable ``float`` type with better precision is currently available in Python. Changing the builtin Python ``float`` type is out of the scope of this PEP. Other ideas of new types were proposed to support larger or arbitrary precision: fractions, structures or 2-tuples using integers, fixed-precision floating point numbers, etc. See also the PEP 410 for a previous long discussion on other types. Adding a new type requires more effort to support than reusing ``int``.
The standard library, third party code and applications would have to be modified to support it. The Python ``int`` type is well known, well supported, easy to manipulate, and supports all arithmetic operations like ``dt = t2 - t1``. Moreover, using nanoseconds as an integer is not new in Python: it's already used for ``os.stat_result`` and ``os.utime(ns=(atime_ns, mtime_ns))``. .. note:: If the Python ``float`` type becomes larger (ex: decimal128 or float128), the ``time.time()`` precision will increase as well. Different API ------------- The ``time.time(ns=False)`` API was proposed to avoid adding new functions. It's an uncommon (and bad?) programming practice in Python to change the result type depending on a parameter. Different options were proposed to allow the user to choose the time resolution. If each Python module uses a different resolution, it can become difficult to handle different resolutions, instead of just seconds (``time.time()`` returning ``float``) and nanoseconds (``time.time_ns()`` returning ``int``). Moreover, as written above, there is no need for resolution better than 1 nanosecond in practice in the Python standard library. Annex: Clocks Resolution in Python ================================== Script to measure the smallest difference between two ``time.time()`` and ``time.time_ns()`` reads, ignoring differences of zero::

    import math
    import time

    LOOPS = 10 ** 6

    print("time.time_ns(): %s" % time.time_ns())
    print("time.time(): %s" % time.time())

    min_dt = [abs(time.time_ns() - time.time_ns()) for _ in range(LOOPS)]
    min_dt = min(filter(bool, min_dt))
    print("min time_ns() delta: %s ns" % min_dt)

    min_dt = [abs(time.time() - time.time()) for _ in range(LOOPS)]
    min_dt = min(filter(bool, min_dt))
    print("min time() delta: %s ns" % math.ceil(min_dt * 1e9))

Results for time(), perf_counter() and monotonic(). Linux (kernel 4.12 on Fedora 26): * time_ns(): **84 ns** * time(): **239 ns** * perf_counter_ns(): 84 ns * perf_counter(): 82 ns * monotonic_ns(): 84 ns * monotonic(): 81 ns Windows 8.1: * time_ns(): **318000 ns** * time(): **894070 ns** * perf_counter_ns(): 100 ns * perf_counter(): 100 ns * monotonic_ns(): 15000000 ns * monotonic(): 15000000 ns The difference on ``time.time()`` is significant: **84 ns (2.8x better) vs 239 ns on Linux and 318 us (2.8x better) vs 894 us on Windows**. The difference (precision loss) will grow in the coming years, since every day adds 86,400,000,000,000 nanoseconds to the system clock. The difference on ``time.perf_counter()`` and ``time.monotonic()`` is not visible in this quick script since the script runs for less than 1 minute, and the uptime of the computer used to run the script was shorter than 1 week. A significant difference should be seen with an uptime of 104 days or greater. .. note:: Internally, Python starts the ``monotonic()`` and ``perf_counter()`` clocks at zero on some platforms, which indirectly reduces the precision loss. Copyright ========= This document has been placed in the public domain.
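
For illustration, a minimal sketch of the intended use of the new functions: measuring a duration entirely in integer nanoseconds, so no float rounding is involved regardless of how long the process has been running::

    import time

    t0 = time.perf_counter_ns()
    sum(range(10 ** 6))        # workload to measure
    t1 = time.perf_counter_ns()

    dt_ns = t1 - t0            # exact integer subtraction, no float rounding
    print("elapsed: %d ns (%.3f ms)" % (dt_ns, dt_ns / 1e6))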

I read again the discussions on python-ideas and noticed that I forgot to mention the "time_ns module" idea. I also added a section to give concrete examples of the precision loss. https://github.com/python/peps/commit/a4828def403913dbae7452b4f9b9d62a0c83a2... Issues caused by precision loss ------------------------------- Example 1: measure time delta ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A server is running for longer than 104 days. A clock is read before and after running a function to measure its performance. This benchmark loses precision only because of the float type used by clocks, not because of the clock resolution. In Python microbenchmarks, it is common to see function calls taking less than 100 ns. A difference of a single nanosecond becomes significant. Example 2: compare time with different resolution ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Two programs "A" and "B" are running on the same system and so use the same system clock. Program A reads the system clock with nanosecond resolution and writes the timestamp with nanosecond resolution. Program B reads the timestamp with nanosecond resolution, but compares it to the system clock read with a worse resolution. To simplify the example, let's say that it reads the clock with second resolution. In that case, there is a window of 1 second during which program B can see the timestamp written by A as "in the future". Nowadays, more and more databases and filesystems support storing time with nanosecond resolution. .. note:: This issue was already fixed for file modification time by adding the ``st_mtime_ns`` field to the ``os.stat()`` result, and by accepting nanoseconds in ``os.utime()``. This PEP proposes to generalize the fix. (...) Modify time.time() result type ------------------------------ It was proposed to modify ``time.time()`` to return a different float type with better precision. The PEP 410 proposed to use ``decimal.Decimal``, which already exists and supports arbitrary precision, but it was rejected. Apart from ``decimal.Decimal``, no portable ``float`` type with better precision is currently available in Python. Changing the builtin Python ``float`` type is out of the scope of this PEP. Moreover, changing existing functions to return a new type introduces a risk of breaking backward compatibility, even if the new type is designed carefully. (...) New time_ns module ------------------ Add a new ``time_ns`` module which contains the five new functions: * ``time_ns.clock_gettime(clock_id)`` * ``time_ns.clock_settime(clock_id, time: int)`` * ``time_ns.perf_counter()`` * ``time_ns.monotonic()`` * ``time_ns.time()`` The first question is whether the ``time_ns`` module should expose exactly the same API (constants, functions, etc.) as the ``time`` module. It can be painful to maintain two flavors of the ``time`` module. How are users supposed to choose between these two modules? If, tomorrow, other nanosecond variants are needed in the ``os`` module, will we have to add a new ``os_ns`` module as well? There are functions related to time in many modules: ``time``, ``os``, ``signal``, ``resource``, ``select``, etc. Another idea is to add a ``time.ns`` submodule or a nested namespace to get the ``time.ns.time()`` syntax. Victor
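
To make "Example 2" above concrete, a minimal sketch of the resolution-mismatch window (illustrative only; the 1-second resolution is the example's deliberate simplification)::

    import time

    # Program "A": stores a timestamp with nanosecond resolution.
    written_ns = time.time_ns()

    # Program "B": reads the clock with only 1-second resolution.
    coarse_ns = int(time.time()) * 10 ** 9

    if written_ns > coarse_ns:
        # For up to 1 second, a perfectly valid timestamp written by A
        # looks like it is "in the future" to B.
        print("timestamp appears to be in the future")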

I've read the examples you wrote here, but I'm struggling to see what the real-life use cases are for this. When would you care about *both* very long-running servers (104 days+) and nanosecond precision? I'm not saying it could never happen, but I would want to see real "experience reports" of when this is needed. -Ben On Mon, Oct 16, 2017 at 9:50 AM, Victor Stinner <victor.stinner@gmail.com> wrote:

On Mon, Oct 16, 2017 at 8:37 AM, Ben Hoyt <benhoyt@gmail.com> wrote:
A long-running server might still want to log precise *durations* of various events. (Durations of events are the bread and butter of server performance tuning.) And for this it might want to use the most precise clock available, which is perf_counter(). But if perf_counter()'s epoch is the start of the process, after 104 days it can no longer report ns precision due to float rounding (even though the internal counter does not lose ns). -- --Guido van Rossum (python.org/~guido)
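
For the record, the 104-day crossover can be computed directly: at magnitudes around 2**23 seconds, the spacing between adjacent 64-bit floats exceeds 1 nanosecond. A minimal sketch::

    import math

    t = 104 * 24 * 3600.0   # ~104 days in seconds, about 2**23
    # Spacing between adjacent doubles at this magnitude (53-bit significand):
    ulp = 2.0 ** (math.floor(math.log2(t)) - 52)
    print("float spacing after 104 days: %.2f ns" % (ulp * 1e9))  # ~1.86 ns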

Got it -- fair enough. We deploy so often where I work (a couple of times a week at least) that 104 days seems like an eternity. But I can see where for a very stable file server or something you might well run it that long without deploying. Then again, why are you doing performance tuning on a "very stable server"? -Ben On Mon, Oct 16, 2017 at 11:58 AM, Guido van Rossum <guido@python.org> wrote:

2017-10-16 18:14 GMT+02:00 Ben Hoyt <benhoyt@gmail.com>:
I'm not sure what you mean by "performance *tuning*". My idea in the example is more to collect live performance metrics to make sure that everything is fine on your "very stable server". Send these metrics to your favorite time series database like Gnocchi, Graphite, Grafana or whatever. Victor

2017-10-16 17:37 GMT+02:00 Ben Hoyt <benhoyt@gmail.com>:
The second example doesn't depend on the system uptime or on how long the program has been running. You can hit the issue just after the system finishes booting: "Example 2: compare time with different resolution" https://www.python.org/dev/peps/pep-0564/#example-2-compare-time-with-differ... Victor

Hi, On Mon, 16 Oct 2017 12:42:30 +0200 Victor Stinner <victor.stinner@gmail.com> wrote:
``time.time()`` returns seconds elapsed since the UNIX epoch: January 1st, 1970. This function has been losing precision since April 1970 (47 years ago)::
This is a funny sentence. I doubt computers (Unix or not) had nanosecond clocks in May 1970.
Why not ``time.process_time_ns()``?
Typo: easily. But how easy is it?
Such rare use cases don't justify designing the Python standard library to support sub-nanosecond resolution.
I suspect that assertion will be challenged at some point :-) Though I agree with the ease of implementation argument (about int64_t being wide enough for nanoseconds but not picoseconds). Regards Antoine.

2017-10-16 17:06 GMT+02:00 Antoine Pitrou <solipsis@pitrou.net>:
I only wrote my first email on python-ideas to ask this question, but I got no answer to it, only proposals of other solutions to get time with nanosecond resolution. So I picked the simplest option: start simple, only add new clocks, and maybe add more "_ns" functions later. If we add process_time_ns(), should we also add nanosecond resolution to other functions related to process or CPU time? * Add "ru_utime_ns" and "ru_stime_ns" to the resource.struct_rusage used by os.wait3(), os.wait4() and resource.getrusage() * For os.times(): add os.times_ns()? For this one, I prefer to add a new function rather than duplicating *all* fields of os.times_result, since all fields store durations Victor

2017-10-16 17:42 GMT+02:00 Antoine Pitrou <solipsis@pitrou.net>:
Restricting this PEP to the time module would be fine with me.
Maybe I should add a short sentence to keep the question open, but exclude it from the direct scope of the PEP? For example: "New nanosecond flavors of these functions may be added later, if a concrete use case comes up." What do you think? Victor

Oh, now I'm confused. I misunderstood your previous message. I understood that you changed your mind and didn't want to add process_time_ns(). Can you elaborate on why you consider that time.process_time_ns() is needed, but not the nanosecond flavor of os.times() or resource.getrusage()? These functions use the same or similar clocks, no? Depending on the platform, time.process_time() may be implemented with resource.getrusage(), os.times() or something else. Victor

On Mon, 16 Oct 2017 19:20:44 +0200 Victor Stinner <victor.stinner@gmail.com> wrote:
I didn't say they weren't needed, I said that we could restrict ourselves to the time module for the time being if it makes things easier. But if you want to tackle all of them at once, go for it! :-) Regards Antoine.

Antoine Pitrou:
Why not ``time.process_time_ns()``?
I measured the minimum delta between two clock reads, ignoring zeros. I tested time.process_time(), os.times(), resource.getrusage(), and their nanosecond variants (with my WIP implementation of the PEP 564). Linux: * process_time_ns(): 1 ns * process_time(): 2 ns * resource.getrusage(): 1 us (the rusage structure uses timeval, so it makes sense) * clock(): 1 us (CLOCKS_PER_SEC = 1,000,000 => res = 1 us) * times_ns().elapsed, times().elapsed: 10 ms (os.sysconf("SC_CLK_TCK") == HZ = 100 => res = 10 ms) * times_ns().user, times().user: 10 ms (os.sysconf("SC_CLK_TCK") == HZ = 100 => res = 10 ms) Windows: * process_time(), process_time_ns(): 15.6 ms * os.times().user, os.times_ns().user: 15.6 ms Note: I didn't test os.wait3() and os.wait4(), but they also use the rusage structure and so probably also have a resolution of 1 us. It looks like *currently*, only time.process_time() has a resolution in nanoseconds (smaller than 1 us). I propose to add only time.process_time_ns(), as you suggested. We might add nanosecond variants for the other functions once operating systems add new functions with better resolution. Victor
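
For reference, a sketch of the kind of helper behind these numbers (a reconstruction for illustration, not Victor's actual script)::

    import math
    import os
    import time

    def min_delta(clock, loops=10 ** 6):
        """Smallest non-zero difference between consecutive clock reads."""
        deltas = (abs(clock() - clock()) for _ in range(loops))
        return min(d for d in deltas if d)

    print("process_time: %d ns" % math.ceil(min_delta(time.process_time) * 1e9))
    print("times().user: %d ns" % math.ceil(min_delta(lambda: os.times().user) * 1e9))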

I updated my PEP 564 to add time.process_time_ns(): https://github.com/python/peps/blob/master/pep-0564.rst The HTML version should be updated shortly: https://www.python.org/dev/peps/pep-0564/ I explained better why some functions get a new nanosecond variant whereas others don't. The rationale is that precision loss affects only a few functions in practice. I completed the "Annex: Clocks Resolution in Python" with more numbers, again to explain why some functions don't need a nanosecond variant. Thanks Antoine, the PEP now looks better to me :-) Victor 2017-10-18 0:05 GMT+02:00 Victor Stinner <victor.stinner@gmail.com>:

Hi Victor, On 10/18/2017 01:14 AM, Victor Stinner wrote:
** In practive, the resolution of 1 nanosecond ** ** no need for resolution better than 1 nanosecond in practive in the Python standard library.** Typo: "practive" vs "practice". If I understood you correctly on Python-ideas (here just for the record, otherwise please ignore it): why not something like (please change '_in' to whatever you like): time.time_in(precision) time.monotonic_in(precision) where precision is an enumeration for: 'seconds', 'milliseconds', 'microseconds'... (or 's', 'ms', 'us', 'ns', ...) Thanks, --francis
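
For the record, such an API could look like the following wrapper (entirely hypothetical: the ``time_in`` name and the ``_SCALES`` table are made up for illustration, and this design was not adopted)::

    import time

    _SCALES = {'s': 1, 'ms': 10 ** 3, 'us': 10 ** 6, 'ns': 10 ** 9}

    def time_in(precision='ns'):
        # Current time as an int in the requested unit (hypothetical API).
        return time.time_ns() * _SCALES[precision] // 10 ** 9

    print(time_in('ms'))   # milliseconds since the epoch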

If it sounds to you like there is no need, or that it is unnecessary, then it's ok :-), thank you for the feedback! I'm just curious on: On 10/21/2017 05:45 PM, Guido van Rossum wrote:
That sounds like unnecessary generality. Meaning that the selection of precision at run time 'costs'?
I understand that one can just multiply/divide the nanoseconds returned, (or it could be a factory) but wouldn't it help for future enhancements to reduce the number of functions (the 'pico' question)?
Thanks, --francis

On 21 Oct 2017 20:31, "francismb" <francismb@email.de> wrote: I understand that one can just multiply/divide the nanoseconds returned, (or it could be a factory) but wouldn't it help for future enhancements to reduce the number of functions (the 'pico' question)? If you ask me to predict the future, I predict that CPU frequency will be stuck below 10 GHz for the next 10 years :-) Did you hear that Moore's law has not held since 2012 (Intel says since 2015)? Since 2002, CPU frequencies have been stuck around 3 GHz. Overclocking records are around 8 GHz, with very specialized hardware not usable for a classical PC. I don't want to overengineer an API "just in case". Let's provide nanoseconds. We can discuss picoseconds later, maybe in 10 years? You can now start to bet on whether decimal128 will come before or after picoseconds in mainstream CPUs :-) By the way, we are talking about a resolution of 1 ns, but remember that a Python function call is closer to 50 ns. I am not sure that picoseconds make sense if CPUs don't become much faster. I am too shy to put such predictions in a very official PEP ;-) Victor
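
The ~50 ns figure for a Python function call is easy to check; a minimal sketch with ``timeit``::

    import timeit

    def f():
        pass

    n = 10 ** 6
    # On 2017-era hardware this prints a few tens of nanoseconds per call,
    # already dwarfing a clock resolution of 1 ns.
    print("%.1f ns per call" % (timeit.timeit(f, number=n) / n * 1e9))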

On 22 October 2017 at 09:32, Victor Stinner <victor.stinner@gmail.com> wrote:
There are actually solid physical reasons for that prediction likely being true. Aside from the power consumption, heat dissipation, and EM radiation issues that arise with higher switching frequencies, you also start running into more problems with digital circuit metastability ([1], [2]): the more clock edges you have per second, the higher the chances of an asynchronous input changing state at a bad time. So yeah, for nanosecond resolution to not be good enough for programs running in Python, we're going to be talking about some genuinely fundamental changes in the nature of computing hardware, and it's currently unclear if or how established programming languages will make that jump (see [3] for a gentle introduction to the current state of practical quantum computing). At that point, picoseconds vs nanoseconds is likely to be the least of our conceptual modeling challenges :) Cheers, Nick. [1] https://en.wikipedia.org/wiki/Metastability_in_electronics [2] https://electronics.stackexchange.com/questions/14816/what-is-metastability [3] https://medium.com/@decodoku/how-to-program-a-quantum-computer-982a9329ed02 -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Saturday, October 21, 2017, Nick Coghlan <ncoghlan@gmail.com> wrote:
There are current applications with greater-than nanosecond precision: - relativity experiments - particle experiments Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp ?! - https://scholar.google.com/scholar?q=femtosecond - https://scholar.google.com/scholar?q=attosecond - GPS now supports nanosecond resolution - https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_experimental_clock...
What about bus latency (and variance)? From https://www.nist.gov/publications/optical-two-way-time-and-frequency-transfe... : transfer is inadequate for state-of-the-art optical clocks and oscillators that have femtosecond-level timing jitter and accuracies below 1 × 10^-17. Commensurate optically based transfer methods are therefore needed. Here we demonstrate optical time-frequency transfer over free space via two-way exchange between coherent frequency combs, each phase-locked to the local optical oscillator. We achieve 1 fs timing deviation, residual instability below 1 × 10^-18 at 1,000 s and systematic offsets below 4 × 10^-19, despite frequent signal fading due to atmospheric turbulence or obstructions across the 2 km link. This free-space transfer can enable terrestrial links to support clock-based geodesy. Combined with satellite-based optical communications, it provides a path towards global-scale geodesy, high-accuracy time-frequency distribution and satellite-based relativity experiments. How much wider must an epoch-relative time struct be for various realistic time precisions/accuracies?

    10^-6    micro  µ
    10^-9    nano   n   -- int64
    10^-12   pico   p
    10^-15   femto  f
    10^-18   atto   a
    10^-21   zepto  z
    10^-24   yocto  y

I'm at a loss to recommend a library to prefix these with the epoch; but future compatibility may be a helpful, realistic objective. Natural keys with such time resolution are still unfortunately likely to collide.
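
The "how much wider" question has a direct arithmetic answer: the width needed is log2(span_in_seconds x ticks_per_second). A minimal sketch over the +/-292-year span of an int64 nanosecond clock::

    import math

    SPAN = 2 * 292 * 365.25 * 86400   # +/-292 years, in seconds

    for name, ticks in [("nano", 10 ** 9), ("pico", 10 ** 12),
                        ("femto", 10 ** 15), ("atto", 10 ** 18)]:
        bits = math.ceil(math.log2(SPAN * ticks))
        print("%-5s: %d bits" % (name, bits))
    # nano fits in 64 bits; pico, femto and atto need 74, 84 and 94 bits.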

On Mon, Oct 23, 2017 at 2:06 AM, Wes Turner <wes.turner@gmail.com> wrote:
What about bus latency (and variance)?
I'm currently in Los Angeles. Bus latency is measured in minutes, and may easily exceed sixty of them. :| Seriously though: For applications requiring accurate representation of relativistic effects, the stdlib datetime module has a good few problems besides lacking sub-nanosecond precision. I'd be inclined to YAGNI this away unless/until some third-party module demonstrates that there's actually a use for a datetime module that can handle all that. ChrisA

On 23 October 2017 at 01:06, Wes Turner <wes.turner@gmail.com> wrote:
Yes, as time is a critical part of their experimental setup - when you're operating at relativistic speeds and the kinds of energy levels that particle accelerators hit, it's a bad idea to assume that regular time libraries that assume Newtonian physics applies are going to be up to the task. Normal software assumes a nanosecond is almost no time at all - in high energy particle physics, a nanosecond is enough time for light to travel 30 centimetres, and a high energy particle that stuck around that long before decaying into a lower energy state would be classified as "long lived". Cheers. Nick. P.S. "Don't take code out of the environment it was designed for and assume it will just keep working normally" is one of the main lessons folks learned from the destruction of the first Ariane 5 launch rocket in 1996 (see the first paragraph in https://en.wikipedia.org/wiki/Ariane_5#Notable_launches ) -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

I worked at a molecular dynamics lab for a number of years. I advocated switching all our code to using attosecond units (rather than fractional picoseconds). However, this had nothing whatsoever to do with the machine clock speeds, but only with the physical quantities represented and the scaling/rounding math. It didn't happen, for various reasons. But if it had, I certainly wouldn't have expected standard library support for this. The 'time' module is about wall clock or calendar time, not about *simulation time*. FWIW, a very long simulation might cover a millisecond of simulated time.... we're a very long way from looking at molecular behavior over 104 days. On Oct 22, 2017 8:10 AM, "Wes Turner" <wes.turner@gmail.com> wrote: On Saturday, October 21, 2017, Nick Coghlan <ncoghlan@gmail.com> wrote:
There are current applications with greater-than nanosecond precision: - relativity experiments - particle experiments Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp ?! - https://scholar.google.com/scholar?q=femtosecond - https://scholar.google.com/scholar?q=attosecond - GPS now supports nanosecond resolution - https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_experimental_clocks
What about bus latency (and variance)?
From https://www.nist.gov/publications/optical-two-way-time-and-frequency-transfer-over-free-space :
How much wider must an epoch-relative time struct be for various realistic time precisions/accuracies?

    10^-6    micro  µ
    10^-9    nano   n   -- int64
    10^-12   pico   p
    10^-15   femto  f
    10^-18   atto   a
    10^-21   zepto  z
    10^-24   yocto  y

I'm at a loss to recommend a library to prefix these with the epoch; but future compatibility may be a helpful, realistic objective. Natural keys with such time resolution are still unfortunately likely to collide.

On Sunday, October 22, 2017, David Mertz <mertz@gnosis.cx> wrote:
Maybe that's why we haven't found any CTCs (closed timelike curves) yet. Aligning simulation data in context to other events may be enlightening: is there a good library for handling high precision time units in Python (and/or CFFI)? ... http://opendata.cern.ch/ http://opendata.cern.ch/getting-started/CMS

On Sun, Oct 22, 2017 at 1:42 PM, Wes Turner <wes.turner@gmail.com> wrote:
Well, numpy's datetime64 can be set to use (almost) whatever unit you want: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.datetime.html#datetime-units Though it uses a single epoch, which I don't think ever made sense with femtoseconds.... And it has other problems, but it was designed that way, just for that reason. However, while there has been discussion of improvements, like making the epoch settable, none of them have happened, which makes me think that no one is using it for physics experiments, but rather plain old human calendar time... -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov
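
For reference, a brief sketch of the unit selection Chris describes (assuming numpy >= 1.7; output formatting may vary by version)::

    import numpy as np

    # datetime64 stores an int64 count of the chosen unit since the Unix epoch.
    t_ns = np.datetime64("2017-10-22T12:00:00.123456789", "ns")
    print(t_ns.dtype)    # datetime64[ns]

    # At femtosecond resolution the same int64 spans only a few hours around
    # 1970, which is why a fixed epoch makes little sense at that scale.
    t_fs = np.datetime64(1, "fs")
    print(t_fs.dtype)    # datetime64[fs]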

I have read PEP 564 and (mostly) followed the discussion in this thread, and I am happy with the PEP. I am hereby approving PEP 564. Congratulations Victor! -- --Guido van Rossum (python.org/~guido)

Thank you Guido for your review and approval. I just implemented the PEP 564 and so changed the PEP status to Final. FYI I also added 3 new clock identifiers to the time module in Python 3.7: CLOCK_BOOTTIME, CLOCK_PROF and CLOCK_UPTIME. So you can now get your Linux uptime with a resolution of 1 nanosecond :-D haypo@selma$ ./python -c 'import time; print(time.clock_gettime_ns(time.CLOCK_BOOTTIME))' 232172588663888 Don't do that at home, it's for educational purposes only! ;-) Victor 2017-10-30 18:18 GMT+01:00 Guido van Rossum <guido@python.org>:

On 22 Oct 2017 17:06, "Wes Turner" <wes.turner@gmail.com> wrote: Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp ?! Yes, exactly. Note: Adding resolution better than 1 us to datetime is not in the scope of the PEP, but there is an issue that has been open for a long time. I don't think that time.time_ns() is usable for such experiments. Again, calling a function in Python takes around 50 ns. Victor

On 22/10/17 17:06, Wes Turner wrote:
Sure, but in these kinds of experiments you don't have a "timestamp" in the usual sense. You'll have some kind of high-precision "clock", but in most cases there's no way and no reason to synchronise this to wall time. You end up distinguishing between "macro-time" (wall time) and "micro-time" (time in the experiment relative to something). In a particle accelerator, you care about measuring relative times of almost-simultaneous detection events with extremely high precision. You'll also presumably have a timestamp for the event, but you won't be able or willing to measure that with anything like the same accuracy. While you might be able to say that you detected, say, a muon at 01:23:45.6789 at Δt=543.6 ps*: you have femtosecond resolution and you have a timestamp, but you don't have a femtosecond timestamp. In ultrafast spectroscopy, we get a time resolution equal to the duration of the laser pulses (fs-ps), but all the micro-times measured will be relative to some reference laser pulse, which repeats at >MHz frequencies. We also integrate over millions of events - wall-time timestamps don't enter into it. In summary, yes, when writing software for experiments working with high time resolution you have to write your own implementations of whatever data formats best describe time as you're measuring it, which generally won't line up with time as a PC (or a railway company) looks at it. Cheers Thomas * The example is implausible not least because I understand muon chambers tend to be a fair bit bigger than 15 cm, but you get my point.

On Monday, October 23, 2017, Thomas Jollans <tjol@tjol.eu> wrote:
(Sorry, maybe too OT) So these experiments are all done in isolation, referenced to t=0.
Aligning simulation data in context to other events may be enlightening:
IIUC, https://en.wikipedia.org/wiki/Quantum_mechanics_of_time_travel implies that there are (or may be) connections between events over greater periods of time. It's unfortunate that aligning this data requires adding offsets and working with nonstandard, ad hoc time structs. A problem for another day, I suppose. Thanks for adding time_ns().

Thanks Thomas, it was interesting! You confirmed that time.time_ns() and other system clocks exposed by Python are inappropriate for sub-nanosecond physics experiments. By the way, you mentioned that clocks are not synchronized. That's another relevant point. Even if system clocks are synchronized on a single computer, I read that you cannot reach nanosecond resolution with NTP synchronization, even in a small LAN. For large or distributed systems, a "global (synchronized) clock" is not an option. You cannot synchronize clocks correctly, so your algorithms must not rely on time, or at least not on too precise a resolution. I am saying this to repeat, again, that we are far from nanosecond resolution for synchronized system clocks. Victor On 24 Oct 2017 01:39, "Thomas Jollans" <tjol@tjol.eu> wrote:

On Tue, 24 Oct 2017 09:00:45 +0200 Victor Stinner <victor.stinner@gmail.com> wrote:
What does synchronization have to do with it? If synchronization matters, then your PEP should be rejected, because current computers using NTP can't synchronize with a better precision than 230 ns. See https://blog.cloudflare.com/how-to-achieve-low-latency/ Regards Antoine.

On Tuesday, October 24, 2017, Antoine Pitrou <solipsis@pitrou.net> wrote:
So, with regard to time synchronization, FWIU: - WWVB "can provide time with an accuracy of about 100 microseconds" - GPS time can synchronize down to "tens of nanoseconds" - Blockchains work around local timestamp issues by "enforcing" linearity

2017-10-24 11:22 GMT+02:00 Antoine Pitrou <solipsis@pitrou.net>:
Currently, the PEP 564 is mostly designed for handling time on the same computer. Better resolution inside the same process, and "synchronization" between two processes running on the same host: https://www.python.org/dev/peps/pep-0564/#issues-caused-by-precision-loss Maybe tomorrow, time.time_ns() will help for use cases with more computers :-)
This article doesn't mention NTP, synchronization or nanoseconds. Where did you see "230 ns" for NTP? Victor

On 24/10/2017 13:20, Victor Stinner wrote:
This article doesn't mention NTP, synchronization or nanoseconds.
NTP is layered over UDP. The article shows base case UDP latencies of around 15µs over 10Gbps Ethernet. Regards Antoine.

Warning: the PEP 564 doesn't make any assumption about clock synchronization. My intent is only to expose what the operating system provides, without losing precision. That's all :-) 2017-10-24 13:25 GMT+02:00 Antoine Pitrou <antoine@python.org>:
NTP is layered over UDP. The article shows base case UDP latencies of around 15µs over 10Gbps Ethernet.
Ah ok. IMHO the discussion became off-topic somewhere, but I'm curious, so I searched for the best NTP accuracy and found: https://blog.meinbergglobal.com/2013/11/22/ntp-vs-ptp-network-timing-smackdo... "Is the accuracy you need measured in microseconds or nanoseconds? If the answer is yes, you want PTP (IEEE 1588). If the answer is in milliseconds or seconds, then you want NTP." "There is even ongoing standards work to use technology developed at CERN (...) to extend PTP to picoseconds." It seems like PTP is more accurate than NTP. Victor

Since the ``time.clock()`` function was deprecated in Python 3.3, no ``time.clock_ns()`` is added.
FYI I just proposed a change to *remove* time.clock() from Python 3.7: https://bugs.python.org/issue31803 This change is not required by, nor directly related to, the PEP 564. Victor

I read again the discussions on python-ideas and noticed that I forgot to mention the "time_ns module" idea. I also added a section to give concrete examples of the precision loss. https://github.com/python/peps/commit/a4828def403913dbae7452b4f9b9d62a0c83a2... Issues caused by precision loss ------------------------------- Example 1: measure time delta ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ A server is running for longer than 104 days. A clock is read before and after running a function to measure its performance. This benchmark lose precision only because the float type used by clocks, not because of the clock resolution. On Python microbenchmarks, it is common to see function calls taking less than 100 ns. A difference of a single nanosecond becomes significant. Example 2: compare time with different resolution ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Two programs "A" and "B" are runing on the same system, so use the system block. The program A reads the system clock with nanosecond resolution and writes the timestamp with nanosecond resolution. The program B reads the timestamp with nanosecond resolution, but compares it to the system clock read with a worse resolution. To simplify the example, let's say that it reads the clock with second resolution. If that case, there is a window of 1 second while the program B can see the timestamp written by A as "in the future". Nowadays, more and more databases and filesystems support storing time with nanosecond resolution. .. note:: This issue was already fixed for file modification time by adding the ``st_mtime_ns`` field to the ``os.stat()`` result, and by accepting nanoseconds in ``os.utime()``. This PEP proposes to generalize the fix. (...) Modify time.time() result type ------------------------------ It was proposed to modify ``time.time()`` to return a different float type with better precision. The PEP 410 proposed to use ``decimal.Decimal`` which already exists and supports arbitray precision, but it was rejected. Apart ``decimal.Decimal``, no portable ``float`` type with better precision is currently available in Python. Changing the builtin Python ``float`` type is out of the scope of this PEP. Moreover, changing existing functions to return a new type introduces a risk of breaking the backward compatibility even the new type is designed carefully. (...) New time_ns module ------------------ Add a new ``time_ns`` module which contains the five new functions: * ``time_ns.clock_gettime(clock_id)`` * ``time_ns.clock_settime(clock_id, time: int)`` * ``time_ns.perf_counter()`` * ``time_ns.monotonic()`` * ``time_ns.time()`` The first question is if the ``time_ns`` should expose exactly the same API (constants, functions, etc.) than the ``time`` module. It can be painful to maintain two flavors of the ``time`` module. How users use suppose to make a choice between these two modules? If tomorrow, other nanosecond variant are needed in the ``os`` module, will we have to add a new ``os_ns`` module as well? There are functions related to time in many modules: ``time``, ``os``, ``signal``, ``resource``, ``select``, etc. Another idea is to add a ``time.ns`` submodule or a nested-namespace to get the ``time.ns.time()`` syntax. Victor

I've read the examples you wrote here, but I'm struggling to see what the real-life use cases are for this. When would you care about *both* very long-running servers (104 days+) and nanosecond precision? I'm not saying it could never happen, but would want to see real "experience reports" of when this is needed. -Ben On Mon, Oct 16, 2017 at 9:50 AM, Victor Stinner <victor.stinner@gmail.com> wrote:

On Mon, Oct 16, 2017 at 8:37 AM, Ben Hoyt <benhoyt@gmail.com> wrote:
A long-running server might still want to log precise *durations* of various events. (Durations of events are the bread and butter of server performance tuning.) And for this it might want to use the most precise clock available, which is perf_counter(). But if perf_counter()'s epoch is the start of the process, after 104 days it can no longer report ns precision due to float rounding (even though the internal counter does not lose ns). -- --Guido van Rossum (python.org/~guido)

Got it -- fair enough. We deploy so often where I work (a couple of times a week at least) that 104 days seems like an eternity. But I can see where for a very stable file server or something you might well run it that long without deploying. Then again, why are you doing performance tuning on a "very stable server"? -Ben On Mon, Oct 16, 2017 at 11:58 AM, Guido van Rossum <guido@python.org> wrote:

2017-10-16 18:14 GMT+02:00 Ben Hoyt <benhoyt@gmail.com>:
I'm not sure of what you mean by "performance *tuning*". My idea in the example is more to collect live performance metrics to make sure that everything is fine on your "very stable server". Send these metrics to your favorite time serie database like Gnocchi, Graphite, Graphana or whatever. Victor

2017-10-16 17:37 GMT+02:00 Ben Hoyt <benhoyt@gmail.com>:
The second example doesn't depend on the system uptime nor how long the program is running. You can hit the issue just after the system finished to boot: "Example 2: compare time with different resolution" https://www.python.org/dev/peps/pep-0564/#example-2-compare-time-with-differ... Victor

Hi, On Mon, 16 Oct 2017 12:42:30 +0200 Victor Stinner <victor.stinner@gmail.com> wrote:
``time.time()`` returns seconds elapsed since the UNIX epoch: January 1st, 1970. This function loses precision since May 1970 (47 years ago)::
This is a funny sentence. I doubt computers (Unix or not) had nanosecond clocks in May 1970.
Why not ``time.process_time_ns()``?
Typo: easily. But how is easy is it?
Such rare use case don't justify to design the Python standard library to support sub-nanosecond resolution.
I suspect that assertion will be challenged at some point :-) Though I agree with the ease of implementation argument (about int64_t being wide enough for nanoseconds but not picoseconds). Regards Antoine.

2017-10-16 17:06 GMT+02:00 Antoine Pitrou <solipsis@pitrou.net>:
I only wrote my first email on python-ideas to ask this question, but I got no answer on this question, only proposal of other solutions to get time with nanosecond resolution. So I picked the simplest option: start simple, only add new clocks, and maybe add more "_ns" functions later. If we add process_time_ns(), should we also add nanosecond resolution to other functions related to process or CPU time? * Add "ru_utime_ns" and "ru_stime_ns" to the resource.struct_rusage used by os.wait3(), os.wait4() and resource.getrusage() * For os.times(): add os.times_ns()? For this one, I prefer to add a new function rather than duplicating *all* fields of os.times_result, since all fields store durations Victor

2017-10-16 17:42 GMT+02:00 Antoine Pitrou <solipsis@pitrou.net>:
Restricting this PEP to the time module would be fine with me.
Maybe I should add a short sentence to keep the question open, but exclude it from the direct scope of the PEP? For example: "New nanosecond flavor of these functions may be added later, if a concrete use case comes in." What do you think? Victor

Oh, now I'm confused. I misunderstood your previous message. I understood that you changed you mind and didn't want to add process_time_ns(). Can you elaborate why you consider that time.process_time_ns() is needed, but not the nanosecond flavor of os.times() nor resource.getrusage()? These functions use the same or similar clock, no? Depending on platform, time.process_time() may be implemented with resource.getrusage(), os.times() or something else. Victor

On Mon, 16 Oct 2017 19:20:44 +0200 Victor Stinner <victor.stinner@gmail.com> wrote:
I didn't say they weren't needed, I said that we could restrict ourselves to the time module for the time being if it makes things easier. But if you want to tackle all of them at once, go for it! :-) Regards Antoine.

Antoine Pitrou:
Why not ``time.process_time_ns()``?
I measured the minimum delta between two clock reads, ignoring zeros. I tested time.process_time(), os.times(), resource.getrusage(), and their nanosecond variants (with my WIP implementation of the PEP 564). Linux: * process_time_ns(): 1 ns * process_time(): 2 ns * resource.getrusage(): 1 us ru_usage structure uses timeval, so it makes sense * clock(): 1 us CLOCKS_PER_SECOND = 1,000,000 => res = 1 us * times_ns().elapsed, times().elapsed: 10 ms os.sysconf("SC_CLK_TCK") == HZ = 100 => res = 10 ms * times_ns().user, times().user: 10 ms os.sysconf("SC_CLK_TCK") == HZ = 100 => res = 10 ms Windows: * process_time(), process_time_ns(): 15.6 ms * os.times().user, os.times_ns().user: 15.6 ms Note: I didn't test os.wait3() and os.wait4(), but they also use the ru_usage structure and so probably also have a resolution of 1 us. It looks like *currently*, only time.process_time() has a resolution in nanoseconds (smaller than 1 us). I propose to only add time.process_time_ns(), as you proposed. We might add nanosecond variant for the other functions once operating systems will add new functions with better resolution. Victor

I updated my PEP 564 to add time.process_time_ns(): https://github.com/python/peps/blob/master/pep-0564.rst The HTML version should be updated shortly: https://www.python.org/dev/peps/pep-0564/ I better explained why some functions got a new nanosecond variant, whereas others don't. The rationale is the precision loss affecting only a few functions in practice. I completed the "Annex: Clocks Resolution in Python" with more numbers, again, to explain why some functions don't need a nanosecond variant. Thanks Antoine, the PEP now looks better to me :-) Victor 2017-10-18 0:05 GMT+02:00 Victor Stinner <victor.stinner@gmail.com>:

Hi Victor, On 10/18/2017 01:14 AM, Victor Stinner wrote:
** In practive, the resolution of 1 nanosecond ** ** no need for resolution better than 1 nanosecond in practive in the Python standard library.** practice vs practice If I understood you correctly on Python-ideas (here just for the records, otherwise please ignore it): why not something like (please change '_in' for what you like): time.time_in(precision) time.monotonic_in(precision) where precision is an enumeration for: 'seconds', 'milliseconds' 'microseconds'... (or 's', 'ms', 'us', 'ns', ...) Thanks, --francis

If it sounds as there is no need or is unnecessary to you then it its ok :-), thank you for the feedback ! I'm just curious on: On 10/21/2017 05:45 PM, Guido van Rossum wrote:
That sounds like unnecessary generality, Meaning that the selection of precision on running time 'costs'?
I understand that one can just multiply/divide the nanoseconds returned, (or it could be a factory) but wouldn't it help for future enhancements to reduce the number of functions (the 'pico' question)?
Thanks, --francis

Le 21 oct. 2017 20:31, "francismb" <francismb@email.de> a écrit : I understand that one can just multiply/divide the nanoseconds returned, (or it could be a factory) but wouldn't it help for future enhancements to reduce the number of functions (the 'pico' question)? If you are me to predict the future, I predict that CPU frequency will be stuck below 10 GHz for the next 10 years :-) Did you hear that the Moore law is no more true since 2012 (Intel said since 2015)? Since 2002, CPUs frequency are blocked around 3 GHz. Overclock records are around 8 GHz with very specialized hardware, not usable for a classical PC. I don't want to overengineer an API "just in case". Let's provide nanoseconds. We can discuss picoseconds later, maybe in 10 years? You can now start to bet if decimal128 will come before or after picoseconds in mainstream CPUs :-) By the way, we are talking about a resolution of 1 ns, but remember that a Python function call is closer to 50 ns. I am not sure that picosecond makes sense if CPU doesn't become much faster. I am too shy to put such predictions in a very offical PEP ;-) Victor

On 22 October 2017 at 09:32, Victor Stinner <victor.stinner@gmail.com> wrote:
There are actually solid physical reasons for that prediction likely being true. Aside from the power consumption, heat dissipation, and EM radiation issues that arise with higher switching frequencies, you also start running into more problems with digital circuit metastability ([1], [2]): the more clock edges you have per second, the higher the chances of an asynchronous input changing state at a bad time. So yeah, for nanosecond resolution to not be good enough for programs running in Python, we're going to be talking about some genuinely fundamental changes in the nature of computing hardware, and it's currently unclear if or how established programming languages will make that jump (see [3] for a gentle introduction to the current state of practical quantum computing). At that point, picoseconds vs nanoseconds is likely to be the least of our conceptual modeling challenges :) Cheers, Nick. [1] https://en.wikipedia.org/wiki/Metastability_in_electronics [2] https://electronics.stackexchange.com/questions/14816/what-is-metastability [3] https://medium.com/@decodoku/how-to-program-a-quantum-computer-982a9329ed02 -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

On Saturday, October 21, 2017, Nick Coghlan <ncoghlan@gmail.com> wrote:
There are current applications with greater-than nanosecond precision: - relativity experiments - particle experiments Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp ?! - https://scholar.google.com/scholar?q=femtosecond - https://scholar.google.com/scholar?q=attosecond - GPS now supports nanosecond resolution - https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_experimental_clock...
What about bus latency (and variance)? From https://www.nist.gov/publications/optical-two-way-time-and-frequency-transfe... : transfer is inadequate for state-of-the-art optical clocks and oscillators that have femtosecond-level timing jitter and accuracies below 1 × 10−17. Commensurate optically based transfer methods are therefore needed. Here we demonstrate optical time-frequency transfer over free space via two-way exchange between coherent frequency combs, each phase-locked to the local optical oscillator. We achieve 1 fs timing deviation, residual instability below 1 × 10−18 at 1,000 s and systematic offsets below 4 × 10−19, despite frequent signal fading due to atmospheric turbulence or obstructions across the 2 km link. This free-space transfer can enable terrestrial links to support clock-based geodesy. Combined with satellite-based optical communications, it provides a path towards global-scale geodesy, high-accuracy time-frequency distribution and satellite-based relativity experiments. How much wider must an epoch-relative time struct be for various realistic time precisions/accuracies? 10-6 micro µ 10-9 nano n -- int64 10-12 pico p 10-15 femto f 10-18 atto a 10-21 zepto z 10-24 yocto y I'm at a loss to recommend a library to prefix these with the epoch; but future compatibility may be a helpful, realistic objective. Natural keys with such time resolution are still unfortunately likely to collide.

On Mon, Oct 23, 2017 at 2:06 AM, Wes Turner <wes.turner@gmail.com> wrote:
What about bus latency (and variance)?
I'm currently in Los Angeles. Bus latency is measured in minutes, and may easily exceed sixty of them. :| Seriously though: For applications requiring accurate representation of relativistic effects, the stdlib datetime module has a good few problems besides lacking sub-nanosecond precision. I'd be inclined to YAGNI this away unless/until some third-party module demonstrates that there's actually a use for a datetime module that can handle all that. ChrisA

On 23 October 2017 at 01:06, Wes Turner <wes.turner@gmail.com> wrote:
Yes, as time is a critical part of their experimental setup - when you're operating at relativistic speeds and the kinds of energy levels that particle accelerators hit, it's a bad idea to assume that regular time libraries that assume Newtonian physics applies are going to be up to the task. Normal software assumes a nanosecond is almost no time at all - in high energy particle physics, a nanosecond is enough time for light to travel 30 centimetres, and a high energy particle that stuck around that long before decaying into a lower energy state would be classified as "long lived". Cheers. Nick. P.S. "Don't take code out of the environment it was designed for and assume it will just keep working normally" is one of the main lessons folks learned from the destruction of the first Ariane 5 launch rocket in 1996 (see the first paragraph in https://en.wikipedia.org/wiki/Ariane_5#Notable_launches ) -- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia

I worked at a molecular dynamics lab for a number of years. I advocated switching all our code to using attosecond units (rather than fractional picoseconds). However, this had nothing whatsoever to do with the machine clock speeds, but only with the physical quantities represented and the scaling/rounding math. It didn't happen, for various reasons. But if it had, I certainly wouldn't have expected standard library support for this. The 'time' module is about wall clock out calendar time, not about *simulation time*. FWIW, a very long simulation might cover a millisecond of simulated time.... we're a very long way from looking at molecular behavior over 104 days. On Oct 22, 2017 8:10 AM, "Wes Turner" <wes.turner@gmail.com> wrote: On Saturday, October 21, 2017, Nick Coghlan <ncoghlan@gmail.com> wrote:
There are current applications with greater-than nanosecond precision: - relativity experiments - particle experiments Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp ?! - https://scholar.google.com/scholar?q=femtosecond - https://scholar.google.com/scholar?q=attosecond - GPS now supports nanosecond resolution - https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_ experimental_clocks
What about bus latency (and variance)?
From https://www.nist.gov/publications/optical-two-way- time-and-frequency-transfer-over-free-space :
How much wider must an epoch-relative time struct be for various realistic time precisions/accuracies? 10-6 micro µ 10-9 nano n -- int64 10-12 pico p 10-15 femto f 10-18 atto a 10-21 zepto z 10-24 yocto y I'm at a loss to recommend a library to prefix these with the epoch; but future compatibility may be a helpful, realistic objective. Natural keys with such time resolution are still unfortunately likely to collide.
_______________________________________________ Python-Dev mailing list Python-Dev@python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ mertz%40gnosis.cx

On Sunday, October 22, 2017, David Mertz <mertz@gnosis.cx> wrote:
Maybe that's why we haven't found any CTCs (closed timelike curves) yet. Aligning simulation data in context to other events may be enlightening: is there a good library for handing high precision time units in Python (and/or CFFI)? ... http://opendata.cern.ch/ http://opendata.cern.ch/getting-started/CMS

On Sun, Oct 22, 2017 at 1:42 PM, Wes Turner <wes.turner@gmail.com> wrote:
Well, numpy's datetime64 can be set to use (almost) whatever unit you want: https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays. datetime.html#datetime-units Though it uses a single epoch, which I don't think ever made sense with femtoseconds.... And it has other problems, but it was designed that way, just for the reason. However, while there has been discussion of improvements, like making the epoch settable, none of them have happened, which makes me think that no one is using it for physics experiments, but rather plain old human calendar time... -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker@noaa.gov

I have read PEP 564 and (mostly) followed the discussion in this thread, and I am happy with the PEP. I am hereby approving PEP 564. Congratulations Victor! -- --Guido van Rossum (python.org/~guido)

Thank you Guido for your review and approval. I just implemented PEP 564 and so changed the PEP status to Final. FYI I also added 3 new clock identifiers to the time module in Python 3.7: CLOCK_BOOTTIME, CLOCK_PROF and CLOCK_UPTIME. So you can now get your Linux uptime with a resolution of 1 nanosecond :-D

haypo@selma$ ./python -c 'import time; print(time.clock_gettime_ns(time.CLOCK_BOOTTIME))'
232172588663888

Don't do that at home, it's for educational purposes only! ;-) Victor 2017-10-30 18:18 GMT+01:00 Guido van Rossum <guido@python.org>:
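For comparison, a minimal sketch of the difference between the old and new APIs, assuming Python 3.7 (the printed values are illustrative):

import time

# time() returns seconds as a float and is limited by float precision;
# time_ns() returns an int of nanoseconds since the epoch, with no rounding.
print(time.time())      # e.g. 1509378769.8485565
print(time.time_ns())   # e.g. 1509378769848556549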

On 22 Oct 2017 17:06, "Wes Turner" <wes.turner@gmail.com> wrote: Must they always use their own implementations of time., datetime. __init__, fromordinal, fromtimestamp?! Yes, exactly. Note: adding resolution better than 1 µs to datetime is not in the scope of the PEP, but there is an issue which has been open for a long time. I don't think that time.time_ns() is usable for such experiments. Again, calling a function in Python takes around 50 ns. Victor
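Victor's ~50 ns figure is easy to sanity-check; a minimal sketch, where the exact number depends on the hardware and CPython version:

import timeit

# Average cost of a single time.time_ns() call, including the
# Python function-call overhead Victor refers to.
n = 10 ** 6
per_call = timeit.timeit('time.time_ns()', setup='import time', number=n) / n
print(f"~{per_call * 1e9:.0f} ns per call")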

On 22/10/17 17:06, Wes Turner wrote:
Sure, but in these kinds of experiments you don't have a "timestamp" in the usual sense. You'll have some kind of high-precision "clock", but in most cases there's no way and no reason to synchronise this to wall time. You end up distinguishing between "macro-time" (wall time) and "micro-time" (time in the experiment relative to something). In a particle accelerator, you care about measuring relative times of almost-simultaneous detection events with extremely high precision. You'll also presumably have a timestamp for the event, but you won't be able or willing to measure that with anything like the same accuracy. You might be able to say that you detected, say, a muon at 01:23:45.6789 at Δt=543.6ps*: you have femtosecond resolution and you have a timestamp, but you don't have a femtosecond timestamp. In ultrafast spectroscopy, we get a time resolution equal to the duration of the laser pulses (fs-ps), but all the micro-times measured are relative to some reference laser pulse, which repeats at >MHz frequencies. We also integrate over millions of events - wall-time timestamps don't enter into it. In summary, yes, when writing software for experiments working with high time resolution you have to write your own implementations of whatever data formats best describe time as you're measuring it, which generally won't line up with time as a PC (or a railway company) looks at it. Cheers Thomas * The example is implausible not least because I understand muon chambers tend to be a fair bit bigger than 15cm, but you get my point.
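To make Thomas's macro-/micro-time split concrete, a hypothetical record layout (the names and numbers are illustrative, not any real experiment's format):

from dataclasses import dataclass

@dataclass
class DetectionEvent:
    wall_time_ns: int   # "macro-time": coarse wall-clock timestamp
    delta_t_fs: int     # "micro-time": offset from the run's reference
                        # pulse, as an integer count of femtoseconds

# A muon seen at some wall time, at Δt = 543.6 ps after the reference pulse.
event = DetectionEvent(wall_time_ns=1508688225678900000, delta_t_fs=543_600)
print(event)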

On Monday, October 23, 2017, Thomas Jollans <tjol@tjol.eu> wrote:
(Sorry, maybe too OT.) So these experiments are all done in isolation, relative to t=0.
Aligning simulation data in context with other events may be enlightening:
IIUC, https://en.wikipedia.org/wiki/Quantum_mechanics_of_time_travel implies that there are (or may be) connections between events over greater periods of time. It's unfortunate that aligning this data requires adding offsets and working with nonstandard ad-hoc time structs. A problem for another day, I suppose. Thanks for adding time_ns().

Thanks Thomas, it was interesting! You confirmed that time.time_ns() and the other system clocks exposed by Python are inappropriate for sub-nanosecond physical experiments. By the way, you mentioned that clocks are not synchronized. That's another relevant point. Even if system clocks are synchronized on a single computer, I read that you cannot reach nanosecond resolution with NTP synchronization, even in a small LAN. For large or distributed systems, a "global (synchronized) clock" is not an option. You cannot synchronize clocks correctly, so your algorithms must not rely on time, or at least not on too precise a resolution. I am saying this to repeat, once again, that we are far from sub-nanosecond resolution for system clocks. Victor On 24 Oct 2017 01:39, "Thomas Jollans" <tjol@tjol.eu> wrote:

On Tue, 24 Oct 2017 09:00:45 +0200 Victor Stinner <victor.stinner@gmail.com> wrote:
What does synchronization have to do with it? If synchronization matters, then your PEP should be rejected, because current computers using NTP can't synchronize with better precision than 230 ns. See https://blog.cloudflare.com/how-to-achieve-low-latency/ Regards Antoine.

On Tuesday, October 24, 2017, Antoine Pitrou <solipsis@pitrou.net> wrote:
So, in regards to time synchronization, FWIU:

- WWVB "can provide time with an accuracy of about 100 microseconds"
- GPS time can synchronize down to "tens of nanoseconds"
- Blockchains work around local timestamp issues by "enforcing" linearity

2017-10-24 11:22 GMT+02:00 Antoine Pitrou <solipsis@pitrou.net>:
Currently, PEP 564 is mostly designed for handling time on a single computer: better resolution inside the same process, and "synchronization" between two processes running on the same host: https://www.python.org/dev/peps/pep-0564/#issues-caused-by-precision-loss Maybe tomorrow, time.time_ns() will help for use cases involving more computers :-)
This article doesn't mention NTP, synchronization or nanoseconds. Where did you see "230 ns" for NTP? Victor

On 24/10/2017 at 13:20, Victor Stinner wrote:
This article doesn't mention NTP, synchronization or nanoseconds.
NTP is layered over UDP. The article shows base case UDP latencies of around 15µs over 10Gbps Ethernet. Regards Antoine.

Warning: PEP 564 doesn't make any assumption about clock synchronization. My intent is only to expose what the operating system provides, without losing precision. That's all :-) 2017-10-24 13:25 GMT+02:00 Antoine Pitrou <antoine@python.org>:
NTP is layered over UDP. The article shows base case UDP latencies of around 15µs over 10Gbps Ethernet.
Ah ok. IMHO the discussion became off-topic somewhere, but I'm curious, so I searched for the best NTP accuracy and found: https://blog.meinbergglobal.com/2013/11/22/ntp-vs-ptp-network-timing-smackdo... "Is the accuracy you need measured in microseconds or nanoseconds? If the answer is yes, you want PTP (IEEE 1588). If the answer is in milliseconds or seconds, then you want NTP." "There is even ongoing standards work to use technology developed at CERN (...) to extend PTP to picoseconds." It seems like PTP is more accurate than NTP. Victor

Since the ``time.clock()`` function was deprecated in Python 3.3, no ``time.clock_ns()`` is added.
FYI I just proposed a change to *remove* time.clock() from Python 3.7: https://bugs.python.org/issue31803 This change is not required by, nor directly related to, PEP 564. Victor
participants (13)
- Antoine Pitrou
- Antoine Pitrou
- Ben Hoyt
- Chris Angelico
- Chris Barker
- David Mertz
- Ethan Furman
- francismb
- Guido van Rossum
- Nick Coghlan
- Thomas Jollans
- Victor Stinner
- Wes Turner