# [Python-Dev] Subsecond time stamps

**Martin v. Loewis**

martin@v.loewis.de

*10 Sep 2002 01:34:12 +0200*

"Andrew P. Lentvorski" <bsder@mail.allcaps.org> writes:
> This then locks Python into a specific bit-description notion of a double
> in order to get the appropriate number of significant digits to describe
> time sufficiently. Embedded/portable processors may not support the
> notion of an IEEE double.
That's not true. Suppose you have two fields, tv_sec and tv_nsec. Then
the resulting float expression is

    tv_sec + 1e-9 * tv_nsec

This expression works on any system that supports floating-point
numbers, be it IEEE or not.
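A minimal sketch of that conversion (the field names follow the POSIX `struct timespec` convention; the helper name is illustrative, not an existing API):

```python
def timespec_to_float(tv_sec, tv_nsec):
    """Combine timespec-style fields into a single float of seconds.

    Works with any floating-point representation, IEEE or not;
    precision is limited only by the platform's float width.
    """
    return tv_sec + 1e-9 * tv_nsec

# E.g. half a second past some whole second:
stamp = timespec_to_float(1031614452, 500000000)
```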
> In addition, timers get increasingly dense as computers get faster.
> Thus, doubles may work for nanoseconds, but will not be sufficient
> for picoseconds.
At the same time, floating-point numbers get increasingly accurate as
computer registers widen. In a 64-bit float, you can just barely
express 1e-7 s (if you base the epoch at 1970); with a 128-bit float,
you can express 1e-20 s easily.
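That 64-bit limit can be checked directly in a modern Python; `math.ulp` (added in Python 3.9, long after this message) gives the spacing between adjacent doubles at a given magnitude:

```python
import math

# Seconds since the 1970 epoch around the date of this message (Sep 2002).
t = 1031614452.0

# The gap to the next representable double at this magnitude: 2**-23,
# about 1.2e-7 s, so a double resolves only about tenths of microseconds
# for present-day timestamps -- hence "just barely" 1e-7 s.
resolution = math.ulp(t)
print(resolution)
```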
> If the goal is a field which never has to be changed to support any amount
> of time, the value should be "infinite precision".
No, just using floating point numbers is sufficient. Notice that
time.time() also returns a floating point number.
> At that point, a Python Long used in some tuple representation of
> fixed-point arithmetic springs to mind. ie. (<long>, <bit of
> fractional point>)
Yes, when/if Python gets rational numbers, or decimal
fixed-or-floating point numbers, those data types might represent the
value that the system reports more accurately. At that time, there
will be a transition plan to introduce those numbers at all places
where it is reasonable, with as little impact on applications as
possible.
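The tuple representation suggested above is already expressible with Python longs today; a hedged sketch (the function names are illustrative, not a proposed stdlib API):

```python
# A timestamp as (whole seconds, nanoseconds), both plain Python ints.
# Since Python longs are arbitrary precision, no timer density overflows
# this; only the chosen denominator (here 10**9) limits the resolution.
NS_PER_SEC = 10**9

def make_stamp(tv_sec, tv_nsec):
    return (tv_sec, tv_nsec)

def add_stamps(a, b):
    """Add two stamps exactly, carrying the fraction into the seconds."""
    total_nsec = a[1] + b[1]
    sec = a[0] + b[0] + total_nsec // NS_PER_SEC
    return (sec, total_nsec % NS_PER_SEC)

s = add_stamps((1031614452, 900_000_000), (0, 200_000_000))
# -> (1031614453, 100000000): no rounding anywhere, unlike float addition.
```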
Regards,
Martin