Yes, that's the right way to define it (and PEPs should primarily
concern themselves with crisp definitions).

Couldn't you get timeline arithmetic today by giving each datetime
object a different tzinfo object?
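
Something like this sketch of what I mean (the values are made up for
illustration; subtraction between aware datetimes whose tzinfo objects
differ already converts through UTC):

from datetime import datetime, timedelta, timezone

# Two aware datetimes, each carrying its own fixed-offset tzinfo.
x = datetime(2015, 8, 18, 23, 0, tzinfo=timezone(timedelta(hours=-4)))
y = datetime(2015, 8, 18, 22, 0, tzinfo=timezone(timedelta(hours=-5)))

# Inter-zone subtraction acts as if both operands were first converted
# to UTC, which is exactly timeline arithmetic:
print(x - y)  # 0:00:00, the same instant in two different zones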

On Tue, Aug 18, 2015 at 10:52 PM, Tim Peters <tim.peters@gmail.com> wrote:

[Guido]
> ...
> This discussion sounds overly abstract. ISTM that d(x, y) in timeline
> arithmetic can be computed as x.timestamp() - y.timestamp(), (and converting
> to a timedelta).

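Concretely, that suggestion is something like this sketch (timeline_diff
is just an illustrative name, not anything in the module):

from datetime import datetime, timedelta

def timeline_diff(x: datetime, y: datetime) -> timedelta:
    # Difference of POSIX timestamps, converted back to a timedelta.
    # Both .timestamp() results pass through a C double, so this
    # spelling inherits binary floating-point rounding.
    return timedelta(seconds=x.timestamp() - y.timestamp())
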
As someone else might say, if you want timestamps, use timestamps ;-)

I want to discourage people from thinking of it that way, because it
only works in a theoretical framework abstracting away how arithmetic
actually behaves. Timestamps in Python suck in a world of
floating-point pain that I tried hard to keep entirely out of datetime
module semantics (although I see float operations have increasingly
wormed their way in).

Everyone who thinks about it soon realizes that a datetime simply has
"too many bits" to represent faithfully as a Python float, and so also
as a Python timestamp. But I think few realize this isn't a problem
confined to far-future datetimes that only our descendants will
experience. It can surprise people even today. For example, here on
my second try:

>>> from datetime import datetime
>>> d = datetime.now()
>>> d
datetime.datetime(2015, 8, 18, 23, 8, 54, 615774)
>>> datetime.fromtimestamp(d.timestamp())
datetime.datetime(2015, 8, 18, 23, 8, 54, 615773)

See? We can't even expect to round-trip faithfully with current
datetimes. It's not really that there "aren't enough bits" to
represent a current datetime value in a C double; it's that the
closest binary float approximating the decimal 1439957334.615774 is
strictly less than that decimal value. That causes the microsecond
portion to get chopped to 615773 on the way back. It _could_ be
rounded instead, which would make roundtripping work for some number
of years to come (before it routinely failed again), but rounding
would cause other surprises.

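The "strictly less" part is easy to see with decimal (nothing
datetime-specific here, just the generic binary-float story):

>>> from decimal import Decimal
>>> Decimal(1439957334.615774) < Decimal("1439957334.615774")
True

And a rounding flavor of the conversion would look something like this
sketch (fromtimestamp_rounding is an illustrative name; note it returns
an aware UTC datetime, unlike fromtimestamp's local-time default):

from datetime import datetime, timedelta, timezone

def fromtimestamp_rounding(ts: float) -> datetime:
    # Round to the nearest microsecond instead of letting the excess
    # float precision get chopped off.
    return (datetime(1970, 1, 1, tzinfo=timezone.utc) +
            timedelta(microseconds=round(ts * 10**6)))
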
Anyway, "the right way" to think about timeline arithmetic is the way<br>
the sample code in PEP 500 spells it:: using classic datetime<br>
arithmetic on datetimes in (our POSIX approximation of) UTC,<br>
converting to/from other timezones in the obvious ways There are no<br>
surprises then (not after PEP 495-compliant tzinfo objects exist),<br>
neither in theory nor in how code actually behaves (leaving aside that<br>
the results won't always match real-life clocks).<br>
<br>
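In code, that spelling is something like this sketch (illustrative
names again; PEP 500's sample code is the real reference, and aware
datetimes are assumed). Contrast timeline_diff here with the
float-based spelling above: this one never leaves integer microseconds.

from datetime import datetime, timedelta, timezone

def timeline_add(dt: datetime, delta: timedelta) -> datetime:
    # Convert to UTC, do classic arithmetic there, convert back.
    utc = dt.astimezone(timezone.utc)
    return (utc + delta).astimezone(dt.tzinfo)

def timeline_diff(x: datetime, y: datetime) -> timedelta:
    # Classic subtraction of two UTC datetimes is already timeline
    # arithmetic, and no floats are involved anywhere.
    return x.astimezone(timezone.utc) - y.astimezone(timezone.utc)
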
If you want to _think_ of that as being equivalent to arithmetic using
theoretical infinitely-precise Python timestamps, that's fine. But it
also means you're over 50 years old and the kids will have a hard time
understanding you ;-)

-- 
--Guido van Rossum (python.org/~guido)