[Datetime-SIG] Timeline arithmetic?
tim.peters at gmail.com
Mon Sep 7 11:12:04 CEST 2015
[Carl Meyer <carl at oddbird.net>]
> (tl;dr I think your latest proposal re PEP 495 is great.)
I don't. The last two were less annoying, though ;-)
> I think we're still mis-communicating somewhat. Before replying point by
Or it could be we have different goals here, and each keep trying to
nudge the other to change the topic ;-)
> let me just try to explain what I'm saying as clearly as I can.
> Please tell me precisely where we part ways in this analysis.
> Consider two models for the meaning of a "timezone-aware datetime
> object". Let's just call them Model A and Model B:
In which context? Abstractly, or the context of Python's current
datetime module, or in the context of some hypothetical future Python
datetime module, or some datetime module that _might_ have existed
instead, or ...?
My only real interest here is moving the module that actually exists
to one that can get conversions right in all cases, preferably in a
wholly backward-compatible way. Models don't really matter to that,
but specific behaviors do.
> In Model A, an aware datetime (in any timezone) is nothing more than an
> alternate (somewhat complexified for human use) spelling of a Unix
> timestamp, much like a timedelta is just a complexified spelling of some
> number of microseconds.
A Python datetime is also just a complexified spelling of some number
of microseconds (since the start of 1 January 1 in the proleptic
Gregorian calendar).
> In this model, there's a bijection between aware datetimes in any
> two timezones. (This model requires the PEP 495 flag,
> or some equivalent. Technically, this model _could_ be implemented by
> simply storing a Unix timestamp and a timezone name, and doing all
> date/time calculations at display time.) In this model, "Nov 2 2014
> 1:30am US/Eastern fold=1" and "Nov 2 2014 6:30am UTC" are just alternate
> spellings of the _same_ underlying timestamp.
> Characteristics of Model A:
> * There's no issue with comparisons or arithmetic involving datetimes in
> different timezones; they're all just Unix timestamps under the hood
> anyway, so ordering and arithmetic is always obvious and consistent:
> it's always equivalent to simple integer arithmetic with Unix timestamps.
> * Conversions between timezones are always unambiguous and lossless:
> they're just alternate spellings of the same integer, after all.
> * In this model, timeline arithmetic everywhere is the only option.
Why? The kind of arithmetic needed for a task depends on the task.
There are no specific use cases given here, so who can say? Some
tasks need to account for real-world durations; others need to
overlook irregularities in real-world durations (across zone
transitions) in order to maintain regularities between the
before-and-after calendar notations. Timeline arithmetic is only
directly useful for dealing with real-world durations as they affect
civil calendar notations. Some tasks require that; other tasks can't
tolerate it.
That said, it would be cleanest to have distinct types for each
purpose. Whether that would be more _usable_ I don't know.
> Every non-UTC aware datetime is just an alternate spelling of an
> equivalent UTC datetime / Unix timestamp, so in a certain sense you're
> always doing "arithmetic in UTC" (or "arithmetic with Unix timestamps"),
> but you can spell it in whichever timezone you like. In this model,
> there's very little reason to consider arithmetic in non-UTC timezones
> problematic; it's always consistent and predictable and gives exactly
> the same results as converting to UTC first. For sizable systems it may
> still be good practice to do everything internally in UTC and convert at
> the edges, but the reasons are not strong; mostly just avoiding
> interoperability issues with databases or other systems that don't
> implement the same model, or have poor timezone handling.
How do you think timeline arithmetic is implemented? datetime's
motivating use cases overwhelmingly involved quick access to local
calendar notation, so datetime stores local calendar notation (both in
memory and in pickles) directly. Any non-toy implementation of
timeline arithmetic would store time internally in UTC ticks instead,
enduring expensive conversions to local calendar notation only when
explicitly demanded. As is, the only way to get timeline arithmetic
in datetime is to do some equivalent to converting to UTC first, doing
dirt simple arithmetic in UTC, then converting back to local calendar
notation. That's _horridly_ expensive in comparison. pytz doesn't
avoid this. The arithmetic itself is fast, because it is in fact
classic arithmetic. The expense is hidden in the .normalize() calls,
which perform to-UTC-and-back "repair".
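That to-UTC-and-back dance is easy to see concretely with a toy tzinfo
(purely hypothetical: it hardcodes only the US/Eastern fall-back of
2014-11-02; a real zone would come from a tz database):

```python
from datetime import datetime, timedelta, timezone, tzinfo

class ToyEastern(tzinfo):
    """Hypothetical zone: US/Eastern around the 2014-11-02 fall-back only.
    At 02:00 EDT on the local clock, clocks fall back an hour to 01:00 EST."""
    DST_END = datetime(2014, 11, 2, 2, 0)  # naive local wall time

    def utcoffset(self, dt):
        return timedelta(hours=-5) + self.dst(dt)

    def dst(self, dt):
        # DST (+1 hour) before the transition, standard time after
        in_dst = dt.replace(tzinfo=None) < self.DST_END
        return timedelta(hours=1) if in_dst else timedelta(0)

    def tzname(self, dt):
        return "EDT" if self.dst(dt) else "EST"

eastern = ToyEastern()
start = datetime(2014, 11, 1, 12, 0, tzinfo=eastern)  # noon EDT

# Classic ("move the clock hands") arithmetic: what datetime does natively.
classic = start + timedelta(days=1)  # noon on the local clock the next day

# Timeline arithmetic: the expensive dance - to UTC, add, back to local.
timeline = (start.astimezone(timezone.utc) + timedelta(days=1)).astimezone(eastern)

print(classic.strftime("%H:%M %Z"))   # 12:00 EST
print(timeline.strftime("%H:%M %Z"))  # 11:00 EST
```

24 real-world hours land an hour earlier on the local clock because the
span crosses the fold; classic arithmetic ignores the transition and
keeps the clock notation regular.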
Pragmatics are important here too. For many problem domains, you have
to get results before the contract expires ;-)
> * In this model, "classic" arithmetic doesn't even rise to the level of
> "attractive nuisance," it's simply "wrong arithmetic," because you get
> different results if working with the "same time" represented in
> different timezones, which violates the core axiom of the model; it's no
> longer simply arithmetic with Unix timestamps.
Models are irrelevant to right or wrong; right or wrong can only be
judged with respect to use cases (does a gimmick address the required
task, or not? if so, "right"; if not, is it at least feasible to get
the job done? if so, "grr - but OK"; if still not, "wrong"). Models
can make _achieving_ "right" harder or easier, depending on what a use
case needs.
datetime's model and implementation made it relatively easy to address
every use case collected across an extensive public design phase.
None of them were about accounting for real-world duration delta as
they affect, or are affected by, civil calendar notations.
Of course those may not be _your_ use cases.
> I don't believe there's anything wrong with Model A. It's not the right
> model for _all_ tasks, but it's simple, easy to understand, fully
> consistent, and useful for many tasks.
Sure! Except for the "simple" and "easy to understand" parts ;-)
People really do trip all the time over zone transitions, to the
extent that no two distinct implementations of C mktime() can really
be expected to set is_dst the same way in all cases, not even after
decades of bug fixes. Your "poor timezone handling" is a real problem
in edge cases across platforms.
> On the whole, it's still the model I find most intuitive and would prefer
> for most of the timezone code I personally write (and it's the one I actually
> use today in practice, because it's the model of pytz).
Do you do much datetime _arithmetic_ in pytz? If you don't, the kind
of arithmetic you like is pretty much irrelevant ;-) But, if you do,
take pytz's own docs to heart:
The preferred way of dealing with times is to always work in UTC,
converting to localtime only when generating output to be read
by humans.
Your arithmetic-intensive code would run much faster if you followed
that advice, and you could throw out mountains of .normalize() calls.
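A sketch of that advice using only the stdlib (a fixed-offset zone
stands in for a real tz-database zone here; in a fixed-offset zone
classic and timeline arithmetic agree, so the pattern is the point,
not the zone):

```python
from datetime import datetime, timedelta, timezone

utc = timezone.utc
eastern_std = timezone(timedelta(hours=-5))  # stand-in for a real zone

# Convert to UTC at the boundary...
start = datetime(2014, 11, 1, 12, 0, tzinfo=eastern_std).astimezone(utc)
# ...do all the arithmetic in UTC, where it's dirt cheap and unambiguous...
later = start + timedelta(hours=25)
# ...and convert back to local notation only when generating output.
print(later.astimezone(eastern_std))  # 2014-11-02 13:00:00-05:00
```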
You're working in Python, and even the storage format of Python
datetimes strongly favors classic arithmetic (as before, any serious
implementation of timeline arithmetic would store UTC ticks directly).
> Now Model B. In Model B, an "aware datetime" is a "clock face" or
> "naive" datetime with an annotation of which timezone it's in. A non-UTC
> aware datetime in model B doesn't inherently know what POSIX timestamp
> it corresponds to; that depends on concepts that are outside of its
> naive model of local time, in which time never jumps or goes backwards.
> Model B is what Guido was describing in his email about an aware
> datetime in 2020: he wants an aware datetime to mean "the calendar says
> June 3, the clock face says noon, and I'm located in US/Eastern" and
> nothing more.
> Characteristics of Model B:
> * Naive (or "classic", or "move the clock hands") arithmetic is the only
> kind that makes sense under Model B.
It again depends on which specific use cases you have in mind. Few
people think inside a rigid model. Sometimes they want to break out
of the model, especially when a use case requires it ;-) As you know
all too well already, Python also intends to support a programmer
changing their mind, to view their annotated naive datetime as a
moment in civil time too, at least for zone conversion purposes.
> * As Guido described, if you store an aware datetime and then your tz
> database is updated before you load it again, Model A and Model B aware
> datetimes preserve different invariants. A Model A aware datetime will
> preserve the timestamp it represents, even if that means it now
> represents a different local time than before the zoneinfo change. A
> Model B aware datetime will preserve the local clock time, even though
> it now corresponds to a different timestamp.
> * You can't compare or do arithmetic between datetimes in different
> timezones under Model B; you need to convert them to the same time zone
> first (which may require resolving an ambiguity).
> * Maintaining a `fold` attribute on datetimes at all is a departure from
> Model B, because it represents a bit of information that's simply
> nonsense/doesn't exist within Model B's naive-clock-time model.
> * Under Model B, conversions between timezones are lossy during a fold
> in the target timezone, because two different UTC times map to the same
> Model B local time.
Should also note that Model B conversions to UTC can map two datetimes
to the same UTC time (for times in a gap - they don't exist on the
local civil clock, so have to map to the same UTC value as some other
Model B time that _does_ exist on the local clock).
> These models aren't chosen arbitrarily; they're the two models I'm aware
> of for what a "timezone-aware datetime" could possibly mean that
> preserve consistent arithmetic and total ordering in their allowed
> domains (in Model A, all aware datetimes in any timezone can
> interoperate as a single domain; in Model B, each timezone is a separate
> domain).
> A great deal of this thread (including most of my earlier messages and,
> I think, even parts of your last message here that I'm replying to) has
> consisted of proponents of one of these two models arguing that behavior
> from the other model is wrong or inferior or buggy (or an "attractive
> nuisance").
Direct overloaded-operator support for timeline arithmetic is an
attractive nuisance _in datetime_, or any other Python module sharing
datetime's data representation. I disagree with your "but the reasons
are not strong" above. It requires relatively enormous complexity and
expense to perform each lousy timeline addition, subtraction, and
comparison in a non-eternally-fixed-offset zone. It's poor practice
for that reason alone.
Nevertheless, your code, your choice.
> I now think these assertions are all wrong :-) Both models
> are reasonable and useful, and in fact both are capable enough to handle
> all operations, it's just a question of which operations they make
> simple. Model B people say "just do all your arithmetic and comparisons
> in UTC"; Model A people say "if you want Model B, just use naive
> datetimes and track the implied timezone separately."
Do note that my _only_ complaint against timeline arithmetic is making
it seductively easy to spell in Python's datetime. It's dead easy to
get the same results in the intended way (or, would be, in a post-495
world).
> I came into this discussion assuming that Model A was the only sensible
> way for a datetime library to behave. Now (thanks mostly to Guido's note
> about dates in 2020), I've been convinced that Model B is also
> reasonable, and preferable for some uses.
For the use cases collected when datetime was being designed, it was
often the clearly better model, and was never the worse model. Where
"better" and "worse" are judged relative to the model's naturalness in
addressing a use case. Alas, those were collected on a public Wiki
that no longer appears to exist.
> I've also been convinced that Model B is the dominant influence
> and intended model in datetime's design, and that's very unlikely
> to change (even in a backwards-compatible way), so I'm no
> longer advocating that.
That's good, because futility can become tiring as the decades drag on ;-)
> Datetime.py, unfortunately, has always mixed behavior from the two
> models (interzone operations are all implemented from a Model A
> viewpoint; intrazone are Model B). Part of the problem with this is that
> it results in a system that looks like it ought to have total ordering
> and consistent arithmetic, but doesn't. The bigger problem is that it
> has allowed people to come to the library from either a Model A or Model
> B viewpoint and find enough behavior confirming their mental model to
> assume they were right, and assume any behavior that doesn't match their
> model is a bug. That's what happened to Stuart, and that's why pytz
> implements Model A, and has thus encouraged large swathes of Python
> developers to even more confidently presume that Model A is the intended
> model.
Stuart would have to address that. He said earlier that his primary
concern was to fix conversions in all cases, not arithmetic.
I explained before that timeline arithmetic was a natural consequence of
the _way_ pytz repaired conversions. It's natural enough then to
assume "oh, I just fixed _two_ bugs!" ;-)
As is, as Isaac noted earlier, he's had a hellish time getting, e.g.,
pytz and dateutil to work together. dateutil requires classic
arithmetic (which is by far the more convenient for implementing
almost all forms of "calendar operations"). So, e.g., take a pytz
aware datetime d, and do
d += relativedelta(month=12, day=1, weekday=FR(+3))
where everything on the RHS is a dateutil way to spell "same time on
the 3rd Friday of this December" when added to a datetime. That's not
particularly contrived - it's, e.g., a way to spell the day monthly US
equity options expire in December, and a user may well need to set an
alarm "at the same wall clock time" then to check their expiring
December contracts before the market closes. Being an hour off
_could_ be a financial disaster to them.
The result is fine, until you do a pytz .normalize(). If d, e.g.,
started in June, then in the US the hour _will_ magically become wrong
"because" there was a DST transition between the original and final
times. Far worse than useless.
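For reference, the calendar operation itself is easy under classic
arithmetic. Here's a stdlib-only stand-in for that relativedelta
(`third_friday_of_december` is a made-up helper; dateutil's version
handles vastly more) - note the wall-clock time carries over untouched:

```python
from datetime import datetime, timedelta

def third_friday_of_december(dt):
    """Same wall-clock time on the third Friday of this December."""
    first = dt.replace(month=12, day=1)
    # weekday(): Monday=0 ... Friday=4; step to the first Friday, then +2 weeks
    to_friday = (4 - first.weekday()) % 7
    return first + timedelta(days=to_friday + 14)

d = datetime(2014, 6, 15, 15, 30)  # 3:30pm on the local clock, in June
print(third_friday_of_december(d))  # 2014-12-19 15:30:00
```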
A similar fate awaits any attempt to make timeline arithmetic a
default behavior (if it changed what datetime + timedelta did
directly, the dateutil result would be wrong immediately, because
dateutil's relativedelta.__add__ relies in part on what `datetime +
timedelta` does). "Plays nice with others" is also important unless a
module is content to live in a world of its own.
> I think your latest proposal for PEP 495 (always ignore `fold` in all
> intra-zone operations, and push the inconsistency into inter-zone
> comparisons - which were already inconsistent - instead) is by far the
> best option for bringing loss-less timezone-conversion round-trips to
> Model B. Instead of saying (as earlier revisions of PEP 495 did) "we
> claim we're really Model B, but we're going to introduce even more Model
> A behaviors, breaking the consistency of Model B in some cases - good
> luck keeping it straight!" it says "we're sticking with Model B, in
> which `fold` is meaningless when you're working within a timezone, but
> in the name of practical usability we'll still track `fold` internally
> after a conversion, so you don't have to do it yourself in case you want
> to convert to another timezone later."
Alas, there's still no _good_ solution to this :-(
> If the above analysis makes any sense at all to anyone, and you think
> something along these lines (but shorter and more carefully edited)
> would make a useful addition to the datetime docs (either as a
> tutorial-style "intro to how datetime works and how to think about aware
> datetimes" or as an FAQ), I'd be very happy to write that patch.
I've mentioned a few times before that I'd welcome something more akin
to the "floating-point surprises" appendix of the Python tutorial.
Most users don't want to read anything about theory, but it needs to
be discussed sometimes. So in that appendix, the approach is to
introduce bite-sized chunks of theory to explain concrete, visible
_behaviors_, along with practical advice. The goal is to get the
reader unstuck, not to educate them _too_ much ;-) Anyway, that
appendix appears to have been effective at getting many users unstuck,
so I think it's a now-proven approach.
>> Classic arithmetic is equivalent to doing integer arithmetic on
>> integer POSIX timestamps (although with wider range, the same across
>> all platforms, and extended to microsecond precision). That's hardly
>> novel - there's a deep and long history of doing exactly that in the
>> Unix(tm) world. Which is Guido's world. There "shouldn't be"
>> anything controversial about that. The direct predecessor was already
>> best practice in its world. How that could be considered a nuisance
>> seems a real strain to me.
> Unless I'm misunderstanding what you are saying (always likely!), I
> think this is just wrong. POSIX timestamps are a representation of an
> instant in time (a number of seconds since the epoch _in UTC_).
Well, in the POSIX approximation to UTC. Strict POSIX forbids using
real-world UTC (which suffers leap seconds). But, below, I won't keep
making this distinction. That should be a relief ;-)
> If you are doing any kind of "integer arithmetic on POSIX timestamps", you
> are _always_ doing timeline arithmetic.
> Classic arithmetic may be many things, but the one thing it definitively is
> _not_ is "arithmetic on POSIX timestamps."
False. UTC is an eternally-fixed-offset zone. There are no
transitions to be accounted for in UTC. Classic and timeline
arithmetic are exactly the same thing in any eternally-fixed-offset
zone. Because POSIX timestamps _are_ "in UTC", any arithmetic
performed on one is being done in UTC too. Your illustration next
goes way beyond anything I could possibly read as doing arithmetic on
POSIX timestamps.
> This is easy to demonstrate: take one POSIX timestamp, convert it to
> some timezone with DST, add 86400 seconds to it (using "classic
> arithmetic") across a DST gap or fold, and then convert back to a POSIX
> timestamp, and note that you don't have a timestamp 86400 seconds away
> from the first timestamp. If you were doing simple "arithmetic on POSIX
> timestamps", such a result would not be possible.
But you're cheating there. It's clear as mud what you have in mind,
concretely, for the _result_ of what you get from "convert it to
some timezone with DST", but the result of that can't possibly be a
POSIX timestamp: as you said at the start, a POSIX timestamp denotes
a number of seconds from the epoch _in UTC_. You're no longer in UTC.
You left the POSIX timestamp world at your very first step. So
anything you do after that is irrelevant to how arithmetic on POSIX
timestamps behaves.
BTW, how do you intend to do that conversion to begin with? C's
localtime() doesn't return time_t (a POSIX timestamp). The standard C
library supports no way to perform the conversion you described,
because that's not how times are intended to work in C, because in
turn the Unix world has the same approach to this as Python's
datetime: all timeline arithmetic is intended to be done in UTC
(equivalent to POSIX timestamps), converting to UTC first (C's
mktime()), then back when arithmetic is done (C's localtime()). The
only difference is that datetime spells both C library functions via
.astimezone(), and is about 1000 times easier to use ;-)
If you're unfamiliar with how this stuff is done in C, here's a
typically incomprehensible ;-) man page briefly describing all the
main C time functions.
Note that mktime ("convert from local to UTC") is the _only_ one
returning a timestamp (time_t). The intent is you do all arithmetic
on time_t's, staying in UTC for the duration. When you're done,
_then_ localtime() converts your final time_t back to local calendar
notation (fills a `struct tm` for output). Exactly the same dance
datetime intends. Python stole almost all of this from C best
practice, except for the spelling.
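If Python is easier to read than C, the same dance is visible through
Python's `time` module, which wraps those very functions (the concrete
local results depend on the host's configured zone, so only the shape
matters here):

```python
import time

# localtime(): time_t -> struct tm (local calendar notation), output only
now_tm = time.localtime()

# mktime(): struct tm (local notation) -> time_t, i.e. "convert to UTC"
t = time.mktime(now_tm)

# All arithmetic happens on the time_t: plain number arithmetic, in UTC.
t += 86400

# Only at the very end: back to local calendar notation for display.
tomorrow_tm = time.localtime(t)
print(time.strftime("%Y-%m-%d %H:%M:%S", tomorrow_tm))
```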
If by "convert it to some timezone with DST", you intended to get a
struct tm (local calendar notation), then add 86400 to the tm_sec
member, then that doesn't even have an hallucinogenic resemblance to
doing arithmetic on POSIX timestamps.
> In Model A (the one that Lennart and myself and Stuart and Chris have
> all been advocating during all these threads), aware datetimes (in any
> timezone) are unambiguous representations of a POSIX timestamp, and all
> arithmetic is "arithmetic on POSIX timestamps." That right there is the
> definition of timeline arithmetic.
Here's an example of arithmetic on POSIX timestamps:
1 + 2
returning 3. It's not some kind of equivalence relation or bijection,
it's concretely adding two integers to get a third integer. That's
all I mean by "arithmetic on POSIX timestamps". It's equally useful
for implementing classic or timeline arithmetic. The difference
between those isn't in the timestamp arithmetic, it's in how
conversions between integers and calendar notations are defined.
There does happen to be an obvious bijection between arithmetic on
(wide enough) POSIX timestamps and naive datetime arithmetic, which is
in turn trivially isomorphic to aware datetime arithmetic in UTC.
Although the "obvious" there depends on knowing first that, at heart,
a Python datetime is an integer count of microseconds since the start
of 1 January 1. It's just an integer stored in a bizarre mixed-radix
notation.
> So yes, I agree with you that it's hard to consider "arithmetic on POSIX
> timestamps" an attractive nuisance :-)
>> Where it gets muddy is extending classic arithmetic to aware datetimes
> If by "muddy" you mean "not in any way 'arithmetic on POSIX timestamps'
> anymore." :-)
> I don't even know what you mean by "extending to aware datetimes" here;
I meant what I said: extending classic arithmetic to aware datetimes
muddied the waters. Because some people do expect aware datetimes to
implement timeline arithmetic instead. That's all.
> the concept of "arithmetic on POSIX timestamps" has no meaning at all
> with naive datetimes (unless you're implicitly assuming some timezone),
> because naive datetimes don't correspond to any particular instant,
> whereas a POSIX timestamp does.
If you need to, implicitly assume UTC. There are no surprises at all
if you want to _think_ of naive datetimes as being in (the POSIX
approximation of real-world) UTC. They're identical in all visible
behaviors that don't require a tzinfo. Indeed, here's how to convert
a naive datetime `dt` "by hand" to an integer POSIX timestamp,
pretending `dt` is a UTC time:
EPOCH = datetime(1970, 1, 1)
ts = (dt - EPOCH) // timedelta(seconds=1)
Try it! If you don't have Python 3, it's just as trivial, but you'll
have to convert the 3 timedelta attributes (days, seconds,
microseconds) to seconds by hand and add them.
Then EPOCH + timedelta(seconds=ts) gets back the original dt.
To get a floating POSIX timestamp instead (including microseconds):
ts = (dt - EPOCH).total_seconds()
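Stitching those snippets into one runnable round trip:

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)
dt = datetime(2015, 9, 7, 11, 12, 4, 250000)  # naive; pretend it's UTC

# integer POSIX timestamp: whole seconds since the epoch
ts = (dt - EPOCH) // timedelta(seconds=1)

# the inverse direction restores dt (sans the truncated microseconds)
back = EPOCH + timedelta(seconds=ts)

# a floating timestamp keeps the microseconds too
fts = (dt - EPOCH).total_seconds()
```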
Please let's not argue about trivially easy bijections. datetime's
natural EPOCH is datetime(1, 1, 1), and _all_ classic arithmetic is
easily defined in terms of integer arithmetic on
integer-count-of-microsecond timestamps starting from there. While it
would be _possible_ to think of those as denoting UTC timestamps, it
wouldn't really be helpful ;-)
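Spelling that out (to_us/from_us are made-up helper names, just to show
the integer under the hood):

```python
from datetime import datetime, timedelta

D0 = datetime(1, 1, 1)  # datetime's natural epoch (datetime.min)
US = timedelta(microseconds=1)

def to_us(dt):
    # the integer "under the hood": microseconds since 1 January 1
    return (dt - D0) // US

def from_us(n):
    return D0 + n * US

a = datetime(2015, 9, 7, 11, 12, 4)
delta = timedelta(days=3, hours=5, microseconds=7)

# classic arithmetic is exactly integer arithmetic on these counts
assert a + delta == from_us(to_us(a) + delta // US)
```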
>>> If datetime did naive arithmetic on tz-annotated datetimes, and also
>>> refused to ever implicitly convert them to UTC for purposes of
>>> cross-timezone comparison or arithmetic, and included a `fold` parameter
>>> not on the datetime object itself but only as an additional input
>>> argument when you explicitly convert from some other timezone to UTC,
>>> that would be a consistent view of the meaning of a tz-annotated
>>> datetime, and I wouldn't have any problem with that.
>> I would. Pure or not, it sounds unusable: when I convert _from_ UTC
>> to a local zone, I have no idea whether I'll end up in a gap, a fold,
>> or neither. And so I'll have no idea either what to pass _to_
>> .utcoffset() when I need to convert back to UTC. It doesn't solve the
>> conversion problem. It's a do-it-yourself kit missing the most
>> important piece. "But .fromutc() could return the right flag to pass
>> back later" isn't attractive either. Then the user ends up needing to
>> maintain their own (datetime, convert_back_flag) pairs. In which
>> case, why not just store the flag _in_ the datetime? Only tzinfo
>> methods would ever need to look at it.
> Yes, I agree with you here. I think your latest proposal for PEP 495
> does a great job of providing this additional convenience for the user
> without killing the intra-timezone Model B consistency. I just wish that
> the inconsistent inter-timezone operations weren't supported at all, but
> I know it's about twelve years too late to do anything about that other
> than document some variant of "you shouldn't compare or do arithmetic
> with datetimes in different timezones; if you do you'll get inconsistent
> results in some cases around DST transitions. Convert to the same
> timezone first instead."
Alas, I'm afraid Alex is right that people may well be using interzone
subtraction to do conversions already. For example, the timestamp
snippets I gave above are easily extended to convert any aware
datetime to a POSIX timestamp: just slap tzinfo=utc on the EPOCH
constant, and then by-magic interzone subtraction converts `dt` to UTC
automatically. For that to continue to work as intended in all cases
post-495, we can't change anything about interzone subtraction.
Which, for consistency between them, implies we "shouldn't" change
anything about interzone comparisons either.
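For instance, with an aware EPOCH and a fixed-offset zone standing in
for any aware zone (CEST here, matching this message's timestamp):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
cest = timezone(timedelta(hours=2))  # fixed-offset stand-in for a real zone

dt = datetime(2015, 9, 7, 11, 12, 4, tzinfo=cest)

# Interzone subtraction converts dt to UTC by magic, so this yields a
# correct POSIX timestamp for any aware datetime:
ts = (dt - EPOCH) // timedelta(seconds=1)
print(ts)  # 1441617124
```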
> Until your latest proposal on PEP 495, I wasn't sure we really did agree
> on this, because it seemed you were still willing to break the
> consistency of Model B arithmetic in order to gain some of the benefits
> of Model A (that is, introduce _even more_ of this context-dependent
> ambiguity as to what a tz-annotated datetime means.) But your latest
> proposal fixes that in a way I'm quite happy with, given where we are.
I'm still not sure it's a net win to change anything. Lots of
tradeoffs. I do gratefully credit our exchanges for cementing my
hatred of muddying Model B: the more I had to "defend" Model B, the
more intense my determination to preserve its God-given honor at all
costs ;-)
>> Although the conceptual fog has not really been an impediment to
>> using the module in my experience.
>> In yours? Do you use datetime? If so, do you trip over this?
> No, because I use pytz, in which there is no conceptual fog, just strict
> Model A (and an unfortunate API).
And applications that apparently require no use whatsoever of dateutil ;-)
> I didn't get to experience the joy of this conceptual fog until I
> started arguing with you on this mailing list! And now I finally feel
> like I'm seeing through that fog a bit. I hope I'm right :-)
I doubt we'll ever know for sure ;-)