[Datetime-SIG] Timeline arithmetic?

Tim Peters tim.peters at gmail.com
Fri Sep 4 21:08:02 CEST 2015

>> But you have to know by now that datetime always intended that apps
>> needing timeline arithmetic use UTC instead (or timestamps), and
>> there's scarcely an experienced voice on the planet that would
>> _recommend_ doing it any other way.  Building in "by magic" timeline
>> arithmetic would be fighting both datetime's design and universally
>> recognized best practice.

[Carl Meyer <carl at oddbird.net>]
> I find this argument a bit disingenuous - though it depends what exactly
> you are arguing, which isn't clear to me.

In the above, I'm not arguing at all.  I'm trying to tell Chris in
advance what the most likely fundamental objections to any "timeline
arithmetic PEP" are likely to be when it comes to the one vote that
matters the most:  Guido's.  Here I'm wearing my "attempt to channel
Guido in his absence" hat.  Forewarned is forearmed.  In this case, it
happens to be much the same as I'd say wearing several of my other
hats ;-)

In other contexts, I wear my "Tim as a Python user hat", "Tim as a
computer `scientist'" hat, "Tim as an explainer of past decisions"
hat, "Tim as an advocate for a particular change" hat, "Tim as a
Python developer" hat, "Tim thinking out loud" hat, and so on.  It's
absurd to expect consistency among _all_ those roles.  In human
communication, context is necessary to distinguish them, but sometimes
context alone isn't enough.

> All else being equal, designing a green-field datetime library,
> "universally recognized best practice" does not provide any argument for
> naive arithmetic over aware arithmetic on aware datetimes. Making the
> choice to implement aware arithmetic is not "fighting" a best practice,
> it's just providing a reasonable and fully consistent convenience for
> simple cases.

It would create an "attractive nuisance", yes ;-)  It's for much the
same reason, e.g., that Guido never gave a moment's serious
consideration to magically making

    1 + "123"

return "1123" or 124.  Make a dubious thing dead easy to spell, and
that _implicitly_ encourages its use.  That's where "best practice"
comes in.  Best practice when mixing ints and strings is to explicitly
force the choice you intend.  Best practice for timeline arithmetic in
goofy timezones is to explicitly convert to a non-goofy zone first.
In which case the distinction between "timeline" and "classic"
arithmetic is non-existent.
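A small sketch of that point, using the modern zoneinfo module (which
didn't exist when datetime was designed) and a US Eastern DST
transition picked purely for illustration:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; illustrative only

eastern = ZoneInfo("America/New_York")
# Half an hour before the 2021 US "spring forward" gap.
before = datetime(2021, 3, 14, 1, 30, tzinfo=eastern)

# Classic (naive) arithmetic: the clock hand just moves forward,
# landing on 02:30 - a wall time that never existed that day.
classic = before + timedelta(hours=1)

# Best practice for timeline arithmetic: convert to UTC first.
# One elapsed hour later is 03:30 local, on the other side of the gap.
timeline = (before.astimezone(timezone.utc)
            + timedelta(hours=1)).astimezone(eastern)
```

In UTC (or any fixed-offset zone), `classic` and `timeline` would be
computed by exactly the same steps - which is the sense in which the
distinction evaporates once you convert first.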

For that reason ("explainer of past decisions" hat), timeline
arithmetic was never really on the table - it was never needed for any
"best practice" use case, and Python never _intends_ to encourage poor
practice.

> You could perhaps argue that implementing _any_ kind of arithmetic on
> aware non-UTC datetimes is unnecessary and likely to give someone, at
> some point, results they didn't expect, and that it should instead just
> raise an exception telling you to convert to UTC first.

Wearing my "Tim as computer 'scientist'" hat, that's what I would have
preferred.  As a plain old Python user, I'm happy enough with the
status quo.  It's been useful to me!

> The fact that best practice is to manipulate datetimes internally in UTC
> (meaning the use case already has a usually-better alternative) can
> certainly _weaken_ the argument for bothering to _change_ the behavior
> of arithmetic on aware datetimes,

There is no argument that can possibly succeed for changing arithmetic
on aware datetimes:  "Tim as Python developer hat" there.  That would
be massively backward-incompatible.  No chance whatsoever.  Not even
if there were 100% agreement from everyone that classic arithmetic is
utterly useless for all purposes and that allowing it at all was a
horrible mistake.  That kind of change could only be made in Python 4.

> once it's been implemented otherwise for many years. That may be all
> you're trying to say here, in which case I fully agree.

I wasn't saying any of that.  I was telling Chris where a timeline
arithmetic PEP would most likely face deepest resistance from Guido.

> The core arguments _for_ aware arithmetic on aware datetimes are:
> 1) Conceptual coherence. Naive is naive, aware is aware, both models are
> fully internally consistent. Mixing them, as datetime does, will never
> be fully consistent. You may call this "purity" if you like, but the
> issues with PEP 495 do reveal a lack of coherence in datetime's design

I think making no distinction between "naive time" and "civil time" is
the core of coherence glitches.  An aware datetime is purely neither
in the implementation, and different operations treat it in different
ways.  Wearing many hats, I don't like that.  Wearing my "real life
Python user" hat, though - eh, I can't really say it's caused me

> (that is, that it lacks a consistently-applied notion of what a
> tz-annotated datetime means). I think you've admitted this much
> yourself, though you suggested (in passing) that it could/should have
> achieved coherence in the opposite direction, by disallowing all
> comparisons and aware arithmetic (that is, all implicit conversions to
> UTC) between datetimes in different timezones.

When wearing several different hats, yes, _that's_ more appealing.
But kinda pointless, since that's not what's actually done, and PEPs
have to move on from what _is_ the case.

> 2) Principle of least surprise for casual users. On this question, "you
> should use UTC for arithmetic" is equivalent to "you should use a period
> recurrence library for period arithmetic." Both arguments are true in
> principle, neither one is relevant to the question of casual users
> getting the results they expect.

That last wasn't ever really a _driving_ force in Python's design.
From the earlier example, a great many users have complained a great
many times that

    1 + "123"

_doesn't_ return 124.  That _is_ what most casual users expect.  Tough
luck - Python's not for the terminally lazy.
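That is, you spell the coercion you mean:

```python
# Explicitly force the choice you intend - either is one call away:
as_number = 1 + int("123")    # 124
as_string = str(1) + "123"    # "1123"
```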

That said, Guido's belief was that "adding 24 hours" _should_ return
"same clock time tomorrow" in all cases.  There was extensive public
review at the time, and I don't recall anyone disagreeing.
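That's exactly what classic arithmetic delivers. A sketch, again using
zoneinfo and an illustrative US transition (here the 2021 "fall back"):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+; illustrative only

tz = ZoneInfo("America/New_York")
before = datetime(2021, 11, 6, 9, 0, tzinfo=tz)  # day before "fall back"

# Classic arithmetic: "adding 24 hours" gives the same clock time
# tomorrow - still 09:00 on the wall...
tomorrow = before + timedelta(hours=24)

# ...even though 25 real hours elapsed across the transition,
# as converting both endpoints to UTC shows.
elapsed = (tomorrow.astimezone(timezone.utc)
           - before.astimezone(timezone.utc))
```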

> There may of course be legitimate disagreement on which behavior is
> less surprising for casual users.
> Unfortunately I don't think datetime.py (even in its many years of
> existence) has given us useful data on that, since it never included a
> timezone database and most people who need one use pytz.

I agree, except that I'm not sure we can deduce much from pytz's
experience either.  Stuart has said that his _primary_ goal was to fix
conversion in all cases, not really to "fix arithmetic".  To fix the
former, fixed-offset classes always get used (to supply the "missing
bit" in a wonderfully convoluted way), and "timeline arithmetic" was
the _natural_ result of doing so (because timeline and classic
arithmetic are exactly the same thing in any fixed-offset zone).  So,
in pytz, assuming they always remember to call .normalize(), timeline
arithmetic is forced.
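A sketch of that mechanism, assuming pytz is installed - localize()
attaches a fixed-offset class (the "missing bit"), and normalize()
re-derives the correct offset after arithmetic:

```python
from datetime import datetime, timedelta
import pytz  # third-party; assumed installed for this sketch

eastern = pytz.timezone("US/Eastern")
# Noon EST, the day before the 2021 "spring forward" transition.
dt = eastern.localize(datetime(2021, 3, 13, 12, 0))

# Plain classic arithmetic leaves the stale EST offset attached:
raw = dt + timedelta(days=1)      # labeled EST, now the wrong offset

# normalize() swaps in the correct fixed offset for the new instant,
# and that repair is what makes the net effect timeline arithmetic:
fixed = eastern.normalize(raw)    # 13:00 EDT - 24 elapsed hours later
```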

> It's often unclear to me when you're trying to justify datetime's design
> choices,

I'm not sure I ever try to justify them.  Why bother?  I do often try
to explain them, and sometimes express an opinion _about_ them when
wearing one hat or another.  It doesn't really matter whether anyone
(including me) agrees or disagrees with decisions made a decade ago -
with my Python developer hat on, it's only what we do tomorrow that
matters.  The past can only be a constraint on, or inspiration for,
future decisions.

> and when you're just pointing out that the bar is really high
> for changing established "good enough" behavior. If you want me to shut
> up and stop arguing with you (which would be an eminently reasonable
> desire!) clarifying that it's the latter more than the former would help
> tremendously, because on the latter point I agree completely.

Well, you can't see me, but I really do have a collection of 42 hats
on the table next to me, and every time I write a reply, sentence by
sentence I put on the hat most appropriate to what the current
sentence intends ;-)
