[Datetime-SIG] Timeline arithmetic?

Carl Meyer carl at oddbird.net
Mon Sep 7 18:20:55 CEST 2015


I'll offer another TL;DR:

* You prefer Model B, and the use cases that drove the implementation of
datetime favored Model B. Great! I have zero problem with that, and zero
problem with datetime continuing to implement Model B (thus I agree with
you completely that by-default -- operator overloaded -- timeline
arithmetic in datetime would be wrong and break its model). As with any
library I use, I just want its objects to implement a consistent and
simple-as-possible (but no simpler!) mental model so that I can reliably
predict its behavior. I understand that it's too late for datetime to do
that fully, but we can still keep it in mind as a principle to help
guide future changes.

On 09/07/2015 03:12 AM, Tim Peters wrote:
> [Carl Meyer <carl at oddbird.net>]
>> (tl;dr I think your latest proposal re PEP 495 is great.)
> 
> I don't.  The last two were less annoying, though ;-)

"Great" here is thoroughly in context of "where we are today, and where
it's feasible to go from here." Isn't that the context you keep trying
to get me to think in? Keep up with my hats already! ;-)

More on the PEP 495 options later on.

>> Consider two models for the meaning of a "timezone-aware datetime
>> object". Let's just call them Model A and Model B:
> 
> In which context?  Abstractly, or the context of Python's current
> datetime module, or in the context of some hypothetical future Python
> datetime module, or some datetime module that _might_ have existed
> instead, or ...?

Any of the first, third, or fourth. But the exercise is illuminating for
the second (the current datetime module), too. Per what you say below,
it sounds like my insistence on discussing abstract mental models and
their implications has already helped nudge you towards a proposal that
maintains Model B consistency better. My preference for Model A vs.
Model B is negligible compared to my preference for _some_
consistently-applied mental model, so I think that's "great."

> My only real interest here is moving the module that actually exists
> to one that can get conversions right in all cases, preferably in a
> wholly backward-compatible way.  Models don't really matter to that,
> but specific behaviors do.

I think the two most important questions you can ask about the behavior
of any library are a) does it apply a consistent mental model of the
problem domain? and b) is that mental model applicable to the problems
you need to solve? (Or it may offer more than one mental model, clearly
split in the API so you can decide which one applies best to your use
cases.)

I can't really fathom an approach to library design (even library design
constrained by backwards compatibility) that honestly believes "models
don't really matter, but specific behaviors do." Models are critical in
order to present a consistent set of behaviors that the user of the
library can successfully predict, once they understand the model.

>> In Model A, an aware datetime (in any timezone) is nothing more than an
>> alternate (somewhat complexified for human use) spelling of a Unix
>> timestamp, much like a timedelta is just a complexified spelling of some
>> number of microseconds.
> 
> A Python datetime is also just a complexified spelling of some number
> of microseconds (since the start of 1 January 1 of the proleptic
> Gregorian calendar).

Which is a "naive time" concept, and a pretty good sign that Python
datetime wasn't intended to implement Model A. I thought it was already
pretty clear that I'd figured that out by now :-)

>> In this model, there's a bijection between aware datetimes in any
>> two timezones. (This model requires the PEP 495 flag,
>> or some equivalent. Technically, this model _could_ be implemented by
>> simply storing a Unix timestamp and a timezone name, and doing all
>> date/time calculations at display time.) In this model, "Nov 2 2014
>> 1:30am US/Eastern fold=1" and "Nov 2 2014 6:30am UTC" are just alternate
>> spellings of the _same_ underlying timestamp.
>>
>> Characteristics of Model A:
>>
>> * There's no issue with comparisons or arithmetic involving datetimes in
>> different timezones; they're all just Unix timestamps under the hood
>> anyway, so ordering and arithmetic is always obvious and consistent:
>> it's always equivalent to simple integer arithmetic with Unix timestamps.
>>
>> * Conversions between timezones are always unambiguous and lossless:
>> they're just alternate spellings of the same integer, after all.
>>
>> * In this model, timeline arithmetic everywhere is the only option.
> 
> Why?  

Because it's the only choice that doesn't break the mental model. If
"all datetimes in any timezone are really just alternate spellings of a
Unix timestamp", then adding X seconds to a datetime in any timezone
must result in a datetime that represents a Unix timestamp that's X
seconds later. _If you're within this mental model_. You may not prefer
this mental model; you may think it's less useful, or slower, or whatever,
and that's fine. But you have to at least acknowledge that it is
internally consistent and conceptually simple; it's fundamentally
nothing more than arithmetic on POSIX timestamps, all the time and
everywhere.

I don't know how to say this any more clearly. If you still can't
acknowledge that much, I think I have to give up.
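
To make that concrete, here's a minimal sketch of Model A in Python
(ZonedInstant is a hypothetical class for illustration, not any real
library's API):

    # A hypothetical Model A object: the instant is a POSIX timestamp,
    # and the zone is pure display metadata.
    class ZonedInstant:
        def __init__(self, posix_ts, zone_name):
            self.ts = posix_ts      # the single source of truth
            self.zone = zone_name   # affects rendering, never arithmetic

        def add_seconds(self, n):
            # All arithmetic is integer arithmetic on the timestamp.
            return ZonedInstant(self.ts + n, self.zone)

        def astimezone(self, zone_name):
            # Conversion is lossless and unambiguous: relabel, nothing more.
            return ZonedInstant(self.ts, zone_name)

        def __eq__(self, other):
            # The zone label never affects equality or ordering.
            return self.ts == other.ts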

> The kind of arithmetic needed for a task depends on the task.
> There are no specific use cases given here, so who can say?  Some
> tasks need to account for real-world durations; others need to
> overlook irregularities in real-world durations (across zone
> transitions) in order to maintain regularities between the
> before-and-after calendar notations.  Timeline arithmetic is only
> directly useful for dealing with real-world durations as they affect
> civil calendar notations.  Some tasks require that, other tasks can't
> tolerate that.

Of course! I'm describing the implications of a mental model here, not
arguing that it's the best model for all tasks.

>> Every non-UTC aware datetime is just an alternate spelling of an
>> equivalent UTC datetime / Unix timestamp, so in a certain sense you're
>> always doing "arithmetic in UTC" (or "arithmetic with Unix timestamps"),
>> but you can spell it in whichever timezone you like. In this model,
>> there's very little reason to consider arithmetic in non-UTC timezones
>> problematic; it's always consistent and predictable and gives exactly
>> the same results as converting to UTC first. For sizable systems it may
>> still be good practice to do everything internally in UTC and convert at
>> the edges, but the reasons are not strong; mostly just avoiding
>> interoperability issues with databases or other systems that don't
>> implement the same model, or have poor timezone handling.
> 
> How do you think timeline arithmetic is implemented?  datetime's
> motivating use cases overwhelmingly involved quick access to local
> calendar notation, so datetime stores local calendar notation (both in
> memory and in pickles) directly.  Any non-toy implementation of
> timeline arithmetic would store time internally in UTC ticks instead,
> enduring expensive conversions to local calendar notation only when
> explicitly demanded.  As is, the only way to get timeline arithmetic
> in datetime is to do some equivalent to converting to UTC first, doing
> dirt simple arithmetic in UTC, then converting back to local calendar
> notation.  That's _horridly_ expensive in comparison.  pytz doesn't
> avoid this.  The arithmetic itself is fast, because it is in fact
> classic arithmetic.  The expense is hidden in the .normalize() calls,
> which perform to-UTC-and-back "repair".

Yes, of course. I know all this. In summary: "datetime wasn't intended
as Model A." How many times do we need to agree on that? ;-)

And I've also agreed that datetime shouldn't be converted to Model A. So
what are you trying to convince me of, here?
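
For anyone who hasn't seen it, the to-UTC-and-back "repair" Tim
describes looks like this in pytz (a sketch; the example values are
mine):

    from datetime import datetime, timedelta
    import pytz

    eastern = pytz.timezone('US/Eastern')
    dt = eastern.localize(datetime(2014, 11, 1, 12, 0))  # noon EDT

    # Classic-looking "+" leaves a stale fixed-offset tzinfo behind...
    shifted = dt + timedelta(days=1)  # still labeled EDT, but EDT ended

    # ...and normalize() performs the to-UTC-and-back repair, which is
    # where the cost of pytz's timeline arithmetic actually hides.
    repaired = eastern.normalize(shifted)
    print(repaired)  # 2014-11-02 11:00:00-05:00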

>> * In this model, "classic" arithmetic doesn't even rise to the level of
>> "attractive nuisance," it's simply "wrong arithmetic," because you get
>> different results if working with the "same time" represented in
>> different timezones, which violates the core axiom of the model; it's no
>> longer simply arithmetic with Unix timestamps.
> 
> Models are irrelevant to right or wrong; right or wrong can only be
> judged with respect to use cases (does a gimmick address the required
> task, or not?  if so, "right"; if not, is it at least feasible to get
> the job done?  if so, "grr - but OK"; if still not, "wrong").  Models
> can make _achieving_ "right" harder or easier, depending on what a use
> case requires.

Once again, you seem to be trying to interpret every characterization of
Model A as an argument that "Model A is right, other models are wrong,
and datetime ought to be Model A." I'm not saying any of that; which
model is best obviously depends on the use case (though both models are
_capable_ of handling all use cases; one may just be slower or less
convenient for a given task, which is a typical tradeoff when choosing a
model).

All I'm saying is "if you accept Model A as your mental model, this is
the behavior that must follow (the behavior that is _right_ _for the
model_; which _is_ something that is possible to judge), else you've
broken the model, and you're implementing some other model instead, or
(worse) you're not implementing a consistent model at all."

>> I don't believe there's anything wrong with Model A. It's not the right
>> model for _all_ tasks, but it's simple, easy to understand, fully
>> consistent, and useful for many tasks.
> 
> Sure!  Except for the "simple" and "easy to understand" parts ;-)

Maybe not to you, I guess; though I have to suspect that you're playing
a little dumb here for effect (is this the jester hat?). I think
"everything is isomorphic to a Unix timestamp, just represented in
different spellings, and all arithmetic is isomorphic to integer
arithmetic on Unix timestamps" is pretty simple and easy to understand,
personally.

> People really do trip all the time over zone transitions,

Of course they do, because timezones, and timezone transitions
specifically, are terrible. And some will continue to trip over them, in
different ways and in different scenarios, regardless of whether they
work in Model A or Model B.

They will trip over them _more_ if they are using a library that can't
decide what mental model it implements, and tries to guess that they
mean one for this operation and another for that operation, than if they
are using a library that consistently implements one mental model. Do we
still agree on that, or not anymore? ;-)

>> On the whole, it's still the model I find most intuitive and would prefer
>> for most of the timezone code I personally write (and it's the one I actually
>> use today in practice, because it's the model of pytz).
> 
> Do you do much datetime _arithmetic_ in pytz?  If you don't, the kind
> of arithmetic you like is pretty much irrelevant ;-)  But, if you do,
> take pytz's own docs to heart:
> 
>     The preferred way of dealing with times is to always work in UTC,
>     converting to localtime only when generating output to be read
>     by humans.
> 
> Your arithmetic-intensive code would run much faster if you followed
> that advice, and you could throw out mountains of .normalize() calls.
> You're working in Python, and even the storage format of Python
> datetimes strongly favors classic arithmetic (as before, any serious
> implementation of timeline arithmetic would store UTC ticks directly
> instead).

I do follow that advice; I don't believe my latest heavy-datetime-using
application does non-UTC timeline arithmetic anywhere.

But unless a library outlaws arithmetic on non-UTC datetimes altogether,
I'd like it to implement it in a way that's consistent with its mental
model, whichever one it picks. Because not all little scripts need to
follow the ideal best practice and squeeze out optimal performance, but
they nonetheless deserve predictable behavior that consistently
implements _some_ mental model of the problem domain.
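
For the record, the recommended pattern is simple enough (a sketch; the
example values are invented):

    from datetime import datetime, timedelta
    import pytz

    eastern = pytz.timezone('US/Eastern')

    # Convert to UTC at the edge...
    start = eastern.localize(datetime(2014, 11, 2, 1, 30), is_dst=True)
    start_utc = start.astimezone(pytz.utc)

    # ...do plain, fast arithmetic in UTC (no normalize() needed)...
    end_utc = start_utc + timedelta(hours=2)

    # ...and convert back to local time only for display.
    print(end_utc.astimezone(eastern))  # 2014-11-02 02:30:00-05:00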

>> Now Model B. In Model B, an "aware datetime" is a "clock face" or
>> "naive" datetime with an annotation of which timezone it's in. A non-UTC
>> aware datetime in model B doesn't inherently know what POSIX timestamp
>> it corresponds to; that depends on concepts that are outside of its
>> naive model of local time, in which time never jumps or goes backwards.
>> Model B is what Guido was describing in his email about an aware
>> datetime in 2020: he wants an aware datetime to mean "the calendar says
>> June 3, the clock face says noon, and I'm located in US/Eastern" and
>> nothing more.
>>
>> Characteristics of Model B:
>>
>> * Naive (or "classic", or "move the clock hands") arithmetic is the only
>> kind that makes sense under Model B.
> 
> It again depends on which specific use cases you have in mind.  Few
> people think inside a rigid model.  Sometimes they want to break out
> of the model, especially when a use case requires it ;-)  As you know
> all too well already, Python also intends to support a programmer
> changing their mind, to view their annotated naive datetime as a
> moment in civil time too, at least for zone conversion purposes.

I'm all in favor of Python supporting a programmer switching from one
mental model to another. There are good ways to do that explicitly, e.g.
by representing each mental model with its own type of object. See
JodaTime/NodaTime for one example.

I'm not in favor of Python guessing that the programmer "probably" has
one mental model in mind when doing one operation, and another when
doing another, on the very same object. That kind of thing leads to
angry programmers who think the library is buggy. You may have seen a
few of them on this mailing list ;-)

I thought we agreed on this (I recall you saying "how many times do we
have to agree on this?"), but then it seems like you keep waffling as to
whether you actually do or not. I guess it depends which hat you're
wearing at the time ;-)

...

>> These models aren't chosen arbitrarily; they're the two models I'm aware
>> of for what a "timezone-aware datetime" could possibly mean that
>> preserve consistent arithmetic and total ordering in their allowed
>> domains (in Model A, all aware datetimes in any timezone can
>> interoperate as a single domain; in Model B, each timezone is a separate
>> domain).
>>
>> A great deal of this thread (including most of my earlier messages and,
>> I think, even parts your last message here that I'm replying to) has
>> consisted of proponents of one of these two models arguing that behavior
>> from the other model is wrong or inferior or buggy (or an "attractive
>> nuisance").
> 
> Direct overloaded-operator support for timeline arithmetic is an
> attractive nuisance _in datetime_, or any other Python module sharing
> datetime's data representation.

I 100% agree with you. Datetime is a Model B implementation (mostly);
its data representation reflects that, and I absolutely don't think it
should have operator-overloaded support for timeline arithmetic. Was I
insufficiently clear about that?

Actually, I think "attractive nuisance" is too weak here. I think
operator-overloaded timeline arithmetic on aware datetimes in datetime
would be simply wrong; it would break the mental model of what an aware
datetime is, under Model B.

> I disagree with your "but the reasons
> are not strong" above.  It requires relatively enormous complexity and
> expense to perform each lousy timeline addition, subtraction, and
> comparison in a non-eternally-fixed-offset zone.

"In datetime or a a module sharing datetime's data representation," yes.
My "but the reasons are not strong" was clearly specific to Model A,
which datetime is not.

I tried very hard to set up a clear delineation between the two models,
and be very clear that I understand datetime is Model B and should
remain that way. But nonetheless, you seem very determined to blur that
line and interpret all my comments about Model A as if I'm saying they
should apply to datetime. Please don't do that ;-)

>> I now think these assertions are all wrong :-) Both models
>> are reasonable and useful, and in fact both are capable enough to handle
>> all operations, it's just a question of which operations they make
>> simple. Model B people say "just do all your arithmetic and comparisons
>> in UTC"; Model A people say "if you want Model B, just use naive
>> datetimes and track the implied timezone separately."
> 
> Do note that my _only_ complaint against timeline arithmetic is making
> it seductively easy to spell in Python's datetime.

Great! Then we agree, so can we stop arguing about it? ;-)

I thought I was already pretty clear that I no longer believed that
timeline arithmetic should be made easy to spell in Python's datetime.

I just _also_ think that there _is_ a reasonable alternative mental
model in which only timeline arithmetic makes sense and classic
arithmetic looks buggy, and I thought that trying to clearly outline
that alternative mental model might help make sense of where the
"classic arithmetic is wrong!" viewpoint originates.

>> If the above analysis makes any sense at all to anyone, and you think
>> something along these lines (but shorter and more carefully edited)
>> would make a useful addition to the datetime docs (either as a
>> tutorial-style "intro to how datetime works and how to think about aware
>> datetimes" or as an FAQ), I'd be very happy to write that patch.
> 
> I've mentioned a few times before that I'd welcome something more akin
> to the "floating-point surprises" appendix:
> 
>      https://docs.python.org/3/tutorial/floatingpoint.html
> 
> Most users don't want to read anything about theory, but it needs to
> be discussed sometimes.  So in that appendix, the approach is to
> introduce bite-sized chunks of theory to explain concrete, visible
> _behaviors_, along with practical advice.  The goal is to get the
> reader unstuck, not to educate them _too_ much ;-)  Anyway, that
> appendix appears to have been effective at getting many users unstuck,
> so I think it's a now-proven approach.

That's very similar to what I had in mind, actually. I'll work on a doc
patch, and look forward to you tearing it apart ;-)

> 
>>> Classic arithmetic is equivalent to doing integer arithmetic on
>>> integer POSIX timestamps (although with wider range the same across
>>> all platforms, and extended to microsecond precision).  That's hardly
>>> novel - there's a deep and long history of doing exactly that in the
>>> Unix(tm) world.  Which is Guido's world.  There "shouldn't be"
>>> anything controversial about that.  The direct predecessor was already
>>> best practice in its world.  How that could be considered a nuisance
>>> seems a real strain to me.
> 
>> If you are doing any kind of "integer arithmetic on POSIX timestamps", you
>> are _always_ doing timeline arithmetic.
> 
> True.
> 
>> Classic arithmetic may be many things, but the one thing it definitively is
>> _not_ is "arithmetic on POSIX timestamps."
> 
> False.  UTC is an eternally-fixed-offset zone.  There are no
> transitions to be accounted for in UTC.  Classic and timeline
> arithmetic are exactly the same thing in any eternally-fixed-offset
> zone.  Because POSIX timestamps _are_ "in UTC", any arithmetic
> performed on one is being done in UTC too.  Your illustration next
> goes way beyond anything I could possibly read as doing arithmetic on
> POSIX timestamps:

Translation: "I refuse to countenance the possibility of Model A."

>> This is easy to demonstrate: take one POSIX timestamp, convert it to
>> some timezone with DST, add 86400 seconds to it (using "classic
>> arithmetic") across a DST gap or fold, and then convert back to a POSIX
>> timestamp, and note that you don't have a timestamp 86400 seconds away
>> from the first timestamp. If you were doing simple "arithmetic on POSIX
>> timestamps", such a result would not be possible.
> 
> But you're cheating there.  It's clear as mud what you have in mind,
> concretely, for the _result_ of what you get from "convert it to
> some timezone with DST", but the result of that can't possibly be a
> POSIX timestamp:  as you said at the start, a POSIX timestamp denotes
> a number of seconds from the epoch _in UTC_.  You're no longer in UTC.
> You left the POSIX timestamp world at your very first step.  So
> anything you do after that is irrelevant to how arithmetic on POSIX
> timestamps behaves.

Not if your mental model is that an aware datetime in some other
timezone is isomorphic to a POSIX timestamp with a timezone annotation.
In that case, the "timezone conversion" part is really easy and obvious;
you just change the timezone annotation.
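
Spelled out in code, my demonstration above runs like this (a sketch
using zoneinfo, which arrived in Python 3.9, long after this thread;
any Model B tzinfo that resolves offsets lazily behaves the same way):

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo  # Python 3.9+

    eastern = ZoneInfo("America/New_York")

    # 01:30 on fall-back day, first occurrence (EDT, UTC-4).
    dt = datetime(2014, 11, 2, 1, 30, tzinfo=eastern)
    ts0 = dt.timestamp()                 # the instant 05:30 UTC

    # Classic arithmetic: move the clock hands 86400 seconds.
    dt2 = dt + timedelta(seconds=86400)  # 2014-11-03 01:30 EST (UTC-5)
    ts1 = dt2.timestamp()                # the instant 06:30 UTC next day

    print(ts1 - ts0)  # 90000.0, not 86400.0: the fold added an hour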

> BTW, how do you intend to do that conversion to begin with?  C's
> localtime() doesn't return time_t (a POSIX timestamp).  The standard C
> library supports no way to perform the conversion you described,
> because that's not how times are intended to work in C, because in
> turn the Unix world has the same approach to this as Python's
> datetime:  all timeline arithmetic is intended to be done in UTC
> (equivalent to POSIX timestamps), converting to UTC first (C's
> mktime()), then back when arithmetic is done (C's localtime()).  The
> only difference is that datetime spells both C library functions via
> .astimezone(), and is about 1000 times easier to use ;-)
> 
> If you're unfamiliar with how this stuff is done in C, here's a
> typically incomprehensible ;-) man page briefly describing all the
> main C time functions:
> 
>     http://linux.die.net/man/3/mktime

Thank you. In exchange, here's a reference to the ZonedDateTime object
from NodaTime:
http://nodatime.org/1.3.x/api/html/T_NodaTime_ZonedDateTime.htm

I think (notably unlike the C libraries) NodaTime/JodaTime is an
excellent example of a datetime library that maintains its mental models
clearly, and provides the necessary set of objects to represent all the
various concepts unambiguously and consistently. I think its usability
is attested to by the fact that it's become the de facto standard in the
Java world, and somebody went to the trouble of porting it to .NET, too,
where it's also become quite popular.
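
Roughly, the split NodaTime draws, rendered as hypothetical Python (the
type names mirror NodaTime's; the fields are invented for illustration):

    # Instant: a point on the global timeline (Model A's core object).
    class Instant:
        def __init__(self, posix_ts):
            self.posix_ts = posix_ts

    # LocalDateTime: a clock-face date and time with no zone (Model B's
    # core object, minus the zone annotation).
    class LocalDateTime:
        def __init__(self, year, month, day, hour, minute):
            self.fields = (year, month, day, hour, minute)

    # ZonedDateTime: a LocalDateTime plus a zone plus the resolved
    # offset, so it maps unambiguously back to an Instant.
    class ZonedDateTime:
        def __init__(self, local, zone_name, offset_seconds):
            self.local = local
            self.zone = zone_name
            self.offset = offset_seconds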

>> In Model A (the one that Lennart and myself and Stuart and Chris have
>> all been advocating during all these threads), aware datetimes (in any
>> timezone) are unambiguous representations of a POSIX timestamp, and all
>> arithmetic is "arithmetic on POSIX timestamps." That right there is the
>> definition of timeline arithmetic.
> 
> Here's an example of arithmetic on POSIX timestamps:
> 
>    1 + 2
> 
> returning 3.  It's not some kind of equivalence relation or bijection,
> it's concretely adding two integers to get a third integer.  That's
> all I mean by "arithmetic on POSIX timestamps".  It's equally useful
> for implementing classic or timeline arithmetic.  The difference
> between those isn't in the timestamp arithmetic, it's in how
> conversions between integers and calendar notations are defined.
> There does happen to be an obvious bijection between arithmetic on
> (wide enough) POSIX timestamps and naive datetime arithmetic, which is
> in turn trivially isomorphic to aware datetime arithmetic in UTC.
> Although the "obvious" there depends on knowing first that, at heart,
> a Python datetime is an integer count of microseconds since the start
> of 1 January 1.  It's just an integer stored in a bizarre mixed-radix
> notation.

So, "timeline arithmetic is just arithmetic on POSIX timestamps" means
viewing aware datetimes as isomorphic to POSIX timestamps.

"Classic arithmetic is just arithmetic on POSIX timestamps" means
viewing aware datetimes as naive datetimes which one can pretend are in
a hypothetical (maybe UTC, if you like) fixed-offset timezone which is
isomorphic to actual POSIX timestamps (even though their actual timezone
may not be fixed-offset).

I accept that those are both true and useful in the implementation of
their respective model. I just don't think either one is inherently
obvious or useful as a justification of their respective mental models;
rather, which one you find "obvious" just reveals your preferred mental
model.
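
In code, the two readings look like this (hypothetical helper names,
not datetime APIs):

    from datetime import timezone

    def timeline_add(dt, delta):
        # Timeline reading: the aware datetime IS its POSIX timestamp,
        # so arithmetic happens on the instant, "in UTC" (modulo the
        # fold ambiguity on the way back, which is what PEP 495 is
        # about).
        return (dt.astimezone(timezone.utc) + delta).astimezone(dt.tzinfo)

    def classic_add(dt, delta):
        # Classic reading: the aware datetime is a naive clock face
        # plus a label; move the clock hands, keep the label. For
        # stdlib tzinfo objects this is exactly what `dt + delta`
        # already does.
        return (dt.replace(tzinfo=None) + delta).replace(tzinfo=dt.tzinfo)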

...

>> I think your latest proposal for PEP 495
>> does a great job of providing this additional convenience for the user
>> without killing the intra-timezone Model B consistency. I just wish that
>> the inconsistent inter-timezone operations weren't supported at all, but
>> I know it's about twelve years too late to do anything about that other
>> than document some variant of "you shouldn't compare or do arithmetic
>> with datetimes in different timezones; if you do you'll get inconsistent
>> results in some cases around DST transitions. Convert to the same
>> timezone first instead."
> 
> Alas, I'm afraid Alex is right that people may well be using interzone
> subtraction to do conversions already.  For example, the timestamp
> snippets I gave above are easily extended to convert any aware
> datetime to a POSIX timestamp:  just slap tzinfo=utc on the EPOCH
> constant, and then by-magic interzone subtraction converts `dt` to UTC
> automatically.  For that to continue to work as intended in all cases
> post-495, we can't change anything about interzone subtraction.
> Which, for consistency between them, implies we "shouldn't" change
> anything about interzone comparisons either.

Such code wouldn't be any _more_ broken after PEP 495 in a fold case
than it is already.
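
The idiom Tim alludes to is roughly this (a sketch, not his exact
snippet):

    from datetime import datetime, timezone

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

    def posix_timestamp(dt):
        # Interzone subtraction converts the aware `dt` (in any zone)
        # to UTC "by magic" before subtracting, so this already relies
        # on timeline semantics across zones.
        return (dt - EPOCH).total_seconds()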

You can't maintain consistency everywhere, because datetime already
wants to treat aware datetimes as two different things in different
places. I thought we'd established that. The interzone timeline
arithmetic combined with intrazone classic arithmetic already results in
inconsistencies. So your choices are:

a) don't do PEP 495, and leave timezone conversions lossy for everyone
(except people using pytz). This effectively forces everyone who wants
loss-less conversions (and doesn't want to roll their own solution) into
the pytz model, which you don't like, and the pytz API, which nobody likes.

b) add `fold` solely as an argument to `astimezone` (and maybe `combine`
and the constructor too?), and maybe somehow allow users to get its
value out of a conversion going the other way (no idea what API would
work there) and make the user keep track of it themselves if they are
working in "local" time but may want to convert back later. This option
forces the inconsistency out of datetime by just making it the user's
problem. Usability is pretty bad, but at least it doesn't change
existing behavior, gives users _some_ way to be correct, and doesn't
guess at their intentions in inconsistent cases.

c) spike your intra-timezone classic arithmetic with a dash of timeline
arithmetic, making datetime even more confused about its mental model
than it is already.

d) don't support PEP 495 in interzone operations at all, meaning code
using interzone operations gains no benefit from PEP 495, but is no more
broken than it is today (but code using explicit timezone conversions
does benefit).

e) make interzone equality weird in fold cases, but otherwise support
PEP 495 in interzone operations as well as conversions.

I think (d) and (e) are the best options of those, and I don't have a
strong preference between them. They aren't ideal, but there is no ideal
option, including the "do nothing" option. All of these cases introduce
inconsistency somewhere, it's just a question of where you want to put
it. I'm personally not that fussed if you decide to stick with (a) instead.

> I'm still not sure it's a net win to change anything.  Lots of
> tradeoffs.  I do gratefully credit our exchanges for cementing my
> hatred of muddying Model B:  the more I had to "defend" Model B, the
> more intense my determination to preserve its God-given honor at all
> costs ;-)

My work here is done ;-)

Funny how it had roughly the opposite result from what I thought I
wanted when I entered the conversation, but I still think it's the right
result.

>>> Although the conceptual fog has not really been an impediment to
>>> using the module in my experience.
> 
>>> In yours?  Do you use datetime?  If so, do you trip over this?
> 
>> No, because I use pytz, in which there is no conceptual fog, just strict
>> Model A (and an unfortunate API).
> 
> And applications that apparently require no use whatsoever of dateutil
> operations ;-)

Oh, I use dateutil.rrule frequently, I just separate the tzinfo from the
datetime first, which makes perfect sense to me as a way to say "Ok, I
want to operate in the naive time model now, please." It's really not
that hard :-)
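
Something like this, concretely (a sketch; the rule parameters are
invented):

    from datetime import datetime
    from dateutil import rrule
    import pytz

    eastern = pytz.timezone('US/Eastern')
    start = eastern.localize(datetime(2015, 9, 7, 9, 0))

    # Strip the tzinfo first: an explicit "operate in naive time now".
    naive_start = start.replace(tzinfo=None)

    # The recurrence rule works on naive clock-face times...
    daily = rrule.rrule(rrule.DAILY, dtstart=naive_start, count=3)

    # ...and the zone is re-attached when concrete instants are needed.
    occurrences = [eastern.localize(dt) for dt in daily]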

Please don't take my "I use pytz, so I don't have _these_ problems" as
"I use pytz, so I have _no_ problems." I fully accept that pytz is a
god-awful (though very impressive!) hack to implement Model A on top of
something that was always meant to be Model B, and that results in both
a bad API and bad performance for some operations (though the latter
really couldn't be less of an issue for my uses).

I'm still not sure what's a _better_ option than pytz for someone who
wants fully-correct and round-trippable timezone conversions and
fully-consistent behavior from a Python datetime library _today_.

Carl
