[Python-Dev] proposal: add basic time type to the standard library

M.-A. Lemburg mal@lemburg.com
Tue, 26 Feb 2002 15:33:05 +0100

Jim Fulton wrote:
> > I would be willing to make the mxDateTime types subtypes of
> > whatever Fredrik comes up with. The only requirement I have is
> > that the binary footprint of the types needs to match today's
> > layout of mxDateTime types since I need to maintain binary
> > compatibility.
> The binary footprint of your types, not the standard base class,
> right? I don't see a problem with that.

Fredrik's solution only provides an abstract base type 
with no additional parameters in the type object (only an
interface definition on top of it) -- this
would work nicely.
> > The other possibility would be adding a set of new types
> > to mxDateTime which focus on low memory requirements rather
> > than data roundtrip safety and speed.
> What is data roundtrip safety?

Roundtrip safety means that, e.g., if you take a COM date value
from ADO and create a DateTime object with it, you can
be sure to get back the exact same value via the COMDate() method.

The same is true for broken down values and, of course,
the internal values .absdate and .abstime.

This may not be too important for most applications, but
it certainly is for database related ones, since rounding
can cause e.g. 14:00:00.00 to become 13:59:59.99 and that's
not what you want if you transfer data from one database
to another.

> I rarely do date-time arithmetic, but I often do date-time-part
> extraction. I think that mxDateTime is optimized for arithmetic,
> whereas I'd prefer a type more focused on extraction efficiency
> and memory usage, and that efficiently supports time zones.
> This is, of course, no knock on mxDateTime. I also want
> fast comparisons, which I presume mxDateTime provides.

DateTime objects use .abstime and .absdate for doing
arithmetic since these provide the best accuracy. The most
important broken down values are calculated once at creation
time; a few others are computed on the fly.
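The appeal of the absolute values for arithmetic is that a single
divmod handles all the calendar carrying. A minimal sketch, assuming
absdate is a day count since some epoch and abstime is seconds since
midnight (the function name is made up for illustration):

```python
def add_seconds(absdate, abstime, seconds):
    """Add a (possibly negative) number of seconds, carrying into the date.

    absdate: integer day number since an epoch
    abstime: float seconds since midnight (0.0 <= abstime < 86400.0)
    """
    total = abstime + seconds
    # divmod carries whole days out of the seconds value in one step
    day_carry, abstime = divmod(total, 86400.0)
    return absdate + int(day_carry), abstime

# adding 500 seconds near midnight rolls over to the next day
print(add_seconds(730177, 86000.0, 500.0))  # -> (730178, 100.0)
```

Broken-down arithmetic (on year/month/day/hour/... fields) would need
per-field carry logic instead; this is why the absolute representation
is the accurate and simple one to compute with.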

I suppose that I could easily make a few calculations
lazy to enhance speed; the memory footprint would not change,
though. It's currently at 56 bytes per DateTime object
and 36 bytes per DateTimeDelta object.

To get similar accuracy in Python, you'd need a float and
an integer per object; that's 16 bytes + 12 bytes == 28 bytes,
plus malloc() overhead for the two and the wrapping instance,
which adds another 32 bytes (provided you store the two
objects in slots)... >60 bytes per Python-based date/time object.

Marc-Andre Lemburg
CEO eGenix.com Software GmbH
Company & Consulting:                           http://www.egenix.com/
Python Software:                   http://www.egenix.com/files/python/