[Python-Dev] proposal: add basic time type to the standard library
Tue, 26 Feb 2002 16:05:10 -0500
>>> The other possibility would be adding a set of new types
>>> to mxDateTime which focus on low memory requirements rather
>>> than data roundtrip safety and speed.
>> What is data roundtrip safety?
> Roundtrip safety means that e.g. if you take a COM date value
> from ADO and create a DateTime object with it, you can
> be sure to get back the exact same value via the COMDate() method.
> The same is true for broken down values and, of course,
> the internal values .absdate and .abstime.
I'm unsure whether Jim is aware of this, but if not he should be: the
non-trivial time I spent over the last week repairing test failures in
Zope's current DateTime.py was all spent finding & repairing basic roundtrip
failures. These came in two flavors:
1. The simplest roundtrip:

       dt = DateTime()
       dt2 = DateTime(dt.timeTime())
       assert dt == dt2
   could fail, depending on the exact time you tried it. This was the root
   cause of many test failures (they were usually reported as failures after
   doing some sort of arithmetic first, but the arithmetic was actually
   irrelevant: when these failed, the base objects didn't match from the start).
2. Failure of roundtrip conversion between DateTime objects D and repr(D),
   then back again, to reproduce the original DateTime or string.
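[This second flavor can be reproduced with plain floats rather than Zope's
DateTime: repr() emits enough decimal digits to reconstruct the exact binary
float, while any shortened decimal rendering can break the roundtrip. A
minimal illustration, not Zope code:]

```python
x = 2.0 / 3.0

# repr() produces enough decimal digits to reconstruct the exact float.
assert float(repr(x)) == x

# A truncated rendering (8 significant digits here) loses the exact value --
# the same failure mode as the DateTime <-> repr(D) roundtrip.
assert float("%.8g" % x) != x
print("roundtrip demo ok")
```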
"Floating-point" got the generic blame for these things under the covers,
but it was really peoples' spectacular inability to deal with *binary*
floating-point that caused all the problems. People just can't help but
see, e.g., "50.327" as an exact decimal value, so just can't help writing
code that relies on assumptions that are false (such as, e.g., that
int((50.327 - 50) * 1000) will return 327; but it doesn't; it returns 326).
If we were using decimal floating-point instead, the numerically naive code
here would have worked as intended.
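[The 50.327 example is runnable with today's decimal module -- which did not
exist in 2002, so this is purely illustrative of the point:]

```python
from decimal import Decimal

# Binary floating-point: 50.327 is not exactly representable, so the
# naive millisecond extraction comes out one low.
print(int((50.327 - 50) * 1000))             # -> 326

# Decimal floating-point represents "50.327" exactly, so the same
# naive code gives the expected answer.
print(int((Decimal("50.327") - 50) * 1000))  # -> 327
```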
> DateTime objects use .abstime and .absdate for doing
> arithmetic since these provide the best accuracy. The most
> important broken down values are calculated once at creation
> time; a few others are done on-the-fly.
Note that 2.2 properties allow natural support of calculated attributes, and
that a computed attribute can easily arrange to cache its value.
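[A minimal sketch of that idea -- a hypothetical Date class, using the
@property decorator spelling of the 2.2 mechanism; the year formula and
epoch are toy assumptions, not mxDateTime's actual calendar code:]

```python
class Date:
    """Hypothetical sketch: one broken-down value computed lazily and cached."""

    _DAYS_PER_YEAR = 365.2425  # Gregorian average -- a toy approximation

    def __init__(self, absdate):
        self.absdate = absdate   # day count since an assumed epoch
        self._year = None        # cache slot for the computed attribute

    @property
    def year(self):
        # Computed on first access only; later accesses hit the cache.
        if self._year is None:
            self._year = 1 + int(self.absdate / self._DAYS_PER_YEAR)
        return self._year
```

[Under this toy formula, Date(730000).year computes 1999 on first access and
returns the cached value thereafter.]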
> I suppose that I could easily make a few calculations
> lazy to enhance speed; memory footprint would not change
> though. It's currently at 56 bytes per DateTime object
> and 36 bytes per DateTimeDelta object.
I'm assuming that counts Python object header overhead, but does not count
hidden malloc overhead. Switching to pymalloc would slash the latter.
> To get similar accuracy in Python, you'd need a float and
> an integer per object, that's 16 bytes + 12 bytes == 28 bytes
> + malloc() overhead for the two and the wrapping instance
> which gives another 32 bytes (provided you store the two
> objects in slots)... >60 bytes per Python based date time
Just noting that a Zope DateTime instance is huge, with a dozen named
attributes, one of which holds a Python long (unbounded integer).
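[A present-day footnote to the byte counts above: sys.getsizeof reports
per-object sizes directly. The figures vary by CPython version and platform,
and the 2002 numbers presumably came from 32-bit builds; this just shows the
same kind of accounting on the running interpreter:]

```python
import sys

# Per-object sizes (implementation-dependent; these include the Python
# object header but not hidden malloc bookkeeping).
print("float:", sys.getsizeof(0.0))
print("int:  ", sys.getsizeof(2 ** 40))
print("long: ", sys.getsizeof(10 ** 100))  # unbounded ints grow with magnitude
```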