[Python-Dev] FixedPoint and % specifiers.
David LeBlanc
whisper@oz.net
Wed, 5 Feb 2003 16:29:42 -0800
Displaying FixedPoints:
Python 2.2.1 (#34, Jul 16 2002, 16:25:42) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Alternative ReadLine 1.5 -- Copyright 2001, Chris Gonnerman
>>> from fixedpoint import *
>>> a = FixedPoint(2.135)
>>> print '%d' % (a)
2
>>> print '%4d' % (a)
2
>>> print '%e' % (a)
2.130000e+000
>>> print '%f' % (a)
2.130000
>>> print '%2f' % (a)
2.130000
>>> print '%.2d' % (a)
02
>>> print '%s' % (a)
2.13
>>> a.set_precision(4)
>>> print '%s' % (a)
2.1300
I naively expected the %d to produce 2.135 - and it would be nice, after the
manner of C, to be able to specify the precision (or scale, if you prefer) of
the output, as I attempted with '%.2d' and also (not shown) '%2.2d'. This might
need some work, since %<num>d is already a field-width spec. It would be nice
if, given 2.135:
%d 2 - int part
%2d __2 - filled int part (_'s represent spaces)
%.2d 2.13 - ?rounded? to two places
%2.2d __2.13 - filled and ?rounded? to 2 places
%2.5d __2.13500 - filled and zero extended
%.3d 2.135 - etc.
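The table above can be sketched as a small helper. This is only an illustration of the proposed semantics, using plain floats and a hypothetical function name (FixedPoint itself does not support these specifiers today, and the width/scale arguments are my own invention):

```python
def fixed_format(value, width=0, scale=None):
    """Hypothetical formatter for the behavior proposed above.

    width  - pad on the left with spaces to this field width
    scale  - round (or zero-extend) the fractional part to this
             many places; None means integer part only, like %d.
    """
    if scale is None:
        s = str(int(value))            # %d: integer part only
    else:
        s = '%.*f' % (scale, value)    # %.<scale>d: round/extend
    return s.rjust(width)              # %<width>d: left-pad
```

With value 2.135, fixed_format(2.135) gives '2', fixed_format(2.135, 4) gives '   2', and fixed_format(2.135, 0, 5) gives '2.13500', matching the table.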
If rounding rather than truncation is done for a 'short' (less than the actual
precision) specifier, then string formatting needs to know something about
FixedPoints, IMO. Displaying just the fractional part would require extracting
it and formatting it with 'd' as a normal int.
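Extracting the fractional part as an int, as just described, might look like this. A sketch only: the helper name is hypothetical, it works on plain floats, and it assumes a non-negative value:

```python
def frac_part(value, scale):
    """Hypothetical: return the fractional part of value, rounded to
    `scale` places, as an int suitable for a plain '%d' conversion.
    Assumes value >= 0."""
    scaled = int(round(value * 10 ** scale))  # shift and round
    return scaled % 10 ** scale               # drop the integer part
```

The caller would then zero-fill it, e.g. '%0*d' % (scale, frac_part(v, scale)), so that a fraction like .035 does not print as '35'.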
The '%s' output just looks suspicious, especially after setting the
precision to 4! The '%s' behavior smells of a bug to me, although I see the
code forces the initializer value to round to DEFAULT_PRECISION if a precision
isn't given ("you lost my work!"). IMO, when no precision is given, the
precision of the initializer should become the precision of the value, rather
than the default (which seems redundant to me).
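One way the suggestion above could work, sketched with a hypothetical helper: for a string initializer the intended precision is unambiguous, so it could be read off the literal instead of being forced to DEFAULT_PRECISION (float initializers would still need a convention, since their scale is ambiguous):

```python
def infer_precision(literal):
    """Hypothetical: derive precision from a string initializer, so
    FixedPoint("2.135") would keep precision 3 rather than rounding
    to DEFAULT_PRECISION."""
    if '.' in literal:
        return len(literal.split('.', 1)[1])  # digits after the point
    return 0                                   # pure integer literal
```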
David LeBlanc
Seattle, WA USA
N.B.: this needs to be added to the distro (assuming it lives in
lib/site-packages/fixedpoint?):
#__init__.py
----------------------------
# FixedPoint
from fixedpoint import *