[Python-Dev] Fixed-point numeric type
Ka-Ping Yee
ping@zesty.ca
Wed, 5 Feb 2003 14:19:59 -0600 (CST)
On Wed, 5 Feb 2003, Tim Peters wrote:
> > 2. Turn 'precision' into a read-only attribute. To set precision,
> > just make a new number with decimal(x, new_precision).
>
> The existing set_precision() name still seems fine, though. It's not
> necessarily a "new" precision; set_precision says what it means.
Methods named "set-whatever" are typically understood to change a
component of something. If this must be done with a method, the method
should be called something like "with_scale(x)" instead of "set_scale(x)".
But i think the method is unnecessary -- the constructor already
provides a perfectly good way to make a new number with an altered
precision, so why provide a second way to do the same thing? There
should be only one obvious way to do it (OOWTDI).
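To make that concrete, here's a rough sketch. The class below is a
throwaway illustration of the constructor-based spelling, not the
proposed implementation, and every name in it is made up:

    class FixedPoint:
        """Toy fixed-point value: an integer count of units of 10**-scale."""

        def __init__(self, value, scale):
            # Going through float keeps the sketch short; a real
            # implementation would parse the digits exactly.
            self._units = round(float(value) * 10 ** scale)
            self._scale = scale              # this toy assumes scale >= 1

        @property
        def scale(self):                     # read-only attribute, no setter
            return self._scale

        def __str__(self):
            digits = str(abs(self._units)).rjust(self._scale + 1, "0")
            sign = "-" if self._units < 0 else ""
            return sign + digits[:-self._scale] + "." + digits[-self._scale:]

    x = FixedPoint("3.14159", 5)
    y = FixedPoint(str(x), 2)                # re-scaling via the constructor alone
    print(x, y)                              # 3.14159 3.14

With the scale exposed as a read-only attribute, constructing a new
value is the one obvious way to get the same number at a different
scale.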
> > 3. But don't call it 'precision'. The word 'precision' usually
> > means either the total number of significant digits, or the
> > least significant distinguishable unit. What we're talking
> > about (number of digits after the decimal point) is neither.
> > I believe the correct term is 'scale' (from SQL). See also
> > http://www2.hursley.ibm.com/decimal/.
>
> Opposed. Users had no trouble understanding what precision means in this
> context (the module has been in use for a few years).
That's not an argument that 'precision' is the right name. That's
just a claim that we can confuse people into believing that 'precision'
is not so bad.
'precision' is simply the *wrong word*. Using it would create confusion.
Let's use the correct word while we have the option to do so.
    pre-ci-sion, n.
        1. The state or quality of being precise; exactness.
        2. a. The ability of a measurement to be consistently reproduced.
           b. The number of significant digits to which a value has been
              reliably measured.

        -- The American Heritage Dictionary, 4th Edition.
In the context of floating-point numbers, which i think is likely
the most common context for programmers to encounter this word,
'precision' refers to significant digits. And in the context of SQL,
which will probably be familiar to a significant part of this type's
target audience, 'precision' also refers to significant digits. The
attribute we're talking about represents something *completely different*.
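A concrete example of the difference, in SQL's terms (nothing below is
specific to the proposed type):

    # SQL declares the value 123.45 as NUMERIC(5, 2):
    #   precision = 5    total significant digits (1 2 3 4 5)
    #   scale     = 2    digits after the decimal point (4 5)
    # The attribute we are naming here is the 2, not the 5.
    value = "123.45"
    assert len(value.replace(".", "")) == 5      # precision
    assert len(value.partition(".")[2]) == 2     # scale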
> "no public methods" is a non-goal to me.
[...and later...]
> Please drop this one -- "no methods" remains a non-goal. Strings didn't
> have methods either, now use of string methods is ubiquitous, and Guido
> approved of adding methods to numbers too years ago (but nobody ever got
> around to adding some; e.g., long.numbits()).
Sorry, i didn't express myself very well. I'm not arguing for "zero
methods just for the sake of zero methods". My position is "no unnecessary
methods". Actually it's "no unnecessary anything" -- the overall idea
is to make the type as minimal as possible. If we can get away with
few methods, or no methods at all, so much the better.
For example, i'm arguing against a set_precision/set_scale/with_scale
method on the grounds of TOOWTDI, not because i have a particular
dislike for all methods.
> > and just one attribute -- hence the least possible burden on a
> > future version of Python, if this type eventually becomes built-in.
> > (And the least possible burden is a good goal in general anyway.)
>
> I hope that an implementation of Mike Cowlishaw's nascent standard gets that
> honor instead.
Sure. We should still aim for minimality though.
> > 6. No global mutable state. That means no global settings for
> > default precision or rounding mode. Either pick a standard
> > precision or always require the second constructor argument.
> > Rounding mode is a tougher problem; i see two solutions:
> >
> > (a) Eliminate variable rounding; always round half to even.
>
> Too unrealistic to consider -- a slew of silly rounding modes is a
> requirement for commercial use.
Okay, granted.
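For anyone following along, here is what two of those modes do to a
tie. The snippet borrows Python's decimal module purely as a stand-in
to show the arithmetic; it is not the type under discussion:

    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

    tie = Decimal("52.5")                    # e.g. 105 cents split two ways
    print(tie.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))   # 52
    print(tie.quantize(Decimal("1"), rounding=ROUND_HALF_UP))     # 53

Which answer is "right" depends on the application, which is the point
Tim is making.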
> > (b) Set rounding function as an attribute of the number.
>
> Bleech.
Can you, er, elaborate?
And, can we reduce the issue to the rounding mode? Can we agree on
no global mutable default precision, at least? The cost is minimal,
and the introduction of a commonly-mutated default would be dangerous
and expensive in the long run.
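Here is the kind of action-at-a-distance i mean. Everything below is a
made-up sketch of a global, mutable default, not any real interface:

    DEFAULT_SCALE = 2                    # imagine a module-level, mutable default

    def make(value, scale=None):
        # stand-in for constructing a value with the scale argument omitted
        return round(float(value), DEFAULT_SCALE if scale is None else scale)

    print(make("1.0049"))                # 1.0

    DEFAULT_SCALE = 4                    # some far-away library changes the default...
    print(make("1.0049"))                # 1.0049 -- same call, different answer

If the scale is always supplied (or fixed) at construction time, the
second call cannot be perturbed from a distance.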
> Tough luck. Practicality beats purity here: the pure solution is to pass a
> "numeric context" object to each operation, which some implementations
> actually require. The resulting source code is tedious and unreadable. The
> practical alternative is what you live with every day in FP arithmetic, yet
> don't even notice:
I don't notice it only because i don't ever care to change the rounding
mode. If i did try, i'd be in trouble. Same with the decimal fixed-point
type: if no one wants to change the rounding mode, no one notices.
I'm assuming we both believe that people will want to adjust rounding
modes more often with fixed-point than with floating-point. The question
is what happens to people who do mess with such things. Should such
people find interoperability impossible, or merely tedious? In such a
dilemma, i'm arguing for tedious.
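Concretely, "tedious" looks something like this. The Context class,
its methods, and the rounding function are all made up to show the
shape of the code, not a real interface:

    class Context:
        def __init__(self, scale, rounding):
            self.scale = scale
            self.rounding = rounding         # a callable: (value, scale) -> value

        def add(self, a, b):
            return self.rounding(a + b, self.scale)

    def half_up(value, scale):
        # ties away from zero; nonnegative values only, to keep it short
        return int(value * 10 ** scale + 0.5) / 10 ** scale

    ctx = Context(2, half_up)
    print(ctx.add(0.125, 0.01))              # 0.14 -- every operation names its context

Tedious, yes, but two libraries that disagree about rounding can still
exchange values; they just have to say which context they mean. That
is the sort of interoperability story i'm after.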
I feel pretty hard-line about the default precision, but i may be willing
to concede the rounding issue if you have a good story for interoperability.
-- ?!ng