[Python-Dev] Re: Decimal data type issues
Jewett, Jim J
jim.jewett at eds.com
Fri Apr 16 13:05:20 EDT 2004
> Decimal(1.1, significant_digits=34) is unambiguous,
> but confusing.
What would the preferred solution be?
Requiring the user to explicitly type zeros so as to
have 34 digits?
Implicitly doing a Decimal(str(1.1)) and then adding
32 zeros to the internal representation? (Note: in
at least my environment, str(1.1) = '1.1'; it is only
repr that adds the extra digits.)
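(A minimal sketch of that option with the decimal module as it later
shipped; note that modern Python 3 prints the shortest round-tripping
string for both str and repr, so the 2.x str/repr difference above no
longer shows:)

    from decimal import Decimal, Context

    s = str(1.1)          # '1.1' (2.x str rounded to 12 digits;
                          # 2.x repr showed '1.1000000000000001')
    d = Decimal(s)        # Decimal('1.1')

    # Adding 32 zeros so the value carries 34 digits could be done
    # by quantizing in a wide-enough context:
    ctx = Context(prec=34)
    padded = d.quantize(Decimal('1E-33'), context=ctx)
    print(padded)         # 1.1 followed by 32 zeros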
Living with the fact that it will have some confusing extra digits?
> ... what is "exact conversion" to you:
> 1.1 -> "1.1"
> 1.1 -> "1.1000000000000001"
> 1.1 -> "1.100000000000000088817841970012523233890533447265625"
The machine precision of floats, whatever that happens to be.
That implementation-dependent part is one reason to use a
longer name, like exact_float.
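(For reference, each candidate can be produced explicitly with the
module as it actually shipped; exact_float is only the name proposed
above, and float arguments were not accepted at the time:)

    from decimal import Decimal

    Decimal('1.1')      # the string as typed
    Decimal(repr(1.1))  # '1.1000000000000001' under 2.x repr
    Decimal(1.1)        # the exact binary value, i.e. Decimal(
                        # '1.100000000000000088817841970012523233890533447265625');
                        # accepted directly from Python 3.2 (use
                        # Decimal.from_float in 2.7/3.1)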
But if I say

    Decimal(1.1, significant_digits=4)
    Decimal(1.1, positions=2)
Those are unambiguous. If the context's precision is 43, then later
calculations will effectively fill in the extra digits;
I have just said to fill them in with zeros, rather than
whatever the float approximations were.
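(The significant_digits/positions keywords are the proposal under
discussion, not a shipped API; a sketch of the same effect with the
module as released, assuming the later float-accepting constructor:)

    from decimal import Decimal, Context

    exact = Decimal(1.1)   # full float expansion (3.2+ constructor)

    # significant_digits=4: keep 4 significant digits
    Context(prec=4).create_decimal(exact)    # Decimal('1.100')

    # positions=2: round at the second fractional digit
    exact.quantize(Decimal('0.01'))          # Decimal('1.10')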
>> I never mentioned positions, and I don't see any reason
>> why it is necessary to isolate these two ways of specifying
>> precision.
> Because they are two totally different concepts. And they're
> not two ways of specifying precision.
Sure they are; the reason to specify positions is that the
underlying data wasn't really floating point -- it was an
integer which by convention is written in a larger unit.
Example with money:
Assume meat is 0.987 USD/pound. There are three
price digits to multiply by the weight, but the
final cost is rounded to an integer number of
pennies:

    10 pounds cost 9.87, but
     1 pound costs 0.99
I don't want to say that my prices have three digits of
precision, or I'll keep fractions of a penny. I don't
want to say that they have only two, or I'll drop the
pennies on expensive items. I want to say that the
precision varies depending on the actual price.
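(A sketch of that policy with the shipped module: compute at full
precision, then quantize at the penny position:)

    from decimal import Decimal, ROUND_HALF_UP

    price = Decimal('0.987')    # USD per pound

    def cost(pounds):
        # Compute at full precision, then round the *position*
        # (pennies), so precision varies with the actual price:
        return (price * pounds).quantize(Decimal('0.01'),
                                         rounding=ROUND_HALF_UP)

    print(cost(10))   # Decimal('9.87')  (three significant digits)
    print(cost(1))    # Decimal('0.99')  (two significant digits)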
> What would "significant_digits=4" mean here?
It means precision. In many US classrooms,
"precision" applies to the original measurement.
Once you start doing calculations, it gets called
"significant digits", as a reminder not to let your
answer be more precise than your inputs.
>> Converting from float is not confusing when the
>> precision is specified (either in terms of significant
>> digits or decimal places).
> Don't confuse the terminology. Precision means only one
> thing, and it's specified in Cowlishaw's work.
But for any *specific* value, specifying either the total
number of valid digits (significant figures, precision)
or the number of fractional digits (position) is enough
to determine both.
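(Easy to see for a concrete value with the modern module, where
as_tuple() returns a named tuple:)

    from decimal import Decimal

    d = Decimal('9.87')
    t = d.as_tuple()             # sign=0, digits=(9, 8, 7), exponent=-2
    significant = len(t.digits)  # 3 valid digits (precision)
    fractional = -t.exponent     # 2 fractional digits (position)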
> "number" can be string, int, etc., but NOT float.
> The issue with rounding at a defined position has
> nothing to do with context.
I assume that a new Decimal would normally be created
with as much precision as the context would need for
calculations. By passing a context/precision/position,
the user is saying "yeah, but this measurement wasn't
that precise in the first place. Use zeros for the
rest, no matter what this number claims."
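(A sketch of that "use zeros for the rest" behavior with the shipped
module, again assuming the later float-accepting constructor: create
the value through a narrow context, then compute in a wide one:)

    from decimal import Decimal, Context

    # The measurement is only good to 3 significant digits:
    measured = Context(prec=3).create_decimal(Decimal(1.1))  # Decimal('1.10')

    # Later arithmetic in the wider default context (prec=28)
    # effectively treats the discarded digits as zeros:
    print(measured * 3)     # Decimal('3.30')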