[Python-ideas] isinstance(Decimal(), Real) -> False?
Oscar Benjamin
oscar.j.benjamin at gmail.com
Thu Aug 29 15:12:49 CEST 2013
On 29 August 2013 09:13, Draic Kin <drekin at gmail.com> wrote:
>
> On Thu, Aug 29, 2013 at 4:02 AM, Steven D'Aprano <steve at pearwood.info>
> wrote:
>>
>> On 28/08/13 20:48, Draic Kin wrote:
>>>
>>> For the same reason, I could think that isinstance(Decimal, Rational) ->
>>> True
>>
>> If Decimal were a subclass of Rational, so should float. The only
>> fundamental difference between the two is that one uses base 10 floating
>> point numbers and the other uses base 2.
>>
> Another difference is that the precision of float is fixed. I actually
> thought that Decimal was unlimited in the same way as int; however,
> decimal.MAX_PREC is a pretty big number. But you're right that Decimal
> shouldn't be a subclass of Rational. However, the original question was
> why it is not a subclass of Real.
The precision of Decimal in arithmetic is fixed and usually much
smaller than MAX_PREC, which is simply the maximum value it can be set
to. The default is 28 decimal digits of precision:
$ python3
Python 3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:03:43) [MSC v.1600
32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal
>>> import decimal
>>> print(decimal.getcontext())
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999,
capitals=1, clamp=0, flags=[], traps=[InvalidOperation,
DivisionByZero, Overflow])
>>> decimal.Decimal(1) / 3
Decimal('0.3333333333333333333333333333')
>>> decimal.MAX_PREC
425000000
The context says prec=28 and that Decimal(1)/3 gives a result with 28 threes.
Decimals are exact in the sense that conversion to Decimal from int,
str or float is guaranteed to be exact. Note that this is not required
by the standards on which the decimal module is based. The standards
suggest that conversion from a supported integer type or from a string
should be exact *if possible*. The reason for the "if possible" caveat
is that not every implementation of the standards would be able to
create arbitrary precision Decimals, and in fact many would be limited
to a smaller precision than 28 (e.g. decimal hardware in hand
calculators etc.). The standards only require that inexact conversions
set the Inexact flag and, if the Inexact trap is set, raise the Inexact
exception.
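In Python's decimal module you can see the Inexact flag and trap
directly. A rough illustration (untested here, like the code further
down, but localcontext, Inexact, clear_flags, flags and traps are the
standard names from the stdlib):

import decimal
from decimal import Decimal, Inexact, localcontext

with localcontext() as ctx:
    ctx.clear_flags()
    Decimal(1) + Decimal(2)          # exact: Inexact stays clear
    print(bool(ctx.flags[Inexact]))  # False
    Decimal(1) / Decimal(3)          # 1/3 can't be represented in 28 digits
    print(bool(ctx.flags[Inexact]))  # True: the rounded result set the flag
    ctx.traps[Inexact] = True
    try:
        Decimal(1) / Decimal(3)
    except Inexact:
        print('Inexact trapped')     # with the trap enabled this raises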
It's important to be clear about the distinction between the precision
of a Decimal *instance* and the precision of the current *arithmetic
context*. While it is possible to exactly convert an int/str/float to
a Decimal with a precision that is higher than the current context,
any arithmetic operations will be rounded to the context precision
according to the context rounding mode (there are 8 different rounding
modes and the precision can be any positive integer). This arithmetic
rounding is actually *required* by the IEEE-854 standard, unlike the
exact conversion from arbitrary precision integers etc. Specifically,
the standard requires that the result be (effectively) computed exactly
and then rounded according to the context. This means that individual
binary (two-operand) arithmetic operations can behave as if they have
a precision higher than the current context, but as soon as you try
to, say, sum three numbers you should assume that you're effectively
working with the context precision. An example:
>>> d1 = decimal.Decimal('1'*40 + '2')
>>> d2 = decimal.Decimal('-'+ '1'*41)
>>> d1
Decimal('11111111111111111111111111111111111111112')
>>> d2
Decimal('-11111111111111111111111111111111111111111')
>>> d1 + d2 # Computed exactly and then rounded (no rounding needed here)
Decimal('1')
>>> (+d1) + (+d2) # Rounded, computed and rounded again
Decimal('0E+13')
>>> d1 + 0 + d2 # What happens here?
Decimal('-1111111111111')
For this reason sum() over Decimals is just as inaccurate as sum() over
floats, and Decimals need a decimalsum function just as floats have
math.fsum. My first attempt at such a function was the following (all
code below was modified for posting and is untested):
# My simplification of the algorithms from
# "Algorithms for Arbitrary Precision Floating Point Arithmetic"
# by Douglas M. Priest 1991.
# http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.3546
#
# This function is my own modification of Raymond Hettinger's recipe to use
# the more general sum_err function from the Priest paper and perform the
# final summation with Fractions
import math
from decimal import Decimal
from fractions import Fraction

def fixedwidthsum(iterable):
    "Full precision summation for fixed-width floating point types"
    partials = []
    iterator = iter(iterable)
    isfinite = math.isfinite
    for x in iterator:
        # Handle NaN/Inf: once a non-finite value appears, fall back to sum()
        if not isfinite(x):
            return sum(iterator, x)
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi, lo = sum_err(x, y)  # The key modification
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    # Also modified: used fractions to add the partials
    if not partials:
        return 0
    elif isinstance(partials[0], Decimal):
        # This is needed because Decimal(Fraction) doesn't work
        fresult = sum(map(Fraction, partials))  # Assumes Python 3.3
        return Decimal(fresult.numerator) / Decimal(fresult.denominator)
    else:
        return type(partials[0])(sum(map(Fraction, partials)))
def sum_err(a, b):
    if abs(a) < abs(b):
        a, b = b, a
    c = a + b; e = c - a  # Standard Kahan
    # The line below is needed unless the arithmetic is
    # faithful-binary, properly-truncating or correctly-chopping
    g = c - e; h = g - a; f = b - h
    d = f - e  # For Kahan replace f with b
    # The two lines below are needed unless the arithmetic
    # uses round-to-nearest or proper-truncation
    if d + e != f:
        c, d = a, b
    return c, d
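To give an idea of the intended behaviour (again untested, and the
input values are just ones I picked for illustration), the builtin sum
loses low-order digits to rounding where fixedwidthsum should not:

nums = [Decimal('1E+28'), Decimal(1), Decimal('-1E+28')]
print(sum(nums))            # the 1 is lost when 1E+28 + 1 is rounded to 28 digits
print(fixedwidthsum(nums))  # should recover exactly 1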
The functions above can exactly sum any fixed precision floating point
type of any radix (including decimal) under any sensible rounding mode,
including all the rounding modes in the decimal module. The problem
with them, though, is that Decimals are almost but not quite a fixed
precision type: it is possible to create Decimals whose precision
exceeds that of the arithmetic context (as in the examples I showed
above).
If the instance precision does not exceed twice the context precision
then we can decompose the decimal into two numbers, each of which has a
precision within the current context, e.g.:
def expand_two(d):
    if d == +d:  # Does d equal itself after rounding?
        return [d]
    else:
        return [+d, d - (+d)]
>>> decimal.getcontext().prec=4
>>> d1 = decimal.Decimal('1234567')
>>> d1
Decimal('1234567')
>>> [+d1, d1-(+d1)]
[Decimal('1.235E+6'), Decimal('-433')]
However, once we go to more than twice the context precision there's no
duck-typey way to do it: we need to know the instance precision and
the context precision, and we're better off ripping out the internal
Decimal representation rather than trying to use arithmetic:
def expand_full(d):
    if d == +d:
        return [d]
    prec = decimal.getcontext().prec
    sign, digits, exponent = d.as_tuple()
    expansion = []
    while digits:
        digits, lodigits = digits[:-prec], digits[-prec:]
        expansion.append(decimal.Decimal((sign, lodigits, exponent)))
        exponent += prec
    return expansion
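A quick sanity check of the idea (untested, with made-up values): each
piece fits within the context precision and the pieces sum back to the
original exactly when added via Fractions:

from fractions import Fraction

decimal.getcontext().prec = 4
d = decimal.Decimal('12345678901')
pieces = expand_full(d)
print(pieces)                                     # three pieces of at most 4 digits each
print(sum(map(Fraction, pieces)) == Fraction(d))  # True: nothing was lost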
And that should do it. The above functions are jumping through the
same kind of hoops that fsum does precisely because Decimals are a
floating point type (not exact) based on the IEEE-854 "Standard for
radix-independent *floating-point* arithmetic".
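Putting it together, an exact decimal summation could look roughly like
this (my sketch of how the pieces combine; decimalsum is just a name
I'm using here, and it assumes all inputs are finite):

from itertools import chain

def decimalsum(decimals):
    # Split each Decimal into context-precision pieces, then sum all the
    # pieces exactly with the fixed-width summation above.
    pieces = chain.from_iterable(expand_full(d) for d in decimals)
    return fixedwidthsum(pieces)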
Oscar