[Python-ideas] Python Numbers as Human Concept Decimal System
Oscar Benjamin
oscar.j.benjamin at gmail.com
Mon Mar 10 11:07:11 CET 2014
On 9 March 2014 20:39, Guido van Rossum <guido at python.org> wrote:
> On Sun, Mar 9, 2014 at 1:07 PM, Oscar Benjamin <oscar.j.benjamin at gmail.com> wrote:
>>
>> The problem though is with things like +3.14d or -3.14d. Python the
>> language treats the + and - signs not as part of the literal but as
>> separate unary operators. This is unimportant for int, float and
>> imaginary literals because for those unary + is a no-op and unary -
>> is exact. For decimal this is not the case, as +Decimal('3.14') is
>> not the same as Decimal('+3.14').
> Any solutions you are going to come up with would cause other anomalies such
> as -(5d) not being equal to -5d. My vote goes to treating + and - as the
> operators they are and telling people that if they want a negative constant
> that exceeds the context's precision they're going to have to use
> Decimal('-N'), rather than adding a terrible hack to the parser (and hence
> to the language spec).
I think it's reasonable that -(5d) not be equal to -5d. Expert users
would exploit this behaviour the same way that they currently exploit
unary + for rounding to context. I don't think that non-expert users
would write that by mistake. I'm also not that bothered about +5d
rounding to context since the + is optional and current users of the
decimal module would probably expect that to round.
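For concreteness, here's the difference with today's decimal module,
using a deliberately tiny precision: the constructor is exact but the
unary operators round to the current context:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 3
>>> Decimal('+3.14159')
Decimal('3.14159')
>>> +Decimal('3.14159')
Decimal('3.14')
>>> -Decimal('3.14159')
Decimal('-3.14')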
What I dislike about this is that it would mean that negative decimal
literals would be essentially unsafe. It seems okay if you assume that
the precision is 28 or higher, but users can set it to a lower value,
and decimal contexts are unscoped, so they have action at a distance
on other code. So if I have a module libmod.py with:
# libmod.py
def libfunc():
    a = -1.23465789d
    return f(a)
Then I have a script:
# script.py
from decimal import localcontext
import libmod

with localcontext() as ctx:
    ctx.prec = 5
    b = libmod.libfunc()
I think we could easily get into a situation where the author of
libmod just assumes that the decimal context has enough precision and
the author of script.py doesn't realise the effect that their change
of context has on libmod.
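To make that concrete with today's module: the proposed -1.23465789d
would presumably behave like -Decimal('1.23465789'), where the unary
minus rounds to the caller's context. A sketch of the failure mode
(libfunc here stands in for libmod above):

from decimal import Decimal, localcontext

def libfunc():
    # What -1.23465789d would do under the proposal: the unary
    # minus rounds the exact constant to the *current* context.
    return -Decimal('1.23465789')

with localcontext() as ctx:
    ctx.prec = 5
    print(libfunc())  # Decimal('-1.2347'), not the full constant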
These kinds of things are generally problematic when using the decimal
module, and the solution is typically that libfunc needs to take
control of the context. I think that it would be particularly
surprising with decimal literals though. The change in the precision
of "a" above might be a harmless reduction in the precision of the
result, or it could lead to an infinite loop or basically anything
else, depending on what libmod uses the value for.
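By way of illustration, the usual defensive pattern looks something
like this (a sketch; the final computation is just a stand-in):

from decimal import Decimal, localcontext

def libfunc():
    # Pin the precision locally so that the caller's context
    # can't change the meaning of this function's constants.
    with localcontext() as ctx:
        ctx.prec = 28
        a = -Decimal('1.23465789')
        return a * 2  # stand-in for whatever libmod computes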
> BTW Do you know why the default precision is 28?
I have no idea. The standards I've read describe two particular
9-digit contexts that a conforming implementation should provide, and
decimal provides both, but neither is the default:
>>> import decimal
>>> decimal.BasicContext.prec
9
>>> decimal.ExtendedContext.prec
9
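The default context is where the 28 comes from:

>>> decimal.getcontext().prec
28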
AFAIK there's no standard that suggests 28 digits. There are other
common contexts such as decimal32 (7 digits), decimal64 (16 digits)
and decimal128 (34 digits); these are the contexts that Java's
BigDecimal provides (as well as an "unlimited" context).
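For what it's worth, those three can be spelt as explicit contexts in
Python, since decimal doesn't provide them by name (a sketch; the
Emin/Emax values are the IEEE 754-2008 exponent limits, and strict
conformance would also need clamp=1):

import decimal

# IEEE 754-2008 interchange formats as explicit contexts
# (the decimal module does not provide these ready-made).
decimal32  = decimal.Context(prec=7,  Emin=-95,   Emax=96,   clamp=1)
decimal64  = decimal.Context(prec=16, Emin=-383,  Emax=384,  clamp=1)
decimal128 = decimal.Context(prec=34, Emin=-6143, Emax=6144, clamp=1)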
Oscar