[Python-Dev] Decimal data type issues
Batista, Facundo
FBatista at uniFON.com.ar
Tue Apr 13 12:18:41 EDT 2004
People:
Tim Peters reviewed PEP 327 and asked some questions. Trying to
answer those questions, I found that some items need to be addressed by
community consensus.
So, the following items are to be included in the PEP when the discussion
finishes:
Exponent Maximum
----------------
A Decimal number is composed of three elements: a sign that can be 0 or 1, a
tuple of digits where each digit is 0..9, and an exponent.
The exponent is an integer, and in the current implementation there is a
maximum
value::
DEFAULT_MAX_EXPONENT = 999999999
DEFAULT_MIN_EXPONENT = -999999999
ABSOLUTE_MAX_EXP = 999999999
ABSOLUTE_MIN_EXP = -999999999
The issue is that this limit is artificial: as long as the exponent is a
Python long, you should be able to make it as big as your memory lets you.
The General Decimal Arithmetic Specification says:
In the abstract, there is no upper limit on the absolute value of the
exponent. In practice there may be some upper limit, E_limit , on the
absolute value of the exponent.
So, should we impose an artificial limit on the exponent?
This is important, as there are several cases where these maximums are checked,
and exceptions get raised and/or the numbers get changed.
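For reference, in the decimal module that later shipped with Python the
E_limit idea became the per-context ``Emax``/``Emin`` attributes (those names
are from that later API, not from the draft constants above). A minimal sketch
of a limit check in action, with Overflow signaled when a result's adjusted
exponent exceeds ``Emax``:

```python
from decimal import Decimal, Context, Overflow

# A context with deliberately tight exponent limits.
ctx = Context(Emax=999, Emin=-999)

# 10 ** 1000 has an adjusted exponent of 1000 > Emax, so the
# Overflow signal is raised (Overflow is trapped by default).
try:
    ctx.power(Decimal(10), Decimal(1000))
except Overflow:
    print('Overflow: exponent exceeds Emax')
```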
New operations
--------------
Tim Peters found I missed three operations required by the standard. Those
are:
a. ``to-scientific-string``: This operation converts a number to a string,
using scientific notation if an exponent is needed. The operation is not
affected by the context.
b. ``to-engineering-string``: This operation converts a number to a string,
using engineering notation if an exponent is needed.
c. ``to-number``: This operation converts a string to a number, as defined
by its abstract representation.
First we should agree on the names of the methods. I propose:
a. to_sci_string
b. to_eng_string
c. from_string
Methods (a) and (b) are different from ``str``, as ``str`` just doesn't
adjust the exponent at all.
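As a sketch of the difference between the two string forms, using the decimal
module as it eventually shipped (where ``str`` plays the role of the
to-scientific-string operation and ``to_eng_string`` is a Decimal method):

```python
from decimal import Decimal

d = Decimal('12E+10')

# Scientific notation: one digit before the decimal point.
print(str(d))             # 1.2E+11

# Engineering notation: the exponent is forced to a multiple of 3.
print(d.to_eng_string())  # 120E+9
```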
Regarding method (c), the only difference from creating the decimal with
Decimal(string) is that method (c) honors the context (if the literal
contains more digits than the current precision, the number gets rounded,
according to the rounding method specified in the context, etc.). For
example, with a precision of 9 and with the names I proposed::
>>> Decimal('112233445566')
Decimal( (0, (1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6), 0) )
>>> Decimal.from_string('112233445566')
Decimal( (0, (1, 1, 2, 2, 3, 3, 4, 4, 6), 3L) )
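In the decimal module as it eventually shipped, this context-honoring
constructor ended up as ``Context.create_decimal`` rather than
``from_string``; a sketch reproducing the example above with that API:

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

ctx = Context(prec=9, rounding=ROUND_HALF_EVEN)

# Direct construction ignores the context: every digit is kept.
print(Decimal('112233445566'))             # 112233445566

# Context-honoring construction rounds to the 9-digit precision.
print(ctx.create_decimal('112233445566'))  # 1.12233446E+11
```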
Hash behaviour
--------------
This item actually wasn't raised by Tim, but I found it when implementing
the module.
In the PEP I wrote that Decimal must be hashable. But what hash should it
give?
Should the following be true?::
hash(Decimal(25)) == hash(25)
hash(Decimal.from_float(25.35)) == hash(25.35)
hash(Decimal('-33.8')) == hash(-33.8)
I don't think so. I think that hash(Decimal(...)) should just return a
different value in each case, not the same value as other data types.
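One constraint worth keeping in mind here (a sketch, not a position on the
design): Python requires that objects which compare equal also hash equal, so
the answer is tied to whether ``Decimal(25) == 25`` holds. The snippet below
shows the dict behaviour that follows when equality and hashing agree across
types, as they do in the decimal module that eventually shipped:

```python
from decimal import Decimal

# If Decimal(25) == 25 and the hashes agree, the two keys collapse
# into a single dict entry; if the hashes differed while equality
# held, dict lookups would silently misbehave.
d = {25: 'int'}
d[Decimal(25)] = 'decimal'
print(len(d))  # 1 in the decimal module as shipped
```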
Thank you all for your feedback.
. Facundo