[Python-Dev] Re: Decimal data type issues

Kevin Jacobs jacobs at theopalgroup.com
Fri Apr 16 12:56:43 EDT 2004


Batista, Facundo wrote:

>[Kevin Jacobs]
>#- Don't forget that many financial applications use fixed scale and
>#- precision as their primary mechanism for specifying Decimal types.
>#- As such, it would be very nice to have a constructor that took a
>#- literal representation as well as scale and precision.  While using
>#- context is
>
>What is "fixed scale"?
>
>Could you provide examples of the functionality that you're asking for?
>Thanks.
>  
>

See, for example, the SQL Decimal and Numeric data types, one of many
applications of scale and precision parameters for specifying decimal
representations:

[borrowed from the PostgreSQL manual @
 http://www.postgresql.org/docs/7.4/interactive/datatype.html#DATATYPE-NUMERIC-DECIMAL]

> The /scale/ of a numeric is the count of decimal digits in the fractional
> part, to the right of the decimal point. The /precision/ of a numeric is
> the total count of significant digits in the whole number, that is, the
> number of digits to both sides of the decimal point. So the number 23.5141
> has a precision of 6 and a scale of 4. Integers can be considered to have
> a scale of zero.
>
> Both the precision and the scale of the numeric type can be configured.
> To declare a column of type numeric use the syntax
>
>  NUMERIC(precision, scale)
>
> The precision must be positive, the scale zero or positive. Alternatively,
>
>  NUMERIC(precision)
>
> selects a scale of 0. Specifying
>
>  NUMERIC
>
> without any precision or scale creates a column in which numeric values of
> any precision and scale can be stored, up to the implementation limit on
> precision. A column of this kind will not coerce input values to any
> particular scale, whereas numeric columns with a declared scale will
> coerce input values to that scale. (The SQL standard requires a default
> scale of 0, i.e., coercion to integer precision. We find this a bit
> useless. If you're concerned about portability, always specify the
> precision and scale explicitly.)
>
> If the precision or scale of a value is greater than the declared
> precision or scale of a column, the system will attempt to round the
> value. If the value cannot be rounded so as to satisfy the declared
> limits, an error is raised.
>
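
To make that mapping concrete, here is a rough sketch of the
NUMERIC(precision, scale) coercion described above, written against the
decimal module's quantize() and as_tuple() operations as I understand
them.  The coerce_numeric() helper and the choice of ROUND_HALF_UP
rounding are mine, not something mandated by SQL or by the module:

  from decimal import Decimal, ROUND_HALF_UP

  def coerce_numeric(value, precision, scale):
      # Round to `scale` fractional digits, as a NUMERIC column would.
      quantum = Decimal(10) ** -scale       # scale=4 -> Decimal('0.0001')
      d = Decimal(value).quantize(quantum, ROUND_HALF_UP)
      # Then check that the result fits in `precision` total digits.
      # as_tuple() returns (sign, digits, exponent).
      if len(d.as_tuple()[1]) > precision:
          raise ValueError("%s does not fit NUMERIC(%d, %d)"
                           % (d, precision, scale))
      return d

  coerce_numeric('23.5141', 6, 4)   # Decimal('23.5141')
  coerce_numeric('23.5141', 6, 2)   # Decimal('23.51') -- rounded to scale 2
  coerce_numeric('23.5141', 4, 4)   # raises ValueError: 6 digits > precision 4

The rounding step handles the scale, and the digit-count check handles the
precision bound, matching the "attempt to round, else raise an error"
behaviour in the quoted text.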

Python decimals would do well to support the creation of instances with
fixed scale and precision parameters, since this is the information that
will be provided by databases, other financial and engineering
applications, and schemas.  These parameters override the natural scale
and precision found in the literal values used when constructing
decimals; e.g., hypothetically:

  Decimal('2.4000', precision=2, scale=1) == Decimal('2.4')
  Decimal('2.4', precision=5, scale=4) == Decimal('2.4000')
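
Even without direct constructor support, the scale half of those two
examples can already be expressed with quantize(); a hypothetical
rendering (the precision bound would still need a separate check like the
one sketched above):

  from decimal import Decimal

  str(Decimal('2.4000').quantize(Decimal('0.1')))      # '2.4'
  str(Decimal('2.4').quantize(Decimal('0.0001')))      # '2.4000'

str() is used only to make the resulting scale visible; the point is that
the constructor itself should be able to apply a schema's parameters,
rather than requiring a separate quantize() step at every call site.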

Remember, these literals frequently come from an external source that
must be constrained to a given schema.  This issue doesn't come up nearly
as often with floating point numbers, because those schemas use
explicitly precision-qualified types (i.e., single, double, or quad
precision).

-Kevin



