prePEP: Decimal data type

John Roth newsgroups at jhrothjr.com
Tue Nov 4 17:40:46 EST 2003


"Batista, Facundo" <FBatista at uniFON.com.ar> wrote in message
news:mailman.445.1067979491.702.python-list at python.org...
> John Roth wrote:
>
> #- I can see how what I said could be interpreted that way, but
> #- I certainly
> #- didn't mean it for strings.
>
> If you understand one thing and I understand another, one of us is
> wrong, :p. Who? In my last example, what do you think happens?

I usually think that determining who was right or wrong is a waste
of effort that could be better spent on finding a mutually satisfactory
solution.

I don't remember the exact example, so I'm guessing that the
underlying issue is the one of not having an unambiguous decimal
literal. As currently defined, Decimal(1.1) and Decimal("1.1")
go through the same process: 1.1 first gets converted to a float,
and then converted a second time to a Decimal. I'd rather not
have even the potential for confusion.
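The ambiguity is easy to demonstrate with the decimal module as it eventually shipped (a sketch; the prePEP's constructor semantics were still in flux at the time this was written):

```python
from decimal import Decimal

# A string literal captures exactly what the programmer wrote...
print(Decimal("1.1"))
# 1.1

# ...but going through a binary float first drags in the float's
# representation error before Decimal ever sees the value.
print(Decimal(1.1))
# 1.100000000000000088817841970012523233890533447265625

assert Decimal("1.1") != Decimal(1.1)
```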

Part of the reason for wanting a type (rather than a class)
is that it's much easier to justify a specific literal for a
built-in type. And it would not be that hard to do either:
1.1D is quite natural, and not all that easy to mistake for
a float (the D isn't part of the float syntax in Python,
although it is in other languages).

> #- It is. I was thinking in terms of a type, not a class. All
> #- the builtin types start with lower-case names.
>
> OK, so it stays uppercase (as long it's a class).
>
>
> #- What you propose?
> #-
> #- - the configuration (precision, flags, etc) is on by-instance basis
> #- - you have different contexts, and a group of instances with each
> #- context.
> #-
> #- [John Roth]
> #- More likely the second. My concern here is the usual one with
> #- singletons and globals. Processing gets very messy when you
> #- have to operate in several different modes or areas. See the
> #- difficulties people get into with internationalization when they
> #- have an application that has to operate in several different
> #- jurisdictions at once, etc.
>
> In another mail, Aahz explained (even to me) that the idea is to have a
> "context per thread". So, all the instances in a thread belong to a
> context, and you can change the context in thread A (and the behaviour
> of the instances of that thread) without changing anything in thread B.

I saw his comment. I believe the reason why it needs to be thread local
has more to do with data integrity than a desire to provide for multiple
contexts, but if that's all that's possible, then that's what we'll get.
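The thread-local arrangement can be sketched with the API the decimal module eventually grew (getcontext() returning a per-thread context; the names are those of the shipped module, not the prePEP draft):

```python
import decimal
import threading

results = {}

def worker():
    # Each thread gets its own context; lowering the precision here
    # does not leak into any other thread.
    decimal.getcontext().prec = 6
    results["worker"] = decimal.Decimal(1) / decimal.Decimal(7)

t = threading.Thread(target=worker)
t.start()
t.join()

# The main thread still has the default 28-digit precision.
results["main"] = decimal.Decimal(1) / decimal.Decimal(7)

print(results["worker"])  # 0.142857
print(results["main"])    # 0.1428571428571428571428571429
```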

> So, I think your proposal has a future, as long as I can finish the
> Aahz work, ;)
>
>
> #- So we're planning
> #- on doing a software implementation of a *draft* standard,
> #- including very complicated facilities that I most
> #- respectfully think are going to be of marginal
> #- utility.
>
> But the alternative is making a Money data type and reinventing the
> wheel for all its arithmetic behaviour.
>
> I can't assure you that anybody will ever use the "not a number"
> capabilities of Decimal (I think a lot of people will). But that's the
> specification, :p

The trouble is that we're trying to get two rather different things out of
one implementation.

The reason I say that they're rather different is that floating point and
fixed point have very different application domains. Floating point is for
continuous measurements of the kind that occur in the natural world;
fixed point is for counting discrete entities, like coins. Floating decimal
is better than floating binary in one respect, but it's still forcing two
different things together.

My personal opinion in the matter is that setting the precision high
enough so that you won't get into trouble is a hack, and it's a dangerous
hack, because the amount of precision needed isn't directly related to the
data you're processing; it's something that came out of an analysis,
probably by someone else under some other circumstances. Given a
software implementation, there's a performance advantage to setting the
precision as low as possible, which immediately puts things at risk if
your data changes.
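The risk is concrete: when the context precision is tuned below what the data needs, cents vanish silently instead of raising an error. A sketch using the localcontext manager from the decimal module as it later shipped:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 5  # precision chosen for some earlier, smaller data set
    total = Decimal("123456.78") + Decimal("0.01")

# The exact sum, 123456.79, needs 8 significant digits; at prec=5 the
# result is rounded to 1.2346E+5 and the cents are silently gone.
print(total)  # 1.2346E+5
```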

Fixed precision is also counter to the infinite precision we're moving
toward with the integers; and integers are a more comfortable metaphor
for money than floats.
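That metaphor is already workable: counting in the smallest currency unit with plain ints gives exact arithmetic with no precision knob at all (a sketch; the 7% tax rate and half-up rounding rule are illustrative assumptions, not anything from the prePEP):

```python
# Money as integer cents: Python ints never overflow or round.
price_cents = 1999                         # $19.99
tax_cents = (price_cents * 7 + 50) // 100  # assumed 7% tax, half-up rounding
total_cents = price_cents + tax_cents

print(total_cents)  # 2139, i.e. $21.39
```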

The natural implementation for fixed decimal is much simpler
than for floating decimal. And I frankly don't think that floating
decimal is going to get that much use outside of an accounting
context, given that I think it's going to be a *lot* slower than
built-in binary floating point. Especially if PyPy succeeds in
their dream of creating a JIT for Python.

John Roth
>
> . Facundo
>





