[Christopher Barker <pythonchb@gmail.com>]
So what does Decimal provide? Two things that you can't do with the built-in (hardware) float:
- Variable precision
- Control of rounding
And a third: precisely defined results in base 10. You're thinking like an engineer, not an accountant ;-) Politicians don't know anything about computer arithmetic, and mountains of regulations predate widespread computer use. Because businesses need to comply with mountains of regulations written by people for whom working out small decimal examples by hand was peak knowledge, COBOL _required_ decimal arithmetic (although in fixed point). COBOL access to binary floats was added later with the obscure (to COBOL programmers!) COMP-1 and COMP-2 types (4- and 8-byte binary floats).

And a fourth: a concept of significant trailing zeroes.
>>> from decimal import Decimal as D
>>> D("3.000")
Decimal('3.000')
>>> _ * D("5.4456")
Decimal('16.3368000')
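And, going back to the first point, a small sketch of what "variable precision" means in practice, using nothing beyond the stdlib context (the precision picked here is arbitrary):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50   # ask for 50 significant digits
>>> Decimal(2).sqrt()
Decimal('1.4142135623730950488016887242096980785696718753769')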
Absolutely true that decimal isn't "more accurate" than binary. It saves users from worlds of shallow surprises, mostly related to string representations. But it's not _really_ what "people expect". They're routinely surprised by, e.g., hand calculators too (which have historically used decimal arithmetic internally).

The bundled Windows calculator app is probably the most widely used numeric toy on Earth, and over the years they worked to make it as unsurprising as possible. While it supplies no access to this via the UI, which displays base 10 floats, under the covers the current version uses unbounded rationals. So, e.g., pick any integer n you like, and in that app (1/n)*n is always 1 exactly. Python's predecessor (ABC) also converted float notation to rationals, but added HW floats later because chains of calculations with rationals too often make enormous memory demands. The Windows calculator generally doesn't have that problem, because it's not programmable: do a few dozen calculations by hand, and it's unlikely rationals will "blow up".
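For a feel of how fast exact rationals can grow, here's a tiny sketch with Python's fractions module (not ABC, but the same idea):

>>> from fractions import Fraction
>>> x = Fraction(3, 7)
>>> for _ in range(10):
...     x = x * x        # each exact squaring roughly doubles the digit count
...
>>> len(str(x.denominator))   # the denominator is now 7**1024: 866 digits for one number
866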
... The relevant bit -- it seems that someone could write an accounting module that utilized Decimal to precisely follow a particular set of accounting rules (it's probably been done).
That's a _primary_ use case for decimal arithmetic. For example, the monstrously large federal US tax code allows rounding to dollars, and the law uniformly defines what "rounding" means with reference to decimal: drop amounts under 50 cents and increase amounts from 50 to 99 cents to the next dollar. Regulation enforcers lack flexibility, common sense, and humor ;-) Banking apps more often require to-nearest/even rounding (which is where the name "banker's rounding" comes from).
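A quick sketch of the two flavors, using the stdlib's rounding modes (half-up for the tax-style rule, half-even for banker's rounding):

>>> from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN
>>> Decimal("2.50").quantize(Decimal("1"), rounding=ROUND_HALF_UP)
Decimal('3')
>>> Decimal("2.50").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)
Decimal('2')
>>> Decimal("3.50").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)
Decimal('4')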
I don't get why:
In [54]: str(1.1 + 2.2)
Out[54]: '3.3000000000000003'
Isn't the point of __str__ to provide a more human-readable, but perhaps not reproducible, representation?
That str() and repr() _used_ to give different results was another very widespread source of shallow surprises. You cannot "win" this game. Now both deliver the shortest decimal string `s` such that float(s) exactly reproduces the original float. That spares users from a different class of shallow surprises: when they type in a float by hand, that's usually - by definition - the shortest string that can produce the resulting float, so it will display the same way they typed it. This is different from rounding to a fixed number of significant digits (which str() and repr() used to do).
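Concretely, a minimal sketch of the round-trip rule:

>>> 1.1 + 2.2
3.3000000000000003
>>> float('3.3000000000000003') == 1.1 + 2.2   # the shortest string that round-trips
True
>>> float('3.3') == 1.1 + 2.2                  # '3.3' names a different float
False
>>> 3.3   # a float typed by hand displays the way it was typed
3.3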
But in that case, you'd want to be darn sure that the specific context was used in that package -- not any global setting that a user of the package, or some other package, might mess with?
Much ado about nothing ;-) Accounting and tax programmers are professionals too, and don't need to be saved from gross newbie mistakes. If some internal algorithm needs a specific decimal context, they'll write it from the start to force use of a local context.
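For example, a minimal sketch of forcing a local context with the stdlib's localcontext() (the precision and rounding here are arbitrary values for illustration):

>>> from decimal import Decimal, localcontext, ROUND_HALF_UP
>>> with localcontext() as ctx:       # changes apply only inside this block
...     ctx.prec = 6
...     ctx.rounding = ROUND_HALF_UP
...     print(Decimal(1) / Decimal(7))
...
0.142857
>>> Decimal(1) / Decimal(7)           # the global context is untouched
Decimal('0.1428571428571428571428571429')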