[Python-ideas] Python Float Update
Nicholas Chammas
nicholas.chammas at gmail.com
Tue Jun 2 00:53:33 CEST 2015
On Mon, Jun 1, 2015 at 6:15 PM Andrew Barnert <abarnert at yahoo.com> wrote:
> Obviously if you know the maximum precision needed before you start and
> explicitly set it to something big enough (or 7 places bigger than needed)
> you won't have any problem. Steven chose a low precision just to make the
> problems easy to see and understand; he could just as easily have
> constructed examples for a precision of 18.
>
> Unfortunately, even in cases where it is both possible and sufficiently
> efficient to work out and set the precision high enough to make all of your
> calculations exact, that's not something most people know how to do
> reliably. In the fully general case, it's as hard as calculating error
> propagation.
>
> As for the error: Python's decimal flags that too; the difference is that
> the flag is ignored by default. You can change it to warn or error instead.
> Maybe the solution is to make that easier--possibly just changing the docs.
> If you read the whole thing you will eventually learn that the default
> context ignores most such errors, but a one-liner gets you a different
> context that acts like SQL Server, but who reads the whole module docs
> (especially when they already believe they understand how decimal
> arithmetic works)? Maybe moving that up near the top would be useful?
>
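(For concreteness, I believe the kind of one-liner being referred to is
something along these lines, using the decimal module's trap mechanism; the
low precision is deliberate, just to make the effect visible, not a suggested
setting:)

    from decimal import Decimal, Inexact, getcontext

    ctx = getcontext()
    ctx.traps[Inexact] = True   # raise an exception instead of silently rounding
    ctx.prec = 6                # deliberately low, just to make the effect visible

    Decimal(1) / Decimal(3)     # now raises decimal.Inexact instead of returning 0.333333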
This angle of discussion is what I was getting at when I wrote:

> Perhaps Python needs better rules for how precision and scale are affected
> by calculations (here are SQL Server’s
> <https://msdn.microsoft.com/en-us/library/ms190476.aspx>, for example), or
> better defaults when they are not specified?
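(To illustrate what I mean by precision and scale being affected by
calculations, here is a small sketch along the lines of Steven's earlier
examples, with the context precision set deliberately low:)

    from decimal import Decimal, getcontext

    getcontext().prec = 3   # precision counts significant digits, not digits after the point

    print(Decimal("100") + Decimal("0.5"))    # 100   -- the 0.5 is silently rounded away
    print(Decimal("2.50") * Decimal("4.20"))  # 10.5  -- the exact result 10.5000 is rounded to 3 digits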
It sounds like there are several improvements that could be made to how
decimals are handled, documented, and configured by default, and that those
improvements could address the majority of gotchas for the majority of people
in a more user-friendly way than can be accomplished with floats.
For all the problems with decimals that Steven and others have presented, I’m
not seeing how, overall, they are supposed to be *worse* than the problems
with floats.
We can explain precision and scale to people when they are using decimals
and give them a basic framework for understanding how they affect
calculations, and we can pick sensible defaults so that people won’t hit
nasty gotchas easily. So we have some leverage there for making the
experience better for most people most of the time.
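(For reference, these are the defaults we would be talking about adjusting;
this is simply what the stdlib's default context reports in a fresh
interpreter today, not a proposal:)

    from decimal import getcontext

    ctx = getcontext()
    print(ctx.prec)      # 28 significant digits
    print(ctx.rounding)  # ROUND_HALF_EVEN
    print(sorted(sig.__name__ for sig, enabled in ctx.traps.items() if enabled))
    # ['DivisionByZero', 'InvalidOperation', 'Overflow'] -- Inexact and Rounded are not trapped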
What’s our leverage for improving the experience of working with floats?
And is the result really something better than decimals?
Nick