Re: [Python-ideas] Python-ideas Digest, Vol 103, Issue 14
That patch sounds nice; I don't have to edit my Python distribution! We'll make do with this.
On Mon, Jun 1, 2015 at 7:03 PM, <python-ideas-request@python.org> wrote:
Today's Topics:
1. Re: Python Float Update (Nick Coghlan)
2. Re: Python Float Update (Andrew Barnert)
3. Re: Python Float Update (Andrew Barnert)
4. Re: Python Float Update (Steven D'Aprano)
----------------------------------------------------------------------
Message: 1
Date: Tue, 2 Jun 2015 10:08:37 +1000
From: Nick Coghlan <ncoghlan@gmail.com>
To: Andrew Barnert <abarnert@yahoo.com>
Cc: python-ideas <python-ideas@python.org>
Subject: Re: [Python-ideas] Python Float Update
Message-ID: <CADiSq7fjhS_XrKe3QfF58hXdhLSSbX6NvsFZZKjRq-+OLOQ-eQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
On 2 Jun 2015 08:44, "Andrew Barnert via Python-ideas" <python-ideas@python.org> wrote:
But the basic idea can be extracted out and Pythonified:
The literal 1.23 no longer gives you a float, but a FloatLiteral, which is either a subclass of float, or an unrelated class that has a __float__ method. Doing any calculation on it gives you a float. But as long as you leave it alone as a FloatLiteral, it has its literal characters available for any function that wants to distinguish FloatLiteral from float, like the Decimal constructor.
The problem that Python faces that Swift doesn't is that Python doesn't use static typing and implicit compile-time conversions. So in Python, you'd be passing around these larger values and doing the slow conversions at runtime. That may or may not be unacceptable; without actually building it and testing some realistic programs it's pretty hard to guess.
Joonas's suggestion of storing the original text representation passed to the float constructor is at least a novel one - it's only the idea of actual decimal literals that was ruled out in the past.
Aside from the practical implementation question, the main concern I have with it is that we'd be trading the status quo for a situation where "Decimal(1.3)" and "Decimal(13/10)" gave different answers.
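For reference, an illustrative session of the status quo: today both spellings agree, because 13/10 and the literal 1.3 produce the same binary float; under the stored-text idea the first would become Decimal('1.3') while the second would not.

>>> from decimal import Decimal
>>> Decimal(1.3)
Decimal('1.3000000000000000444089209850062616169452667236328125')
>>> Decimal(13/10)
Decimal('1.3000000000000000444089209850062616169452667236328125')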
It seems to me that a potentially better option might be to adjust the implicit float->Decimal conversion in the Decimal constructor to use the same algorithm as we now use for float.__repr__ [1], where we look for the shortest decimal representation that gives the same answer when rendered as a float. At the moment you have to indirect through str() or repr() to get that behaviour:
>>> from decimal import Decimal as D
>>> 1.3
1.3
>>> D('1.3')
Decimal('1.3')
>>> D(1.3)
Decimal('1.3000000000000000444089209850062616169452667236328125')
>>> D(str(1.3))
Decimal('1.3')
Cheers, Nick.
[1] http://bugs.python.org/issue1580
------------------------------
Message: 2
Date: Mon, 1 Jun 2015 18:27:32 -0700
From: Andrew Barnert <abarnert@yahoo.com>
To: Nick Coghlan <ncoghlan@gmail.com>
Cc: python-ideas <python-ideas@python.org>
Subject: Re: [Python-ideas] Python Float Update
Message-ID: <EBB58361-19F4-4275-B6A6-E5AF2F77EF9C@yahoo.com>
Content-Type: text/plain; charset=us-ascii
On Jun 1, 2015, at 17:08, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 2 Jun 2015 08:44, "Andrew Barnert via Python-ideas" <python-ideas@python.org> wrote:
But the basic idea can be extracted out and Pythonified:
The literal 1.23 no longer gives you a float, but a FloatLiteral, which is either a subclass of float, or an unrelated class that has a __float__ method. Doing any calculation on it gives you a float. But as long as you leave it alone as a FloatLiteral, it has its literal characters available for any function that wants to distinguish FloatLiteral from float, like the Decimal constructor.
The problem that Python faces that Swift doesn't is that Python doesn't use static typing and implicit compile-time conversions. So in Python, you'd be passing around these larger values and doing the slow conversions at runtime. That may or may not be unacceptable; without actually building it and testing some realistic programs it's pretty hard to guess.
Joonas's suggestion of storing the original text representation passed to the float constructor is at least a novel one - it's only the idea of actual decimal literals that was ruled out in the past.
I actually built about half an implementation of something like Swift's LiteralConvertible protocol back when I was teaching myself Swift. But I think I have a simpler version that I could implement much more easily.
Basically, FloatLiteral is just a subclass of float whose __new__ stores its constructor argument. Then decimal.Decimal checks for that stored string and uses it instead of the float value if present. Then there's an import hook that replaces every Num with a call to FloatLiteral.
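A minimal sketch of that shape, with illustrative names only (the FloatLiteral class and the to_decimal helper are stand-ins; decimal.Decimal itself is not patched here):

    from decimal import Decimal

    class FloatLiteral(float):
        """A float that remembers the literal text it was constructed from."""
        def __new__(cls, literal_text):
            self = super().__new__(cls, literal_text)
            self.literal_text = str(literal_text)  # keep the original characters
            return self

    def to_decimal(x):
        # Stand-in for the proposed check inside the Decimal constructor:
        # prefer the stored literal text when present, else use the float value.
        text = getattr(x, 'literal_text', None)
        return Decimal(text) if text is not None else Decimal(x)

    print(repr(to_decimal(FloatLiteral('1.3'))))  # Decimal('1.3')
    print(repr(to_decimal(1.3)))                  # Decimal('1.3000000000000000444089209850062616169452667236328125')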
This design doesn't actually fix everything; in effect, 1.3 actually compiles to FloatLiteral(str(float('1.3'))) (because by the time you get to the AST it's too late to avoid that first conversion). That does solve the problem with 1.3, but it doesn't solve everything in general (e.g., just feed in a number that has more precision than a double can hold but less than your current decimal context can...).
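To make the literal-rewriting part concrete, here is a small AST sketch under the same assumptions (it is only an illustration, not an actual interpreter patch: it uses ast.Constant, the modern successor of the Num node, and assumes a FloatLiteral name is available at runtime):

    import ast

    class FloatLiteralRewriter(ast.NodeTransformer):
        """Rewrite every float constant into a FloatLiteral(...) call."""
        def visit_Constant(self, node):  # ast.Num on Pythons before 3.8
            if isinstance(node.value, float):
                # By this point the source text is gone, so the best we can do
                # is repr(value) -- exactly the limitation described above.
                call = ast.Call(
                    func=ast.Name(id='FloatLiteral', ctx=ast.Load()),
                    args=[ast.Constant(value=repr(node.value))],
                    keywords=[],
                )
                return ast.copy_location(call, node)
            return node

    tree = ast.parse("x = 1.3 * 2")
    tree = ast.fix_missing_locations(FloatLiteralRewriter().visit(tree))
    print(ast.unparse(tree))  # x = FloatLiteral('1.3') * 2   (ast.unparse needs 3.9+)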
But it just lets you test whether the implementation makes sense and what the performance effects are, and it's only an hour of work, and doesn't require anyone to patch their interpreter to play with it. If it seems promising, then hacking the compiler so 2.3 compiles to FloatLiteral('2.3') may be worth doing for a test of the actual functionality.
I'll be glad to hack it up when I get a chance tonight. But personally, I think decimal literals are a better way to go here. Decimal(1.20) magically doing what you want still has all the same downsides as 1.20d (or implicit decimal literals), plus it's more complex, adds performance costs, and doesn't provide nearly as much benefit. (Yes, Decimal(1.20) is a little nicer than Decimal('1.20'), but only a little--and nowhere near as nice as 1.20d).
Aside from the practical implementation question, the main concern I have with it is that we'd be trading the status quo for a situation where "Decimal(1.3)" and "Decimal(13/10)" gave different answers.
Yes, to solve that you really need Decimal(13)/Decimal(10)... Which implies that maybe the simplification in Decimal(1.3) is more misleading than helpful. (Notice that this problem also doesn't arise for decimal literals--13/10d is int vs. Decimal division, which is correct out of the box. Or, if you want prefixes, d13/10 is Decimal vs. int division.)
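For concreteness, an illustrative session under current Python showing that the two spellings really do land on opposite sides today:

>>> from decimal import Decimal
>>> Decimal(13/10)                 # binary division first, then conversion
Decimal('1.3000000000000000444089209850062616169452667236328125')
>>> Decimal(13) / Decimal(10)      # division done in Decimal
Decimal('1.3')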
It seems to me that a potentially better option might be to adjust the implicit float->Decimal conversion in the Decimal constructor to use the same algorithm as we now use for float.__repr__ [1], where we look for the shortest decimal representation that gives the same answer when rendered as a float. At the moment you have to indirect through str() or repr() to get that behaviour:
>>> from decimal import Decimal as D
>>> 1.3
1.3
>>> D('1.3')
Decimal('1.3')
>>> D(1.3)
Decimal('1.3000000000000000444089209850062616169452667236328125')
>>> D(str(1.3))
Decimal('1.3')
Cheers, Nick.
------------------------------
Message: 3
Date: Mon, 1 Jun 2015 19:00:48 -0700
From: Andrew Barnert <abarnert@yahoo.com>
To: Andrew Barnert <abarnert@yahoo.com>
Cc: Nick Coghlan <ncoghlan@gmail.com>, python-ideas <python-ideas@python.org>
Subject: Re: [Python-ideas] Python Float Update
Message-ID: <90691306-98E3-421B-ABEB-BA2DE05962C6@yahoo.com>
Content-Type: text/plain; charset=us-ascii
On Jun 1, 2015, at 18:27, Andrew Barnert via Python-ideas <python-ideas@python.org> wrote:
On Jun 1, 2015, at 17:08, Nick Coghlan <ncoghlan@gmail.com> wrote:
On 2 Jun 2015 08:44, "Andrew Barnert via Python-ideas" <python-ideas@python.org> wrote:
But the basic idea can be extracted out and Pythonified:
The literal 1.23 no longer gives you a float, but a FloatLiteral, which is either a subclass of float, or an unrelated class that has a __float__ method. Doing any calculation on it gives you a float. But as long as you leave it alone as a FloatLiteral, it has its literal characters available for any function that wants to distinguish FloatLiteral from float, like the Decimal constructor.
The problem that Python faces that Swift doesn't is that Python doesn't use static typing and implicit compile-time conversions. So in Python, you'd be passing around these larger values and doing the slow conversions at runtime. That may or may not be unacceptable; without actually building it and testing some realistic programs it's pretty hard to guess.
Joonas's suggestion of storing the original text representation passed to the float constructor is at least a novel one - it's only the idea of actual decimal literals that was ruled out in the past.
I actually built about half an implementation of something like Swift's LiteralConvertible protocol back when I was teaching myself Swift. But I think I have a simpler version that I could implement much more easily.
Basically, FloatLiteral is just a subclass of float whose __new__ stores its constructor argument. Then decimal.Decimal checks for that stored string and uses it instead of the float value if present. Then there's an import hook that replaces every Num with a call to FloatLiteral.
This design doesn't actually fix everything; in effect, 1.3 actually compiles to FloatLiteral(str(float('1.3'))) (because by the time you get to the AST it's too late to avoid that first conversion). That does solve the problem with 1.3, but it doesn't solve everything in general (e.g., just feed in a number that has more precision than a double can hold but less than your current decimal context can...).
But it just lets you test whether the implementation makes sense and what the performance effects are, and it's only an hour of work,
Make that 15 minutes.
https://github.com/abarnert/floatliteralhack
and doesn't require anyone to patch their interpreter to play with it. If it seems promising, then hacking the compiler so 2.3 compiles to FloatLiteral('2.3') may be worth doing for a test of the actual functionality.
I'll be glad to hack it up when I get a chance tonight. But personally, I think decimal literals are a better way to go here. Decimal(1.20) magically doing what you want still has all the same downsides as 1.20d (or implicit decimal literals), plus it's more complex, adds performance costs, and doesn't provide nearly as much benefit. (Yes, Decimal(1.20) is a little nicer than Decimal('1.20'), but only a little--and nowhere near as nice as 1.20d).
Aside from the practical implementation question, the main concern I have with it is that we'd be trading the status quo for a situation where "Decimal(1.3)" and "Decimal(13/10)" gave different answers.
Yes, to solve that you really need Decimal(13)/Decimal(10)... Which implies that maybe the simplification in Decimal(1.3) is more misleading than helpful. (Notice that this problem also doesn't arise for decimal literals--13/10d is int vs. Decimal division, which is correct out of the box. Or, if you want prefixes, d13/10 is Decimal vs. int division.)
It seems to me that a potentially better option might be to adjust the implicit float->Decimal conversion in the Decimal constructor to use the same algorithm as we now use for float.__repr__ [1], where we look for the shortest decimal representation that gives the same answer when rendered as a float. At the moment you have to indirect through str() or repr() to get that behaviour:
>>> from decimal import Decimal as D
>>> 1.3
1.3
>>> D('1.3')
Decimal('1.3')
>>> D(1.3)
Decimal('1.3000000000000000444089209850062616169452667236328125')
>>> D(str(1.3))
Decimal('1.3')
Cheers, Nick.
------------------------------
Message: 4
Date: Tue, 2 Jun 2015 11:58:09 +1000
From: Steven D'Aprano <steve@pearwood.info>
To: python-ideas@python.org
Subject: Re: [Python-ideas] Python Float Update
Message-ID: <20150602015809.GE932@ando.pearwood.info>
Content-Type: text/plain; charset=us-ascii
On Tue, Jun 02, 2015 at 10:08:37AM +1000, Nick Coghlan wrote:
It seems to me that a potentially better option might be to adjust the implicit float->Decimal conversion in the Decimal constructor to use the same algorithm as we now use for float.__repr__ [1], where we look for the shortest decimal representation that gives the same answer when rendered as a float. At the moment you have to indirect through str() or repr() to get that behaviour:
Apart from the questions of whether such a change would be allowed by the Decimal specification, and the breaking of backwards compatibility, I would really hate that change for another reason.
At the moment, a good, cheap way to find out what a binary float "really is" (in some sense) is to convert it to Decimal and see what you get:
Decimal(1.3) -> Decimal('1.3000000000000000444089209850062616169452667236328125')
If you want conversion from repr, then you can be explicit about it:
Decimal(repr(1.3)) -> Decimal('1.3')
("Explicit is better than implicit", as they say...)
Although in fairness I suppose that if this change happens, we could keep the old behaviour in the from_float method:
# hypothetical future behaviour
Decimal(1.3) -> Decimal('1.3')
Decimal.from_float(1.3) -> Decimal('1.3000000000000000444089209850062616169452667236328125')
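(Decimal.from_float already exists and is exact today, so the hypothetical split above would only need to change the constructor. An illustrative session:)

>>> from decimal import Decimal
>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')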
But all things considered, I don't think we're doing people any favours by changing the behaviour of float->Decimal conversions to implicitly use the repr() instead of being exact. I expect this strategy is like trying to flatten a bubble under wallpaper: all you can do is push the gotchas and surprises to somewhere else.
Oh, another thought... Decimals could gain yet another conversion method, one which implicitly uses the float repr, but signals if it was an inexact conversion or not. Explicitly calling repr can never signal, since the conversion occurs outside of the Decimal constructor and Decimal sees only the string:
Decimal(repr(1.3)) cannot signal Inexact.
But:
Decimal.from_nearest_float(1.5)  # exact
Decimal.from_nearest_float(1.3)  # signals Inexact
That might be useful, but probably not to beginners.
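A rough sketch of how such a method could behave (purely hypothetical: no from_nearest_float exists in the decimal module, and setting the context flag directly is only an approximation of raising the signal):

    from decimal import Decimal, Inexact, getcontext

    def from_nearest_float(f):
        """Hypothetical: convert f via its shortest repr, flagging Inexact
        when that differs from the exact binary value of f."""
        nearest = Decimal(repr(f))   # e.g. 1.3 -> Decimal('1.3')
        if nearest != Decimal(f):    # information lost relative to the exact value
            getcontext().flags[Inexact] = True
        return nearest

    print(repr(from_nearest_float(1.5)))  # Decimal('1.5'); 1.5 is exact in binary
    print(repr(from_nearest_float(1.3)))  # Decimal('1.3'); Inexact flag now set
    print(getcontext().flags[Inexact])    # True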
-- Steve
------------------------------
End of Python-ideas Digest, Vol 103, Issue 14 *********************************************
-- -Surya Subbarao