[Python-ideas] Python Float Update
abarnert at yahoo.com
Tue Jun 2 03:27:32 CEST 2015
On Jun 1, 2015, at 17:08, Nick Coghlan <ncoghlan at gmail.com> wrote:
> On 2 Jun 2015 08:44, "Andrew Barnert via Python-ideas"
> <python-ideas at python.org> wrote:
>> But the basic idea can be extracted out and Pythonified:
>> The literal 1.23 no longer gives you a float, but a FloatLiteral, which is either a subclass of float, or an unrelated class that has a __float__ method. Doing any calculation on it gives you a float. But as long as you leave it alone as a FloatLiteral, it has its literal characters available for any function that wants to distinguish FloatLiteral from float, like the Decimal constructor.
>> The problem that Python faces that Swift doesn't is that Python doesn't use static typing and implicit compile-time conversions. So in Python, you'd be passing around these larger values and doing the slow conversions at runtime. That may or may not be unacceptable; without actually building it and testing some realistic programs it's pretty hard to guess.
> Joonas's suggestion of storing the original text representation passed
> to the float constructor is at least a novel one - it's only the idea
> of actual decimal literals that was ruled out in the past.
I actually built about half an implementation of something like Swift's LiteralConvertible protocol back when I was teaching myself Swift. But I think I have a simpler version that I could implement much more easily.
Basically, FloatLiteral is just a subclass of float whose __new__ stores its constructor argument. Then decimal.Decimal checks for that stored string and uses it instead of the float value if present. Then there's an import hook that replaces every Num with a call to FloatLiteral.
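A minimal sketch of that idea (the names FloatLiteral and to_decimal are mine; the real version would hook this check into decimal.Decimal itself and add the import hook):

```python
from decimal import Decimal

class FloatLiteral(float):
    """Hypothetical float subclass that remembers its literal text."""
    def __new__(cls, literal):
        self = super().__new__(cls, literal)
        self.literal = str(literal)  # keep the original characters around
        return self

def to_decimal(value):
    """Stand-in for the proposed check inside the Decimal constructor."""
    if isinstance(value, FloatLiteral):
        # Use the stored text, not the (already rounded) float value.
        return Decimal(value.literal)
    return Decimal(value)

print(to_decimal(FloatLiteral('1.3')))  # 1.3
print(to_decimal(1.3))                  # the full binary expansion, as today
```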
This design doesn't actually fix everything; in effect, 1.3 compiles to FloatLiteral(str(float('1.3'))) (because by the time you get to the AST it's too late to avoid that first conversion). That does solve the problem for 1.3 itself, since the shortest-repr str round-trips it, but it doesn't solve everything in general (e.g., just feed in a number that has more precision than a double can hold but less than your current decimal context can...).
But it lets you test whether the implementation makes sense and what the performance effects are, it's only an hour of work, and it doesn't require anyone to patch their interpreter to play with it. If it seems promising, then hacking the compiler so 2.3 compiles to FloatLiteral('2.3') may be worth doing for a test of the actual functionality.
I'll be glad to hack it up when I get a chance tonight. But personally, I think decimal literals are a better way to go here. Decimal(1.20) magically doing what you want still has all the same downsides as 1.20d (or implicit decimal literals), plus it's more complex, adds performance costs, and doesn't provide nearly as much benefit. (Yes, Decimal(1.20) is a little nicer than Decimal('1.20'), but only a little--and nowhere near as nice as 1.20d).
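For comparison, here's today's behaviour, which is what makes the string form necessary: by the time Decimal sees the argument, the float has already lost the literal text.

```python
from decimal import Decimal

exact = Decimal('1.20')    # built from the literal text
via_float = Decimal(1.20)  # built from the already-rounded double

print(exact)      # 1.20
# via_float is the exact value of the nearest double to 1.2,
# which begins 1.1999...
print(via_float)
```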
> Aside from the practical implementation question, the main concern I
> have with it is that we'd be trading the status quo for a situation
> where "Decimal(1.3)" and "Decimal(13/10)" gave different answers.
Yes, to solve that you really need Decimal(13)/Decimal(10)... Which implies that maybe the simplification in Decimal(1.3) is more misleading than helpful. (Notice that this problem also doesn't arise for decimal literals--13/10d is int vs. Decimal division, which is correct out of the box. Or, if you want prefixes, d13/10 is Decimal vs. int division.)
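The distinction is easy to demonstrate with today's Decimal: dividing decimals keeps the exact answer, while dividing floats first hands Decimal a binary approximation.

```python
from decimal import Decimal

# Exact decimal division: 13/10 is exactly representable in decimal.
print(Decimal(13) / Decimal(10))  # 1.3

# Float division happens first, so Decimal receives the nearest double
# to 1.3, whose exact value is slightly above 1.3.
print(Decimal(13 / 10))
```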
> It seems to me that a potentially better option might be to adjust the
> implicit float->Decimal conversion in the Decimal constructor to use
> the same algorithm as we now use for float.__repr__ , where we look
> for the shortest decimal representation that gives the same answer
> when rendered as a float. At the moment you have to indirect through
> str() or repr() to get that behaviour:
>>>> from decimal import Decimal as D
>  http://bugs.python.org/issue1580
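(The interpreter session quoted above is truncated in the archive; a reconstruction of the indirection Nick describes, under my own variable names, would look like this:)

```python
from decimal import Decimal as D

# Direct conversion uses the exact binary value of the float:
exact_binary = D(1.3)

# Indirecting through repr() uses the shortest string that round-trips
# to the same float -- the behaviour proposed for the constructor itself:
via_repr = D(repr(1.3))

print(via_repr)  # 1.3
print(exact_binary == via_repr)  # False
```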