[Python-ideas] Settable defaulting to decimal instead of float
George Fischhof
george at fischhof.hu
Thu Jan 12 08:09:34 EST 2017
2017-01-12 13:13 GMT+01:00 Stephan Houben <stephanh42 at gmail.com>:
> Something like:
>
> from __syntax__ import decimal_literal
>
> which would feed the rest of the file through the "decimal_literal"
> transpiler.
> (and not influence anything in other files).
>
> Not sure if you would want to support multiple transpilers per file.
>
> Note that Racket has something similar with their initial "#lang ..."
> directive.
> That only allows a single "language". Possibly wisely so.
>
> Stephan
>
>
> 2017-01-12 12:59 GMT+01:00 אלעזר <elazarg at gmail.com>:
>
>> I think such proposals are special cases of a general theme: a compiler
>> pragma, similar to "from __future__", to make Python support
>> domain-specific syntax in the current file. Whether it's decimal literals
>> or matrix/vector literals etc.
>>
>> I think it will be nice to make some tool, external to Python, that will
>> allow defining such "sibling languages" (transpiled into Python) easily and
>> uniformly.
>>
>> Elazar
>>
>> On Thu, 12 Jan 2017 at 13:21, Paul Moore <p.f.moore at gmail.com> wrote:
>>
>>> On 12 January 2017 at 10:28, Victor Stinner <victor.stinner at gmail.com>
>>> wrote:
>>> > George requested this feature on the bug tracker:
>>> > http://bugs.python.org/issue29223
>>> >
>>> > George was asked to start a discussion on this list. I posted the
>>> > following comment before closing the issue:
>>> >
>>> > You are not the first one to propose the idea.
>>>
>>> OK, but without additional detail (for example, how would the proposed
>>> flag work, if the main module imports module A, then would float
>>> literals in A be decimal or binary? Both could be what the user wants)
>>> it's hard to comment. And as you say, most of this has been discussed
>>> before, so I'd like to see references back to the previous discussions
>>> in any proposal, with explanations of how the new proposal addresses
>>> the objections raised previously.
>>>
>>> Paul
>>
>>
> Most of the time, when one of my students talks to me about decimal vs
> binary, they're thinking that a decimal literal (or converting the
> default non-integer literal to be decimal) is a panacea to the "0.1 +
> 0.2 != 0.3" problem. Perhaps the real solution is a written-up
> explanation of why binary floating point is actually a good thing, and
> not just a backward-compatibility requirement?
>
> ChrisA
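
To illustrate Stephan's "decimal_literal" transpiler idea, here is a
minimal sketch of the kind of source rewrite such a tool could perform,
using only the standard tokenize module. The "from __syntax__ import
decimal_literal" hook itself does not exist; this only shows the
token-level transformation it would imply.

import io
import tokenize

def rewrite_float_literals(source):
    # Wrap every plain float literal (e.g. 0.1) in a Decimal('...') call,
    # keeping the literal's exact decimal text so no binary rounding
    # ever happens.  A real transpiler would also handle exponent and
    # imaginary forms and would insert "from decimal import Decimal"
    # at the top of the rewritten file.
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if (tok.type == tokenize.NUMBER
                and '.' in tok.string
                and not tok.string.lower().endswith('j')):
            out.append(tok._replace(string="Decimal(%r)" % tok.string))
        else:
            out.append(tok)
    return tokenize.untokenize(out)

print(rewrite_float_literals("total = 0.1 + 0.2\n"))
# total = Decimal('0.1') + Decimal('0.2')
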
from __future__ import use_decimal_instead_of_float
or any similar import would be very welcome.
The most important thing, from my point of view, is that I do not want
to convert every variable to Decimal by hand every time.
Accuracy is important to me (yes, 0.1 + 0.2 should equal 0.3, no more,
no less ;-) ).
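
For illustration, this is the behaviour today with the standard library
(nothing hypothetical here): binary floats cannot represent 0.1 exactly,
and even an explicit conversion has to go through a string to stay exact.

from decimal import Decimal

print(0.1 + 0.2)                         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                  # False
print(Decimal('0.1') + Decimal('0.2'))   # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True

# Converting an existing float does not help, because the binary
# rounding error is already baked in:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
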
And since it has been mentioned, I would like to ask why binary floating
point is "better". It is faster, I agree, but why "better"?
In my work I create test automation (I am a senior QA engineer); the
fastest test case runs for about 1-2 minutes. I do not know the exact
time difference between binary and decimal arithmetic, but I do not care
about it: my tests would run some microseconds faster, and that does not
matter on the scale of minutes.
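
For anyone who does want to see the actual cost on their own machine,
here is a quick (machine-dependent) comparison using the standard
timeit module; the numbers will vary, so none are quoted here.

import timeit

float_time = timeit.timeit("a + b", setup="a = 0.1; b = 0.2",
                           number=1000000)
decimal_time = timeit.timeit(
    "a + b",
    setup="from decimal import Decimal; a = Decimal('0.1'); b = Decimal('0.2')",
    number=1000000)

print("float  :", float_time, "seconds per 1,000,000 additions")
print("Decimal:", decimal_time, "seconds per 1,000,000 additions")
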
In the tests I calculate with numbers that have 4 decimal digits, and I
need an exact match. ;-)
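
For that kind of check, the decimal module already gives an exact
4-digit comparison once everything is quantized to the same exponent.
A small sketch with made-up test values (the helper name is mine, not
part of any library):

from decimal import Decimal, ROUND_HALF_UP

FOUR_PLACES = Decimal('0.0001')

def as_4dp(text):
    # Parse the value from text (never from a float!) and round it
    # to exactly four decimal places.
    return Decimal(text).quantize(FOUR_PLACES, rounding=ROUND_HALF_UP)

expected = as_4dp('12.3456')
measured = as_4dp('12.34560')   # same quantity, extra trailing zero

assert measured == expected     # exact match, no binary rounding noise
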
I also have a new colleague who used to do image analysis; they
performed a great many calculations and used a (non-Python) library
that is exact as well, because accuracy was more important to them too.
BR
George