Hi list,
when dividing two integers, the result is a float, which means we immediately lose precision. This is not good if you want to use code which supports higher precision. Decimals come to mind, but also sympy. This loss of precision could be avoided if the result of a division is a fraction instead: a fraction is exact.
So when writing Decimal(1/3), currently we lose precision in the division, something the Decimal module cannot undo. With my proposal, the full precision is retained, and it works as expected. This is even clearer for sympy, a package for symbolic calculations: currently, sympy cannot do much with "1/2 * m * v**2", although it looks like a perfectly fine formula, because sympy only sees "0.5" instead of "1/2", which is not usable in symbolic calculations.
I am aware that this would be a huge change. But we have had such a change in the past, when integer division went from floor division in Python 2 to true (float) division in Python 3. Compared to that, this is actually a small change: the value of the result differs only by the small precision loss of floats. The bigger problem is that some code may rely on the result being a float. This can be fixed easily by simply calling float(), which is also backwards-compatible; it works on older versions of Python as well.
I have prototyped this here: https://github.com/tecki/cpython/tree/int-divide-fraction The prototype uses the fractions.Fraction class written in Python as result for integer true divisions. I expected that to go horribly wrong, but astonishingly it did not. Only a small number of tests of Python fail, mostly those where it is explicitly tested whether an object is a float. So I lowered the bar even more and tried to compile and test numpy. And also there, except some tests that very explicitly require floats, it worked fine.
To showcase how that would look, let me give an example session:
>>> 5/6-4/15
17/30
>>> a=22/7
>>> f"{a}"
'22/7'
>>> f"{a:f}"
'3.142857'
>>> from decimal import Decimal
>>> Decimal(1/3)
Decimal('0.3333333333333333333333333333')
As a comparison, the same with current Python:
>>> 5/6-4/15
0.5666666666666667
>>> a=22/7
>>> f"{a}"
'3.142857142857143'
>>> f"{a:f}"
'3.142857'
>>> from decimal import Decimal
>>> Decimal(1/3)
Decimal('0.333333333333333314829616256247390992939472198486328125')
Cheers
Martin
On Fri, 14 May 2021 at 09:24, Martin Teichmann martin.teichmann@gmail.com wrote:
Hi list,
when dividing two integers, the result is a float, which means we immediately lose precision. This is not good if you want to use code which supports higher precision. Decimals come to mind, but also sympy. This loss of precision could be avoided if the result of a division is a fraction instead: a fraction is exact.
"...Although practicality beats purity."
This is a nice idea, but I think this ship has sailed. Performance reasons apart, I think that most builtin calls (as in compiled from C or other native code) that expect a float simply won't work with a Fraction object as it is today in Python.

Yes, if not for the drawbacks this could be nice - but it spells an avalanche of trouble, much of it due to the fact that the Fraction type, and the concepts of the numeric tower in general, have not seen much active use in a lot of the domains Python is used in.
If you take all examples on the static typing frontend, for example, they all check for "float" when requiring non-integer numbers. (Instead of, say "numbers.Real") - so almost all type-annotated code would break with this change.
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-leave@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Code of Conduct: http://python.org/psf/codeofconduct/
Hi Joao,
I actually had the same fear as you. But as stated before, I managed to run most of the tests, and even compile and test numpy, with only some 30 tests failing. This means the entire tool chain - setuptools, pytest, cython and much more - just worked. No, I did not need to touch any of them; even numpy itself runs fine except for the failing tests.
The reason for that is that nearly all builtin code uses PyFloat_AsDouble, which happily accepts a Fraction. I did not have to change that; it worked out of the box.
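This can be seen from pure Python in an unmodified interpreter: anything that goes through PyFloat_AsDouble, such as the math module, already accepts a Fraction, because PyFloat_AsDouble falls back to the argument's __float__ method, which fractions.Fraction provides. A minimal sketch:

```python
import math
from fractions import Fraction

half = Fraction(1, 2)

# math.sin converts its argument via PyFloat_AsDouble, which falls
# back to Fraction.__float__ -- no patching required.
print(math.sin(half))         # same result as math.sin(0.5)
print(float(Fraction(1, 3)))  # 0.3333333333333333
```

This is why so much third-party code kept working in the prototype: most C-level consumers never check the concrete type, only that the object converts to a double.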
In the beginning I just did all that for fun just to see how it works, but after I had seen that it actually works, I decided I could at least propose it.
Cheers
Martin
I like the idea of supporting fractions to maintain more precision in calculations. I wonder whether this should fall outside the scope of the standard library and be implemented in an external library; I'd lean toward inclusion in the stdlib.
A couple of thoughts, which should result in zero breakage of existing code:
1. Mirror `decimal.Decimal`, allowing expression of a fraction with something like `fraction.Fraction(numerator, denominator)`.
2. Add a new division operator that yields `fraction.Fraction`.
Paul
On Fri, May 14, 2021 at 11:23 AM Paul Bryan pbryan@anode.ca wrote:
I like the idea of supporting fractions to maintain more precision in calculations. I wonder if this should fall outside the scope of a standard library and be implemented in an external library. I'd lean toward inclusion in stdlib.
It already exists.
https://docs.python.org/3/library/fractions.html
André Roberge
And I've just learned something. 🙂
On Sat, May 15, 2021 at 12:23 AM Paul Bryan pbryan@anode.ca wrote:
I like the idea of supporting fractions to maintain more precision in calculations. I wonder if this should fall outside the scope of a standard library and be implemented in an external library. I'd lean toward inclusion in stdlib.
A couple of thoughts, which should result in zero breakage of existing code:
- Mirror `decimal.Decimal`, allowing expression of fraction with something like `fraction.Fraction(numerator, denominator)`.
That can already be done (btw it's "fractions.Fraction"), although I tend to write it as Fraction(numerator) / denominator, since division of a Fraction and an integer works as expected.
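For reference, both spellings work in today's fractions module and produce the same exact value:

```python
from fractions import Fraction

a = Fraction(1, 3)   # explicit numerator/denominator
b = Fraction(1) / 3  # Fraction divided by int stays exact

print(a == b)              # True
print(a + Fraction(1, 6))  # 1/2 -- arithmetic stays exact
```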
- Add a new division operator that yields `fraction.Fraction`.
Alternatively: Add a Fraction literal. Like an imaginary literal, this would be attached to one half or the other, so what most humans would think of as a literal is actually two of them. It could be something like:
1F / 3
or
1 / 3F
It would work just as well either way, but the stdlib would pick one of them to recommend (and then change the repr of a Fraction to produce this).
The downside would be that Fraction code would no longer be buried away in the stdlib - it'd be a mandatory part of the running interpreter. I'm not sure how much impact this would have on startup time.
This has been suggested a number of times, and IMO it'd be a far easier path forward than either redefining integer division or creating a new operator.
ChrisA
Martin wrote:
when dividing two integers, the result is a float, which means we immediately lose precision. This is not good if you want to use code which supports higher precision.
I am of course in favour of keeping precision, if there is no cost. But here there is a cost. Arbitrary precision arithmetic (and roots) uses more memory and takes longer. Therefore, the system must allow the programmer to choose what's wanted (either explicitly or implicitly).
Python has been set up to make 1/3 evaluate to a float (an implicit choice), and have Fraction(1, 3) be a fraction (an explicit choice).
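The two choices side by side, in current Python:

```python
from fractions import Fraction

implicit = 1 / 3           # float: rounded to 53 bits of mantissa
explicit = Fraction(1, 3)  # exact rational, by explicit request

print(implicit * 3)            # 1.0, but only by luck of rounding
print(explicit * 3)            # exactly 1
print(implicit == explicit)    # False: the float is not exactly 1/3
```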
I don't think it fair to say that this decision is "not good". Rather, it's good for some use cases and not so good for others. On balance, I'm happy with that decision.
That said, as a pure mathematician who does integer calculations, I'd like numpy to have better capabilities for integer linear algebra.
I hope Martin that you agree with me, that this change would not be an improvement for everyone.
Jonathan Fine said
But here there is a cost. Arbitrary precision arithmetic (and roots) uses more memory and takes longer.
I think a wider (perhaps naive) interpretation of the idea could be, "can we defer certain lossy calculations until they have to be done?"
Fraction may do eager arbitrary precision arithmetic (I don't know), but is it necessarily the case that integer division has to be eager? Could we have our cake and eat it, too?
(Sorry for the dupe, Jonathan, I need to dig through my email configuration to learn why "reply" doesn't do what I think it should do.)
On Sat, May 15, 2021 at 1:16 AM Michael Smith michael@smith-li.com wrote:
Jonathan Fine said
But here there is a cost. Arbitrary precision arithmetic (and roots) uses more memory and takes longer.
I think a wider (perhaps naive) interpretation of the idea could be, "can we defer certain lossy calculations until they have to be done?"
Fraction may do eager arbitrary precision arithmetic (I don't know), but is it necessarily the case that integer division has to be eager? Could we have our cake and eat it, too?
Well, integer division has to return *something*, there has to be a value for it. So the only benefit you might get would be that 10/8 could be stored as the fraction 10/8 instead of reducing it to 5/4, which would save some effort at the time of division at the cost of worsening addition.
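Today's Fraction does indeed reduce eagerly, at construction time:

```python
from fractions import Fraction

f = Fraction(10, 8)

# The gcd is divided out in the constructor, so the stored value
# is already in lowest terms.
print(f)                            # 5/4
print(f.numerator, f.denominator)   # 5 4
```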
ChrisA
Hi Jonathan,
I would agree with you if what I was asking was arbitrary precision math. But this is not what I am asking for, what I am asking for is that integer true division should result in a Fraction. The difference is huge: arbitrary precision math is costly, and while arbitrarily precise, that "arbitrary" does not include "exact". Fractions, on the other hand, are exact.
The speed argument also does not hold: whenever you do actual math on the fractions, they are seamlessly converted to floats. Anybody doing anything speed-critical will use something like numpy anyway, and there is no discussion there: numpy uses floats, and I am not proposing to change that.
To illustrate what I am talking about, see how it looks like in reality (yes, this is the actual Python interpreter with my changes, not artificially generated stuff):
>>> from math import sin, sqrt
>>> sin(1/2)
0.479425538604203
>>> sqrt(1/4)
0.5
>>> (1/2)**2
1/4
>>> (1/4)**(1/2)
0.5
>>> (1/2)**-2
4
Now you can ask: if everything is converted to floats anyway, where is the benefit? The benefit is when you use symbolic math (or, indeed, arbitrary precision math like the Decimal example I already mentioned):
>>> from sympy import symbols, sqrt, sin
>>> x = symbols("x")
>>> sin(1/2)
sin(1/2)
>>> 1/2 + x
x + 1/2
>>> 2/3 * x
2*x/3
>>> sqrt(1/4)
1/2
So, my argument is: this change will benefit many people, and be only a small disadvantage to others. That disadvantage is that yes, there will be places where it does not work, so libraries need to get fixed. But this is the case with every Python update. Once it is working, there will be no disadvantage anymore.
Cheers
Martin
On Fri, 2021-05-14 at 15:15 +0000, Martin Teichmann wrote:
So, my argument is, this change will benefit many people, and be of only a small disadvantage to others. That disadvantage is that yes, there will be places where it does not work, so libraries need to get fixed. But this is the case with every Python update. Once it will be working, there will be no disadvantage anymore.
Why not support a new division operator and/or fraction literal to avoid breakage altogether?
Paul
Hi Paul,
I would be fine with a new division operator or fraction literal as well, and in the beginning this is what I actually wanted to propose.
But then I started prototyping, and given that it is not so easy to change the parser, I just changed the normal division operator. And interestingly, nearly nothing broke.
The biggest compatibility problem I actually experienced was the completely unrelated breaking of PyCode_New in recent commits of CPython, which made using cython, and thus numpy, impossible. That was so hard to tackle that I gave up and simply branched off an older version of CPython. What I want to say with that is: we happily break compatibility for more minor stuff. I think my proposal would be a major improvement, which would warrant breaking compatibility, especially because this break of compatibility is astonishingly small. As said above, I already installed unmodified numpy, sympy, cython and all their tool chains.
Cheers
Martin
On Fri, May 14, 2021 at 12:40 PM Martin Teichmann <martin.teichmann@gmail.com> wrote:
Hi Paul,
I would be fine with a new division operator or fraction literal as well, and in the beginning this is what I actually wanted to propose.
You can already experiment with this.
First, install a few packages
python -m pip install ideas  # will also install token-utils
python -m pip install fraction-literal
Next, start your standard Python interpreter and do the following:
from ideas import experimental_syntax_encoding
from ideas import console
console.start()

Configuration values for the console:
    transform_source: <function transform_source at 0x00E61FA8>
--------------------------------------------------
Ideas Console version 0.0.19. [Python version: 3.7.8]

~>> from experimental-syntax import fraction_literal
~>> 2/3F
Fraction(2, 3)
~>>
Appending an "F" after an integer literal is all that is needed.
André Roberge
On Fri, May 14, 2021 at 1:14 PM André Roberge andre.roberge@gmail.com wrote:
You can already experiment with this.
Thanks André, I tried it out and it works great.
Do the appended capital Fs make these numbers look like some kind of hexadecimal representation or is it just me?
--- Ricky.
"I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler
On Fri, May 14, 2021 at 2:09 PM Ricky Teachey ricky@teachey.org wrote:
On Fri, May 14, 2021 at 1:14 PM André Roberge andre.roberge@gmail.com wrote:
You can already experiment with this.
Thanks Andre I tried it out and it works great.
Do the appended capital Fs make these numbers look like some kind of hexadecimal representation or is it just me?
Actually it just hit me: I don't know if this will be considered a big difficulty or not, but currently you can write division operations using hexadecimal numbers:
>>> 0xF / 0xF
1.0
And this works in Andre's experimental implementation like so for fractions of hex numbers:
>>> 0xF / 0xF F
Fraction(1, 1)
But it looks... funny. I don't know if it is good or bad. It just is.
I'll vote: bad. I don't think non-integer representations of fraction attributes should be considered valid when expressing as a literal.
s/non-integer/non-decimal-integer/
On Sat, May 15, 2021 at 4:18 AM Ricky Teachey ricky@teachey.org wrote:
Actually it just hit me: currently you can write division operations using hexadecimal numbers:

>>> 0xF / 0xF
1.0

And this works in Andre's experimental implementation like so for fractions of hex numbers:

>>> 0xF / 0xF F
Fraction(1, 1)

But it looks... funny.
I can't imagine that this would be a normal thing, and I wouldn't be averse to specifying that a Fraction literal consist of a sequence of digits (and optionally underscores) followed immediately, without whitespace, by the letter F. Basically the same kinds of rules as for an imaginary literal, only without the decimal point. Keep it simple, keep it easy.
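The rule described here can be sketched as a token pattern. This is a hypothetical grammar, mirroring the imaginary-literal rule but without the decimal point; only plain decimal digits (with optional underscores) qualify, so hex literals never collide with the suffix:

```python
import re

# Hypothetical fraction-literal token: decimal digits, optional
# underscores between digits, immediately followed by 'F'.
FRACTION_LITERAL = re.compile(r"\d(?:_?\d)*F")

print(bool(FRACTION_LITERAL.fullmatch("3F")))      # True
print(bool(FRACTION_LITERAL.fullmatch("1_000F")))  # True
print(bool(FRACTION_LITERAL.fullmatch("0xFF")))    # False: hex digits excluded
print(bool(FRACTION_LITERAL.fullmatch("0xF F")))   # False: no whitespace allowed
```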
ChrisA
On Fri, May 14, 2021 at 3:09 PM Ricky Teachey ricky@teachey.org wrote:
Do the appended capital Fs make these numbers look like some kind of hexadecimal representation or is it just me?
No, it was F for Fraction. Any integer followed by "F" will be transformed into a fraction.Fraction instance.
You can also install decimal-literal and use D as a suffix.
I had mentioned this in a previous message on this list (See https://www.mail-archive.com/python-ideas@python.org/msg24946.html)
André
The memory simply blows up too fast for this to be practical (at least as a default): a float is always 64 bits, while a fraction is unboundedly large when numerator and denominator are coprime.
A toy example with a half dozen operations won't make huge fractions. A loop over a million operations will often be a gigantic memory hog.
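The blow-up is easy to illustrate: when the denominators are pairwise coprime, nothing cancels, so the exact denominator of a sum is the full product of the inputs' denominators, and it grows with every operation while a float stays at 64 bits. A small sketch:

```python
from fractions import Fraction

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

# Sum of 1/p over pairwise-coprime denominators: the reduced
# denominator is exactly the product of all fifteen primes.
total = sum((Fraction(1, p) for p in primes), Fraction(0))

prod = 1
for p in primes:
    prod *= p

print(total.denominator == prod)       # True
print(total.denominator.bit_length())  # 60 bits after just 15 additions
```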
That said, Chris's idea for a literal spelling of "Fraction" is very appealing. One extra letter or symbol could indicate that you want to work in the Fraction domain rather than floating point. That's a perfectly reasonable decision for a user to make.
On Fri, May 14, 2021, 11:18 AM Martin Teichmann martin.teichmann@gmail.com wrote:
Hi Jonathan,
I would agree with you if what I was asking was arbitrary precision math. But this is not what I am asking for, what I am asking for is that integer true division should result in a Fraction. The difference is huge: arbitrary precision math is costly, and while arbitrarily precise, that "arbitrary" does not include "exact". Fractions, on the other hand, are exact.
The speed argument also does not hold: whenever you do actual math on the fractions, they will be seemlessly converted to floats. Anybody doing anything speed will use something like numpy anyways, and there is no discussion there, numpy uses floats, and I am not proposing to change that.
To illustrate what I am talking about, see how it looks like in reality (yes, this is the actual Python interpreter with my changes, not artificially generated stuff):
>>> from math import sin, sqrt >>> sin(1/2) 0.479425538604203 >>> sqrt(1/4) 0.5 >>> (1/2)**2 1/4 >>> (1/4)**(1/2) 0.5 >>> (1/2)**-2
4
Now you can ask, if everything is converted to floats anyways, where is the benefit? The benefit is when you use symbolic math (or, indeed, arbitrary precision math like the Decimal example I already mentioned):
>>> from sympy import symbols, sqrt, sin >>> x = symbols("x") >>> sin(1/2) sin(1/2) >>> 1/2 + x x + 1/2 >>> 2/3 * x 2*x/3 >>> sqrt(1/4) 1/2
So, my argument is, this change will benefit many people, and be of only a small disadvantage to others. That disadvantage is that yes, there will be places where it does not work, so libraries need to get fixed. But this is the case with every Python update. Once it will be working, there will be no disadvantage anymore.
Cheers
Martin
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-leave@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/UPHKJP...
Code of Conduct: http://python.org/psf/codeofconduct/
On Fri, 14 May 2021 at 16:29, David Mertz mertz@gnosis.cx wrote:
The memory simply blows up too fast for this to be practical (at least as a default): a float is always 64 bits, while a fraction is unboundedly large when numerator and denominator are coprime.
A toy example with a half dozen operations won't make huge fractions. A loop over a million operations will often be a gigantic memory hog.
+1 on this. My experience has been that fraction classes are a lot less useful in (general) practical situations than people instinctively assume.
That said, Chris's idea for a literal spelling of "Fraction" is very appealing. One extra letter or symbol could indicate that you want to work in the Fraction domain rather than floating point. That's a perfectly reasonable decision for a user to make.
Agreed, it is appealing. The problem (and this is not a new suggestion, it's come up a number of times) is that of having language syntax depend on non-builtin classes. So either you have to *also* propose making the fractions module a builtin, or you very quickly get sucked into "why not make this mechanism more general, so that libraries can define their own literals?"
Scope creep is nearly always what kills these proposals. Or the base proposal is too specialised to gain enough support.
Personally, I can see value in fraction, decimal and regex literals. But I can live without them.
Paul
Hi Paul,
Agreed, it is appealing. The problem (and this is not a new suggestion, it's come up a number of times) is that of having language syntax depend on non-builtin classes. So either you have to *also* propose making the fractions module a builtin
That is absolutely what I would like to have. The fractions module is very small, it can easily be made a builtin. This would also speed it up significantly, I hope; probably close to the speed of floats, given that most of the time spent on floats is within the interpreter anyway.
Personally, I can see value in fraction, decimal and regex literals. But I can live without them.
This is why I proposed not to make a new literal. With my proposal, I already served two of your proposed use cases: fractions and decimal. Decimal(1/3) would be perfectly fine. Plus symbolic math.
Effectively what I am proposing is lazy evaluation of the division operator, using fractions as a mathematical tool.
Cheers
Martin
On Fri, 14 May 2021 at 16:54, Martin Teichmann martin.teichmann@gmail.com wrote:
That is absolutely what I would like to have. The fractions module is very small, it can easily be made a builtin. This would also speed it up significantly, I hope; probably close to the speed of floats, given that most of the time spent on floats is within the interpreter anyway.
Builtins have to be in C, with very few exceptions. That makes it harder for alternative implementations, who now have to write their own implementation rather than just grabbing the pure Python stdlib version. It also makes maintenance harder, and means that bugs take longer to get fixed, as fewer people know how to maintain the code.
This is why I proposed not to make a new literal. With my proposal, I already served two of your proposed use cases: fractions and decimal. Decimal(1/3) would be perfectly fine. Plus symbolic math.
But I don't want Decimal(1/3), I want Decimal(0.01).
Effectively what I am proposing is lazy evaluation of the division operator, using fractions as a mathematical tool.
Well, you don't need fractions (as in the fractions *module* and the Fraction *type*) for lazy evaluation. You can just use a tuple of numerator and denominator. But that's just details. What's more critical is when you do the actual conversion to float. In other words, how lazy would this be in reality?
What about
isinstance(1/3, float)
Decimal(1/3)
If you convert to float too quickly, there's no benefit. If you delay, there's a period where the difference is detectable. And that's a breaking change. I'm very impressed that you made the actual interpreter change and checked both the test suite and real-world code like numpy, but that doesn't mean nothing changed.
At a minimum, I'd suggest that you need to specify *exactly* when the division is actually executed.
But actually, I suspect that you really *do* want 1/3 to be a `fractions.Fraction` object. So see above. And in addition, you still have the question about when, and how, the conversion to float happens. Because Fraction doesn't magically convert to a float - so the behaviour of the object returned from the expression 1/3 very definitely *isn't* the behaviour of Fraction(1,3).
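For concreteness, here is a rough sketch of the kind of object the prototype seems to return (the `LazyDiv` name and the formatting rule are my guesses, based on the f-string session at the top of the thread, not the actual patch):

```python
from fractions import Fraction

class LazyDiv(Fraction):
    """Hypothetical: an exact ratio that only behaves like a float
    when float-specific formatting is explicitly requested."""

    def __format__(self, spec):
        if spec:  # e.g. f"{a:f}" forces float formatting
            return format(float(self), spec)
        return str(self)  # f"{a}" keeps the exact form

a = LazyDiv(22, 7)
print(f"{a}")    # 22/7
print(f"{a:f}")  # 3.142857
```

The open question raised here remains either way: such an object still fails isinstance(x, float), so the laziness is detectable.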
I guess I'm confused. I'm -1 on a proposal that I don't understand, so I'll have to wait for a clearer explanation of what you're suggesting.
From other emails:
I actually have a new idea about what a fraction literal could look like: just write 2/3 as opposed to 2 / 3, and you get a fraction. So: no spaces: fraction; with spaces: float.
That would break code that currently writes 2/3 and expects a float.
I'm fine with speculation, but I think you need to be a *lot* more conscious of what counts as backward incompatibility here. I'm not saying we can't break backward compatibility, but we need to do so consciously, and having assessed the consequences, not just because we assumed "no-one will do that".
Also consider the section in the PEP format "How would we teach this?" How would you explain to someone with no programming background, maybe a high school student, that 3/4 and 3 / 4 mean different things in Python? Your audience might not even know that there is a difference between "fraction", "decimal" and "float" at this stage.
Paul
Hi Paul,
Also consider the section in the PEP format "How would we teach this?" How would you explain to someone with no programming background, maybe a high school student, that 3/4 and 3 / 4 mean different things in Python? Your audience might not even know that there is a difference between "fraction", "decimal" and "float" at this stage.
Well, I think a high school student would be the one with the least problems: s/he would just realize "wow, that thing can do fractions! I can do my math homework with that!" And I can tell you, kids will be the first ones to figure out that if you type spaces you get decimals, if you do not type spaces you get fractions. They are used to this kind of stuff from their math class, from calculators (or their phones, I guess).
So, if you show them (the following is fake)
>>> 1/2 + 1/3
5/6
>>> 1 / 2 + 1 / 3
0.833333
They will immediately spot what's going on.
On Fri, 14 May 2021 at 20:06, Martin Teichmann martin.teichmann@gmail.com wrote:
Also consider the section in the PEP format "How would we teach this?" How would you explain to someone with no programming background, maybe a high school student, that 3/4 and 3 / 4 mean different things in Python? Your audience might not even know that there is a difference between "fraction", "decimal" and "float" at this stage.
Well, I think a high school student would be the one with the least problems: s/he would just realize "wow, that thing can do fractions! I can do my math homework with that!" And I can tell you, kids will be the first ones to figure out that if you type spaces you get decimals, if you do not type spaces you get fractions. They are used to this kind of stuff from their math class, from calculators (or their phones, I guess).
OK, maybe I chose a bad example. Maybe I should have said "PL/SQL programmers who don't really understand much of the theory behind programming, but are just interested in getting the job done, who have picked up some Python code written by someone else and have to fix it because the automation script broke and the normal guy isn't in the office today". I can give you names, if you want :-)
So, if you show them (the following is fake)
>>> 1/2 + 1/3
5/6
>>> 1 / 2 + 1 / 3
0.833333
They will immediately spot what's going on.
In effect you're saying "we don't need to teach it because it's obvious". I disagree. There's certainly some audiences who won't find it intuitive. How do we teach it to them?
But I'm not really interested in going into ever more detail on this point to be honest. All I'm trying to say is "I think that having 1/2 mean something different than 1 / 2 is unacceptable, because it's too easy for people to misunderstand". You may choose to disagree with me, which is fine. But at some point, the responsibility has to be on you to persuade people that your idea is good, so you can't do that indefinitely.
I'd still like to see a more precise explanation of your proposal. I'm finding that every time I try to guess details of what you mean, I'm guessing wrong, and that's not productive (for me or for you).
Paul
On 5/14/21 1:07 PM, Paul Moore wrote:
All I'm trying to say is "I think that having 1/2 mean something different than 1 / 2 is unacceptable, because it's too easy for people to misunderstand".
Agreed. Significant white space for control flow is great; for specifying data types it's horrible.
-- ~Ethan~
Martin wrote
So, if you show them (the following is fake)
>>> 1/2 + 1/3
5/6
>>> 1 / 2 + 1 / 3
0.833333
They will immediately spot what's going on.
I'm sighted. I can see the difference. I suspect a blind person using a screen reader would struggle a lot to spot the difference. (I don't know enough about screen readers to be sure.)
On Fri, May 14, 2021, 4:31 PM Jonathan Fine jfine2358@gmail.com wrote:
>>> 1/2 + 1/3
5/6
>>> 1 / 2 + 1 / 3
0.833333
I'm sighted. I can see the difference. I suspect a blind person using a screen reader would struggle a lot to spot the difference. (I don't know enough about screen readers to be sure.)
There's also the fact that approximately 50% of the Python code I've written will now be broken. If you reverse the meaning, it will be the complementary 50%.
-100 on this idea.
Actually, even though I suggested before that I liked `1F/3`, or perhaps `1/3F` (or either with type coercion rules), I think I've gone to -0.5 on that also.
Basically, the fact I can do (and often have done) this makes it needless:
from fractions import Fraction as F
x = F(1, 3)
Yours, David...
P.S. A very long time ago, Python only had floor division of ints. I wrote plenty of `1.0*x/y` code to work around that. It wasn't the worst thing, but I understand the convenience of promotion to float.
However, the existing `/` was given a backwards incompatible meaning of "true division" and the new `//` operator took on floor division. I still believe that was the wrong way around. I thought the existing operator should keep the same meaning, and a new operator could mean something else. But so it goes; I adjusted, of course.
Nowadays I remember the distinction by thinking of the first slash as division, and the second one as the "floor operator." It's a mnemonic, not the actual machinery, but it works for my brain.
David Mertz writes:
On Fri, May 14, 2021, 4:31 PM Jonathan Fine jfine2358@gmail.com wrote:
>>> 1/2 + 1/3
5/6
>>> 1 / 2 + 1 / 3
0.833333
I'm sighted. I can see the difference. I suspect a blind person using a screen reader would struggle a lot to spot the difference.
This is a really good point. I think a screen reader that reads out whitespace would be really annoying if it were more frequent than, say, paragraph breaks.
However, the existing `/` was given a backwards incompatible meaning of "true division" and the new `//` operator took on floor division. I still believe that was the wrong way around. I thought the existing operator should keep the same meaning,
That *would* have been true if str = Unicode didn't break the world. But by freshman year in college students expect real division (either with a fractional result or a float result). I think it was better to cater to that prejudice. (At least that's true here in Japan, where few students do programming before they get here, and was true in Columbus Ohio a couple decades ago. These are schools where most students come from pretty good high schools and the students have access to computers to learn programming if they wanted to.)
This thread seems related to another thread I just answered but cannot find now; I wrote "julia" there, hmmm
There it is: https://mail.python.org/archives/list/python-ideas@python.org/message/B4EPUQ...
On Fri, May 14, 2021 at 07:05:43PM -0000, Martin Teichmann wrote:
Hi Paul,
Also consider the section in the PEP format "How would we teach this?" How would you explain to someone with no programming background, maybe a high school student, that 3/4 and 3 / 4 mean different things in Python? Your audience might not even know that there is a difference between "fraction", "decimal" and "float" at this stage.
Well, I think a high school student would be the one with the least problems: s/he would just realize "wow, that thing can do fractions! I can do my math homework with that!" And I can tell you, kids will be the first ones to figure out that if you type spaces you get decimals, if you do not type spaces you get fractions. They are used to this kind of stuff from their math class, from calculators (or their phones, I guess).
*raises hand*
Not any students I work with.
No text book I've seen teaches that `1 / 2` and `1/2` are different things (one a rational fraction and the other a decimal). And no calculator I've seen, and I've seen a lot (I'm a bit of a calculator geek...) treats them differently. Most scientific calculators don't even have a way of entering spaces.
Both the Texas Instruments "Nspire" and Casio "Classpad" have a global setting that controls whether calculations between integers result in exact fractional, or symbolic, values, or a potentially inexact decimal approximation. The Nspire also allows the user to force a decimal by using a decimal point in any of the input values. Spaces make no difference in either.
In Python it would be odd to have space sensitivity for operators. For every operator ⊕, the presence or absence of surrounding spaces makes no difference, provided the operator can be unambiguously parsed:
>>> []or{}
{}
And it applies to the dot pseudo-operator, with the same restriction:
>>> str . find
<method 'find' of 'str' objects>
There's really only one common case where spaces make a difference: float literals cannot contain spaces, so `1 .` is interpreted as the dot pseudo-operator applied to the int 1, rather than the float `1.`
But that's a consequence of the parsing rules, not a difference in the operator.
On Fri, 14 May 2021 at 19:41, Paul Moore p.f.moore@gmail.com wrote:
On Fri, 14 May 2021 at 16:54, Martin Teichmann martin.teichmann@gmail.com wrote:
That is absolutely what I would like to have. The fractions module is very small, it can easily be made a builtin. This would also speed it up significantly, I hope; probably close to the speed of floats, given that most of the time spent on floats is within the interpreter anyway.
Exact rational arithmetic is much slower than floating point because of the gcd calculations and because of bit growth. Even with CPython interpreter overhead and a highly optimised C implementation, calculations with simple rationals would still be noticeably slower than float. That's with simple rationals though: once you start doing arithmetic with rationals the bit count typically grows, and the computational cost of elementary operations is unbounded if there is no limit on the bit count.
As an example consider something like Gaussian elimination which is a basic algorithm for solving systems of equations or for inverting matrices etc. The algorithm is O(n^3) in the number of elementary add/multiply operations and if you write that algorithm for fixed-width floating point and time it with random matrices then you will be able to see the n^3 performance. When you write the same algorithm using exact rationals the complexity in terms of bit operations can grow exponentially. Leaving aside rationals, just applying Gaussian elimination to a matrix of exact integers can be done with modified algorithms but those are still O(n^5) because of bit growth. This is the basic reason why floating point is more commonly used than exact rational arithmetic: it provides bounded computational cost for elementary operations. For most numerical applications it is better to have bounded computation in exchange for all of the other problems that rounding errors and inexact arithmetic lead to.
Builtins have to be in C, with very few exceptions. That makes it harder for alternative implementations, who now have to write their own implementation rather than just grabbing the pure Python stdlib version. It also makes maintenance harder, and means that bugs take longer to get fixed, as fewer people know how to maintain the code.
Yes, but having a faster fraction type would be great. SymPy doesn't actually use the fractions module because it's too slow. Instead SymPy has its own pure Python implementation that is a little faster and will use gmpy2's mpq type if gmpy2 is installed. The mpq type is something like 30x faster than Fraction. The fmpq type from python_flint is around twice as fast again and is particularly fast for small rationals. I'm referring to the time difference for small fractions but more efficient integer implementations used in these libraries mean that the speed difference increases for larger bit counts. For some operations in SymPy this speed ratio translates directly to the time taken for a slow user-level calculation.
The increased inherent cost of rational arithmetic actually means that with an efficient implementation of a fraction class pure Python code can compete for speed with code written in C. Maybe SymPy is a specialist library that should use other specialist libraries for faster integer and rational arithmetic but to me the Fraction class is clearly the kind of thing that should be on the C side of the implementation. The current Fraction class is great for basic use but too slow for larger calculations.
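A rough micro-benchmark of the gap described above (absolute numbers vary by machine, but the ratio is consistently large for the pure Python Fraction):

```python
import timeit

# Time a single addition of small, already-reduced operands.
frac = timeit.timeit(
    "a + b",
    setup="from fractions import Fraction; a = Fraction(1, 3); b = Fraction(1, 7)",
    number=100_000,
)
flt = timeit.timeit("a + b", setup="a = 1 / 3; b = 1 / 7", number=100_000)
print(f"Fraction addition is roughly {frac / flt:.0f}x slower than float here")
```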
-- Oscar
On Fri, May 14, 2021 at 09:39:33PM +0100, Oscar Benjamin wrote:
Yes, but having a faster fraction type would be great. SymPy doesn't actually use the fractions module because it's too slow. Instead SymPy has its own pure Python implementation that is a little faster and will use gmpy2's mpq type if gmpy2 is installed. The mpq type is something like 30x faster than Fraction. The fmpq type from python_flint is around twice as fast again and is particularly fast for small rationals.
Can we not improve the implementation of Fraction? What do the non-gmpy versions do that makes them faster?
The statistics module relies on Fraction for much of its calculations, improving Fraction would have a direct benefit for that. Just sayin'...
On Sat, 15 May 2021 at 12:52, Steven D'Aprano steve@pearwood.info wrote:
On Fri, May 14, 2021 at 09:39:33PM +0100, Oscar Benjamin wrote:
Yes, but having a faster fraction type would be great. SymPy doesn't actually use the fractions module because it's too slow. Instead SymPy has its own pure Python implementation that is a little faster and will use gmpy2's mpq type if gmpy2 is installed. The mpq type is something like 30x faster than Fraction. The fmpq type from python_flint is around twice as fast again and is particularly fast for small rationals.
Can we not improve the implementation of Fraction? What do the non-gmpy versions do that makes them faster?
The statistics module relies on Fraction for much of its calculations, improving Fraction would have a direct benefit for that. Just sayin'...
SymPy's fallback PythonMPQ class is here: https://github.com/sympy/sympy/blob/e30fcee96c3dbec337bc665229e5ef685277d56e... It isn't intended to be a public API so for example it doesn't enforce immutability. That makes accessing numerator and denominator more efficient than having a property method but this is probably not that significant. Care is taken to structure operations so that it can avoid unnecessary gcd calls but that's true of Fraction as well.
The main difference when I checked seemed to be the arithmetic algorithms but looking now I can see that Fraction was recently changed to use the same algorithms as SymPy: https://github.com/python/cpython/commit/690aca781152a498f5117682524d2cd9aa4... It looks like Sergey saw my comments when refactoring the PythonMPQ code and upstreamed the algorithms.
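For reference, the kind of algorithm involved is the classic gcd-minimising addition (Knuth, TAOCP vol. 2, section 4.5.1); this is a sketch of the idea, not the actual CPython or SymPy source:

```python
from fractions import Fraction
from math import gcd

def add(an, ad, bn, bd):
    """an/ad + bn/bd, both inputs in lowest terms, keeping gcd operands small."""
    g = gcd(ad, bd)
    if g == 1:
        # Coprime denominators: the sum is already in lowest terms,
        # so no reducing gcd call is needed at all.
        return an * bd + bn * ad, ad * bd
    t = an * (bd // g) + bn * (ad // g)
    g2 = gcd(t, g)  # gcd against small g, not the full denominators
    return t // g2, (ad // g2) * (bd // g)

assert add(1, 6, 1, 10) == (4, 15)
assert Fraction(*add(1, 6, 1, 10)) == Fraction(1, 6) + Fraction(1, 10)
```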
Maybe it will now be possible to eliminate PythonMPQ which is redundant with Fraction if Fraction is made faster (once support for Python 3.9 is dropped).
Oscar
Yes, but having a faster fraction type would be great. SymPy doesn't actually use the fractions module because it's too slow. Instead SymPy has its own pure Python implementation
Oscar, I think now (3.10) the stdlib implementation's arithmetic is optimized like SymPy's pure Python fallback. Let me know if I missed something. (There is some slowdown for fractions with small components, and a PR to address this.)
In truth, when I want fractions, I write:
from fractions import Fraction as F
So a literal doesn't really save many characters anyway. I guess 2 characters during first use. E.g.
x = F(1, 3)
vs. a possible future:
x = 1F / 3
The next stuff comes free either way:
y = ((x / 7) + 13) * 11
On Sat, May 15, 2021 at 1:39 AM Paul Moore p.f.moore@gmail.com wrote:
On Fri, 14 May 2021 at 16:29, David Mertz mertz@gnosis.cx wrote:
That said, Chris's idea for a literal spelling of "Fraction" is very appealing. One extra letter or symbol could indicate that you want to work in the Fraction domain rather than floating point. That's a perfectly reasonable decision for a user to make.
Agreed, it is appealing. The problem (and this is not a new suggestion, it's come up a number of times)
Interestingly, even though it's definitely been proposed periodically, it actually hasn't been codified into a PEP except for PEP 240, which was associated with PEP 239, which was rejected:
https://www.python.org/dev/peps/pep-0239/ https://www.python.org/dev/peps/pep-0240/
So we actually haven't had a proper PEP about adding a Fraction literal since the Fraction type itself was added. Whether it gets accepted or not, I think having a PEP would be a good thing here.
is that of having language syntax depend on non-builtin classes. So either you have to *also* propose making the fractions module a builtin, or you very quickly get sucked into "why not make this mechanism more general, so that libraries can define their own literals?"
Yes, fractions would have to become built-in. The consequences of doing so would have to be explored; I'd start by looking at:
* Interpreter startup time
* Memory usage
* Subprocess creation time (only significant on Windows, I think - every other platform forks)
* Possible interactions with decimal.Decimal (see below)
* Whether Fraction would need to become a builtin name
Currently, fractions.py imports decimal.py, mainly (purely?) for the sake of being able to construct a Fraction from a Decimal. The decimal module is *large* and has other reasons for not becoming builtin too, so ideally, the two should be decoupled. A solid proposal would need to explore decoupling the modules, figure out whether it's feasible, or if not, why not, and figure out how these constructors should be implemented. (For instance, should Fraction.from_decimal(123) cause the decimal module to be imported?)
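The usual way to get that decoupling is a deferred import, so the decimal module is only paid for if the conversion path actually runs. A sketch of the pattern with a made-up Ratio class (not the real fractions.py code, and ignoring non-finite Decimals):

```python
class Ratio:
    def __init__(self, num, den):
        self.num, self.den = num, den

    @classmethod
    def from_decimal(cls, dec):
        import decimal  # deferred: importing Ratio never imports decimal
        if not isinstance(dec, decimal.Decimal):
            raise TypeError(f"expected Decimal, got {type(dec).__name__}")
        sign, digits, exp = dec.as_tuple()  # finite values only, for the sketch
        num = (-1) ** sign * int("".join(map(str, digits)))
        if exp >= 0:
            return cls(num * 10**exp, 1)
        return cls(num, 10**-exp)  # left unreduced; real code would reduce
```

Whether such laziness is acceptable for a builtin conversion is exactly the question posed above.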
Scope creep is nearly always what kills these proposals. Or the base proposal is too specialised to gain enough support.
Personally, I can see value in fraction, decimal and regex literals. But I can live without them.
Decimal literals have a number of awkward wrinkles, so I'd leave them aside for now; but if Fraction literals gain traction, it may be possible to figure out a similar proposal. But the two would be independent.
IMO regex literals are less valuable in Python than in some other languages due to Python's rich set of string manipulation features, but if they existed, I'd probably use them.
Anyone want to run with this? I'd be happy to help out with any PEP questions.
ChrisA
Hi Chris,
I would be willing to write such a PEP. It will take a while though, I am not fast at those kinda things.
Currently, fractions.py imports decimal.py, mainly (purely?) for the sake of being able to construct a Fraction from a Decimal. The decimal module is *large* and has other reasons for not becoming builtin too, so ideally, the two should be decoupled.
I started the decoupling as bpo-44115, aka GH-26064 https://github.com/python/cpython/pull/26064 I would be happy about your comments there.
I actually have a new idea about what a fraction literal could look like: just write 2/3 as opposed to 2 / 3, and you get a fraction. So: no spaces: fraction; with spaces: float.
I hear you crying "but that's illegal, whitespace should not matter!", to which I respond:
>>> 3.real
  File "<stdin>", line 1
    3.real
          ^
SyntaxError: invalid syntax
>>> 3 . real
3
Cheers
Martin
On Sat, May 15, 2021 at 2:55 AM Martin Teichmann martin.teichmann@gmail.com wrote:
Hi Chris,
I would be willing to write such a PEP. It will take a while though, I am not fast at those kinda things.
Cool cool. Feel free to email me off-list about getting started; here's some handy info to read:
https://www.python.org/dev/peps/pep-0012/
Currently, fractions.py imports decimal.py, mainly (purely?) for the sake of being able to construct a Fraction from a Decimal. The decimal module is *large* and has other reasons for not becoming builtin too, so ideally, the two should be decoupled.
I started the decoupling as bpo-44115, aka GH-26064 https://github.com/python/cpython/pull/26064 I would be happy about your comments there.
Not seeing anything there about decoupling the two modules?
I actually have a new idea about what a fraction literal could look like: just write 2/3 as opposed to 2 / 3, and you get a fraction. So: no spaces: fraction; with spaces: float.
I hear you crying "but that's illegal, whitespace should not matter!", to which I respond:
>>> 3.real
  File "<stdin>", line 1
    3.real
          ^
SyntaxError: invalid syntax
>>> 3 . real
3
Cheers
Martin
Premature send with incomplete info, sorry!
On Sat, May 15, 2021 at 3:01 AM Chris Angelico rosuav@gmail.com wrote:
On Sat, May 15, 2021 at 2:55 AM Martin Teichmann martin.teichmann@gmail.com wrote:
Hi Chris,
I would be willing to write such a PEP. It will take a while though, I am not fast at those kinda things.
Cool cool. Feel free to email me off-list about getting started; here's some handy info to read:
https://www.python.org/dev/peps/pep-0001/ https://www.python.org/dev/peps/pep-0012/
I actually have a new idea about how a fraction literal could look like: just write 2/3 as opposed to 2 / 3, and you get a fraction. So: no spaces: fraction, with spaces: float.
I hear you crying "but that's illegal, whitespace should not matter!", to which I respond:
>>> 3.real
  File "<stdin>", line 1
    3.real
         ^
SyntaxError: invalid syntax
>>> 3 . real
3
That's a syntactic matter, and it's not actually whitespace necessarily that does it.
>>> (3).real
3
>>> x = 3
>>> x.real
3
>>> x . real
3
So, no, whitespace or lack thereof should not drastically change the behaviour. In any case, this would still need to be defined as a new operator, so that different types can react appropriately, so you don't really gain much in return for sacrificing readability by having x/y be different from x / y!
But there's no real reason to create a new operator if it can be done in a simpler way.
ChrisA
Hi Chris,
I think you did not get my point. I do not want to allow x/y. I want to only allow literals, as in 3/2. This would then be a new kind of literal, that has the type of Fraction. Much like 2.5 is a float, but x.y means something completely different, even though it has no spaces. So x/y would mean "divide x by y" or actually call __truediv__ on x plus some details, while 2/3 would just be the constant two-thirds. 2 / 3 would then mean the same as it used to: divide 2 by 3, giving some 0.666ish.
Cheers
Martin
On Sat, May 15, 2021 at 3:51 AM Martin Teichmann martin.teichmann@gmail.com wrote:
Hi Chris,
I think you did not get my point. I do not want to allow x/y. I want to only allow literals, as in 3/2. This would then be a new kind of literal, that has the type of Fraction. Much like 2.5 is a float, but x.y means something completely different, even though it has no spaces. So x/y would mean "divide x by y" or actually call __truediv__ on x plus some details, while 2/3 would just be the constant two-thirds. 2 / 3 would then mean the same as it used to: divide 2 by 3, giving some 0.666ish.
Ahhh, I see what you mean.
That's more plausible than what I was thinking of, but I think it'd still be cleaner to just adorn an integer with a letter to mark that it should be a Fraction instead (since Fraction divided by int, or int divided by Fraction, will yield the correct Fraction result).
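For the record, the mixed-arithmetic behaviour Chris describes already works with the stdlib Fraction today — once one operand is a Fraction, the result stays an exact Fraction:

```python
from fractions import Fraction

# Fraction mixed with int always yields an exact Fraction
half = Fraction(1) / 2
print(half)              # 1/2
print(2 / Fraction(3))   # 2/3
print(half + 1)          # 3/2
```

So a hypothetical literal marker would only need to mark one of the integers; the rest of the expression becomes exact on its own.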
ChrisA
On Fri, May 14, 2021 at 05:49:57PM -0000, Martin Teichmann wrote:
2/3 would just be the constant two-thirds. 2 / 3 would then mean the same as it used to: divide 2 by 3, giving some 0.666ish.
I don't like your chances of getting a core developer to champion the introduction of significant whitespace around operators like that.
Hi Chris,
Not seeing anything there about decoupling the two modules?
Well, the PR removes the line "from decimal import Decimal", I think that is quite some decoupling. The other place decimal is imported is in a classmethod only existing for backwards compatibility, so with this patch the decimal module does not need to be imported anymore.
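For context, the conversion does not strictly require fractions.py to import decimal at all, since a Decimal exposes an exact integer ratio. A sketch of one way such a decoupling can work (an illustration, not necessarily what the PR does):

```python
from decimal import Decimal
from fractions import Fraction

d = Decimal('0.75')
# Decimal.as_integer_ratio() returns an exact (numerator, denominator)
# pair, so a Fraction can be built without fractions.py ever naming decimal
print(Fraction(*d.as_integer_ratio()))  # 3/4
```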
Cheers
Martin
Hi Chris, Hi List,
having slept over it, I think I have to take back my offer to write a PEP. Why? Well, I am actually sure that it will get rejected anyways.
What I would like to have is that you can write 1/2 * m * v**2, and that can be treated symbolically. Writing 1/2F instead looks ugly to me, and then there is a much easier way to achieve that:
F = Fraction(1)
F*1/2 * m * v**2
that's equally ugly, but I do not need to write a single line of code.
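To spell the trick out: multiplying by a Fraction first makes every subsequent division exact, so no precision is lost even though the literals are plain ints:

```python
from fractions import Fraction

F = Fraction(1)
# F*1 is Fraction(1), so the following /2 is Fraction division and stays exact
half = F * 1 / 2
print(half)        # 1/2
print(type(half))  # <class 'fractions.Fraction'>
```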
I see that you would like to sum up the discussion in a PEP so you can point future requesters there. But given that there are repeated requests here, that over at sympy I learned they even wrote a pre-parser, and not to forget André's efforts presented here, there seems to be an actual need. Telling people "look at that PEP, we solved that ages ago" would be dishonest, because we did not.
In general, doing symbolic math in Python is not very beautiful. The number of hoops you have to jump through is large, mostly because syntax is abused for things it was not actually meant for.
It could be fruitful to add syntax for symbolic math, but this is a whole new topic. Looking around there also seems to be not much out there, even dedicated languages like mathematica are honestly pretty ugly.
Cheers
Martin
On Sat, May 15, 2021 at 5:15 PM Martin Teichmann martin.teichmann@gmail.com wrote:
Hi Chris, Hi List,
having slept over it, I think I have to take back my offer to write a PEP. Why? Well, I am actually sure that it will get rejected anyways.
Wouldn't be the first time a PEP has been written with full expectation of it being rejected, though!
I see that you would like to sum up the discussion in a PEP so you can point future requesters there. But given that there are repeated requests here, that over at sympy I learned they even wrote a pre-parser, and not to forget André's efforts presented here, there seems to be an actual need. Telling people "look at that PEP, we solved that ages ago" would be dishonest, because we did not.
A PEP doesn't mean that we've solved anything; it means that there is a single place where the arguments for and against have been gathered, rather than searching through email archives.
ChrisA
On Sat, May 15, 2021 at 05:28:39PM +1000, Chris Angelico wrote:
Wouldn't be the first time a PEP has been written with full expectation of it being rejected, though!
Nor the first time that someone wrote a PEP initially expecting it to be rejected, and had it accepted.
*cough* dict union operator *cough*
On Sat, May 15, 2021 at 6:24 PM Steven D'Aprano steve@pearwood.info wrote:
On Sat, May 15, 2021 at 05:28:39PM +1000, Chris Angelico wrote:
Wouldn't be the first time a PEP has been written with full expectation of it being rejected, though!
Nor the first time that someone wrote a PEP initially expecting it to be rejected, and had it accepted.
*cough* dict union operator *cough*
Nor the first time someone wrote a PEP intending for it to be rejected, and came around to actually liking the idea himself...
Documenting the pros and cons has consequences (and prosequences??) far beyond our meagre expectations!
ChrisA
On Sat, May 15, 2021 at 07:14:47AM -0000, Martin Teichmann wrote:
In general, doing symbolic math in Python is not very beautiful.
[...]
It could be fruitful to add syntax for symbolic math, but this is a whole new topic. Looking around there also seems to be not much out there, even dedicated languages like mathematica are honestly pretty ugly.
I think that is unavoidable.
Symbolic maths is a 2D format. It doesn't map easily to a line-based format like programming languages. Think of things like summation and integration. You need subscripts, superscripts and a two dimensional layout of expressions.
CAS calculators like the Nspire and Classpad that support symbolic maths also support 2D entry methods. Python would need a GUI IDE to support something like that.
We could come up with a preprocessor that would allow you to write 1/(2π) and get a symbolic expression, but it's still going to be line-oriented and share the same weaknesses as Mathematica syntax.
On Sat, 15 May 2021 at 09:55, Steven D'Aprano steve@pearwood.info wrote:
On Sat, May 15, 2021 at 07:14:47AM -0000, Martin Teichmann wrote:
In general, doing symbolic math in Python is not very beautiful.
[...]
It could be fruitful to add syntax for symbolic math, but this is a whole new topic. Looking around there also seems to be not much out there, even dedicated languages like mathematica are honestly pretty ugly.
I think that is unavoidable.
Symbolic maths is a 2D format. It doesn't map easily to a line-based format like programming languages. Think of things like summation and integration. You need subscripts, superscripts and a two dimensional layout of expressions.
SymPy in particular has the property that it is implemented within Python and is (often) used from Python, so for most users there is no separation between the user language and the language of implementation, and that constrains what it can provide syntactically. This is both a strength and a weakness. The strength is that power users can use a powerful programming language as part of symbolic manipulation and can build their own primitive routines, subclass the basic types, etc. The weaknesses are things like not being able to make 1/2 be an exact rational.
Of course SymPy is also a library rather than an application and is used as the backend for many different symbolic applications and has bindings to other languages so there are also users who use sympy from e.g. Julia or Octave which use the syntax from those languages so e.g. you can do something like 2x rather than 2*x in Julia. SymPy is also used internally as part of Sage which is an application that has a Python-like syntax but with changes like 1/2 being exact and 2^3 being the same as 2**3. There are also projects like Mathics which is a Python project based on SymPy that implements the Mathematica language.
The issue with float and SymPy is not just about division of ints, though, because it's also very common for users to write things like 0.5*m*v**2 without realising that 0.5 creates a float and that SymPy will treat that float differently from an exact rational. This is partly a problem of SymPy's own making, and it can be at least partially solved in SymPy itself. In an expression like 0.5*m where m is a SymPy expression, __rmul__ will call sympify, which converts the builtin float to SymPy's symbolic Float type. The majority of users would be better served if the sympify function would actually convert the float to an exact Rational, and it would be possible to provide a mode that can do that like:
import sympy
sympy.init_session(auto_float_to_rational=True)
SymPy already has a function that can do this and that can also undo the rounding errors caused in decimal to binary conversion for expressions like 0.1:
>>> from sympy import nsimplify
>>> nsimplify(0.1)
1/10
Actually this function goes a bit too far so a more limited form of it should probably be used:
>>> from math import e
>>> e
2.718281828459045
>>> nsimplify(e)
E
>>> nsimplify(2**0.5)
sqrt(2)
So I don't think changes in Python core are needed for this use case as the basic issue can be fixed in SymPy itself.
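The stdlib actually has a rough analogue of this limited form: `Fraction.limit_denominator()` undoes the decimal-to-binary rounding of a float without needing SymPy at all:

```python
from fractions import Fraction

# Fraction(0.1) is the exact value of the binary float -- a huge denominator
exact = Fraction(0.1)
print(exact.denominator)          # 36028797018963968 (i.e. 2**55)
# limit_denominator() finds the simplest nearby rational: the intended 1/10
print(exact.limit_denominator())  # 1/10
```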
-- Oscar
On Sat, May 15, 2021 at 4:18 AM Martin Teichmann martin.teichmann@gmail.com wrote:
Hi Chris, Hi List,
having slept over it, I think I have to take back my offer to write a PEP. Why? Well, I am actually sure that it will get rejected anyways.
[SNIP]
I see that you would like to sum up the discussion in a PEP so you can point future requesters there. But given that there are repeated requests here, that over at sympy I learned they even wrote a pre-parser, and not to forget André's efforts presented here, there seems to be an actual need.
Sorry if I gave the wrong impression.
I implemented this 1) because I find it fun to experiment with unusual syntax, and, more importantly 2) because this idea (literal fractions, and similarly for literal decimal) often comes up on this list and we get abstract discussions about the (supposed) benefit of such syntax. I personally find that actual experimentation with working code is often much more illuminating when it comes to highlighting the strength and weaknesses of proposals compared with near-endless mostly abstract discussions often based on preconceived opinions (at least, that's how I interpret them).
I definitely do not believe that there is a need for fraction literals in Python, and even less when it comes to having "fractions by default" without special syntax (or, even worse in my opinion, with syntax that rely on space surrounding operators.) Again, sorry if I gave the wrong impression.
A while ago, I attempted to summarize various discussions on having literal decimals in Python, with the thought of doing the same for fraction literals, but did not finish. For those curious, the "summary" so far can be found at https://github.com/aroberge/python-ideas-summaries/blob/master/literal-numbe... I do think that, if I (or someone else) could complete this summary, it would be worthwhile writing a PEP and, if it is rejected, we could refer people to it rather than restarting a new discussion thread on such recurring topic. Rejected PEPs can potentially be huge time savers for the community ...
André Roberge
Martin wrote:
In general, doing symbolic math in Python is not very beautiful.
I think this is a problem worth investigating. (Disclaimer: I do research in pure mathematics.)
The number of hoops you have to jump through is large, mostly because syntax is abused for things it was not actually meant for.
I don't completely agree with this diagnosis. I think there are more serious difficulties. I'd be interested in a discussion on symbolic math in Python sometime, but perhaps not on this list. I'd like the people who use and develop https://www.sagemath.org/ to be involved.
On Sat, May 15, 2021 at 01:57:29AM +1000, Chris Angelico wrote:
Decimal literals have a number of awkward wrinkles, so I'd leave them aside for now;
I'm surprised at this.
Decimal literals have come up at least twice in the past, with a general consensus that they are a good idea. This is the first time I've seen anyone suggest fraction literals (that I recall). The use of decimal literals would have a major and immediate benefit for nearly everyone who uses Python for calculations: the elimination of rounding error between base-10 literals like `0.1` and the internal binary representation.
I don't see that literal fractions have anywhere close to the same immediate benefit for the average user. If anything, I think that it could be surprising. If somebody uses Python to, e.g., calculate the GST (Goods and Services Tax) inclusive value of something costing $17.00 ex:
17*11/10
they're probably expecting an answer of 18.7 not Fraction(187, 10).
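A sketch of the two behaviours side by side, using today's Fraction to stand in for the proposed semantics:

```python
from fractions import Fraction

# Current behaviour: int/int is a float
print(17 * 11 / 10)              # 18.7
# Under the proposal the same expression would effectively be this:
price = 17 * Fraction(11, 10)
print(price)                     # 187/10
# ...and the user would need an explicit float() to get 18.7 back
print(float(price))              # 18.7
```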
What are these awkward wrinkles for decimal literals? The past proposals have come down to a "d" or "D" suffix, e.g.:
2.456d
would be a decimal rather than a float. The implementation would not support the full General Decimal Arithmetic specification -- for that, users would continue to use the decimal module.
http://speleotrove.com/decimal/decarith.html
The builtin decimals would be a fixed 64- or 128-bit decimal implementation.
https://en.wikipedia.org/wiki/Decimal64_floating-point_format
https://en.wikipedia.org/wiki/Decimal128_floating-point_format
Like float, it would support only a minimal set of functionality, e.g. no context objects with user-configurable traps, precision or rounding modes.
So as I see it, the only wrinkles would be:
- the choice of a 64- or 128-bit; - the internal format (binary or decimal coded); - and the coercion rules for arithmetic between mixed types;
but I wouldn't call them *awkward*. They will of course allow for plenty of bike-shedding, but the decision should be quite tractable.
On Sat, May 15, 2021 at 2:55 PM Steven D'Aprano steve@pearwood.info wrote:
On Sat, May 15, 2021 at 01:57:29AM +1000, Chris Angelico wrote:
Decimal literals have a number of awkward wrinkles, so I'd leave them aside for now;
I'm surprised at this.
Decimal literals have come up at least twice in the past, with a general consensus that they are a good idea. This is the first time I've seen anyone suggest fraction literals (that I recall).
Well, they've definitely both come up :)
What are these awkward wrinkles for decimal literals? The past proposals have come down to a "d" or "D" suffix, e.g.:
2.456d
would be a decimal rather than a float.
That part isn't a problem, yep. I'm pretty sure everyone's in agreement on that (with the possible exception of bikeshedding about whether it's "d" or "D" or either).
So as I see it, the only wrinkles would be:
- the choice of a 64- or 128-bit;
- the internal format (binary or decimal coded);
- and the coercion rules for arithmetic between mixed types;
but I wouldn't call them *awkward*. They will of course allow for plenty of bike-shedding, but the decision should be quite tractable.
Mainly it comes down to the fact that there's an implicit set of rules governing the Decimal constructor (the default context), and literals cannot follow those. That would mean that a Decimal literal 1.234d might mean something sneakily and subtly different from Decimal("1.234"), due to a distinction that might have taken place long, long ago in a module far, far away (since the default context is global). I'm going largely on memory here, as a search through python-ideas history for "decimal literal" produced a couple of monster threads that were talking about a variety of numeric-related matters (including Decimal literals, but also including a ton of other stuff), and I can't find the exact posts in question.
So, let me put it this way: I'm +3/4F on adding a Fraction literal, -0.25D on adding a Decimal literal, and +1 (the integer!) on writing a PEP regarding either or both.
ChrisA
On Sat, 15 May 2021 at 05:54, Steven D'Aprano steve@pearwood.info wrote:
On Sat, May 15, 2021 at 01:57:29AM +1000, Chris Angelico wrote:
Decimal literals have a number of awkward wrinkles, so I'd leave them aside for now;
I'm surprised at this.
Decimal literals have come up at least twice in the past, with a general consensus that they are a good idea. This is the first time I've seen anyone suggest fraction literals (that I recall). The use of decimal literals would have a major and immediate benefit for nearly everyone who uses Python for calculations: the elimination of rounding error between base-10 literals like `0.1` and the internal binary representation.
Fraction also has this benefit except in a much more complete way since *all* arithmetic can be exact. The discussion above has mentioned the idea that you could write a fraction like `1/10F` or `1F/10` but I would also want to be able to do something like `0.1F` to use a decimal literal to represent an exact rational number.
In my experience direct language support for exact rationals is more common than for decimal floating point. Computing things exactly is a widely applicable feature whereas wanting a different kind of rounding error is a more niche application. You mention that in some cases Decimal gives exactness but if you follow this idea of exact arithmetic to its conclusion you end up with something like Fraction rather than something like Decimal.
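A minimal sketch of that distinction — rational arithmetic round-trips exactly, while decimal arithmetic still rounds at the context precision:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Rational arithmetic is exact: (1/3) * 3 recovers 1 precisely
assert Fraction(1, 3) * 3 == 1

# Decimal only moves the rounding error to base 10: division still rounds
getcontext().prec = 28
assert Decimal(1) / 3 * 3 != 1   # 0.9999999999999999999999999999
```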
I suspect that the fact that Python has a more extensively developed decimal module has probably led many of us familiar with Python to think that decimal arithmetic is somehow more useful or more commonly needed or used than rational arithmetic.
What are these awkward wrinkles for decimal literals?
They are not insurmountable but the main issues to resolve are about the fact that Python's decimal implementation is a multiprecision library. Not all decimal implementations are but Python's is. The precision is controlled by a global context and affects all operations. The simple way to implement decimal literals would be to say that 1.001d is the same as Decimal('1.001') but then what about -1.001d?
The minus sign is not part of the literal but rather a unary operation, so whereas the Decimal(...) constructor is always exact, a unary minus is an "operation" with context-dependent behaviour, e.g.:
>>> from decimal import Decimal as D, getcontext
>>> getcontext().prec = 3
>>> D('0.9999')
Decimal('0.9999')
>>> D('-0.9999')
Decimal('-0.9999')
>>> -D('0.9999')
Decimal('-1.00')
>>> +D('0.9999')
Decimal('1.00')
That would mean that a simple statement like x = -1.01d could assign different values depending on the context. Maybe with the new parser it is easier to change this so that a unary +/- can be part of the literal. The context dependence here also undermines other benefits of literals, like the possibility of constant-folding in e.g. 0.01d + 1.00d (depending on the context this might compute different values, or it might raise an exception or set flags).
The other question is about which other languages to line up with. For example C might gain a _Decimal128 type and maybe even hardware support would become widespread for calculations with decimal in these fixed width formats. Python would be able to leverage those if it defines its decimals in the same way but that would mean departing from the multiprecision model that the decimal module currently has.
These and other questions about how to implement decimal literals are not necessarily hard to overcome but it naturally leads to ideas like:
The implementation would not support the full General Decimal Arithmetic specification -- for that, users would continue to use the decimal module.
Then you have this kind of hybrid where you have two different kinds of decimal type. Presumably that means that the decimal literals aren't really there for power users because they need to use the decimal module with all of its extra features. If the decimal literals aren't for users of the decimal module then who are they for?
I started trying to write a PEP about decimal literals some time ago but what I found was that most arguments I could come up with for decimal literals were really arguments for using decimal floating point instead of binary floating point in the first place. In other words if Python did not currently have float types then I would advocate for using decimal as the *default* float type for literals like `0.1`. It is problematic that the only non-integer literals in Python are decimal literals that are converted into binary floating point format when the vast majority of possible non-integer decimal literals like `0.12345` cannot be represented exactly in binary. Many humans understand decimal rounding much more easily than binary rounding, so using decimal rounding for non-integer arithmetic would make Python itself more understandable and intuitive.
We already have float though and it uses binary floating point and is the implementation for ordinary decimal literals like 0.1. In that context adding Decimal literals like 0.1d does not bring as much benefit. Really it just lowers the bar a little bit for less-experienced users to use the decimal module correctly and not make mistakes like D(0.1). Maybe that's worth it or maybe not. For novices it would be great to make 0.1 + 0.2 just do the right thing but if they need to know that they should do 0.1d + 0.2d then the unintuitive behaviour is still there to trip them up by default. For experienced users the literals don't really add that much and probably most code that uses the decimal module in anger does not really have that many literals anyway.
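The D(0.1) mistake mentioned above, for the record:

```python
from decimal import Decimal

# Passing a float first commits to its binary rounding error...
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
# ...while a string keeps the decimal value exact. A 0.1d literal would
# make the safe spelling the obvious one.
print(Decimal('0.1'))  # 0.1
```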
Yes, 0.1d is a bit nicer than D('0.1') and slightly less error-prone but how much of a benefit is that in practice?
Is it actually worth the churn or the increased complexity of having an entirely new decimal type?
If it is necessary to have new fixed-width decimal types then probably the first step to decimal literals is implementing those rather than writing a PEP.
These same arguments could be made for Fraction literals but in my experience rational arithmetic has more widespread use than decimal arithmetic and also rational arithmetic is much easier to implement and doesn't need to have multiple types or context dependent behaviour etc. One thing I would like is for Fraction to be reimplemented in C and made much faster. Having literals would be nice but Fraction would need to at least be a builtin first.
-- Oscar
On Sat, May 15, 2021, 3:13 PM Oscar Benjamin
That would mean that a simple statement like x = -1.01d could assign different values depending on the context. Maybe with the new parser it is easier to change this so that an unary +/- can be part of the literal.
Steven, at least, stated he assumed this decimal literal would be either decimal64 or decimal128. That would remove this exact concern you state.
But decimal64 behaves differently than decimal128, and both have different semantics than Decimal. I don't disagree that decimal128 is a useful data type, but it's definitely a wrinkle for various flavors of "decimal" to arise in various ways. For example, how does this change the numeric tower?
On Sat, 15 May 2021 at 20:52, David Mertz mertz@gnosis.cx wrote:
On Sat, May 15, 2021, 3:13 PM Oscar Benjamin
That would mean that a simple statement like x = -1.01d could assign different values depending on the context. Maybe with the new parser it is easier to change this so that an unary +/- can be part of the literal.
Steven, at least, stated he assumed this decimal literal would be either decimal64 or decimal128. That would remove this exact concern you state.
But decimal64 behaves differently than decimal128, and both have different semantics than Decimal. I don't disagree that decimal128 is a useful data type, but it's definitely a wrinkle for various flavors of "decimal" to arise in various ways. For example, how does this change the numeric tower?
Decimal is not really in the numeric tower: https://bugs.python.org/issue43602
The question is really what exactly is the rationale for having a new Decimal type? I guess it's something like:

"""
The decimal module is well suited to more advanced needs but is hard to use for less experienced users who just want to do some simple calculation and expect that the arithmetic should work intuitively, unlike binary floating point. We therefore introduce a new decimal type that is simpler, and add literal syntax for that type so that the barrier to using decimal-based calculations is as low as possible.
"""

Then you need to consider how much easier you are really making things, e.g. 0.1d vs D('0.1'). To what extent can the "hard to use" claim be countered by improving the docs, e.g. by making a simpler explanation of how to use Decimal for basic calculations and perhaps adding a brief section to the tutorial?
Most importantly: who is prepared to implement and maintain any of this?
If the proposal is having 0.1d be Decimal('0.1') then that's a lot easier than introducing a new decimal128 type. If anyone wants to take this forward then I think that for that version of the proposal a usable C implementation of e.g. decimal128 is what is needed. I expect that making a conforming C implementation of decimal128 will be a lot harder than making a C implementation of Fraction. Making a really good implementation of either is hard but the "spec" for rational numbers is much simpler than for decimal floating point.
-- Oscar
Hi Martin
I think it important to realise there are at least two issues. The first is whether sometimes fraction is better than float, and vice versa. The second is whether the default behaviour for int / int should be changed.
A third issue is whether some of the benefits you are seeking can be achieved in some other way. Finally, well done for changing the Python source to implement your idea. That's something I wouldn't have the courage to do.
Hi David,
A toy example with a half dozen operations won't make huge fractions. A loop over a million operations will often be a gigantic memory hog.
I do not propose to do that, and I agree that it would be stupid. Usually, the fraction gets very quickly converted into a float, and your millions of operations will be done with floats.
Now I cannot guarantee that this is always the case, but doing millions of operations in Python directly is usually very slow anyways, so people tend to use numpy or alike. And those use floats, and I am not proposing to change that.
But maybe I am all wrong and there is loads of code out there that would suffer from your problem, in this case, could you show me an example?
Cheers
Martin
On Sat, May 15, 2021 at 1:47 AM Martin Teichmann martin.teichmann@gmail.com wrote:
Hi David,
A toy example with a half dozen operations won't make huge fractions. A loop over a million operations will often be a gigantic memory hog.
I do not propose to do that, and I agree that it would be stupid. Usually, the fraction gets very quickly converted into a float, and your millions of operations will be done with floats.
But if integer division gave a Fraction, then at what point would it be converted into a float? As long as all the literals are integers, it would remain forever a rational, and adding fractions does get pretty costly.
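Chris's cost concern is easy to reproduce: summing exact fractions in a loop makes the denominator blow up even for small inputs, since it grows roughly like the lcm of everything seen so far. A small sketch:

```python
from fractions import Fraction

# Partial sum of the harmonic series as exact fractions
total = Fraction(0)
for n in range(1, 31):
    total += Fraction(1, n)

# After only 30 terms the denominator is already enormous
print(total.denominator)
```

Every further addition has to compute a gcd over ever-larger integers, which is where the memory and time costs come from.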
ChrisA
On Fri, May 14, 2021 at 12:02 PM Chris Angelico rosuav@gmail.com wrote:
On Sat, May 15, 2021 at 1:47 AM Martin Teichmann martin.teichmann@gmail.com wrote:
Hi David,
A toy example with a half dozen operations won't make huge fractions. A loop over a million operations will often be a gigantic memory hog.
I do not propose to do that, and I agree that it would be stupid.
Usually, the fraction gets very quickly converted into a float, and your millions of operations will be done with floats.
But if integer division gave a Fraction, then at what point would it be converted into a float? As long as all the literals are integers, it would remain forever a rational, and adding fractions does get pretty costly.
ChrisA
I think most of the time, people just want to do an accurate enough division operation and don't really have an opinion what the output type is (so long as it works), but for the situations where the type really really does matter, you can choose the type by using Fraction(1,2) or 1/2. If precision is critical, you can already have it with the Fraction choice.
But I would also assume there are probably some fraction (tee hee) of situations where people specifically want/need a float from an integer division operation and NOT a fraction, and would have to take the extra step to convert it. If you're in a loop doing millions of conversions, it seems like that could get slow.
A way around this would be to add a new integer division operation that returns float:
>>> 1/2
Fraction(1, 2)
>>> 1//2
0
>>> 1///2   # new float division operator
0.5
But that doesn't seem like a very optimal solution to me.
So +1 for fraction literal PEP (not sure I would want it approved or not, but it should get a full hearing), -1 for changing the output type of integer division.
--- Ricky.
"I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler
On 5/14/21 12:16 PM, Ricky Teachey wrote:
On Fri, May 14, 2021 at 12:02 PM Chris Angelico rosuav@gmail.com wrote:
On Sat, May 15, 2021 at 1:47 AM Martin Teichmann martin.teichmann@gmail.com wrote:
>
> Hi David,
>
> > A toy example with a half dozen operations won't make huge fractions. A
> > loop over a million operations will often be a gigantic memory hog.
>
> I do not propose to do that, and I agree that it would be stupid. Usually, the fraction gets very quickly converted into a float, and your millions of operations will be done with floats.

But if integer division gave a Fraction, then at what point would it be converted into a float? As long as all the literals are integers, it would remain forever a rational, and adding fractions does get pretty costly.

ChrisA
I think most of the time, people just want to do an accurate enough division operation and don't really have an opinion what the output type is (so long as it works), but for the situations where the type really really does matter, you can choose the type by using Fraction(1,2) or 1/2. If precision is critical, you can already have it with the Fraction choice.
But I would also assume there are probably some fraction (tee hee) of situations where people specifically want/need a float from an integer division operation and NOT a fraction, and would have to take the extra step to convert it. If you're in a loop doing millions of conversions, it seems like that could get slow.
A way around this would be to add a new integer division operation that returns float:
>>> 1/2
Fraction(1, 2)
>>> 1//2
0
>>> 1///2  # new float division operator
0.5
But that doesn't seem like a very optimal solution to me.
So +1 for fraction literal PEP (not sure I would want it approved or not, but it should get a full hearing), -1 for changing the output type of integer division.
Ricky.
"I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler
If anything like this was done, the NEW operator needs to be the fraction and the original / the float, for backwards compatibility.
I would say that I have enough code that divides numbers which might be integers and expects the result to be a relatively efficient float, and I suspect so do others, so the backwards breakage would be significant. The cases where you really need the fraction are much smaller, and if you do, having a convenient way to ask for it could be useful, but not so useful that it is worth breaking all the code that expects floats.
It wouldn't be quite as bad as the python2 -> python3 string/bytes changes, but the comparison isn't totally out of line.
Hi Richard,
I would say that I have enough code that divides numbers which might be integers and expects the result to be a relatively efficient float, and I suspect so do others, so the backwards breakage would be significant.
Could I please see an example? This is a real question: by now I have run my prototype interpreter through quite a few libraries, and none had problems like this.
I am talking about real-world examples. Sure, I can easily code an approximation of pi that gets out of hand quickly, but doing this in Python would be just wrong unless you are writing a textbook.
Cheers
Martin
On 5/14/21 12:58 PM, Martin Teichmann wrote:
Hi Richard,
I would say that I have enough code that divides numbers which might be integers and expects the result to be a relatively efficient float, and I suspect so do others, so the backwards breakage would be significant.
Could I please see an example? This is a real question: by now I have run my prototype interpreter through quite a few libraries, and none had problems like this.
I am talking about real-world examples. Sure, I can easily code an approximation of pi that gets out of hand quickly, but doing this in Python would be just wrong unless you are writing a textbook.
Cheers
Martin
Due to company rules, I can't publish the actual code, but the input data is all integers, and I am building up a curve fit where I sum thousands of distinct values, some of which are ratios of the data. If that division became a fraction, then because the denominators all have different values, the resulting sum would become a very big number (basically involving the least common multiple of hundreds of numbers in the range of thousands).
As floats, this summation is quick. If done with exact fractional math, I suspect it slows down by several orders of magnitude, going from sub-second to maybe an hour. That would be totally unacceptable.

Yes, the fix would likely be changing one line of the program to force the results to be floats, but the program is on the factory floor and just working. If IT at some point updates the version of Python in use because a security release is needed, programs like this suddenly stop running as they grind to a halt.
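Richard's scenario can be sketched concretely. The data below is made up; only the shape (many integer ratios with distinct denominators summed into one total) matches his description:

```python
# A made-up stand-in for the curve-fit input described above: many
# integer ratios whose denominators differ, summed into one total.
import time
from fractions import Fraction

data = [(n, n + 7) for n in range(1, 501)]  # hypothetical (numerator, denominator) pairs

t0 = time.perf_counter()
float_sum = sum(a / b for a, b in data)
t_float = time.perf_counter() - t0

t0 = time.perf_counter()
frac_sum = sum(Fraction(a, b) for a, b in data)
t_frac = time.perf_counter() - t0

# The float total is one machine word; the exact total drags along a
# common denominator built from hundreds of distinct factors.
print(f"float: {t_float:.6f}s  Fraction: {t_frac:.6f}s")
print("digits in exact denominator:", len(str(frac_sum.denominator)))
```

The two totals agree to float precision, but the exact one carries a denominator hundreds of digits long; scale the range up and the gap predicted above opens quickly.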
On Fri, May 14, 2021 at 04:58:08PM -0000, Martin Teichmann wrote:
Could I please see an example? This is a real question, by now I ran my prototyped interpreter through quite some libraries, and none made problems like this.
Any script or application that does calculations and formats them for display to the user, using print() or equivalent string formatting such as `%s` etc., will suddenly start producing output like this:
Fraction(3135227393067235, 17592186044416)
instead of the expected output:
178.217043928
Any library that uses doctests will also suffer similar issues.
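For the record, exactly which surprising form you get depends on whether str() or repr() is used; a quick check with the value quoted above:

```python
from fractions import Fraction

x = Fraction(3135227393067235, 17592186044416)  # the value quoted above

print(repr(x))   # Fraction(3135227393067235, 17592186044416), as at the REPL
print(str(x))    # 3135227393067235/17592186044416, what print() shows
print(float(x))  # back to the familiar float output (approx. 178.217043928)
```

Either way, every display path that previously showed a float needs an explicit float() call to restore the old output.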
Real world examples I am talking about. Sure, I can easily code an approximation for pi which goes out of hand quickly, but doing this in Python would be just wrong unless you are writing a text book.
An approximation to pi accurate to 15 decimal places is just Fraction(884279719003555, 281474976710656), which calculates almost instantly and takes only 48 bytes:
>>> sys.getsizeof(Fraction(math.pi))
48
plus another 64 bytes for the numerator and denominator ints themselves.
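These figures are easy to check; the conversion from float to Fraction is exact and round-trips (the getsizeof number is a CPython implementation detail, not a guarantee):

```python
import math
import sys
from fractions import Fraction

pi = Fraction(math.pi)  # exact: every binary float is a ratio of two integers

print(pi)                    # 884279719003555/281474976710656
print(float(pi) == math.pi)  # True, the conversion loses nothing
print(sys.getsizeof(pi))     # 48 on a typical 64-bit CPython (implementation detail)
```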
14.05.21 15:12, Martin Teichmann пише:
when dividing two integers, the result is a float, which means we immediately lose precision. This is not good if you want to use code which supports higher precision. Decimals come to mind, but also sympy. This loss of precision could be avoided if the result of a division is a fraction instead: a fraction is exact.
Please read http://python-history.blogspot.com/2009/03/problem-with-integer-division.htm... .
In short, it was a feature of Python's predecessor, ABC. But it turned out to be not as good as it seemed at first glance. Fixing this mistake was one of the reasons for creating Python.
I should probably explain (again) why I am not a fan of such a change. I blogged about this before -- this is mostly a treatise about / vs. //, but it explains my reservations about this proposal as well: http://python-history.blogspot.com/2009/03/problem-with-integer-division.htm...
In particular:
""" For example, in ABC, when you divided two integers, the result was an exact rational number representing the result. [...]
In my experience, rational numbers didn't pan out as ABC's designers had hoped. A typical experience would be to write a simple program for some business application (say, doing one’s taxes), and find that it was running much slower than expected. After some debugging, the cause would be that internally the program was using rational numbers with thousands of digits of precision to represent values that would be truncated to two or three digits of precision upon printing. This could be easily fixed by starting an addition with an inexact zero, but this was often non-intuitive and hard to debug for beginners. """
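The ABC behaviour described in that quote is easy to reproduce with today's fractions module; a small sketch of both the problem and the "inexact zero" fix:

```python
from fractions import Fraction

# Accumulate many exact ratios, as ABC did internally.
total = Fraction(0)
for n in range(1, 51):
    total += Fraction(1, n)

# Printing truncates to a handful of digits, but the exact value has
# been dragging around an enormous denominator the whole time.
print(f"{float(total):.3f}")                  # roughly 4.499
print(len(str(total.denominator)), "digit denominator")

# The "inexact zero" fix: seed the sum with a float, and every term
# collapses to float on addition.
total_f = 0.0
for n in range(1, 51):
    total_f += Fraction(1, n)
print(type(total_f).__name__)                 # float
```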
I should probably explain (again) why I am not a fan of such a change.
We have read your blog, Guido :-) Yet this "feature" is one of Python's top misfeatures, e.g. for Fernando Perez. I share his opinion too.

The numbers module borrowed from the Scheme numeric tower, yet it doesn't use the concept of "exactness". (Perhaps that is one of the reasons why the numbers module is not very useful outside of the stdlib, see https://bugs.python.org/issue43602.) The conversion from exact (a known algebraic structure) to inexact (like floating-point numbers) should be explicit, not implicit as / does now.

Of course, I realize that changing / (again!) will be painful. Yet it's possible: Python is a language that can fix design flaws.

If this is too costly, what do you think about a special literal for Fractions, e.g. the 1.2F == Fraction(12, 10) suggested above? An R suffix might be an alternative.
After some debugging, the cause would be that internally the program was using rational numbers with thousands of digits of precision to represent values that would be truncated to two or three digits of precision upon printing.
This seems to be an error by the programmer, not by the language designers. Use the correct data types, etc. Now we even have the Decimal class in the stdlib...
On 5/14/21 5:12 AM, Martin Teichmann wrote:
In order to showcase how that would look like, let me give an example session:
>>> 5/6-4/15
17/30
>>> a=22/7
>>> f"{a}"
'22/7'
>>> f"{a:f}"
'3.142857'
>>> from decimal import Decimal
>>> Decimal(1/3)
Decimal('0.3333333333333333333333333333')
This looks very interesting! I see some confusion on all sides on what, exactly, you are proposing. As best as I can figure, the rules for your conversions are something like (where .div. is division and .op. is any other operation):
1) int .div. int --> fraction
2) int .op. fraction --> fraction
3) float .op. fraction --> float
What I am not sure of:
4) fraction .op. fraction --> ???
5) fraction .op. non-fraction --> ???
Am I correct on 1-3? What are you proposing as regards 4 and 5?
-- ~Ethan~
On 5/14/2021 5:29 PM, Ethan Furman wrote:
On 5/14/21 5:12 AM, Martin Teichmann wrote:
In order to showcase how that would look like, let me give an
example session:
>>> 5/6-4/15
17/30
>>> a=22/7
>>> f"{a}"
'22/7'
>>> f"{a:f}"
'3.142857'
>>> from decimal import Decimal
>>> Decimal(1/3)
Decimal('0.3333333333333333333333333333')
This looks very interesting! I see some confusion on all sides on what, exactly, you are proposing. As best as I can figure, the rules for your conversions are something like (where .div. is division and .op. is any other operation):
1) int .div. int --> fraction
2) int .op. fraction --> fraction
3) float .op. fraction --> float
What I am not sure of:
4) fraction .op. fraction --> ???
5) fraction .op. non-fraction --> ???
Am I correct on 1-3? What are you proposing as regards 4 and 5?
My understanding of the proposal is that OP is only talking about <literal-integer> / <literal-integer> becomes a Fraction. So:
x = 1
x/2  # unchanged, still yields a float.
It's only literals like "1/2" that would become Fraction(1,2).
Eric
On 5/14/21 2:34 PM, Eric V. Smith wrote:
My understanding of the proposal is that OP is only talking about <literal-integer> / <literal-integer> becomes a Fraction. So:
x = 1
x/2  # unchanged, still yields a float.
It's only literals like "1/2" that would become Fraction(1,2).
Ah -- which means we end up with fractions and fraction math which results in memory and time issues, since most fraction/fraction operations return, unsurprisingly, fractions.
-- ~Ethan~
On 14/05/2021 22:34, Eric V. Smith wrote:
My understanding of the proposal is that OP is only talking about <literal-integer> / <literal-integer> becomes a Fraction. So:
x = 1
x/2  # unchanged, still yields a float.
It's only literals like "1/2" that would become Fraction(1,2).
This would appear to limit the usefulness of the proposal. If you actually wanted x/y to yield a fraction when x and y were integers, you would need an explicit syntax such as Fraction(x,y) or F(x,y) anyway. So what's the big deal about having to write Fraction(1,2) or F(1,2)?

Rob Cliffe
Rob Cliffe via Python-ideas writes:
So what's the big deal about having to write Fraction(1,2) or F(1,2) ?
Writing that is never a big deal. Forgetting to write that when you need to incurs immediate loss of precision, which is a big deal in applications where it matters at all.
Knuth (Seminumerical Algorithms) discusses "slash" arithmetic (including floating slash), i.e., fixed-width fractional arithmetic (floating slash assigns a varying division of bits to numerator and denominator to get the closest representable number). I wonder if a slash type which converts to float on need for rounding, or on under/overflow of numerator or denominator, might be workable.
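A rough way to play with that idea using what's already in the stdlib: Fraction.limit_denominator rounds to the nearest fraction within a denominator budget, which is the "rounding on demand" half of a slash type. The BUDGET value below is an arbitrary stand-in for a fixed bit width:

```python
import math
from fractions import Fraction

BUDGET = 10**6  # arbitrary denominator budget standing in for a fixed width

def slash(x: Fraction) -> Fraction:
    """Round x to the closest fraction whose denominator fits the budget."""
    return x.limit_denominator(BUDGET)

print(slash(Fraction(math.pi)))
print(Fraction(math.pi).limit_denominator(1000))  # 355/113, the classic approximation
```

A real slash type would apply this rounding after every arithmetic operation; this sketch only shows the rounding step itself.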
I'm -1 on this change. Don't get me wrong, I'd love having this change in Python. *But* we use float, not decimal.Decimal, right? Why not? Because of memory and precision. Decimal takes more memory than float, and 0.33333333333333 (float) is accurate enough and much easier to deal with than 0.33333333121211211200134333434343 (Decimal). I believe the same reasoning applies to fractions.Fraction. It's available to users if they don't want to lose precision, but including it as a built-in is not, I think, a good idea. It would also be a change that concerns non-PEP-reading users, or simply beginners, since all previous versions of Python displayed floats and now it would display Fractions! Some code may be hoping to find type() == float and to its surprise it's not that!
Thanking you,
With Regards, Shreyan Avigyan
On Mon, May 17, 2021 at 5:39 PM Shreyan Avigyan pythonshreyan09@gmail.com wrote:
I'm -1 on this change. Don't get me wrong, I'd love having this change in Python. *But* we use float, not decimal.Decimal, right? Why not? Because of memory and precision.
That argument only takes you so far. For instance, Python uses bignum integers even though most programs would be fine with 32-bit signed ints, because it's better to be correct than to save memory.
Decimal takes more memory than float and also 0.33333333333333 (float) is accurate and very easy to deal with rather than 0.33333333121211211200134333434343 (Decimal).
Not sure I understand your point here. Generally, a Decimal is closer to what a human expects, because a float has to be representable in binary, while converting a Decimal to decimal digits is lossless.
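The point is easiest to see at the constructor: a Decimal built from a float faithfully records the underlying binary value, digits and all, while one built from a string records what the human wrote:

```python
from decimal import Decimal

# From a float: a lossless conversion of the stored binary value,
# which is why all the "extra" digits appear.
print(Decimal(0.1))
# Decimal('0.1000000000000000055511151231257827021181583404541015625')

# From a string: exactly the decimal a human wrote down.
print(Decimal("0.1"))  # Decimal('0.1')
```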
I believe the same reason applies to fractions.Fraction. It's available to users if they don't want to lose precision. But including it as a built-in I don't think is a good idea. And it will also be a "non-pep-reading-users-concerning-change" or simply "beginners concerning change" since all previous versions of Python displayed floats and now it's displaying Fractions! Some code may be hoping to find type() == float and to its surprise it's not that!
I agree that the division operator should not change. But none of the rest of your statement is an argument against Fraction literals.
ChrisA
I actually think the biggest argument against this idea is exactly the same as why it’s not a major breaking change:
Python assumes, and converts to, floats all over the place. So users need to understand and accommodate the limitations of floats anyway. Having exact fractions in seemingly arbitrary places will not result in more accurate (or precise) results in most cases, but would result in more confusion and more difficult error analysis.
-CHB
On Mon, May 17, 2021 at 12:56 AM Shreyan Avigyan pythonshreyan09@gmail.com wrote:
Chris:
But none of the rest of your statement is an argument against Fraction
literals.
If we have fractions.Fraction then we must have decimal.Decimal. We don't always judge by accuracy; there are other factors in motion that have to be considered.
Christopher Baker:
Python assumes, and converts to, floats all over the place. So users need to understand and accommodate the limitations of floats anyway. Having exact fractions in seemingly arbitrary places will not result in more accurate (or precise) results in most cases, but would result in more confusion and more difficult error analysis.
Exactly. That's what I have been trying to phrase.
Christopher Barker writes:
Python assumes, and converts to, floats all over the place. So users need to understand and accommodate the limitations of floats anyway. Having exact fractions in seemingly arbitrary places will not result in more accurate (or precise) results in most cases, but would result in more confusion
I don't see how that's a big deal. It's not arbitrary, and it happens only in one kind of context, which has to be explicit in the program (with the exception of eval, I guess?) I don't think it should be hard to learn that this applies only to parsing literal fractions, not to integer division in general. That it doesn't apply to integer division other than literals is certainly an inconsistency, but not that great.
The main problem creating confusion would be in cases where Fractions propagate, taking unexpected amounts of time and space in an extended calculation. I don't think we have a good idea how often that would happen, or how much pain it would cause. But I can easily see it happening when evaluating power series and that kind of thing. It seems far less likely, although I suppose possible, in statistical computations (which are going to use something like numpy) or financial computations (where fractional constants are normally expressed with decimal notation or integer percentages, not fraction literals).
and more difficult error analysis.
I don't understand what you mean by "error analysis", unless you're referring to performance degradation due to Fraction propagation.
Aside: I think the real weakness of the proposal is that what symbolic math "really" wants is for algebraic expressions to be returned to the program as syntax trees. The current situation where symbolic math libraries for Python create Symbol objects with full suites of arithmetic dunders that create expression trees instead of doing arithmetic is clunky, but it's almost good enough. What would be a much bigger win for me would be a package where I don't have to declare Symbols in advance, and that's exactly where the "stop at syntax trees please" mode would come in.
Regards,
On Tue, 18 May 2021 at 11:41, Stephen J. Turnbull turnbull.stephen.fw@u.tsukuba.ac.jp wrote:
Christopher Barker writes:
Python assumes, and converts to, floats all over the place. So users need to understand and accommodate the limitations of floats anyway. Having exact fractions in seemingly arbitrary places will not result in more accurate (or precise) results in most cases, but would result in more confusion
When you want to use Fraction you need to decide explicitly that that is what you are doing, because you want arithmetic without rounding errors. You also need to understand that this limits what you can do, because e.g. math.sin cannot give a rational number for (most) rational input. The same is true of writing code that should work only with integers and the distinction between a // b and a / b: you need a // b to divide ints exactly, but you definitely shouldn't use it to divide "integer-valued" floats unless you are very careful, because a // b is extremely sensitive to rounding errors.
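A concrete instance of that sensitivity, using only builtins:

```python
from fractions import Fraction

# 0.01 has no exact binary representation; the stored value is slightly
# *above* 1/100, so 1 divided by it falls just short of 100 and the
# floor drops a whole step.
print(1 // 0.01)       # 99.0, not 100.0
print(Fraction(0.01))  # the exact stored value, a hair above 1/100

# With true integers (or exact fractions) floor division is reliable:
print(100 // 1)                         # 100
print(Fraction(1) // Fraction(1, 100))  # 100
```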
Likewise maybe you want to use real numbers or maybe complex numbers: should you use math.sqrt or cmath.sqrt? Or maybe numpy.sqrt or sympy.sqrt? I see a lot of novices get confused by these things. Some languages like matlab or Julia do a better job of integrating different types so that it feels more seamless to the user. Apart from very basic operations though nothing really obviates the need for the user to have some understanding about whether they are doing exact vs inexact calculations or integer vs float vs fraction vs multiprecision vs symbolics etc.
and more difficult error analysis.
I don't understand what you mean by "error analysis", unless you're referring to performance degradation due to Fraction propagation.
The error analysis for arithmetic with Fraction is much easier than for float:
If you didn't get a ZeroDivisionError then the result is exact and the error is zero.
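Concretely, the float identities that fail hold exactly for Fraction:

```python
from fractions import Fraction

# Binary floats cannot represent 1/10 or 1/5 exactly, so rounding
# errors surface immediately:
print(0.1 + 0.2 == 0.3)                                      # False

# With Fraction every intermediate result is exact, so equality is exact:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
print(Fraction(1, 3) * 3 == 1)                               # True, no error to analyse
```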
Aside: I think the real weakness of the proposal is that what symbolic math "really" wants is for algebraic expressions to be returned to the program as syntax trees. The current situation where symbolic math libraries for Python create Symbol objects with full suites of arithmetic dunders that create expression trees instead of doing arithmetic is clunky, but it's almost good enough. What would be a much bigger win for me would be a package where I don't have to declare Symbols in advance, and that's exactly where the "stop at syntax trees please" mode would come in.
I think declaring the symbols is fine. It's better to be explicit with these things. You need to be able to do that so that you can say what kind of thing the symbol represents anyway e.g.:
x = Symbol('x', positive=True)
What I would like though is to eliminate the repetition in something like this:
x, t, Ck, Cr, Cl = symbols('x, t, Cr, Ck, Cl')
I've seen people write scripts using sympy that declare anything up to a hundred symbols like this at the top. It's very easy for a bug to creep in e.g. because Ck and Cr are the wrong way round and I do see people getting bitten by this: https://github.com/sympy/sympy/issues/21368
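One way to remove that swap hazard with no language change is to write each name exactly once and look symbols up by name. The Symbol class below is a hypothetical stand-in so the sketch doesn't depend on sympy; with sympy you would build the same dict from sympy.Symbol instead:

```python
# Hypothetical sketch: a sympy-free stand-in for Symbol, to show the idea.
class Symbol:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

def symbols(names):
    """Return a name -> Symbol mapping; each name is written exactly once."""
    return {n: Symbol(n) for n in names.replace(",", " ").split()}

s = symbols("x, t, Ck, Cr, Cl")
# No left-hand tuple that can get out of sync with the string:
print(s["Ck"], s["Cr"])  # Ck Cr
```

The cost is the s["..."] noise at every use site, which is exactly what the @syms-style proposals below try to avoid.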
The Julia bindings for sympy and also Julia's new Symbolics.jl both have a syms macro so that you can do:
julia> using Symbolics
julia> @syms x y
(x, y)

julia> e = (x + y)^2
(x + y)^2
SageMath has something similar but you write var('x, y, z') and sympy has the same but it is discouraged because it uses global-injection which is problematic. Also var puts the variable names in strings which then makes it seem magic that they become variables in scope. The Julia way is nicer because x and y at least look like local variables in the statement.
It would be great if Python could have a way of doing this as well. We already have
@deco
def func():
    pass
which is a way of avoiding this repetition:
def func():
    pass
func = deco(func)
If there was some way to make @syms x, y translate to x, y = syms('x, y') or something like that then that would be great. Maybe that doesn't have broad enough use for Python the language but I would certainly add something like that if I was providing a SymPy UI based on a modified form of Python (e.g. like SageMath).
-- Oscar
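For what it's worth, the injection that var() performs takes only a few lines. This sketch (again with a hypothetical Symbol stand-in, not sympy's) shows both how little machinery an @syms-style feature would need and why the frame magic feels questionable:

```python
import sys

class Symbol:  # hypothetical stand-in, not sympy.Symbol
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

def var(names):
    """Inject Symbols into the caller's global namespace (module level only).

    This reproduces the global-injection trick mentioned above for
    SageMath-style var(); it is deliberately magic and only a sketch.
    """
    caller_globals = sys._getframe(1).f_globals
    made = tuple(Symbol(n) for n in names.replace(",", " ").split())
    for sym in made:
        caller_globals[sym.name] = sym
    return made

var("x, y")
print(x, y)  # x y
```

It only works at module scope (function locals cannot be injected this way), which is one reason a real @syms would want language support rather than frame tricks.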
On Tue, May 18, 2021 at 6:28 AM Oscar Benjamin oscar.j.benjamin@gmail.com wrote:
and more difficult error analysis.
I don't understand what you mean by "error analysis", unless you're referring to performance degradation due to Fraction propagation.
The error analysis for arithmetic with Fraction is much easier than for float:
sure -- but the error analysis is harder for computations that use a mixture of Fraction and float, and it's not obvious which is used where.
If someone was proposing Fractions everywhere -- there would be massive performance and backward compatibility issues -- but yes, Error Analysis would be a lot easier :-)
Anyway, it looks like this was always about (or was transformed into) an idea to better support symbolic math, which is in a new thread now.
-CHB
Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython
Christopher Barker writes:
sure -- but the error analysis is harder for computations that use a mixture of Fraction and float, and it's not obvious where which is used.
I don't understand what the problem is. Fractions are just a field of computer numbers where all the computations give exact results. Same with ints. The error analysis is harder if you want to take advantage of exact rational arithmetic for Fraction-Fraction operations, but the same would be true for ints, or many powers-of-two operations for IEEE binary floats, etc. Do folks actually compute those tighter bounds where the information is available, e.g. if the data happen to be ints?
Anyway, it looks like this was always about (or was transformed into) an idea to better support symbolic math, which is in a new thread now.
"Always was."
If there was some way to make @syms x, y translate to x, y = syms('x, y') or something like that then that would be great. Maybe that doesn't have broad enough use for Python the language but I would certainly add something like that if I was providing a SymPy UI based on a modified form of Python (e.g. like SageMath).
This actually seems like it would be very helpful to use for some factory functions as well. If @ could be made to work on variable assignment similar to how it does on function definition then the following would be possible and make code a bit more DRY:
# currently:
LunchBox = namedtuple("LunchBox", "spam eggs")

# proposed:
@namedtuple("spam", "eggs")
LunchBox

LunchBox(eggs=2, spam=4)  # LunchBox(spam=4, eggs=2)
# currently:
T = TypeVar("T")

# proposed:
@TypeVar
T
The variable decorator would be in full control, just like function decorators so I'm sure some even nicer abstractions could be created. Perhaps instead of turning UserID = NewType("UserID", int) into @NewType(int) UserID it could be @NewType UserID: int