On Fri, 14 May 2021 at 09:24, Martin Teichmann <martin.teichmann@gmail.com> wrote:
Hi list,

When dividing two integers, the result is a float, which means we immediately lose precision. This is not good if you want to use code that supports higher precision: Decimals come to mind, but also sympy. This loss of precision could be avoided if the result of a division were a fraction instead: a fraction is exact.
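
For context, fractions.Fraction already provides this exactness today
when it is constructed explicitly; a minimal illustration, independent
of the proposal:

    >>> from fractions import Fraction
    >>> 0.1 + 0.2 == 0.3          # binary floats round
    False
    >>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
    True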

"...Although practicality beats purity."

This is a nice idea, but I think this ship has sailed.
Performance reasons aside, I think that most builtin (as in compiled from C
or other native code) calls that expect a float simply won't work
with a Fraction object as it exists in Python today.
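
One concrete illustration of that breakage (my example, not from the
prototype): the json module, whose encoder is C-accelerated, serializes
floats fine but has no idea what to do with a Fraction:

    >>> import json
    >>> from fractions import Fraction
    >>> json.dumps(1 / 3)              # fine today: a float is a JSON number
    '0.3333333333333333'
    >>> json.dumps(Fraction(1, 3))     # the path a plain 1/3 would hit
    Traceback (most recent call last):
      ...
    TypeError: Object of type Fraction is not JSON serializable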

Yes, were it not for the drawbacks, this could be nice - but it spells
an avalanche of trouble, much of it due to the fact that the Fraction type,
and the concepts of the numeric tower in general, have not seen much active
use in many of the domains where Python is used.

If you take the examples on the static-typing front, they all
check for "float" when requiring non-integer numbers
(instead of, say, "numbers.Real") - so almost all type-annotated code
would break with this change.
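
A hypothetical sketch of the kind of annotated code that would start
failing (the function and names are mine, not from any real codebase):

    from fractions import Fraction

    def scale(value: float, factor: float) -> float:
        # Annotated with "float", as almost all real-world code is,
        # rather than the more permissive numbers.Real.
        return value * factor

    # Under the proposal, 1/3 would be a Fraction, so a checker such
    # as mypy would reject this call: Fraction is not a subtype of
    # float and gets no special promotion the way int does.
    scale(1 / 3, 2.0)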

So when writing Decimal(1/3), currently we lose precision in the division, something the Decimal module cannot undo. With my proposal, the entire precision is retained, and it works as expected. This is even clearer for sympy, a package for symbolic calculations: currently, sympy cannot do much with "1/2 * m * v**2", although it looks like a perfectly fine formula, because sympy only sees "0.5" instead of "1/2", which is not usable in symbolic calculations.
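
For what it's worth, sympy already copes well when handed an exact
rational explicitly; under the proposal, the literal 1/2 would take this
path automatically. A small sketch (assuming sympy's usual conversion of
fractions.Fraction to its Rational type):

    >>> import sympy
    >>> from fractions import Fraction
    >>> m, v = sympy.symbols("m v")
    >>> 0.5 * m * v**2               # the float taints the whole expression
    0.5*m*v**2
    >>> Fraction(1, 2) * m * v**2    # converted to an exact Rational
    m*v**2/2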

I am aware that this would be a huge change. But we have had such a change in the past, when integer division went from floor division in Python 2 to true (float) division in Python 3. Compared to that, this is actually a small change: the value of the result differs only by the small precision loss of floats. The bigger problem is that some code may rely on a value being a float. This can be fixed easily by calling float(), which is also backwards compatible: it works on older versions of Python as well.
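
The float() fix is indeed cheap to write down; a minimal sketch of what
porting would look like:

    >>> from fractions import Fraction
    >>> a = Fraction(22, 7)   # what 22/7 would produce under the proposal
    >>> float(a)              # explicit conversion restores today's behaviour
    3.142857142857143
    >>> # float() is a no-op on an existing float, so the same call runs
    >>> # unchanged on current Python, where 22/7 is already a float.
    >>> float(22 / 7)
    3.142857142857143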

I have prototyped this here: https://github.com/tecki/cpython/tree/int-divide-fraction
The prototype uses the fractions.Fraction class, written in Python, as the result of integer true division. I expected that to go horribly wrong, but astonishingly it did not. Only a small number of Python's own tests fail, mostly those that explicitly test whether an object is a float. So I raised the bar even further and compiled and tested numpy. There too, except for some tests that very explicitly require floats, everything worked fine.

To showcase how that would look, let me give an example session:

    >>> 5/6-4/15
    17/30
    >>> a=22/7
    >>> f"{a}"
    '22/7'
    >>> f"{a:f}"
    '3.142857'
    >>> from decimal import Decimal
    >>> Decimal(1/3)
    Decimal('0.3333333333333333333333333333')

As a comparison, the same with current Python:

    >>> 5/6-4/15
    0.5666666666666667
    >>> a=22/7
    >>> f"{a}"
    '3.142857142857143'
    >>> f"{a:f}"
    '3.142857'
    >>> from decimal import Decimal
    >>> Decimal(1/3)
    Decimal('0.333333333333333314829616256247390992939472198486328125')

Cheers

Martin