Hi list,

A few days ago I posted about my idea of letting integer division result in fractions, not floats. The upshot of the discussion was that it is a pity that we do not have literals for fractions, but that there is nothing to be done about it, as all the proposed syntaxes were a bit ugly. But why do we need different syntax for the two at all? Isn't it possible to simply do both at the same time? The interesting answer is: yes.

So my proposal is: number literals (which are not integers) are both fractions and floats at the same time. Only when we start calculating with them does the receiving function pick whichever it prefers. For backwards compatibility the default is float, but you may write code that looks at the fraction as well. I prototyped that here:

https://github.com/tecki/cpython/tree/ratiofloat

The idea is the following: when the parser (technically, the AST optimizer) creates the objects that represent the literals, let it add some bread crumbs as to where those data came from. Currently, 1/2 just means the float 0.5. Instead, let it be an object of a new class, which I dubbed ratiofloat. It inherits from float but additionally stores the exact value: ratiofloat just adds two C ints to the float class, making it a bit bigger. But as I said, this happens only for literals; calculated floats still have the same size as before.

To give an example (this is not fake, but from the prototype):

>>> 2/5
0.4
>>> (2/5).denominator
5
>>> isinstance(2/5, float)
True
>>> type(2/5)
<class 'ratiofloat'>

Note that this is only done at compile time; no such behavior happens at run time, where everything just behaves like normal floats:

>>> two = 2
>>> five = 5
>>> two/five
0.4
>>> (two/five).numerator
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'float' object has no attribute 'numerator'

I have tested this, and all tests pass except those which explicitly check that a value is a float and not one of its subclasses.
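Since the prototype does this at the C level, here is a rough pure-Python sketch of what a ratiofloat-like class amounts to. This is my own illustration, not the actual C implementation; only the name ratiofloat and the numerator/denominator behavior are taken from the prototype:

```python
from fractions import Fraction

class ratiofloat(float):
    """A float that also remembers the exact ratio the literal
    expression came from (pure-Python sketch of the prototype)."""

    def __new__(cls, numerator, denominator):
        # Normalize the ratio, e.g. 4/10 -> 2/5, as Fraction does.
        frac = Fraction(numerator, denominator)
        self = super().__new__(cls, frac.numerator / frac.denominator)
        self._numerator = frac.numerator
        self._denominator = frac.denominator
        return self

    @property
    def numerator(self):
        return self._numerator

    @property
    def denominator(self):
        return self._denominator

x = ratiofloat(2, 5)
print(x)                     # 0.4
print(x.denominator)         # 5
print(isinstance(x, float))  # True
```

The key point the sketch shows: ordinary float code keeps working unchanged, while code that cares can ask for the exact numerator and denominator.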
I also ran it through numpy and sympy, and both behave mostly fine, except again where the tests check whether something is exactly a float. All of this works not only for integer division, but also for float literals:

>>> a = 1/3 + 0.1
>>> a
0.43333333333333335
>>> a.numerator
13
>>> a.denominator
30

All this only gets interesting once you teach some classes about it. To give an example:

>>> from decimal import Decimal
>>> Decimal(1/3)
Decimal('0.3333333333333333333333333333')
>>> Decimal(0.1)
Decimal('0.1')
>>> from fractions import Fraction
>>> Fraction(1/3)
Fraction(1, 3)
>>> Fraction(0.1)
Fraction(1, 10)

I also tried to teach sympy about this. While I succeeded in general, many tests failed, and for an interesting reason: the sympy tests seem to assume that if you use a float, you are telling sympy to calculate numerically. So for the sympy tests, 0.1*x and x/10 are two completely different things. IMHO this is actually an abuse of the feature: 0.1 simply is the same as one tenth, and code should at least try to treat it that way, even if it fails at that. Other than that, sympy works fine (after I taught it):

>>> from sympy import symbols
>>> x = symbols("x")
>>> 1.5*x
3*x/2
>>> x**(0.5)
sqrt(x)

I think this is now good enough to be wrapped up in a PEP. Chris, can you guide me through the bureaucracy? How would we go forward with this?

The good news is that everything I described happens at compile time, so we can use the good ol' "from __future__ import" approach. My suggestion is: implement it for 3.11, but activate it only via a future import or a command-line interpreter option. This gives libraries time to adapt to the new style. For 3.12, make that option the command-line default, so we can tell people to just switch it off if it doesn't work. And in some far, glorious future, when everybody has a from __future__ line in all of their files, we can make it the default everywhere.

Cheers

Martin
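P.S. For library authors wondering what "teaching a class about it" could look like in practice, here is a hedged sketch. The helper name exact_fraction is my own invention, not part of the prototype; the idea is simply to prefer exact numerator/denominator attributes via duck typing when they exist, and fall back to the float's binary expansion otherwise. In today's stock CPython, plain floats have no numerator attribute, so the fallback path is always taken for them:

```python
from fractions import Fraction

def exact_fraction(x):
    """Return the exact rational value of x, preferring exact
    numerator/denominator attributes (ints and Fractions have them
    today; a ratiofloat literal would too)."""
    num = getattr(x, "numerator", None)
    den = getattr(x, "denominator", None)
    if num is not None and den is not None:
        return Fraction(num, den)
    # Fallback: the exact binary value of the float. For 1/3 this
    # yields a huge power-of-two denominator, not Fraction(1, 3).
    return Fraction(x)

print(exact_fraction(3))
print(exact_fraction(Fraction(1, 3)))
print(exact_fraction(1/3))  # binary approximation in stock CPython
```

With the prototype in place, the same helper would return Fraction(1, 3) for the literal expression 1/3, with no change to the library code: that is the whole appeal of making the exact value available through the existing numerator/denominator protocol.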