Fractions vs. floats - let's have our cake and eat it too

Hi List,

Some days ago I posted about my idea to let integer division result in fractions, not floats. The upshot of the discussion was that it is a pity that we do not have literals for fractions, but that there is nothing to be done about it, as all proposed syntaxes were a bit ugly. But why do we need different syntax for the two? Isn't it possible to simply do both at the same time? The interesting answer is: yes. So my proposal is: number literals (which are not integers) are both fractions and floats at the same time - only when we start calculating with them does the receiving function pick whatever it prefers. For backwards compatibility the default is float - but you may write code that looks at the fraction as well. I prototyped that here: https://github.com/tecki/cpython/tree/ratiofloat

The idea is the following: when the parser (technically, the AST optimizer) creates the objects that represent the literals, let it add some bread crumbs as to where those data came from. Currently, 1/2 just means the float 0.5. Instead, let it be an object of a new class, which I dubbed ratiofloat, that inherits from float but has the exact value added to it as well. ratiofloat just adds two C ints to the float class, making it a bit bigger. But as I said, only for literals; calculated floats still have the same size as before. To give an example (this is not fake, but from the prototype):

    >>> 2/5
    0.4
    >>> (2/5).denominator
    5
    >>> isinstance(2/5, float)
    True
    >>> type(2/5)
    <class 'ratiofloat'>

Note that this is only done at compile time; no such behavior happens at run time, everything just behaves like normal floats:

    >>> two = 2
    >>> five = 5
    >>> two/five
    0.4
    >>> (two/five).numerator
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'float' object has no attribute 'numerator'

I have tested this, and all tests pass except those that explicitly check whether a value is a float and not one of its subclasses.
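For readers who want to play with the semantics without building the branch, the observable behaviour above can be approximated in pure Python. This is only an illustrative sketch of mine, not the prototype's implementation: the actual prototype stores two C ints inside the float object rather than Python attributes.

```python
from math import gcd

# Pure-Python approximation of the prototype's ratiofloat: a float
# subclass that remembers the exact ratio it was written as.
class ratiofloat(float):
    def __new__(cls, numerator, denominator):
        self = super().__new__(cls, numerator / denominator)
        g = gcd(numerator, denominator)
        self._num = numerator // g
        self._den = denominator // g
        return self

    @property
    def numerator(self):
        return self._num

    @property
    def denominator(self):
        return self._den

x = ratiofloat(2, 5)
print(x)                      # 0.4
print(x.denominator)          # 5
print(isinstance(x, float))   # True
```

Unlike the prototype, this sketch cannot make the literal `2/5` produce such an object; only the AST optimizer can do that.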
I also ran this through numpy and sympy, and both behave mostly fine, except again where the tests check whether something is actually a float. All of this works not only for integer division, but also for float literals:

    >>> a = 1/3 + 0.1
    >>> a
    0.43333333333333335
    >>> a.numerator
    13
    >>> a.denominator
    30

All this only becomes interesting once you teach some classes about it. To give an example here:

    >>> from decimal import Decimal
    >>> Decimal(1/3)
    Decimal('0.3333333333333333333333333333')
    >>> Decimal(0.1)
    Decimal('0.1')
    >>> from fractions import Fraction
    >>> Fraction(1/3)
    Fraction(1, 3)
    >>> Fraction(0.1)
    Fraction(1, 10)

I also tried to teach sympy about this. While I succeeded in general, many tests failed, and for an interesting reason: the sympy tests seem to assume that if you use a float, you want to tell sympy to calculate numerically. So for the sympy tests, 0.1*x and x/10 are completely different things. IMHO this is actually an abuse of the feature: 0.1 simply is the same as one tenth, and code should at least try to treat it that way, even if it fails at that. Other than that, sympy works fine (after I taught it):

    >>> from sympy import symbols
    >>> x = symbols("x")
    >>> 1.5*x
    3*x/2
    >>> x**(0.5)
    sqrt(x)

I think this is now all good enough to be wrapped in a PEP. Chris, can you guide me through the bureaucracy? How would we go forward with this?

The good news is that everything I described happens at compile time, so we can use a good ole' "from __future__ import" approach. So my suggestion is: implement it for 3.11, but activate it only with a future import or an interpreter option on the command line. This gives libraries time to adapt to this new style. For 3.12, make this option the default for the command line, so we can tell people to just switch it off if something doesn't work. And in some far, glorious future, when everybody has a from __future__ line in all of their files, we can make it the default everywhere.

Cheers

Martin
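To make the "receiving function picks whatever it prefers" idea concrete, here is a hedged sketch (my own illustration, not code from the prototype) of how a consumer could prefer an attached exact ratio when one is present and fall back to the float's binary value otherwise:

```python
from fractions import Fraction

def to_exact(x):
    # Prefer an attached exact ratio (as a hypothetical ratiofloat
    # would carry); otherwise fall back to the float's own binary
    # integer ratio.
    num = getattr(x, "numerator", None)
    den = getattr(x, "denominator", None)
    if num is not None and den is not None:
        return Fraction(int(num), int(den))
    return Fraction(*x.as_integer_ratio())

print(to_exact(0.5))   # 1/2 -- exact even today
print(to_exact(0.1))   # the full binary ratio, absent a ratiofloat
```

Note that ints already expose numerator/denominator, so they take the exact path unchanged; a plain float 0.1 yields its binary ratio, not 1/10.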

On Tue, May 18, 2021 at 5:40 PM Martin Teichmann <martin.teichmann@gmail.com> wrote:
I think this is now all good enough to be wrapped in a PEP, Chris, can you guide me through the bureaucracy?
Sure, but first, I would strongly recommend getting some hard performance numbers. This sort of thing is definitely going to have a cost, and your proposal will stand or fall on whether that cost makes a measurable difference to normal operations. https://speed.python.org/ https://pyperformance.readthedocs.io/ ChrisA

Julia has this kind of thing built in. The main problem is backward compatibility. However, your tool could be useful as a Python-to-Python preprocessor (I remember some static analysis tools like "mypy"?). pip install funcoperators <https://pypi.org/project/funcoperators/> solves the problem differently:

    From: (1/2).denominator
    To:   (1 /frac/ 2).denominator
    With: frac = infix(Fraction)
    With: from fractions import Fraction

On Tue, 18 May 2021 at 09:40, Martin Teichmann <martin.teichmann@gmail.com> wrote:

What's the actual problem you are solving with this complex, complicated proposal? In other words, what is the motivation?

You mentioned symbolic maths in the previous thread. Python is never going to look good for symbolic maths, because symbolic maths is a two-dimensional layout and Python (like most programming languages) is line-oriented in a way that does not lend itself to complex mathematical notation.

What is so special about constant expressions like `1/3`? Why shouldn't non-constant expressions like `one/three` work? How about expressions like `(1-3**-2)*7`?

If you care about making symbolic maths look good, don't you need a way for expressions like √2 and sin π/3 to give you an exact symbolic result too?

How do you respond to the argument that this will add lots of complexity to the mental model of numbers in Python, without actually making Python a symbolic maths language?

You suggest:
only when we start calculating with them, the receiving function will pick whatever it prefers.
In concrete terms, how do I write a function to do that? Once I have a ratiofloat and do arithmetic on it, what happens?

    x = 1/3  # this is a ratiofloat
    # what are these?
    x + 1
    x**2
    x**0.5

You say:
All of this does not only work for integers, but also for float literals
and give 0.1 as an example. Okay, I will grant you that *most* people will expect that 0.1 is 1/10, rather than 3602879701896397/36028797018963968, but what about less obvious examples? If I write 0.6666666666666666 will I get 2/3? What if I write it as 0.6666666666666667 instead? (They are different floats.) How about 0.666666666667, copied from my calculator? How about 0.66667? Is that close enough to get 2/3? It's obvious that in real life anyone writing 0.66667 is thinking "2/3".

What's the size of your ratiofloats?

    >>> sys.getsizeof(2.5)
    24

Including any associated data stored in attributes (like the numerator and denominator). You say:
All this is only interesting once you teach some classes about it.
What is involved in teaching classes about this? For example, how much work did it take for you to get this result?

    >>> Decimal(0.1)
    Decimal('0.1')
I can see that, maybe, sympy would be interested in this. Aside from sympy, what benefit do you think other libraries will get from this? Especially libraries like numpy and scipy which pretty much do all their work in pure floating point. -- Steve
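Two of the factual claims above are easy to verify in today's Python - a quick stdlib aside, independent of the proposal:

```python
from fractions import Fraction

# 0.1's exact binary value, which is what Fraction sees today:
print(Fraction(0.1))  # 3602879701896397/36028797018963968

# Adjacent decimal spellings of 2/3 really are two different floats:
a = 0.6666666666666666
b = 0.6666666666666667
print(a == b)    # False
print(a == 2/3)  # True: 2/3 rounds to the first spelling
```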

On Tue, May 18, 2021, 8:07 AM Steven D'Aprano <steve@pearwood.info> wrote:
It seems to me the motivation is only related to looks insofar as it makes it a little more convenient not to lose information when writing expressions, which also happens to have the side benefit of looking a little better.

What is so special about constant expressions like `1/3`? Why shouldn't non-constant expressions like `one/three` work?
I'm also interested in this question. If one/three is a float at compile time, you're going to have an ugly situation where:

    Fraction(1/3)

and:

    Fraction(one/three)

result in different values? If not, then how is this avoided...? You say:
I think the point here is actually the reverse: you can't create the actual value of 2/3 using any currently available literal. Some "fractional literals" (speaking about these in mathematical syntax, not Python syntax) such as 1/10 can also be exactly represented as a "decimal literal", 0.1. You said most people would expect the actual value of 1/10 when seeing 0.1, but I'd be interested in meeting people who expect something else. I'm assuming here that most people would at least LIKE to be able to expect 1/10 IF it doesn't cost them too much in some other way. That seems uncontroversial.

The fact that it isn't possible to represent 2/3 using a "decimal literal" in mathematical language isn't a shortcoming of the proposal; it's just a shortcoming of written math in a base-10 system. In other words, the proposal isn't trying to create a way to magically guess what real value is MEANT by the user in their mind; on the contrary, it is intended to better preserve the WRITTEN INTENT of the user, mapped to standard mathematical syntax. I'm not against or for it yet, mind you, but I don't see this as really a big objection.

Hi Steven,
While this is certainly true, I think that improving things is always a good idea. One of the cool things about Python is that it is a very broad general-purpose language. Many people from many communities like its concise syntax, and those include the symbolic math community. It is indeed possible to write symbolic math expressions very beautifully in Python; it is just unfortunate that the Python interpreter mangles them, as I showed, without need.
What is so special about constant expressions like `1/3`? Why shouldn't non-constant expressions like `one/three` work?
Because reality. People would like to write 1/2 * m * v**2 to mean the obvious thing, without having to think about the details. And there are many people like this, this is why it shows up on this mailing list regularly. I have never felt the urge to write two/three * m * v**two. Sure, one can add yet another syntax to Python to express fractions, adding some letter here, some special character there. %But $the $cool @thing $about ?Python is $that !this _is_ $normally &unnecessary.
How about expressions like `(1-3**-2)*7`?
I will make the ** operator work as well - certainly only as long as the exponent is an integer.
Sure. I am waiting for your proposal.
There are quite a number of people using Python as a symbolic math language, sympy and sagemath are examples. They constantly invent pre-processors, as mentioned in this thread and the previous one before, to make their life easier. Those preprocessors usually have something that turns 1/2 into some magic. I think it is fruitful to look at the existing preprocessors in more detail and pick the best they offer for standard Python. Also, I do not think it makes the mental model of numbers more complicated. If you don't need it, you won't even notice. But for symbolic math, users will actually need to think less about the details, not more.
    class Fraction:
        def __init__(self, x):
            self.numerator = x.numerator
            self.denominator = x.denominator
Once I have a ratiofloat and do arithmetic on it, what happens?
The result is a simple float. For backwards compatibility.
a float
x**2
a float
x**0.5
a float. As I said, everything happens at the parser level.
No. Because 0.6666666666666666 != 2/3. We are talking about exact math here, as in exact. Not high precision, exact.

What if I write it as 0.6666666666666667 instead?
Nobody would like to write 0.66667 * x to mean 2/3 * x, so your example is actually very artificial. And no, my code does not do any magic; it just retains the original exact value entered by the user. Actually, it does that only on a best-effort basis: about 9 digits are possible, after that it just drops the details entirely. I do not think this is too bad, as I believe nobody has a deep desire to write things like 134512342/234233. And if they do, telling them to write F(134512342, 234233) is certainly not such a big issue.
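As an aside, the stdlib already ships a guessing heuristic of this sort - distinct from the proposal's record-the-literal-exactly approach - in Fraction.limit_denominator:

```python
from fractions import Fraction

# limit_denominator guesses a nearby "nice" rational; the proposal
# instead records the literal at compile time, with no guessing.
print(Fraction(0.1).limit_denominator())         # 1/10
print(Fraction(2/3).limit_denominator())         # 2/3
# Even a rounded spelling lands on 2/3, but only because we
# capped the denominator -- it is a guess, not the typed value:
print(Fraction(0.66667).limit_denominator(100))  # 2/3
```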
On my system:

    >>> import sys
    >>> a = 2.5
    >>> b = float(2.5)
    >>> type(a)
    <class 'ratiofloat'>
    >>> type(b)
    <class 'float'>
    >>> sys.getsizeof(a)
    32
    >>> sys.getsizeof(b)
    24

So it is 8 bytes bigger; those are the two C ints mentioned before. Remember that they are only created at compile time and stored in the .pyc files, so I have a hard time imagining that they will ever take up more than a few kilobytes.
It's this commit: https://github.com/tecki/cpython/commit/15c7e05cd50e4c671072b8497dbccbebf654... It took me about 30 min. Sympy was a bit harder, an hour or two.
I can see that, maybe, sympy would be interested in this. Aside from sympy, what benefit do you think other libraries will get from this?
sagemath will also benefit. Those are the symbolic math libraries I know about. Those are not small communities, though.
Especially libraries like numpy and scipy which pretty much do all their work in pure floating point.
I think numpy and scipy will get nothing from it, because they do not do symbolic math. They are actually the unfortunate guys, because they would have to modify some (not much) of their code, as they sometimes use "type(x) is float" instead of "isinstance(x, float)" to do their magic. That said, once that's fixed I do not see any more problems; ratiofloat is binary backwards compatible with float, so I do not even expect a speed penalty (though as always, that has to be shown).

Cheers

Martin

On Tue, 18 May 2021 at 15:16, Martin Teichmann <martin.teichmann@gmail.com> wrote:
Because reality. People would like to write 1/2 * m * v**2 to mean the obvious thing, without having to think about the details. And there are many people like this, this is why it shows up on this mailing list regularly. I have never felt the urge to write two/three * m * v**two.
I'd actually prefer to write (m*v**2)/2. Or (m/2)*v**2. But those wouldn't work, the way you describe your proposal. And I'd be very concerned if they behaved differently than 1/2 * m * v**2... Paul

On Tue, May 18, 2021 at 10:55 AM Paul Moore <p.f.moore@gmail.com> wrote:
A much more concrete way of making the point I was trying to make! Different results from:

    one/three

...and:

    1/3

...has to be avoided. I don't see how it can be with the way the proposal has been described.

--- Ricky. "I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler

Fully agreed on the sentiment that we shouldn't treat compile-time literals differently from runtime operations. It has no precedent in Python and adds a significant mental burden to keep track of. I can only imagine the deluge of StackOverflow threads from surprised users if this were to be done.

That said, I also really like the idea of better Python support for symbolic and decimal math. How about this as a compromise:

    from __feature__ import decimal_math, fraction_math

With the two interpreter directives above (which must appear at the top of the module, either before or after __future__ imports, but before anything else), any float literals inside that module would be automatically coerced to `Decimal`, and any division operations would be coerced to `Fraction`. You could also specify just one of those directives if you wanted.

Upsides:
- No change at all to existing code
- By specifically opting into this behavior you would be accepting the reduced performance and memory-efficiency
- Very simple to explain to people using Python for maths or science (not an insignificant userbase) who don't understand or care about the merits and downsides of various data-types and just want to do math that behaves the way they expect

Downsides:
- The complexity of adding this new '__feature__' interpreter directive, although it *should* be possible to reuse the existing __future__ machinery for it
- Having to maintain these new 'features' into the future
- You can't choose to mark a single float literal as a decimal or a single division operation as a fraction; it's all-or-nothing within a given module

I don't know. It was just an idea off the top of my head. On second thought, maybe it's needlessly contrived. Cheers everyone

On Tue, May 18, 2021 at 4:05 PM Ricky Teachey <ricky@teachey.org> wrote:

Matt del Valle writes:
Fully agreed on the sentiment that we shouldn't treat compile-time literals differently from runtime operations.
But as you just pointed out, we do. Literals are evaluated at compile time, operations at runtime. "This" and f"This" generate very different code! There's nothing in the concept of literal that prevents us from treating the sequence of tokens 1 / 3 (with space optional) as a *single* literal, and mapping that to Fraction(1, 3). The question is purely UI. 1 / 3 "looks like" an expression involving two int objects and a division operator. Do we force treatment as a literal (allowing it to be a Fraction), or do we treat it as an expression? This only matters because the type is different. It doesn't bother me any more than it bothers Martin, but since it bothers a lot of Pythonistas that kills it for me. I admit to being surprised at the vehemence of the pushback, especially from people who clearly haven't understood the proposal. (Guido's response is another matter, as he thought very carefully about this decades ago.)
It has no precedent in Python and adds a significant mental burden to keep track of.
Only if you want it to. To me it's much like multiple value returns in Common Lisp: if you don't use a special multiple-values expression to capture the extra values as a list, all you'll see is the principal value. The analogy is that if you don't do something to capture the Fraction-ness of Martin's ratiofloats, it will (more or less) quickly disappear from the rest of the computation.

I agree that the "more or less" part is problematic in Python, and the ratiofloat object itself could persist indefinitely, which could raise issues at any time. So AFAICS you can basically treat ratiofloats as infinitely precise floats, which lose their precision as soon as they come in contact with finite-precision floats. Since most people think of floats as "approximations" (which is itself problematic!), I don't see that it adds much cognitive burden -- unless you need it, as SymPy users do.
Followed by "import numpy", what should happen? Should numpy respect those? Should floats received from numpy be converted? Which takes precedence if both decimal_math and fraction_math are imported? I don't think this can work very well. Martin's approach works at all *because* the ratiofloats are ephemeral, while computations involving floats are everywhere and induce "float propagation".
This is basically a pragma. "from __future__" was acceptable because it was intended to be temporary (with a few exceptional Easter eggs). But in general pragmas were considered un-Pythonic. Aside from __future__ imports, we have PEP 263 coding "cookies", and I think that's it. I'm pretty sure these features are not important enough to overcome that tradition.
I don't know. It was just an idea off the top of my head. On second thought, maybe it's needlessly contrived.
It's an idea. I think it highly unlikely to pass *in Python*, but that doesn't make it necessarily a bad idea. There are other languages, other ways of thinking about these issues. Reminding ourselves of that occasionally is good! Steve
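The `decimal_math` directive discussed above can at least be emulated in userland today. The sketch below is only an illustration under assumptions: `FloatToDecimal` and `eval_decimal` are invented names, and rewriting on `repr` of the float is just one possible policy.

```python
import ast
from decimal import Decimal

class FloatToDecimal(ast.NodeTransformer):
    def visit_Constant(self, node):
        # Rewrite every float literal into Decimal('<repr>') so the
        # decimal value matches what the user typed, not the binary float.
        if isinstance(node.value, float):
            return ast.copy_location(
                ast.Call(
                    func=ast.Name(id="Decimal", ctx=ast.Load()),
                    args=[ast.Constant(value=repr(node.value))],
                    keywords=[],
                ),
                node,
            )
        return node

def eval_decimal(source):
    tree = ast.parse(source, mode="eval")
    tree = ast.fix_missing_locations(FloatToDecimal().visit(tree))
    return eval(compile(tree, "<decimal_math>", "eval"), {"Decimal": Decimal})

print(eval_decimal("0.1 + 0.2"))  # 0.3, not 0.30000000000000004
```

A real `__feature__` mechanism would do this in the compiler rather than per-expression, but the observable effect on literals would be the same.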

On 5/20/2021 5:41 PM, Stephen J. Turnbull wrote:
This will be unhelpful for this discussion, but to be technically correct (the best kind of correctness!), those two expressions generate identical code:
It's only when the f-string has internal expressions that the generated code is different:
Eric

Hi Paul,
Sure they do work, and they work exactly the same way. That is actually the point: currently 1/2 * m * v**2 is not the same as (m/2) * v**2 (in sympy, that is); with my proposal it would be exactly the same (again, from my prototype, not fake):

    >>> m, v, r = symbols("m v r")
    >>> 1/2 * m * v**2
    m*v**2/2
    >>> (m/2) * v**2
    m*v**2/2
    >>> (m * v**2) / 2
    m*v**2/2
    >>> 4/3 * pi * r**3
    4*pi*r**3/3

Cheers

Martin

On Tue, 18 May 2021 at 16:55, Martin Teichmann <martin.teichmann@gmail.com> wrote:
But *not* in sympy, in normal Python: if m == 1 and v == 1, then 1/2 * m * v**2 is 0.5 (a float) currently, as is (m/2) * v**2. But in your proposal, the former will be a float/fraction hybrid, whereas the latter will be a float. And what about:

    x = 1
    a = 1/3
    b = x/3
    a == Fraction(1, 3)
    b == Fraction(1, 3)
    a == b

Currently these are False, False, True. You'll change that to True, False, True, and you've now broken the idea that things that are equal should compare the same to a third value.

Never mind. At the end of the day, I simply think your proposal is not viable. We can argue details all day, but I'm not going to be persuaded otherwise.

Paul
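The False/False/True claim checks out in current Python, because Fraction-to-float comparisons are exact and the float 1/3 is only close to one third:

```python
from fractions import Fraction

x = 1
a = 1/3      # a float today
b = x/3      # also a float
print(a == Fraction(1, 3))  # False: the float merely approximates 1/3
print(b == Fraction(1, 3))  # False
print(a == b)               # True: the same float either way
```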

Hi Paul,
No. In my proposal, this will simply be a float. Why? Because my ratiofloats are only generated at compile time. At runtime, they behave like normal floats, so when you calculate (1/2) * x with x being 1, you get 0.5, just a float.
Indeed, one could break transitivity here if the implementer of Fraction chose so. But transitivity is not a required thing for == in Python, numpy even returns arrays for it... Cheers Martin

On Tue, 18 May 2021 at 15:56, Paul Moore <p.f.moore@gmail.com> wrote:
In SymPy these do behave differently right now:
The problem is that while SymPy expressions can define __truediv__(self, int) to be exact, there isn't a way for SymPy to hook into an expression like 1/2, which is just a Python expression using the int type, whose __truediv__ returns floats. SymPy itself treats floats as being different from the exact rational numbers that they represent, which is important for users who *do* want to use floats (although I think many users do this just because they want decimal display, without understanding the issues that imprecise arithmetic can bring). There is a function in sympy that can "convert" a float to rational with heuristics that allow it to undo decimal-to-binary rounding errors, e.g.:
Note that the Python float 0.1 is not exactly equal to the rational number 1/10, which cannot be represented exactly in binary floating point. So that is not an exact conversion, although it is necessarily within 1 ulp (in this case). It would be possible to have a mode in SymPy that does this automatically, but some people wouldn't want that, so it possibly shouldn't be on by default.

Really there is no way to get around the fact that sometimes you want to use floating point and sometimes you want exact rational arithmetic, so both need to be possible in some way. Some other programming languages have special syntax for rational numbers, e.g. 1//2 in Julia or 1%2 in Haskell, but I don't know of a programming language that uses 1/2 syntax for rational numbers apart from things like Maple/Mathematica.

I think that Matlab just does the equivalent of nsimplify above, so you can do e.g. sym(1/3) and get 1/3 as an exact rational number even though the original Matlab expression 1/3 gives an approximate floating point result. The Matlab docs acknowledge that this is not always reliable though (the suggestion is to use 1/sym(3) instead):

"""
Use sym on subexpressions instead of the entire expression for better accuracy. Using sym on entire expressions is inaccurate because MATLAB first converts the expression to a floating-point number, which loses accuracy. sym cannot always recover this lost accuracy.
"""
https://uk.mathworks.com/help/symbolic/sym.html#bu1rs8g-1

I think that maybe the best solution here is something more like a domain-specific language that can be used with ipython/jupyter as an alternative profile. In the DSL, 1/2 could be a Fraction and 2x^2 could be the equivalent of 2*x**2, etc. You'd probably want to be able to write things like √2 and x'' and so on. Maybe you could have a better syntax for creating matrices than the clunky list of lists.
Basically there are lots of things that you can't quite do in Python but that you might want to do if you were making a language specifically for the purpose of doing stuff with equations. -- Oscar
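The "within 1 ulp" remark above can be made quantitative with a small stdlib check - an aside, nothing SymPy-specific:

```python
import math
from fractions import Fraction

# The gap between the float 0.1 and the rational 1/10 is
# tiny but nonzero -- and smaller than one ulp of 0.1:
err = Fraction(0.1) - Fraction(1, 10)
print(err != 0)                            # True
print(abs(err) < Fraction(math.ulp(0.1)))  # True
```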

Hi Oscar,
The entire point of my proposal is to give SymPy the possibility to "hook into expressions like 1/2". The examples I posted come from an interpreter (and slightly modified SymPy) where I made that possible. The idea is that the Python parser keeps the information where the 1/2 is coming from, and give SymPy a chance to peek into it. And all this in a documented, official way, without the need to do magic.
Also in this case my proposal would give SymPy the chance to understand what the user actually entered, and act accordingly. I agree that the case for 0.1 is not as compelling as the one for 1/2: while for the latter the user almost certainly means "one half", this is not clear for the former.
On the thread some days ago we had this discussion, whether we should add a new operator. People didn't seem to like it.
I think SageMath does exactly this. While this is perfectly fine, it splits the symbolic-math-in-Python world. If everybody uses their favorite DSL, we drift apart. My goal is to look at those DSLs and check what could be integrated into standard Python. I started with the 1/2 problem, because it was a problem I actually stumbled upon, and which I considered simple to solve.
For most of the time my opinion was: stop dreaming, get real. But then they pushed structural pattern matching, a super complicated beast of syntax for not much benefit. So I hoped that we could get some syntax for symbolic math as well. Cheers Martin

On Tue, May 18, 2021 at 05:30:29PM -0000, Martin Teichmann wrote:
https://www.jonathanturner.org/rethinking-the-blub-paradox/ -- Steve

Martin Teichmann writes:
Also in this case my proposal would give SymPy the chance to understand what the user actually entered, and act accordingly.
But *only* in this case. It seems to me that what SymPy would really like is for arithmetic expressions in certain contexts to be returned as syntax trees rather than executed. But this really would greatly complicate the mental model, unlike your ratiofloat proposal. Steve

Martin Teichmann writes:
Also in this case my proposal would give SymPy the chance to understand what the user actually entered, and act accordingly.
I am sorry, but could you point out an open issue or discussion by developers on SymPy repository or a forum where such a feature has been mentioned? I don't recall seeing any such discussion and would love to find out more about SymPy's perspective on this topic. André Roberge

On Tue, May 18, 2021 at 05:21:28PM +0100, Oscar Benjamin wrote:
Not just ipython and jupyter. The Python std lib has a module that simulates the built-in interpreter. With a bit of jiggery-pokery, it should be possible to adapt that to allow a symbolic maths DSL. https://docs.python.org/3/library/code.html I think the only limitation is that the code module requires legal Python syntax, so you can't write "2x" to get `2*x`. Nevertheless, it should be capable of exact fractions. The sys module has a display hook for customizing printing: https://docs.python.org/3/library/sys.html#sys.displayhook It would be nice if it had an input hook as well, so that Martin could customise "1/2" to "Fraction(1, 2)". I think that would be a much more useful enhancement than ratiofloat. -- Steve
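The input-hook rewrite described above can be prototyped today with an ast.NodeTransformer. This is a hedged sketch: `exact_eval` and `DivToFraction` are invented names for illustration, not an existing or proposed API.

```python
import ast
from fractions import Fraction

class DivToFraction(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite nested divisions first
        if isinstance(node.op, ast.Div):
            # turn a / b into Fraction(a, b)
            return ast.copy_location(
                ast.Call(
                    func=ast.Name(id="Fraction", ctx=ast.Load()),
                    args=[node.left, node.right],
                    keywords=[],
                ),
                node,
            )
        return node

def exact_eval(source):
    tree = ast.parse(source, mode="eval")
    tree = ast.fix_missing_locations(DivToFraction().visit(tree))
    return eval(compile(tree, "<exact>", "eval"), {"Fraction": Fraction})

print(exact_eval("1/2 + 1/3"))          # 5/6
print(exact_eval("(1/2).denominator"))  # 2
```

Hooked into an interactive console's input path, this would give exactly the "1/2 becomes Fraction(1, 2)" behaviour, without touching the language.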

On Tue, May 18, 2021, 9:18 PM Steven D'Aprano <steve@pearwood.info> wrote:
A math DSL that at minimum supported:

1. fraction literals like these
2. a more math-like lambda syntax, like: f(x) = x ...
3. a caret symbol for the exponentiation operator

would generate quite a bit of interest from me. Happily, SageMath exists. Sadly, in general I've found it difficult to make use of Sage compared to regular Python. If something like this were supported as part of the core language I'd be likely to use it. I have no idea what kind of effort that would entail on the part of others though.

As I said in my post in your previous thread, I'm -1. Why? Because you're not solving anything. In fact you're making it more complicated. Developers can have difficulties studying your idea, beginners don't stand a chance (In fact I have no idea what you described here in your idea and trust me I'd move on to another programming language if this change was brought to Python. This is one of the things that doesn't fit in my brain.). Python is a simple language and we, the community, like to keep it simple. Yes we can extend it by providing more changes that improve performance or by adding functionality to third-party modules but introducing this big a change is not a great idea. But yeah I could be wrong.

On Tue, May 18, 2021 at 12:39 AM Martin Teichmann < martin.teichmann@gmail.com> wrote:
This violates a basic property of Python. If 1/2 has a certain property, then `x = 1; y = 2; x/y` should have the same property. Please don't go down this road.

-- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-c...>

Why do we need this functionality? What are the benefits? What are you trying to solve? Is it worth adding it?

Hi Shreyan,

We need this for symbolic math, e.g. in sympy. You have probably never seen somebody doing symbolic math with Python, but believe me, there are many people who do. Let me walk you through a toy problem to show where the issues are. Let's say we want to solve the equation x**2 == 3. Not a very tough equation; in reality things would be much more complicated. To start, we import sympy:
    >>> from sympy import symbols, solve
Then we need to declare our variable:
    >>> x = symbols("x")
Now we would just use the solve function to solve the equation. Unfortunately the == operator does not work (another, independent problem), so we have to reformulate our equation to x**2 - 3 == 0, and knowing that sympy implicitly adds "== 0" to the equation, we can just type
    >>> solve(x**2 - 3)
    [-sqrt(3), sqrt(3)]
The result is not very astonishing. But wait, we realize we made a mistake: it is not actually 3 that we have to equate to, but 2/3! So we type:
    >>> solve(x**2 - 2/3)
    [-0.816496580927726, 0.816496580927726]
That went wrong. We wanted to have a symbolic result, not a numeric solution! How did that come about? Well, that's the point of discussion: 2/3 gets immediately turned into 0.66666..., and sympy has no clue where this is coming from. Sure, we can reformulate our problem:
    >>> solve(3*x**2 - 2)
    [-sqrt(6)/3, sqrt(6)/3]
In our toy problem that is simple, but imagine what happens if you have 20+ terms that need to be changed. This is when it comes in handy to be able to just write 2/3 to mean two-thirds. Sympy gives you other ways to solve this than reformulating the problem, but that's a different topic. My point is, with my proposal, you can just write:
    >>> solve(x**2 - 2/3)
    [-sqrt(6)/3, sqrt(6)/3]
no weird things necessary. But you might ask, what if the fraction is not a constant? Well, in this case we need to declare another variable, and also tell solve in a second parameter which variable we want to solve for:
OK, but if we now put some concrete value in for a, would that not cause problems? Well, the way you do that is to substitute:
As you see, there is no problem here. This is why the discussion about whether ratiofloat should also appear when we divide two variables containing ints was not very interesting to me: that is not a relevant problem for symbolic math. I hope I was able to show you where fractional constants are useful. Symbolic math in Python has many more problems, for example the fact that we cannot use == (I actually don't know why that is not possible; we should ask people from SymPy). Also the declaration of variables is a bit weird. But those problems are for another day.

Cheers

Martin
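The root cause in the walkthrough can be seen without SymPy at all; a minimal stdlib check shows that 2/3 is already a lossy float before any library gets a look at it:

```python
from fractions import Fraction

val = 2/3                                # evaluated before solve() ever sees it
print(type(val).__name__)                # float
print(Fraction(val) == Fraction(2, 3))   # False: exactness already lost
```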

The more explanation I see from the OP, the more I oppose the idea. It becomes more clear that this is a specialization, basically only to help SymPy, while introducing semantic changes for everyone else (even if minor ones). Moreover, SymPy already has several readily available ways to do this, albeit with slightly less appealing spellings.

Now, I WILL note that NumPy motivated several changes to Python. I believe even strides in slices first benefitted NumPy. Likewise the Ellipsis object, extended slices, and the `@` operator. However...

1. Unlike this proposal, none of those were backwards-incompatible changes (beyond the trivial sense that "code that used to raise a SyntaxError doesn't now").
2. The NumPy community (especially counting indirect users of libraries such as pandas, scikit-learn, matplotlib, etc.) is more than 10x the size of SymPy's, and probably more than 100x.

On Tue, May 18, 2021, 5:42 PM Martin Teichmann <martin.teichmann@gmail.com> wrote:

On Tue, May 18, 2021 at 5:40 PM Martin Teichmann <martin.teichmann@gmail.com> wrote:
I think this is now all good enough to be wrapped in a PEP. Chris, can you guide me through the bureaucracy?
Sure, but first, I would strongly recommend getting some hard performance numbers. This sort of thing is definitely going to have a cost, and your proposal will stand or fall on whether that cost makes a measurable difference to normal operations. https://speed.python.org/ https://pyperformance.readthedocs.io/ ChrisA

Julia has this kind of thing built in. The main problem is backward compatibility. However, your tool would be useful as a Python-to-Python parser (I remember some static analysis tools, like "mypy"?). pip install funcoperators <https://pypi.org/project/funcoperators/> solves the problem differently:
From: (1/2).denominator
To: (1 /frac/ 2).denominator
With: frac = infix(Fraction)
With: from fractions import Fraction
On Tue, May 18, 2021 at 09:40, Martin Teichmann <martin.teichmann@gmail.com> wrote:
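The funcoperators trick mentioned above can be sketched without the dependency. This is a hypothetical stand-in, not the library's actual implementation: `Infix` and `_Bound` are made-up names, and only division-based infix spelling is modeled.

```python
from fractions import Fraction

class Infix:
    """Hypothetical stand-in for funcoperators.infix: wraps a
    two-argument function so it can be spelled  a /op/ b."""
    def __init__(self, func):
        self.func = func

    def __rtruediv__(self, left):      # handles the `1 /frac` half
        return _Bound(self.func, left)

class _Bound:
    """Intermediate object carrying the left operand."""
    def __init__(self, func, left):
        self.func = func
        self.left = left

    def __truediv__(self, right):      # handles the `... / 2` half
        return self.func(self.left, right)

frac = Infix(Fraction)
print((1 /frac/ 2).denominator)   # 2
print(2 /frac/ 3 + 1 /frac/ 3)    # 1
```

Because `/` binds tighter than `+`, `2 /frac/ 3 + 1 /frac/ 3` evaluates as `Fraction(2, 3) + Fraction(1, 3)` with no extra parentheses.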

What's the actual problem you are solving with this complex, complicated proposal? In other words, what is the motivation? You mentioned symbolic maths in the previous thread. Python is never going to look good for symbolic maths, because symbolic maths is a two-dimensional layout and Python (like most programming languages) is line-oriented in a way that does not lend itself to complex mathematical notation. What is so special about constant expressions like `1/3`? Why shouldn't non-constant expressions like `one/three` work? How about expressions like `(1-3**-2)*7`? If you care about making symbolic maths look good, don't you need a way for expressions like √2 and sin π/3 to give you an exact symbolic result too? How do you respond to the argument that this will add lots of complexity to the mental model of numbers in Python, without actually making Python a symbolic maths language? You suggest:
only when we start calculating with them, the receiving function will pick whatever it prefers.
In concrete terms, how do I write a function to do that? Once I have a ratiofloat and do arithmetic to it, what happens?
x = 1/3  # this is a ratiofloat
# what are these?
x + 1
x**2
x**0.5
You say:
All of this does not only work for integers, but also for float literals
and give 0.1 as an example. Okay, I will grant you that *most* people will expect that 0.1 is 1/10, rather than 3602879701896397/36028797018963968, but what about less obvious examples? If I write 0.6666666666666666 will I get 2/3? What if I write it as 0.6666666666666667 instead? (They are different floats.) How about 0.666666666667, copied from my calculator? How about 0.66667? Is that close enough to get 2/3? It's obvious that in real life anyone writing 0.66667 is thinking "2/3". What's the size of your ratiofloats?
>>> sys.getsizeof(2.5)
24
Including any associated data stored in attributes (like the numerator and denominator). You say:
All this is only interesting once you teach some classes about it.
What is involved in teaching classes about this? For example, how much work did it take for you to get this result?
>>> Decimal(0.1)
Decimal('0.1')
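For contrast, stock CPython today gives `Decimal` only the binary float that the literal became; only the string constructor preserves the written value. A runnable aside (not part of the original message):

```python
from decimal import Decimal

# Today, Decimal(0.1) sees only the binary double nearest to 0.1:
print(Decimal(0.1))
# -> 0.1000000000000000055511151231257827021181583404541015625

# The exact decimal requires the string form:
print(Decimal('0.1'))
# -> 0.1
```

This is the behavior the prototype changes: with literal bread crumbs, `Decimal(0.1)` can recover `Decimal('0.1')`.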
I can see that, maybe, sympy would be interested in this. Aside from sympy, what benefit do you think other libraries will get from this? Especially libraries like numpy and scipy which pretty much do all their work in pure floating point. -- Steve

On Tue, May 18, 2021, 8:07 AM Steven D'Aprano <steve@pearwood.info> wrote:
It seems to me the motivation is only related to looks insofar as it makes it a little more convenient to not lose information when writing expressions, which also happens to have the side benefit of looking a little better. What is so special about constant expressions like `1/3`? Why shouldn't
non-constant expressions like `one/three` work?
I'm also interested in this question. If one/three is a float at compile time, you're going to have an ugly situation where:
Fraction(1/3)
and:
Fraction(one/three)
result in different values? If not, then how is this avoided...? You say:
I think the point here is actually the reverse: you can't create the actual value of 2/3 using any currently available literal. Some "fractional literals" (speaking about these in mathematical syntax, not Python syntax) such as 1/10 can also be exactly represented as a "decimal literal", 0.1. You said most people would expect the actual value of 1/10 when seeing 0.1, but I'd be interested in meeting people who expect something else. I'm assuming here that most would at least LIKE to be able to expect 1/10 IF it doesn't cost them too much in some other way. That seems uncontroversial. The fact that it isn't possible to represent 2/3 using a "decimal literal" in mathematical language isn't a shortcoming of the proposal, it's just a shortcoming of written math in a base 10 system. In other words, the proposal isn't trying to create a way to magically guess what real value is MEANT by the user in their mind; on the contrary, it is intended to better preserve the WRITTEN INTENT of the user, mapped to standard mathematical syntax. I'm not against or for it yet, mind you, but I don't see this as really a big objection.

Hi Steven,
While this is certainly true, I think that improving it is always a good idea. One of the cool things about Python is that it is a very broad general-purpose language. Many people from many communities like its concise syntax. Those include the symbolic math community. And while it is indeed possible to write symbolic math expressions very beautifully in Python, it is unfortunate that the Python interpreter then mangles them without need, as I show here.
What is so special about constant expressions like `1/3`? Why shouldn't non-constant expressions like `one/three` work?
Because reality. People would like to write 1/2 * m * v**2 to mean the obvious thing, without having to think about the details. And there are many people like this, this is why it shows up on this mailing list regularly. I have never felt the urge to write two/three * m * v**two. Sure, one can add yet another syntax to Python to express fractions, adding some letter here, some special character there. %But $the $cool @thing $about ?Python is $that !this _is_ $normally &unnecessary.
How about expressions like `(1-3**-2)*7`?
I will make the ** operator work as well, but certainly only as long as the exponent is an integer.
Sure. I am waiting for your proposal.
There are quite a number of people using Python as a symbolic math language, sympy and sagemath are examples. They constantly invent pre-processors, as mentioned in this thread and the previous one before, to make their life easier. Those preprocessors usually have something that turns 1/2 into some magic. I think it is fruitful to look at the existing preprocessors in more detail and pick the best they offer for standard Python. Also, I do not think it makes the mental model of numbers more complicated. If you don't need it, you won't even notice. But for symbolic math, users will actually need to think less about the details, not more.
class Fraction:
    def __init__(self, x):
        self.numerator = x.numerator
        self.denominator = x.denominator
Once I have a ratiofloat and do arithmetic to it, what happens?
The result is a simple float. For backwards compatibility.
a float
x**2
a float
x**0.5
a float As I said, everything happens on the parser level.
No. Because 0.6666666666666666 != 2/3. We are talking about exact math here, as in exact. Not high precision, exact. What if I write it as
Nobody would like to write 0.66667 * x to mean 2/3 * x. So your example is actually very artificial. And no, my code does not do any magic, it just retains the original exact value entered by the user. Actually, it does that only on a best effort basis, some 9 digits are possible, after that it just drops the details entirely. I do not think this is too bad, as I do believe nobody has a deep desire to write things like 134512342/234233. And if they do, telling them to write F(134512342, 234233) is certainly not such a big issue.
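What a plain float actually stores can be inspected today with `float.as_integer_ratio()`; these are the exact binary rationals the prototype's extra fields would sit beside. A runnable aside:

```python
# A plain float only remembers the nearest binary rational:
print((0.1).as_integer_ratio())
# -> (3602879701896397, 36028797018963968)

print((2/3).as_integer_ratio())
# -> (6004799503160661, 9007199254740992)

# The proposed ratiofloat would additionally remember (1, 10) and (2, 3),
# i.e. what the user actually wrote.
```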
On my system:
>>> import sys
>>> a = 2.5
>>> b = float(2.5)
>>> type(a)
<class 'ratiofloat'>
>>> type(b)
<class 'float'>
>>> sys.getsizeof(a)
32
>>> sys.getsizeof(b)
24
So it is 8 bytes bigger; those are the two C ints mentioned before. Remember that they are only created at compile time and stored in the .pyc files, so I have a hard time imagining that they will ever take up more than a few kilobytes.
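The behavior can be modeled in pure Python. This is only a sketch: the real prototype adds two C ints to the float struct at parse time (8 bytes), whereas a Python subclass carries a full instance dict and is bigger. The class and attribute names here mirror the proposal but the implementation is hypothetical.

```python
from fractions import Fraction

class ratiofloat(float):
    """Toy model of the proposed class: a float that also remembers
    the exact numerator/denominator it was written as."""
    def __new__(cls, numerator, denominator):
        self = super().__new__(cls, numerator / denominator)
        self._num = numerator
        self._den = denominator
        return self

    @property
    def numerator(self):
        return self._num

    @property
    def denominator(self):
        return self._den

x = ratiofloat(2, 5)
print(x)                      # 0.4
print(x.denominator)          # 5
print(isinstance(x, float))   # True
print(Fraction(x.numerator, x.denominator))  # 2/5

# Arithmetic falls back to plain float, matching "the result is a
# simple float" in the proposal:
print(type(x + 0))            # <class 'float'>
```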
It's this commit: https://github.com/tecki/cpython/commit/15c7e05cd50e4c671072b8497dbccbebf654... It took me about 30 min. Sympy was a bit harder, an hour or two.
I can see that, maybe, sympy would be interested in this. Aside from sympy, what benefit do you think other libraries will get from this?
sagemath will also benefit. Those are the symbolic math libraries I know about. Those are not small communities, though.
Especially libraries like numpy and scipy which pretty much do all their work in pure floating point.
I think numpy and scipy will get nothing from it, because they do not do symbolic math. They are actually the unfortunate guys, because they would have to modify some (not much) of their code, as they sometimes use "type(x) is float" instead of "isinstance(x, float)" to do their magic. That said, once that's fixed I do not see any more problems; ratiofloat is binary backwards compatible to float, so I do not even see a speed penalty (though as always, that has to be shown). Cheers Martin

On Tue, 18 May 2021 at 15:16, Martin Teichmann <martin.teichmann@gmail.com> wrote:
Because reality. People would like to write 1/2 * m * v**2 to mean the obvious thing, without having to think about the details. And there are many people like this, this is why it shows up on this mailing list regularly. I have never felt the urge to write two/three * m * v**two.
I'd actually prefer to write (m*v**2)/2. Or (m/2)*v**2. But those wouldn't work, the way you describe your proposal. And I'd be very concerned if they behaved differently than 1/2 * m * v**2... Paul

On Tue, May 18, 2021 at 10:55 AM Paul Moore <p.f.moore@gmail.com> wrote:
A much more concrete way of making the point I was trying to make! Different results from:
...and:
1/3
... has to be avoided. I don't see how it can be with the way the proposal has been described. --- Ricky. "I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler

Fully agreed on the sentiment that we shouldn't treat compile-time literals differently from runtime operations. It has no precedent in Python and adds a significant mental burden to keep track of. I can only imagine the deluge of StackOverflow threads from surprised users if this were to be done. That said, I also really like the idea of better Python support for symbolic and decimal math. How about this as a compromise: `from __feature__ import decimal_math, fraction_math` With the 2 interpreter directives above (which must appear at the top of the module, either before or after __future__ imports, but before anything else), any float literals inside that module would be automatically coerced to `Decimal`, and any division operations would be coerced to `Fraction`. You could also specify just one of those directives if you wanted.
Upsides:
- No change at all to existing code
- By specifically opting into this behavior you would be accepting the reduced performance and memory-efficiency
- Very simple to explain to people using Python for maths or science (not an insignificant userbase) who don't understand or care about the merits and downsides of various data types and just want to do math that behaves the way they expect.
Downsides:
- the complexity of adding this new '__feature__' interpreter directive, although it *should* be possible to reuse the existing __future__ machinery for it
- having to maintain these new 'features' into the future
- you can't choose to mark a single float literal as a decimal or a single division operation as a fraction, it's all-or-nothing within a given module
I don't know. It was just an idea off the top of my head. On second thought, maybe it's needlessly contrived. Cheers everyone On Tue, May 18, 2021 at 4:05 PM Ricky Teachey <ricky@teachey.org> wrote:

Matt del Valle writes:
Fully agreed on the sentiment that we shouldn't treat compile-time literals differently from runtime operations.
But as you just pointed out, we do. Literals are evaluated at compile time, operations at runtime. "This" and f"This" generate very different code! There's nothing in the concept of literal that prevents us from treating the sequence of tokens 1 / 3 (with space optional) as a *single* literal, and mapping that to Fraction(1, 3). The question is purely UI. 1 / 3 "looks like" an expression involving two int objects and a division operator. Do we force treatment as a literal (allowing it to be a Fraction), or do we treat it as an expression? This only matters because the type is different. It doesn't bother me any more than it bothers Martin, but since it bothers a lot of Pythonistas that kills it for me. I admit to being surprised at the vehemence of the pushback, especially from people who clearly haven't understood the proposal. (Guido's response is another matter, as he thought very carefully about this decades ago.)
It has no precedent in Python and adds a significant mental burden to keep track of.
Only if you want it to. To me it's much like multiple value returns in Common Lisp: if you don't use a special multiple values expression to capture the extra values as a list, all you'll see is the principal value. The analogy is that if you don't do something to capture the Fraction-ness of Martin's ratiofloats, it will (more or less) quickly disappear from the rest of the computation. I agree that the "more or less" part is problematic in Python, and the ratiofloat object itself could persist indefinitely, which could raise issues at any time. So AFAICS you can basically treat ratiofloats as infinitely precise floats, which lose their precision as soon as they come in contact with finite-precision floats. Since most people think of floats as "approximations" (which is itself problematic!), I don't see that it adds much cognitive burden -- unless you need it, as SymPy users do.
Followed by "import numpy", what should happen? Should numpy respect those? Should floats received from numpy be converted? Which takes precedence if both decimal_math and fraction_math are imported? I don't think this can work very well. Martin's approach works at all *because* the ratiofloats are ephemeral, while computations involving floats are everywhere and induce "float propagation".
This is basically a pragma. "from __future__" was acceptable because it was intended to be temporary (with a few exceptional Easter eggs). But in general pragmas were considered un-Pythonic. Aside from __future__ imports, we have PEP 263 coding "cookies", and I think that's it. I'm pretty sure these features are not important enough to overcome that tradition.
I don't know. It was just an idea off the top of my head. On second thought, maybe it's needlessly contrived.
It's an idea. I think it highly unlikely to pass *in Python*, but that doesn't make it necessarily a bad idea. There are other languages, other ways of thinking about these issues. Reminding ourselves of that occasionally is good! Steve

On 5/20/2021 5:41 PM, Stephen J. Turnbull wrote:
This will be unhelpful for this discussion, but to be technically correct (the best kind of correctness!), those two expressions generate identical code:
It's only when the f-string has internal expressions that the generated code is different:
Eric
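Eric's point (whose example was lost from the archive) can be checked directly. A hypothetical quick check, not from the original message, assuming a recent CPython where constant-only f-strings are folded:

```python
# An f-string with no replacement fields compiles to the same code
# as a plain string literal:
plain = compile('"This"', '<demo>', 'eval')
fstr = compile('f"This"', '<demo>', 'eval')
print(plain.co_code == fstr.co_code)   # identical bytecode

# With an internal expression, formatting opcodes appear:
inner = compile('f"{x}"', '<demo>', 'eval')
print(inner.co_code == plain.co_code)  # different bytecode
```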

Hi Paul,
Sure they do work, and they work exactly the same way. That is actually the point: currently 1/2 * m * v**2 is not the same as (m/2) * v**2 (in sympy, that is); with my proposal it would be exactly the same (again, from my prototype, not fake):
>>> m, v, r = symbols("m v r")
>>> 1/2 * m * v**2
m*v**2/2
>>> (m/2) * v**2
m*v**2/2
>>> (m * v**2) / 2
m*v**2/2
>>> 4/3 * pi * r**3
4*pi*r**3/3
Cheers Martin

On Tue, 18 May 2021 at 16:55, Martin Teichmann <martin.teichmann@gmail.com> wrote:
But *not* in sympy, in normal Python: if m == 1 and v == 1, then 1/2 * m * v**2 is 0.5 (a float) currently, as is (m/2) * v**2. But in your proposal, the former will be a float/fraction hybrid, whereas the latter will be a float. And what about
x = 1
a = 1/3
b = x/3

a == Fraction(1,3)
b == Fraction(1,3)
a == b
Currently these are False, False, True. You'll change that to True, False, True and you've now broken the idea that things that are equal should compare the same to a 3rd value. Never mind. At the end of the day, I simply think your proposal is not viable. We can argue details all day, but I'm not going to be persuaded otherwise. Paul
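Paul's current-Python behaviour is runnable today with the stdlib alone:

```python
from fractions import Fraction

x = 1
a = 1/3   # a binary float, not exactly one third
b = x/3   # the same binary float

print(a == Fraction(1, 3))  # False: the float is not exactly 1/3
print(b == Fraction(1, 3))  # False: same reason
print(a == b)               # True: identical floats
```

Under the proposal, only the first comparison would flip to True, which is the transitivity break he describes.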

Hi Paul,
No. In my proposal, this will simply be a float. Why? Because my ratiofloats are only generated during compile time. At runtime, they behave like normal floats, so when you calculate (1/2) * x with x being 1, you get 0.5, just float.
Indeed, one could break transitivity here if the implementer of Fraction chose so. But transitivity is not a required thing for == in Python, numpy even returns arrays for it... Cheers Martin

On Tue, 18 May 2021 at 15:56, Paul Moore <p.f.moore@gmail.com> wrote:
In SymPy these do behave differently right now:
The problem is that while SymPy expressions can define __truediv__(self, int) to be exact, there isn't a way for SymPy to hook into an expression like 1/2, which is just a Python expression using the int type, whose __truediv__ returns floats. SymPy itself treats floats as being different from the exact rational numbers that they represent, which is important for users who *do* want to use floats (although I think many users do this just because they want decimal display, without understanding the issues that imprecise arithmetic can bring). There is a function in sympy that can "convert" a float to a rational with heuristics that allow it to undo decimal-to-binary rounding errors, e.g.:
Note that the Python float 0.1 is not exactly equal to the rational number 1/10 which can not be represented exactly in binary floating point. So that is not an exact conversion although it is necessarily within 1 ulp (in this case). It would be possible to have a mode in SymPy that can do this automatically but then some people wouldn't want that so it possibly shouldn't be on by default. Really there is no way to get around the fact that sometimes you want to use floating point and sometimes you want exact rational arithmetic so both need to be possible in some way. Some other programming languages have special syntax for rational numbers e.g. 1//2 in Julia or 1%2 in Haskell but I don't know of a programming language that uses 1/2 syntax for rational numbers apart from things like Maple/Mathematica. I think that Matlab just does the equivalent of nsimplify above so you can do e.g. sym(1/3) and get 1/3 as an exact rational number even though the original Matlab expression 1/3 gives an approximate floating point result. The Matlab docs acknowledge that this is not always reliable though (the suggestion is to use 1/sym(3) instead): """ Use sym on subexpressions instead of the entire expression for better accuracy. Using sym on entire expressions is inaccurate because MATLAB first converts the expression to a floating-point number, which loses accuracy. sym cannot always recover this lost accuracy. """ https://uk.mathworks.com/help/symbolic/sym.html#bu1rs8g-1 I think that maybe the best solution here is something more like a domain-specific language that can be used with ipython/jupyter as an alternative profile. In the DSL 1/2 could be a Fraction and 2x^2 could be the equivalent of 2*x**2 etc. You'd probably want to be able to write things like √2 and x'' and so on. Maybe you could have a better syntax for creating matrices than the clunky list of lists. 
Basically there are lots of things that you can't quite do in Python but that you might want to do if you were making a language specifically for the purpose of doing stuff with equations. -- Oscar
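Oscar's nsimplify heuristic has a stdlib analogue: `Fraction.limit_denominator`, which finds the closest fraction with a bounded denominator and so undoes decimal-to-binary rounding. A runnable aside using only the fractions module:

```python
from fractions import Fraction

exact = Fraction(0.1)              # the true binary value of the float 0.1
print(exact)                       # 3602879701896397/36028797018963968
print(exact.limit_denominator())   # 1/10 -- the rounding undone

# The same heuristic recovers 2/3 from the float 2/3:
print(Fraction(2/3).limit_denominator())  # 2/3
```

Like nsimplify, this is a heuristic: it guesses the "nice" rational nearest the float, which is usually, but not provably, what the user wrote.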

Hi Oscar,
The entire point of my proposal is to give SymPy the possibility to "hook into expressions like 1/2". The examples I posted come from an interpreter (and slightly modified SymPy) where I made that possible. The idea is that the Python parser keeps the information where the 1/2 is coming from, and give SymPy a chance to peek into it. And all this in a documented, official way, without the need to do magic.
Also in this case my proposal would give SymPy the chance to understand what the user actually entered, and act accordingly. I agree that the case for 0.1 is not as compelling as the one for 1/2: while for the latter the user almost certainly means "one half", this is not clear for the former.
On the thread some days ago we had this discussion, whether we should add a new operator. People didn't seem to like it.
I think SageMath does exactly this. While this is perfectly fine, it splits the symbolic-math-in-Python world. If everybody uses their favorite DSL, we drift apart. My goal is to look at those DSLs and check what could be integrated into standard Python. I started with the 1/2 problem, because it was a problem I actually stumbled upon, and which I considered simple to solve.
For most of the time my opinion was: stop dreaming, get real. But then they pushed structural pattern matching, a super complicated beast of syntax for not much benefit. So I hoped that we could get some syntax for symbolic math as well. Cheers Martin

On Tue, May 18, 2021 at 05:30:29PM -0000, Martin Teichmann wrote:
https://www.jonathanturner.org/rethinking-the-blub-paradox/ -- Steve

Martin Teichmann writes:
Also in this case my proposal would give SymPy the chance to understand what the user actually entered, and act accordingly.
But *only* in this case. It seems to me that what SymPy would really like is for arithmetic expressions in certain contexts to be returned as syntax trees rather than executed. But this really would greatly complicate the mental model, unlike your ratiofloat proposal. Steve

Martin Teichmann writes:
Also in this case my proposal would give SymPy the chance to understand what the user actually entered, and act accordingly.
I am sorry, but could you point out an open issue or discussion by developers on SymPy repository or a forum where such a feature has been mentioned? I don't recall seeing any such discussion and would love to find out more about SymPy's perspective on this topic. André Roberge

On Tue, May 18, 2021 at 05:21:28PM +0100, Oscar Benjamin wrote:
Not just ipython and jupyter. The Python std lib has a module that simulates the built-in interpreter. With a bit of jiggery-pokery, it should be possible to adapt that to allow a symbolic maths DSL. https://docs.python.org/3/library/code.html I think the only limitation is that the code module requires legal Python syntax, so you can't write "2x" to get `2*x`. Nevertheless, it should be capable of exact fractions. The sys module has a display hook for customizing printing: https://docs.python.org/3/library/sys.html#sys.displayhook It would be nice if it had an input hook as well, so that Martin could customise "1/2" to "Fraction(1, 2)". I think that would be a much more useful enhancement than ratiofloat. -- Steve
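The input-hook idea Steven sketches can be prototyped today with the stdlib `ast` module: parse the source, rewrite literal int/int divisions into `Fraction` calls, then compile. A minimal sketch under those assumptions; `FractionLiterals` and `run` are hypothetical names, and a real version would plug into `code.InteractiveConsole`:

```python
import ast
from fractions import Fraction

class FractionLiterals(ast.NodeTransformer):
    """Rewrite  int-literal / int-literal  into  Fraction(a, b)."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform nested divisions first
        if (isinstance(node.op, ast.Div)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.left.value, int)
                and isinstance(node.right.value, int)):
            call = ast.Call(func=ast.Name(id='Fraction', ctx=ast.Load()),
                            args=[node.left, node.right], keywords=[])
            return ast.copy_location(call, node)
        return node

def run(source, namespace):
    """Execute source with fraction literals enabled."""
    tree = FractionLiterals().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    exec(compile(tree, '<dsl>', 'exec'), namespace)

ns = {'Fraction': Fraction}
run('result = 1/2 + 1/3', ns)
print(ns['result'])  # 5/6
```

Because only literal operands are rewritten, `one/three` with variables still yields a float, which matches the compile-time-only semantics discussed in this thread.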

On Tue, May 18, 2021, 9:18 PM Steven D'Aprano <steve@pearwood.info> wrote:
A math DSL that at minimum supported:
1. something like these fraction literals
2. a more math-like lambda syntax, like: f(x) = x ...
3. a caret symbol for the exponentiation operator
would generate quite a bit of interest from me. Happily Sage math exists. Sadly, in general I've found it difficult to make use of Sage compared to regular Python. If something like this were supported as a part of the core language I'd be likely to use it. I have no idea what kind of effort that would entail on the part of others though.

As I said in my post in your previous thread, I'm -1. Why? Because you're not solving anything. In fact you're making it more complicated. Developers can have difficulties studying your idea, beginners don't stand a chance (In fact I have no idea what you described here in your idea and trust me I'd move on to another programming language if this change was brought to Python. This is one of the things that doesn't fit in my brain.). Python is a simple language and we, the community, like to keep it simple. Yes we can extend it by providing more changes that improve performance or by adding functionality to third-party modules but introducing this big a change is not a great idea. But yeah I could be wrong.

On Tue, May 18, 2021 at 12:39 AM Martin Teichmann < martin.teichmann@gmail.com> wrote:
This violates a basic property of Python. If 1/2 has a certain property, then `x = 1; y = 2; x/y` should have the same property. Please don't go down this road. -- --Guido van Rossum (python.org/~guido) *Pronouns: he/him **(why is my pronoun here?)* <http://feministing.com/2015/02/03/how-using-they-as-a-singular-pronoun-can-c...>

Why do we need this functionality? What are the benefits? What are you trying to solve? Is it worth adding it?

Hi Shreyan, we need this for symbolic math, e.g. in sympy. Probably you have never seen somebody doing symbolic math with Python, but believe me, there are many. Let me walk you through a toy problem to show where the issues are. Let's say we want to solve the equation x**2 == 3. Not a very tough equation, in reality things would be much more complicated. To start, we import sympy:
from sympy import symbols, solve
Then we need to declare our variable:
x = symbols("x")
Now we would just use the solve function to solve the equation. Unfortunately the == operator does not work (another, independent problem), so we have to reformulate our equation to x**2 - 3 == 0, and knowing that sympy implicitly adds "== 0" to the equation, we can just type
solve(x**2-3) [-sqrt(3), sqrt(3)]
The result is not very astonishing. But wait, we realize we made a mistake: it is not actually 3 that we have to equate to, but 2/3! So we type
solve(x**2-2/3) [-0.816496580927726, 0.816496580927726]
That went wrong. We wanted to have a symbolic result, not a numeric solution! How did that come about? Well, that's the point of discussion: 2/3 gets immediately turned into 0.66666..., and sympy has no clue where this is coming from. Sure, we can reformulate our problem:
solve(3*x**2-2) [-sqrt(6)/3, sqrt(6)/3]
in our toy problem that is simple, but imagine what happens if you have 20+ terms that need to be changed. This is when it comes in handy to just be able to write 2/3 to mean two-third. Sympy gives you other ways than reformulating the problem to solve it, but that's a different topic. My point is, with my proposal, you can just write
solve(x**2 - 2/3) [-sqrt(6)/3, sqrt(6)/3]
no weird things necessary. But you might ask, what if the fraction is not a constant? Well, in this case we need to declare another variable, and also tell solve in a second parameter which variable we want to solve for:
ok, but if we now put some concrete value for a, would that not cause problems? Well the way you do that is to substitute:
as you see, there is no problem here. This is why the discussion whether the ratiofloat should also appear when we are dividing two variables which contain an int was not very interesting to me, as this is not a relevant problem for symbolic math. I hope I was able to show you where fractional constants are useful. Symbolic math in Python has many more problems, the fact that we cannot use == for example (I actually don't know why that is not possible, we should ask people from SymPy). Also the declaration of variables is a bit weird. But those problems are for another day. Cheers Martin
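The information loss at the heart of this walkthrough is visible with the stdlib alone, without sympy. A runnable aside:

```python
from fractions import Fraction

print(2/3)            # 0.6666666666666666 -- the fraction is already gone
print(Fraction(2/3))  # the exact binary value, not two thirds
print(Fraction(2, 3)) # 2/3 -- what the user actually meant
print(Fraction(2/3) == Fraction(2, 3))  # False
```

By the time any library sees the value of `2/3`, only the rounded float remains; that is the gap the literal bread crumbs are meant to close.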

The more explanation I see from the OP, the more I oppose the idea. It becomes more clear that this is a specialization, basically only to help SymPy, while introducing semantic changes for everyone else (even if minor ones). Moreover, SymPy already has several readily available ways to do this, albeit with slightly less appealing spellings. Now, I WILL note that NumPy motivated several changes to Python. I believe even strides in slices first benefited NumPy. Likewise the Ellipsis object, extended slices, and the `@` operator. However...
1. Unlike this proposal, none of those were backwards-incompatible changes (beyond the trivial sense that "code that used to raise a SyntaxError doesn't now").
2. The NumPy community (especially counting indirect users of libraries such as pandas, scikit-learn, matplotlib, etc.) is more than 10x the size of SymPy's, and probably more than 100x.
On Tue, May 18, 2021, 5:42 PM Martin Teichmann <martin.teichmann@gmail.com> wrote:
participants (15)
- André Roberge
- Chris Angelico
- David Mertz
- Eric V. Smith
- Guido van Rossum
- Ir. Robert Vanden Eynde
- Martin Teichmann
- Matt del Valle
- Oscar Benjamin
- Paul Moore
- Ricky Teachey
- Shreyan Avigyan
- Stefan Behnel
- Stephen J. Turnbull
- Steven D'Aprano