approximate equality operator ("PEP 485 follow-up")

Hi all,

after just having typed tons of `math.isclose` (see PEP 485 [1]) and `numpy.isclose` calls (while basically using their default tolerances most of the time), I was wondering whether it makes sense to add a matching operator.

"Strict" equality check, as usual: `a == b`
"Approximate" equality check:
- `a ?= b` (similar to `!=`)
- `a ~= b` (my preference)
- `a === b` (just to confuse the JS crowd)

A corresponding method could, for instance, be named `__ic__` (Is Close), `__ae__` (Approximately Equal) or `__ce__` (Close Equal).

It's such a common problem when dealing with floating point numbers or similar data types (arrays of floats, float-based geometries, etc.) that an operator of this kind would make many people's lives much easier, I bet. I have overloaded the modulo operator in some test code, and it actually helps a lot to make some logic more readable.

Best regards,
Sebastian

[1] https://www.python.org/dev/peps/pep-0485/
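
For reference, the pattern the proposal would abbreviate (the `~=` spelling is hypothetical; only the `math.isclose` calls below are real API):

```python
import math

# What gets typed today, usually with the default tolerances
# (rel_tol=1e-09, abs_tol=0.0):
a = 0.1 + 0.2
b = 0.3
print(a == b)               # False: classic binary floating-point artifact
print(math.isclose(a, b))   # True

# What the proposal would allow (hypothetical syntax, NOT valid Python):
#     a ~= b
```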

https://en.wikipedia.org/wiki/List_of_mathematical_symbols#Symbols_based_on_...
https://en.wikipedia.org/wiki/Tilde#As_a_relational_operator
- https://patsy.readthedocs.io/en/latest/formulas.html
https://docs.python.org/3/reference/expressions.html#index-61
https://numpy.org/doc/stable/reference/generated/numpy.allclose.html
Presumably, tensor comparisons have different broadcasting rules, but allclose is similarly defined.
On Sun, Jun 14, 2020, 8:43 AM Sebastian M. Ernst <ernst@pleiszenburg.de> wrote:

On Sun, Jun 14, 2020 at 6:34 AM Wes Turner <wes.turner@gmail.com> wrote:
Yes, the tilde would be good, but, well, it is already a unary operator, not a binary one. And if we were to add a new operator, it should have the same precedence as == currently does, so we can't really re-use any others either. But ~= would read well for those of us accustomed to its use in math. Or, if ever Python goes full-on Unicode: ≈

https://numpy.org/doc/stable/reference/generated/numpy.allclose.html
The above equation is not symmetric in a and b, so that allclose(a, b) might be different from allclose(b, a) in some rare cases.
There are a number of issues with the numpy implementations, one of which is this: we really wouldn't want an asymmetric equality-like operator. That's one reason why math.isclose doesn't use the same algorithm: it is symmetric.

That being said, I don't think this is a good idea, because of two issues:

1) As stated by Greg Ewing, if you are comparing floats, you really should be thinking about what tolerance makes sense in your use case.

2) Even more critical, isclose() has the absolute tolerance set to zero, so that nothing will compare as "close" to zero. So you can't really use it without thinking about what value makes sense in your use case.

Both of which lead to a basic concept: there is no single definition of "close", and operators can only have a single definition.

NOTE: a way to get around that would be to have a global setting for the tolerances, or a context manager:

    with close_tolerances(rel_tol=1e-12, abs_tol=1e-25):
        if something ~= something_else:
            do_something()

but frankly, the overhead of writing the extra code overwhelms the hassle of writing isclose(). So: -1 from me.

However, to Alex's point: if someone posts some motivating examples where the operator would really make the code more readable, *maybe* we could be persuaded.

-CHB

--
Christopher Barker, PhD
Python Language Consulting
- Teaching
- Scientific Software Development
- Desktop GUI and Web Development
- wxPython, numpy, scipy, Cython
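
The context-manager idea sketched above is implementable today for the function form, at least (a minimal sketch; `close_tolerances` and `approx` are hypothetical names, not an existing API):

```python
import contextlib
import contextvars
import math

# Module-wide tolerance settings, overridable per block.
_tols = contextvars.ContextVar("tols", default=(1e-09, 0.0))

@contextlib.contextmanager
def close_tolerances(rel_tol=1e-09, abs_tol=0.0):
    token = _tols.set((rel_tol, abs_tol))
    try:
        yield
    finally:
        _tols.reset(token)

def approx(a, b):
    # Compare using whatever tolerances are active in this context.
    rel_tol, abs_tol = _tols.get()
    return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)

print(approx(1e-20, 0.0))          # False: default abs_tol is 0.0
with close_tolerances(rel_tol=1e-12, abs_tol=1e-15):
    print(approx(1e-20, 0.0))      # True under the relaxed abs_tol
```

An operator could only hook into this kind of ambient state, which is exactly the readability/refactoring hazard discussed in this thread.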

On 15/06/20 12:39 am, Sebastian M. Ernst wrote:
It's such a common problem when dealing with floating point numbers
Is it really? I've done quite a lot of work with floating point numbers, and I've very rarely needed to compare two of them for almost-equality. When I do, I always want to be in control of the tolerance rather than have a default tolerance provided for me. I'm inclined to suspect that if you think you need something like this, you're using an unreliable algorithm. -- Greg

On Sun, Jun 14, 2020, 10:22 AM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
I've had occasion to use math.isclose(), np.isclose(), and np.allclose() quite often. And most of the time, the default tolerances are good enough for my purpose. Note, moreover, that NumPy and math use different algorithms to define closeness. But it's more often than rare that I want to choose a different tolerance (or to switch between absolute and relative tolerance). Adding an operator adds an impediment to refactoring to change the tolerance. I'm more concerned about that problem than I am with the few extra characters needed to call a function.
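
The two algorithms mentioned above can be restated in pure Python from their documented formulas (my sketch of the definitions in the numpy.isclose and math.isclose docs, not the actual implementations), which also makes the asymmetry discussed earlier in the thread concrete:

```python
import math

def numpy_style_isclose(a, b, rtol=1e-05, atol=1e-08):
    # numpy's documented test: the relative part scales with |b| only,
    # so swapping the arguments can change the answer.
    return abs(a - b) <= atol + rtol * abs(b)

def math_style_isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    # math.isclose's documented test: symmetric, scales with the larger
    # magnitude, and abs_tol defaults to zero.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# A value chosen to sit inside the narrow window where the numpy-style
# formula is order-dependent:
a = 1.0
b = 1.0 + 1e-5 + 1e-8 + 5e-11
print(numpy_style_isclose(a, b))  # True
print(numpy_style_isclose(b, a))  # False
print(math_style_isclose(a, b) == math_style_isclose(b, a))  # True: symmetric
```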

On Mon, 15 Jun 2020 at 00:21, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
I can't elaborate on David's use, but in my own experience these functions are mostly useful for interactive checking or for something like unit tests. They can be used extensively in the testing code for projects with a lot of floating point functions. It's unlikely that a fundamental numeric algorithm would want to use a generic function such as this without explicitly setting the tolerance, though, and that wouldn't work with an operator. Fishing for examples in numpy testing code, I quickly came across this test file, where almost every test is based on a function assert_array_almost_equal: https://github.com/numpy/numpy/blob/master/numpy/fft/tests/test_helper.py The docstring for assert_array_almost_equal refers to more functions that are variants of the idea of closeness testing, but somehow not quite the same. -- Oscar
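
The unit-testing pattern described above looks roughly like this (a minimal sketch; the function and values are my own toy example, not from numpy's suite):

```python
import math
import unittest

# A float-heavy function whose result should match its mathematical
# expectation only to within a stated tolerance.
def naive_variance(data):
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)

class TestVariance(unittest.TestCase):
    def test_close_to_expected(self):
        data = [0.1] * 10 + [0.3] * 10
        # Exact == would be brittle here; an explicit tolerance is not.
        self.assertTrue(math.isclose(naive_variance(data), 0.01, rel_tol=1e-9))

# Run the suite programmatically (unittest.main() would also work in a script):
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestVariance)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```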

On Sun, Jun 14, 2020 at 7:49 PM Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
At times I have computations which *should be* the same mathematically, but are carried out through a different sequence of specific computations. One common example is in parallel frameworks where the order of computation is indeterminate because multiple workers/threads/processes are each calculating portions to aggregate. Another related case is when I call some library to do an operation, but I did not write the library, nor do I understand its guts well. For example, the tensor libraries used in neural networks that will calculate a loss function. Occasionally I'd like to be able to replicate (within a tolerance) a computation the library performs using something more general like NumPy. Having a few ulps difference is typical, but counts as validating the "same" answer. Another occasion I encounter it is with data measurements. Some sort of instrument collects measurements with a small jitter. Two measurements that cannot be distinguished based on the precision of the instrument might nonetheless be stored as different floating point numbers. In that case, I probably want to be able to tweak the tolerances for the specific case. -- The dead increasingly dominate and strangle both the living and the not-yet born. Vampiric capital and undead corporate persons abuse the lives and control the thoughts of homo faber. Ideas, once born, become abortifacients against new conceptions.
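
The order-of-computation point is easy to demonstrate even without a parallel framework (a minimal illustration of my own, not from the thread):

```python
import math

# Summing the same values in a different order can give a different float:
xs = [0.1] * 10
forward = sum(xs)
chunked = sum(xs[:5]) + sum(xs[5:])    # e.g. two workers, then aggregate

print(forward == 1.0)                  # False
print(forward == chunked)              # False: the groupings round differently
print(math.isclose(forward, 1.0))      # True
print(math.isclose(forward, chunked))  # True
```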

pytest.approx
https://docs.pytest.org/en/stable/reference.html#pytest-approx

```
The ``approx`` class performs floating-point comparisons using a syntax
that's as intuitive as possible::

    >>> from pytest import approx
    >>> 0.1 + 0.2 == approx(0.3)
    True

The same syntax also works for sequences of numbers::

    >>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
    True

Dictionary *values*::

    >>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
    True

``numpy`` arrays::

    >>> import numpy as np  # doctest: +SKIP
    >>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))  # doctest: +SKIP
    True

And for a ``numpy`` array against a scalar::

    >>> import numpy as np  # doctest: +SKIP
    >>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3)  # doctest: +SKIP
    True

By default, ``approx`` considers numbers within a relative tolerance of
``1e-6`` (i.e. one part in a million) of its expected value to be equal.
This treatment would lead to surprising results if the expected value was
``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``.
To handle this case less surprisingly, ``approx`` also considers numbers
within an absolute tolerance of ``1e-12`` of its expected value to be equal.

Infinity and NaN are special cases. Infinity is only considered equal to
itself, regardless of the relative tolerance. NaN is not considered equal
to anything by default, but you can make it be equal to itself by setting
the ``nan_ok`` argument to True. (This is meant to facilitate comparing
arrays that use NaN to mean "no data".)

Both the relative and absolute tolerances can be changed by passing
arguments to the ``approx`` constructor::

    >>> 1.0001 == approx(1)
    False
    >>> 1.0001 == approx(1, rel=1e-3)
    True
    >>> 1.0001 == approx(1, abs=1e-3)
    True
```

On Sun, Jun 14, 2020, 9:39 PM David Mertz <mertz@gnosis.cx> wrote:

On Sun, Jun 14, 2020 at 7:15 PM Wes Turner <wes.turner@gmail.com> wrote:
pytest.approx https://docs.pytest.org/en/stable/reference.html#pytest-approx
Thanks Wes, somehow I never noticed that. It's pretty nifty, particularly how it can handle dicts and the like automatically. I'm not a fan of the defaults, but maybe that's nit picking... Anyone know when that was added to pytest? -CHB -- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython

On 14.06.20 17:52, David Mertz wrote:
I never use `math.isclose` or `np.isclose` without specifying tolerances, even if they happen to be the defaults (which is rare). There is no such thing as "x and y are approximately equal"; the question is always within what bounds. And this question must be answered by the programmer and the answer should be stated explicitly. Obviously these tolerances are application dependent, be it measurement errors, limited precision of sensors, numerical errors, etc. What makes the default values so special anyway? If I were to design such a function, I wouldn't provide any defaults at all. Yes, I read PEP-485, but I'm not convinced. The paragraph [Relative Tolerance Default](https://www.python.org/dev/peps/pep-0485/#relative-tolerance-default) starts with:
The relative tolerance required for two values to be considered "close" is entirely use-case dependent.
That doesn't call for a default value. [`np.isclose`](https://numpy.org/doc/stable/reference/generated/numpy.isclose.html) is even more extreme: They also specify (non-zero) defaults and because of that they need to display a *warning* at their docs which reads:
The default atol is not appropriate for comparing numbers that are much smaller than one (see Notes).
Then in the notes there is:
atol should be carefully selected for the use case at hand.
Sounds like it would've been more appropriate not to specify a default in the first place. Sure, some people might complain who want a quick way to determine whether two numbers are approximately equal, but as mentioned above, this question cannot be answered without specifying the bounds. All that the default tolerances do is prevent people from thinking about the appropriate values for their specific application. Since an operator doesn't allow specifying any tolerances, it's not a suitable replacement for the `isclose` functions.
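
The near-zero concern raised above is easy to check against math.isclose's documented defaults (rel_tol=1e-09, abs_tol=0.0):

```python
import math

# With abs_tol left at its default of 0.0, nothing is "close" to zero:
print(math.isclose(1e-300, 0.0))              # False, however tiny the value
print(math.isclose(1e-9, 0.0))                # still False

# The comparison only behaves as intended once the programmer states an
# absolute tolerance appropriate to the application:
print(math.isclose(1e-9, 0.0, abs_tol=1e-6))  # True
```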

On Sun, Jun 14, 2020 at 10:41 PM Sebastian M. Ernst <ernst@pleiszenburg.de> wrote:
-1. I don't see that Python needs a different comparison operator, with all the debates that will come through about "when should I use == and when should I use the other". Especially since it'll almost certainly refuel the argument that you should never compare floats for equality. If you're doing a lot with isclose, you can always "from math import isclose as cl" and use a shorter name. But please don't encourage everyone to use isclose() in place of all comparisons. ChrisA

Hello, On Mon, 15 Jun 2020 00:30:19 +1000 Chris Angelico <rosuav@gmail.com> wrote:
All that makes good sense. I'd encourage everyone who thinks "I need a very special operator just for me" to instead think in terms of "Python needs the ability to define custom operators". Needless to say, that doesn't have anything to do with changes to the core implementation. Instead, you're looking to define a custom parser/tokenizer/AST transformer for your source. And all of that is already possible today. A recent example is the implementation of "from __future__ import braces": https://github.com/NeKitDS/braces.py .
ChrisA
[] -- Best regards, Paul mailto:pmiscml@gmail.com

Paul Sokolovsky writes:
I'd encourage everyone who thinks "I need a very special operator just for me",
I don't think anybody who posts here thinks that, though. They think "wow, I could really use this, and I bet other people too." math.isclose probably tempts somebody somewhere in the world about once a minute. Of course they often find out differently -- and mostly when they do, they're OK with that. (There are also joke or half-joke proposals, but that's a different thing, and we're all in on those jokes.)
instead think in terms "Python needs ability to define custom operators".
Reading that *literally* (I take it seriously below): Python has that ability already. It's just that the set of operator *symbols* is fixed (in a given version of Python), and almost exhausted for numerical types (but see below ;-). This has an important implication for readability: the associativity and precedence order of the symbols is also fixed, and only needs to be learned once.[1]

If you always want isclose behavior for float "equality", you can't monkey-patch (and you don't want to, I think), but you can subclass:

    import math

    class Real(float):
        exact_eq = float.__eq__
        def __eq__(self, other):
            return math.isclose(self, other)

or:

    class AltReal(float):
        def __matmul__(self, other):  # is self at other? close enuff!
            return math.isclose(self, other)

Getting serious, as promised: of course there will be work to do ensuring that all floats (more likely, all numbers) are converted to Reals at each entry point of the module, and probably a lot of duplication where "_private" versions of functions don't do the conversion and "public" versions of them do. That could be avoided with custom operator symbols in many cases. But imposing this work is a deliberate choice of the language designers, for readability reasons, and maybe others. Perhaps it should be reconsidered, but I'm quite conservative on this one.
Recent example: implementation of "from __future__ import braces": https://github.com/NeKitDS/braces.py .
I strongly recommend the idiom:

    import __future__ as __watch_out_for_jokes__

(Recent? Isn't that more than a decade old? Hmmm:

    >>> from __future__ import braces
      File "<stdin>", line 1
    SyntaxError: not a chance

Different implementation, I guess. :-D :-þ :-D :-þ :-D :-þ :-D)

Steve

Footnotes:
[1] This is not always a benefit. Occasionally there's a conventional assignment of symbols to operations where the "natural" behavior of the operations doesn't fit well with the associativity and precedence of the operators of 3rd grade arithmetic. But as a rule it's helpful.

Hello, On Tue, 16 Jun 2020 14:21:55 +0900 "Stephen J. Turnbull" <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
Sure, everyone thinks that they invented the next best thing since sliced bread. So, the natural turn of the discussion is that maybe it's not. And then maybe, the turn of the discussion is how to let different people do what they want, without forcing their ideas on everyone else.
But that's exactly my point: the core of the language is already quite comprehensive, well-defined, and even crowded. Stuffing even more stuff into it just doesn't scale. Take the recent example - "number-crunching people" came by to claim the "~=" operator, but "everyone else" knows that that operator is related to regexp/pattern matching.
Python has that ability already.
It absolutely has. It just requires more legwork than it really should take, and that needs to be changed.
Well, operators, their associativity and precedence are fixed in *the language core*. But Python is a multi-level structure. So, if you want to introduce a new operator symbol:

1. You subclass the tokenizer and add lexing for your operator. Simple usages can also be handled at the level of the tokenizer. Literally, if you're interested in simple things like "a ~= b", you just turn it into "isclose(a, b)".

2. More general and complex cases require subclassing the AST parser. That will allow you to deal with arbitrary precedence/associativity of new operators.

3. Then, you just use the subclassed tokenizer and parser for your sources.

To make it all easy, the easiness of step 3 is important. There's a PEP on that: https://www.python.org/dev/peps/pep-0511/ , which fell victim of Python politburo self-censorship, but its usefulness is not less because of that.

[]
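
Step 1 can be sketched at the crudest possible level with a plain text rewrite (my toy illustration, not Paul's code; a real version would subclass the tokenizer rather than use a regex, which here only handles bare `name ~= name` expressions):

```python
import math
import re

# Rewrite "x ~= y" into "math.isclose(x, y)" before handing the source
# to exec(). This is only a sketch: it matches simple names, and knows
# nothing about strings, comments, or arbitrary sub-expressions.
APPROX = re.compile(r"(\w+)\s*~=\s*(\w+)")

def rewrite(source):
    return APPROX.sub(r"math.isclose(\1, \2)", source)

src = "result = a ~= b"
print(rewrite(src))  # result = math.isclose(a, b)

ns = {"math": math, "a": 0.1 + 0.2, "b": 0.3}
exec(rewrite(src), ns)
print(ns["result"])  # True
```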
I'm not sure who didn't get the joke here. Let me just say that it's indeed a very old and buggy implementation of that feature from the proverbial Python politburo. So old that it's not funny any more.

There's a reason why even projects otherwise largely written in Python use a different language (like JavaScript) when they need easy-for-users scripting capabilities. A random example is https://github.com/frida/frida . That's because such projects want to let their users write one-liners like:

    for_each_kaboom(function (kaboom) { print("here's kaboom", kaboom); for (sub in kaboom) print(sub); })

And while the majority of people just leave Python behind in such cases, some people have spent so much time with Python that they refuse to (for they know that outside there's only a bigger mess). Instead, they look at how to extend Python to do what they want (without necessarily stuffing it all back in the core).
Steve
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Jun 16, 2020 at 6:16 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Actually there's another and a very important reason, unrelated to syntax. You can put untrusted code into a JS sandbox and expect it to be safe, but you can't easily sandbox Python code (especially not inside other Python code). That doesn't mean that JS is inherently better than Python; it just means that JS is the correct tool for that job. ChrisA

On Tue, Jun 16, 2020 at 11:14:24AM +0300, Paul Sokolovsky wrote:
What's so special about the ability to write one-liners? I trust that you agree that *limiting* users to only one line would be a terrible idea. So your scripting language surely will allow multiple lines of code. But if you give users the ability to write multiple lines of code, then what advantage is there to encourage them to cram them into a single line? Forth is a free-form concatenative language where whitespace separates words, and it lacks even the concept of "lines of code". To the interpreter, every Forth program is, or might as well be, a single line of code. You can literally write Forth code without once touching the Enter key, if you so choose. But in practice, everyone uses multiple lines, not just to separate word definitions but even within a single word once they exceed a certain level of complexity. As soon as a word contains loops or branches, the two dimensional structure is essential for the sake of maintainability and readability. In brace languages, there are a whole mountain of errors that occur when the logical structure of the program, as defined by the braces, doesn't match the apparent structure of the program, as seen by the programmer who is guided by indentation. These errors are rare in Python. (I was going to say "impossible", but I suppose that a sufficiently clever person could manage one using nested comprehensions. But they're impossible using regular for-loops and other block structures.) All Python does is require what people -- at least sensible people -- are already doing.
So wait, if you acknowledge that outside there is only a bigger mess, why do you want to introduce that mess into Python and drag it down to that level? I honestly don't understand the point of your comment. Are you in favour of, or against, adding braces to Python?

Paul Sokolovsky writes:
Python *arithmetic* is comprehensive, well-defined, even crowded, and adding operators doesn't scale. Otherwise, it's generally quite sparse. Most classes define a very few operators, and mostly use named methods. I can't recall wanting a new operator symbol for any class myself, and the most plausible cases for Python at large we already got not so long ago, the walrus operator (which isn't a customizable operator in the sense we mean here) and the matmul operator. NULL-coalescing operators remain in discussion, but they're still of dubiously large benefit and the proposed syntaxes have yet to converge on a single candidate syntax IIRC.
Python is Turing-complete. It doesn't *need* anything. What you mean is that there are features you want that it doesn't have. And you (deliberately?) ignored the crucial distinction I was making between "custom operators" and "custom operator symbols". If there are enough operator symbols available, and there usually are, you just choose them and def dunders. That's hardly excessive legwork, although sometimes the analogy of the arithmetic operators to the operations of your class is somewhat strained.
Folks look at how to extend Python to do what they want (without necessarily stuffing it all back in the core).
Sure. That's the whole point of a programming language, of course: extending your software's capabilities without creating a monolith. The conflict here is that you want to take advantage of the well-structured Python language, then tweak it in ways that will be less readable to somebody who knows only Python-Dev Python. We in general don't want that -- we want a read-everywhere language. That's what the rejection rationale for PEP 511 says explicitly. We know about the advantages you're extolling, and we just don't want them, given the expected readability cost. If you want support from anybody who matters (I don't matter much :-), you need to provide convincing arguments that the facilities for customization we already have aren't enough. You keep asserting that they aren't, but without concrete evidence. By the way, this:
Python politburo self-censorship,
in no way helps your case. Python-Dev is very opinionated, and that is reflected in the language. It is not, however, censorship in any sense. Regards, Steve

On Sun, Jun 14, 2020 at 02:39:49PM +0200, Sebastian M. Ernst wrote:
I wrote the statistics module in the stdlib, and the tests for that use a lot of approximate equality tests: https://github.com/python/cpython/blob/3.8/Lib/test/test_statistics.py which includes an approx_equal function that pre-dates math.isclose, and a unittest assertApproxEqual method. So I like to think I'm a heavy user of approximate comparisons. I wouldn't use an approximate operator. Not even if it were written with the Unicode ≈ ALMOST EQUAL TO symbol :-) The problem is that the operator could only have a single pre-defined tolerance (or a pair of tolerances, absolute and relative), which would not be very useful in practice. So I would have to swap from the operator to a function call, and it is highly unlikely that the operator's tolerances would be what I need the majority of the time. -- Steven

Perhaps a more versatile operator would be to introduce a +- operator that would return an object with an __eq__ method that checks for equality within the tolerance, i.e.

    a == b +- 0.5

Although I don't like this either, since you could achieve the same thing with something like this:

    class Tolerance:
        def __init__(self, upper, lower=None):
            self.upper = upper
            self.lower = upper if lower is None else lower

        def __radd__(self, number):
            # number + Tolerance(t) -> the interval [number - t, number + t]
            return Tolerance(number + self.upper, number - self.lower)

        def __rsub__(self, number):
            # number - Tolerance(t) -> the same interval, mirrored
            return Tolerance(number + self.lower, number - self.upper)

        def __eq__(self, number):
            # after the arithmetic, lower/upper hold the absolute bounds
            return self.lower <= number <= self.upper

    a == b + Tolerance(0.5)

So maybe it would be nice to have something like this built into math?

On Thu, 18 Jun 2020 at 17:56, Steven D'Aprano <steve@pearwood.info> wrote:

Well there you go, good point. I didn't really like it being an operator myself. But I can see having a math.tolerance class being useful. On Tue, 23 Jun 2020 at 13:53, Jonathan Goble <jcgoble3@gmail.com> wrote:

On Tue, Jun 23, 2020 at 9:08 AM Mathew Elman <mathew.elman@ocado.com> wrote:
A little bit out of the box, but what about: a == b +/- 0.5 ...or even: a == b +or- 0.5 --- Ricky. "I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler

On Tue, Jun 23, 2020 at 11:09 AM Rhodri James <rhodri@kynesim.co.uk> wrote:
TLDR: Of course it's not that hard to use. But the friction from "python code" to "mathematical calculations" is too large to make reaching for Python as my go-to tool the first inclination. I'd really like to see this improved. I'm not strongly opinionated that this is a large, glaring need that simply *must* be rectified. Only that it would be VERY nice for those of us who use Python primarily for mathematical calculations.

BORING LONGER VERSION, BIT OF A RANT, SORRY:

As a civil engineer, when I reach for a tool with which I am intending to do MATH, I generally do not reach for Python right now -- even inside a Jupyter notebook. I reach for Mathcad or even Excel (blech). One of the biggest reasons is that Python code doesn't READ like real math in so many instances; this is a problem for me partly for my own reading, but also for my colleagues and clients. I would like to not have to spend a lot of time changing bits of the code to an output form for other non-programming people to read. My clients are not dumb, but if I were to print a Jupyter notebook out and hand it to someone to read who doesn't know what Python is, this:

    import math
    math.isclose(a, b, abs_tol=0.5)

...is just a step too far for them to read. So I have to use something else, or spend a lot of time massaging things to look more presentable. It would be nice -- for ME -- if there were ways to write functional code that looked more like calculations in Python code. And something like +/- fits that bill.

But I understand that not everyone -- perhaps not even close to a significant % of people -- has the same needs I do, spending their days focused on producing, writing/reading mostly mathematical calculations with explanations. And that not everyone has the difficulty of having to present the math they are performing with their code to other people who are expecting to be reading calculations, not computer code.
SIDEBAR: Another BIG pain point for the "gee it would be nice if python code could look more like mathematical calculations" problem is the function writing syntax -- I love Python, but sometimes I want to write a simple mathematical-looking structural engineering function:

    ∆y(P, H, L, E, I) = H * L^4 * P / (384 * E * I)

...and not:

    def ∆y(P, H, L, E, I):
        return H * L**4 * P / (384 * E * I)

Boy would it be cool if we could use the walrus to write one-liner math functions!

    ∆y(P, H, L, E, I) := H * L^4 * P / (384 * E * I)

THAT right there would change my life.

---
Ricky.

"I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler

∆y(P, H,L, E, I) := H * L^4 * P / (384 * E * I)
```python
Δy = lambda P, H, L, E, I: H * L**4 * P / (384 * E * I)
Δy
```

    <function __main__.<lambda>(P, H, L, E, I)>

Is there a good way to redefine the '^' operator for {int, float, Decimal, Fraction, numbers.Number}? Why would it be dangerous to monkey-patch global types this way? Could a context manager redefine `float.__xor__ = float.__pow__` for just that scope? Why or why not? https://docs.python.org/3.4/library/operator.html#operator.__pow__

As far as syntactical preferences go, LaTeX may be considered the canonical non-executable syntax for mathematical expressions and equations. SymPy can parse LaTeX into symbolic expressions. Jupyter can render LaTeX embedded in Markdown. SymPy displays expressions as LaTeX in Jupyter notebooks.

```python
from sympy import symbols

P, H, L, E, I = symbols('P, H, L, E, I')
Δy = H * L**4 * P / (384 * E * I)
Δy
```

$\frac{H L^{4} P}{384 E I}$

```python
import sympy
sympy.sympify("H*L^4*P/(384*E_*I_)")
```

$\frac{H L^{4} P}{384 E_{} I_{}}$

https://docs.sympy.org/latest/modules/core.html#sympy.core.sympify.sympify calls parse_expr with `convert_xor=True` by default. https://docs.sympy.org/latest/modules/parsing.html#sympy.parsing.sympy_parse... :
Treats XOR, ^, as exponentiation, **.
```python
from sympy.parsing.sympy_parser import (parse_expr,
                                        standard_transformations,
                                        convert_xor)

trx = standard_transformations + (convert_xor,)
parse_expr("H*L^4*P/(384*E_*I_)", transformations=trx)
```

$\frac{H L^{4} P}{384 E_{} I_{}}$

https://docs.sympy.org/latest/modules/parsing.html#experimental-mathrm-latex... :
```python
from sympy.parsing.latex import parse_latex

# parse_latex(r'\frac{H L^{4} P}{384 E_{} I_{}}')
parse_latex(r'\frac{H L^{4} P}{384 E I}')
```

$\displaystyle \frac{H L^{4} P}{384 E I}$

SageMath defines ^ as operator.pow. CoCalc supports Sage notebooks as well as Jupyter notebooks, which can import a CAS like SymPy, SymEngine, Diofant, or SageMath. https://ask.sagemath.org/question/49127/what-is-sage-equivalent-to-pow-in-sy... ... https://www.lidavidm.me/blog/posts/2013-09-15-implicit-parsing-in-sympy.html https://stackoverflow.com/questions/49284583/how-to-use-unicode-characters-a...

On Tue, Jun 23, 2020, 11:55 AM Ricky Teachey <ricky@teachey.org> wrote:

https://en.wikipedia.org/wiki/List_of_mathematical_symbols#Symbols_based_on_... https://en.wikipedia.org/wiki/Tilde#As_a_relational_operator - https://patsy.readthedocs.io/en/latest/formulas.html <https://patsy.readthedocs.io/en/latest/formulas.html> https://docs.python.org/3/reference/expressions.html#index-61
https://numpy.org/doc/stable/reference/generated/numpy.allclose.html
Presumably, tensor comparisons have different broadcasting rules but allclosr is similarly defined.
On Sun, Jun 14, 2020, 8:43 AM Sebastian M. Ernst <ernst@pleiszenburg.de> wrote:

On Sun, Jun 14, 2020 at 6:34 AM Wes Turner <wes.turner@gmail.com> wrote:
yes, the tilde would be good, but, well, is already a unary operator, not a binary one. And If we were to add a new operator, it should have the same precedence as == currently does, so we can't really re-use any others either. But ~= would read well for those of us accustomed it its use in math. Or, if ever Python goes to full on Unicode: ≈ https://numpy.org/doc/stable/reference/generated/numpy.allclose.html
The above equation is not symmetric in a and b, so that allclose(a, b) might be different from allclose(b, a) in some rare cases.
There are a number of issues with the numpy implementations, one of which is this: we really wouldn't want an asymmetric equality-like operator. That is one reason why math.isclose doesn't use the same algorithm: it is symmetric.

But that being said, I don't think this is a good idea, because of two issues:

1) As stated by Greg Ewing, if you are comparing floats, you really should be thinking about what tolerance makes sense in your use case.

2) Even more critical, isclose() has the absolute tolerance set to zero by default, so that nothing will compare as "close" to zero. So you can't really use it without thinking about what value makes sense in your use case.

Both of which lead to a basic concept: there is no single definition of "close", and operators can only have a single definition.

NOTE: A way to get around that would be to have a global setting for the tolerances, or a context manager:

```python
with close_tolerances(rel_tol=1e-12, abs_tol=1e-25):
    if something ~= something_else:
        do_something()
```

but frankly, the overhead of writing the extra code overwhelms the hassle of writing isclose().

So: -1 from me. However, to Alex's point: if someone posts some motivating examples where the operator would really make the code more readable, *maybe* we could be persuaded.

-CHB

--
Christopher Barker, PhD
Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython
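The context-manager idea sketched above could be prototyped today in user code. Here is a hedged sketch of that idea (`close_tolerances` and `isclose_ctx` are invented names for this sketch, not real stdlib APIs), using contextvars so the setting stays local to the current task:

```python
import math
from contextlib import contextmanager
from contextvars import ContextVar

# Invented names for this sketch, not real stdlib APIs.
_tolerances = ContextVar("tolerances", default=(1e-09, 0.0))  # (rel_tol, abs_tol)

@contextmanager
def close_tolerances(rel_tol=1e-09, abs_tol=0.0):
    # Temporarily override the ambient tolerances, restoring them on exit.
    token = _tolerances.set((rel_tol, abs_tol))
    try:
        yield
    finally:
        _tolerances.reset(token)

def isclose_ctx(a, b):
    # What a hypothetical `a ~= b` could dispatch to.
    rel_tol, abs_tol = _tolerances.get()
    return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)

with close_tolerances(rel_tol=1e-12, abs_tol=1e-25):
    print(isclose_ctx(1e-30, 2e-30))  # True: abs_tol now covers tiny values
print(isclose_ctx(1e-30, 2e-30))      # False: defaults restored on exit
```

As the message notes, the boilerplate here already outweighs just calling math.isclose with explicit tolerances.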

On 15/06/20 12:39 am, Sebastian M. Ernst wrote:
It's such a common problem when dealing with floating point numbers
Is it really? I've done quite a lot of work with floating point numbers, and I've very rarely needed to compare two of them for almost-equality. When I do, I always want to be in control of the tolerance rather than have a default tolerance provided for me. I'm inclined to suspect that if you think you need something like this, you're using an unreliable algorithm. -- Greg

On Sun, Jun 14, 2020, 10:22 AM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
I've had occasion to use math.isclose(), np.isclose(), and np.allclose() quite often. And most of the time, the default tolerances are good enough for my purpose. Note that NumPy and math use different algorithms to define closeness, moreover. But it's more often than rare that I want to choose a different tolerance (or switch between absolute and relative tolerance). Adding an operator adds an impediment to refactoring to change tolerance. I'm more concerned about that problem than I am with the few extra characters needed to call a function.
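The algorithmic difference mentioned above is easy to see without importing numpy, by transcribing the two documented formulas (numpy.isclose tests |a - b| <= atol + rtol*|b|; math.isclose uses a symmetric test). The helper names below are invented for this illustration:

```python
# Pure-Python transcriptions of the two documented closeness tests,
# to show the algorithmic difference without importing numpy.

def np_style_isclose(a, b, rtol=1e-05, atol=1e-08):
    # numpy.isclose: |a - b| <= atol + rtol * |b|  (depends on argument order)
    return abs(a - b) <= atol + rtol * abs(b)

def math_style_isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    # math.isclose: symmetric in a and b
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# An exaggerated rtol makes the asymmetry visible:
print(np_style_isclose(1.0, 2.0, rtol=0.5, atol=0.0))  # True:  tol = 0.5 * |2.0| = 1.0
print(np_style_isclose(2.0, 1.0, rtol=0.5, atol=0.0))  # False: tol = 0.5 * |1.0| = 0.5
print(math_style_isclose(1.0, 2.0, rel_tol=0.5))       # True, in either order
```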

On Mon, 15 Jun 2020 at 00:21, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
I can't elaborate on David's use but in my own experience these functions are mostly useful for interactive checking or for something like unit tests. They can be used extensively in the testing code for projects with a lot of floating point functions. It's unlikely that a fundamental numeric algorithm would want to make use of a generic function such as this though without explicitly setting the tolerance which wouldn't work with an operator. Fishing for examples in numpy testing code I quickly came across this test file where almost every test is based on a function assert_array_almost_equal: https://github.com/numpy/numpy/blob/master/numpy/fft/tests/test_helper.py The docstring for assert_array_almost_equal refers to more functions that are variants of the idea of closeness testing but somehow not quite the same. -- Oscar

On Sun, Jun 14, 2020 at 7:49 PM Oscar Benjamin <oscar.j.benjamin@gmail.com> wrote:
At times I have computations which *should be* the same mathematically, but are carried out through a different sequence of specific computations. One common example is in parallel frameworks where the order of computation is indeterminate because multiple workers/threads/processes are each calculating portions to aggregate. Another related case is when I call some library to do an operation, but I did not write the library, nor do I understand its guts well. For example, the tensor libraries used in neural networks that will calculate a loss function. Occasionally I'd like to be able to replicate (within a tolerance) a computation the library performs using something more general like NumPy. Having a few ulps difference is typical, but counts as validating the "same" answer. Another occasion I encounter it is with data measurements. Some sort of instrument collects measurements with a small jitter. Two measurements that cannot be distinguished based on the precision of the instrument might nonetheless be stored as different floating point numbers. In that case, I probably want to be able to tweak the tolerances for the specific case. -- The dead increasingly dominate and strangle both the living and the not-yet born. Vampiric capital and undead corporate persons abuse the lives and control the thoughts of homo faber. Ideas, once born, become abortifacients against new conceptions.
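The order-of-computation point above can be illustrated with plain stdlib Python, no parallel framework needed: floating-point addition is not associative, so regrouping the same terms (as a parallel reduction effectively does) can change the last few bits, and math.isclose validates the results as the "same" answer:

```python
import math

# Floating-point addition is not associative: a different grouping of
# the same terms can change the last few bits of the result.
values = [0.1] * 10
left_to_right = sum(values)    # ((0.1 + 0.1) + 0.1) + ...
pairwise = (0.1 + 0.1) * 5     # a different grouping of the same terms

print(left_to_right == pairwise)              # exact equality fails
print(math.isclose(left_to_right, pairwise))  # "close" succeeds
```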

pytest.approx
https://docs.pytest.org/en/stable/reference.html#pytest-approx

```
The ``approx`` class performs floating-point comparisons using a syntax
that's as intuitive as possible::

    >>> from pytest import approx
    >>> 0.1 + 0.2 == approx(0.3)
    True

The same syntax also works for sequences of numbers::

    >>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
    True

Dictionary *values*::

    >>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
    True

``numpy`` arrays::

    >>> import numpy as np  # doctest: +SKIP
    >>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))  # doctest: +SKIP
    True

And for a ``numpy`` array against a scalar::

    >>> import numpy as np  # doctest: +SKIP
    >>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3)  # doctest: +SKIP
    True

By default, ``approx`` considers numbers within a relative tolerance of
``1e-6`` (i.e. one part in a million) of its expected value to be equal.
This treatment would lead to surprising results if the expected value was
``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``.
To handle this case less surprisingly, ``approx`` also considers numbers
within an absolute tolerance of ``1e-12`` of its expected value to be equal.

Infinity and NaN are special cases. Infinity is only considered equal to
itself, regardless of the relative tolerance. NaN is not considered equal
to anything by default, but you can make it be equal to itself by setting
the ``nan_ok`` argument to True. (This is meant to facilitate comparing
arrays that use NaN to mean "no data".)

Both the relative and absolute tolerances can be changed by passing
arguments to the ``approx`` constructor::

    >>> 1.0001 == approx(1)
    False
    >>> 1.0001 == approx(1, rel=1e-3)
    True
    >>> 1.0001 == approx(1, abs=1e-3)
    True
```

On Sun, Jun 14, 2020, 9:39 PM David Mertz <mertz@gnosis.cx> wrote:

On Sun, Jun 14, 2020 at 7:15 PM Wes Turner <wes.turner@gmail.com> wrote:
pytest.approx https://docs.pytest.org/en/stable/reference.html#pytest-approx
Thanks Wes, somehow I never noticed that. It's pretty nifty, particularly how it can handle dicts and the like automatically. I'm not a fan of the defaults, but maybe that's nit picking... Anyone know when that was added to pytest? -CHB -- Christopher Barker, PhD Python Language Consulting - Teaching - Scientific Software Development - Desktop GUI and Web Development - wxPython, numpy, scipy, Cython

On 14.06.20 17:52, David Mertz wrote:
I never use `math.isclose` or `np.isclose` without specifying tolerances, even if they happen to be the defaults (which is rare). There is no such thing as "x and y are approximately equal"; the question is always within what bounds. And this question must be answered by the programmer and the answer should be stated explicitly. Obviously these tolerances are application dependent, be it measurement errors, limited precision of sensors, numerical errors, etc. What makes the default values so special anyway? If I were to design such a function, I wouldn't provide any defaults at all. Yes, I read PEP-485, but I'm not convinced. The paragraph [Relative Tolerance Default](https://www.python.org/dev/peps/pep-0485/#relative-tolerance-default) starts with:
The relative tolerance required for two values to be considered "close" is entirely use-case dependent.
That doesn't call for a default value. [`np.isclose`](https://numpy.org/doc/stable/reference/generated/numpy.isclose.html) is even more extreme: They also specify (non-zero) defaults and because of that they need to display a *warning* at their docs which reads:
The default atol is not appropriate for comparing numbers that are much smaller than one (see Notes).
Then in the notes there is:
atol should be carefully selected for the use case at hand.
Sounds like it would've been more appropriate not to specify a default in the first place. Sure, some people might complain who want a quick way to determine whether two numbers are approximately equal, but as mentioned above, this question cannot be answered without specifying the bounds. All that the default tolerances do is prevent people from thinking about the appropriate values for their specific application. Since an operator doesn't allow specifying any tolerances, it's not a suitable replacement for the `isclose` functions.
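The zero-tolerance point made above is easy to demonstrate: with math.isclose's default abs_tol=0.0, no nonzero value is "close" to zero, so the caller has to pick a scale; and the relative-tolerance default silently decides cases too:

```python
import math

# With the default abs_tol=0.0, no nonzero value is "close" to 0.0:
print(math.isclose(1e-300, 0.0))                  # False
# The caller must supply an absolute tolerance with a meaningful scale:
print(math.isclose(1e-300, 0.0, abs_tol=1e-12))   # True
# The relative-tolerance default matters too; these differ by a factor of 2:
print(math.isclose(1e-10, 2e-10))                 # False (relative diff 0.5)
```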

On Sun, Jun 14, 2020 at 10:41 PM Sebastian M. Ernst <ernst@pleiszenburg.de> wrote:
-1. I don't see that Python needs a different comparison operator, with all the debates that will come through about "when should I use == and when should I use the other". Especially since it'll almost certainly refuel the argument that you should never compare floats for equality. If you're doing a lot with isclose, you can always "from math import isclose as cl" and use a shorter name. But please don't encourage everyone to use isclose() in place of all comparisons. ChrisA

Hello, On Mon, 15 Jun 2020 00:30:19 +1000 Chris Angelico <rosuav@gmail.com> wrote:
All that makes good sense. I'd encourage everyone who thinks "I need a very special operator just for me", instead think in terms "Python needs ability to define custom operators". Needless to say, that doesn't have anything to do with changes to a core implementation. Instead, you're looking to be able to define a custom parser/tokenizer/AST transformer for your source. And all that is already possible today. Recent example: implementation of "from __future__ import braces": https://github.com/NeKitDS/braces.py .
ChrisA
[] -- Best regards, Paul mailto:pmiscml@gmail.com

Paul Sokolovsky writes:
I'd encourage everyone who thinks "I need a very special operator just for me",
I don't think anybody who posts here thinks that, though. They think "wow, I could really use this, and I bet other people too." math.isclose probably tempts somebody somewhere in the world about once a minute. Of course they often find out differently -- and mostly when they do, they're OK with that. (There are also joke or half-joke proposals, but that's a different thing, and we're all in on those jokes.)
instead think in terms "Python needs ability to define custom operators".
Reading that *literally* (I take it seriously, below): Python has that ability already. It's just that the set of operator *symbols* is fixed (in a given version of Python), and almost exhausted for numerical types (but see below ;-). This has an important implication for readability: the associativity and precedence order of the symbols is also fixed, and only needs to be learned once.[1]

If you always want isclose behavior for float "equality", you can't monkey-patch (and you don't want to, I think), but you can subclass:

```python
import math

class Real(float):
    exact_eq = float.__eq__

    def __eq__(self, other):
        return math.isclose(self, other)
```

or

```python
class AltReal(float):
    def __matmul__(self, other):  # is self at other? close enuff!
        return math.isclose(self, other)
```

Getting serious, as promised: of course there will be work to do ensuring that all floats (more likely, all numbers) are converted to Reals in each entry point of the module, and probably a lot of duplication where "_private" versions of functions don't do the conversion and "public" versions of them do. That could be avoided with custom operator symbols in many cases. But imposing this work is a deliberate choice of the language designers, for readability reasons, and maybe others. Perhaps it should be reconsidered, but I'm quite conservative on this one.
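Restating the Real subclass from the message above as a self-contained check (one addition: keeping float's __hash__, since defining __eq__ alone would make instances unhashable):

```python
import math

class Real(float):
    exact_eq = float.__eq__

    def __eq__(self, other):
        return math.isclose(self, other)

    __hash__ = float.__hash__  # defining __eq__ alone would set __hash__ to None

print(Real(0.1 + 0.2) == 0.3)          # True: isclose-based "equality"
print(Real(0.1 + 0.2).exact_eq(0.3))   # False: the exact comparison survives
print(type(Real(0.1) + Real(0.2)))     # float, not Real: the conversion burden
```

The last line shows the "work to do" mentioned above: arithmetic on Real instances returns plain floats, so every entry point would have to re-wrap its results.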
Recent example: implementation of "from __future__ import braces": https://github.com/NeKitDS/braces.py .
I strongly recommend the idiom:

```python
import __future__ as __watch_out_for_jokes__
```

(Recent? Isn't that more than a decade old? Hmmm:

```
>>> from __future__ import braces
  File "<stdin>", line 1
SyntaxError: not a chance
```

Different implementation, I guess. :-D :-þ :-D :-þ :-D :-þ :-D)

Steve

Footnotes:
[1] This is not always a benefit. Occasionally there's a conventional assignment of symbols to operations where the "natural" behavior of the operations doesn't fit well with the associativity and precedence of the operators of 3rd grade arithmetic. But as a rule it's helpful.

Hello, On Tue, 16 Jun 2020 14:21:55 +0900 "Stephen J. Turnbull" <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
Sure, everyone thinks that they invented the next best thing since sliced bread. So, the natural turn of the discussion is that maybe it's not. And then maybe, the turn of the discussion is how to let different people do what they want, without forcing their ideas on everyone else.
But that's exactly my point: the core of the language is already quite comprehensive, well-defined, and even crowded. Stuffing even more stuff into it just doesn't scale. Take the recent example - "number-crunching people" came by to claim the "~=" operator, but "everyone else" knows that that operator is related to regexp/pattern matching.
Python has that ability already.
It absolutely has. It's just requires more legwork than it really should take, and that needs to be changed.
Well, operators, their associativity and precedence are fixed in *the language core*. But Python is a multi-level structure. So, if you want to introduce a new operator symbol:

1. You subclass a tokenizer and add lexing for your operator. Simple usages can also be handled at the level of the tokenizer: literally, if you're interested in simple things like "a ~= b", you just turn it into "isclose(a, b)".
2. More general and complex cases require subclassing an AST parser. That allows dealing with arbitrary precedence/associativity of new operators.
3. Then, you just use the subclassed tokenizer and parser for your sources.

To make it all easy, ease of step 3 is important. There's a PEP on that: https://www.python.org/dev/peps/pep-0511/ , which fell victim to Python politburo self-censorship, but its usefulness is not less because of that.

[]
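As a proof of concept for step 1, here is a hedged sketch using the stdlib tokenize module (`rewrite_close` is a hypothetical helper, and it only handles one top-level `~=` per line with plain expressions on either side):

```python
import io
import math
import tokenize

def rewrite_close(source):
    """Turn the hypothetical 'a ~= b' operator into 'math.isclose(a, b)'.

    A sketch only: assumes at most one '~=' per line, with plain
    expressions (no statements) on either side.
    """
    out_lines = []
    for line in source.splitlines():
        tokens = list(tokenize.generate_tokens(io.StringIO(line).readline))
        cut = None
        for t1, t2 in zip(tokens, tokens[1:]):
            # '~=' lexes as two adjacent OP tokens, '~' then '='
            if t1.string == "~" and t2.string == "=" and t1.end == t2.start:
                cut = (t1.start[1], t2.end[1])
                break
        if cut is None:
            out_lines.append(line)
        else:
            lhs, rhs = line[:cut[0]].strip(), line[cut[1]:].strip()
            out_lines.append(f"math.isclose({lhs}, {rhs})")
    return "\n".join(out_lines)

print(rewrite_close("0.1 + 0.2 ~= 0.3"))        # math.isclose(0.1 + 0.2, 0.3)
print(eval(rewrite_close("0.1 + 0.2 ~= 0.3")))  # True
```

A real implementation would hook this into an import-time source transformer, which is exactly the kind of machinery PEP 511 proposed to standardize.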
I'm not sure who didn't get the joke here. Let me just say that it's indeed a very old and buggy implementation of that feature from the proverbial Python politburo. So old that it's not funny any more.

There's a reason why even projects otherwise largely written in Python use a different language (like JavaScript) when they need easy-for-users scripting capabilities. A random example is https://github.com/frida/frida . That's because such projects want to let their users write one-liners like:

```javascript
for_each_kaboom(function (kaboom) { print("here's kaboom", kaboom); for (sub in kaboom) print(sub); })
```

And while the majority of people just leave Python behind in such cases, some people have spent so much time with Python that they refuse to (for they know that outside there's only a bigger mess). Instead, they look at how to extend Python to do what they want (without necessarily stuffing it all back into the core).
Steve
[] -- Best regards, Paul mailto:pmiscml@gmail.com

On Tue, Jun 16, 2020 at 6:16 PM Paul Sokolovsky <pmiscml@gmail.com> wrote:
Actually there's another and a very important reason, unrelated to syntax. You can put untrusted code into a JS sandbox and expect it to be safe, but you can't easily sandbox Python code (especially not inside other Python code). That doesn't mean that JS is inherently better than Python; it just means that JS is the correct tool for that job. ChrisA

On Tue, Jun 16, 2020 at 11:14:24AM +0300, Paul Sokolovsky wrote:
What's so special about the ability to write one-liners? I trust that you agree that *limiting* users to only one line would be a terrible idea. So your scripting language surely will allow multiple lines of code. But if you give users the ability to write multiple lines of code, then what advantage is there to encourage them to cram them into a single line? Forth is a free-form concatenative language where whitespace separates words, and it lacks even the concept of "lines of code". To the interpreter, every Forth program is, or might as well be, a single line of code. You can literally write Forth code without once touching the Enter key, if you so choose. But in practice, everyone uses multiple lines, not just to separate word definitions but even within a single word once they exceed a certain level of complexity. As soon as a word contains loops or branches, the two dimensional structure is essential for the sake of maintainability and readability. In brace languages, there are a whole mountain of errors that occur when the logical structure of the program, as defined by the braces, doesn't match the apparent structure of the program, as seen by the programmer who is guided by indentation. These errors are rare in Python. (I was going to say "impossible", but I suppose that a sufficiently clever person could manage one using nested comprehensions. But they're impossible using regular for-loops and other block structures.) All Python does is require what people -- at least sensible people -- are already doing.
So wait, if you acknowledge that outside is only a bigger mess, why do you want to introduce that mess into Python and drag it down to that level? I honestly don't understand the point of your comment? Are you in favour of, or against, adding braces to Python?

Paul Sokolovsky writes:
Python *arithmetic* is comprehensive, well-defined, even crowded, and adding operators doesn't scale. Otherwise, it's generally quite sparse. Most classes define a very few operators, and mostly use named methods. I can't recall wanting a new operator symbol for any class myself, and the most plausible cases for Python at large we already got not so long ago, the walrus operator (which isn't a customizable operator in the sense we mean here) and the matmul operator. NULL-coalescing operators remain in discussion, but they're still of dubiously large benefit and the proposed syntaxes have yet to converge on a single candidate syntax IIRC.
Python is Turing-complete. It doesn't *need* anything. What you mean is that there are features you want that it doesn't have. And you (deliberately?) ignored the crucial distinction I was making between "custom operators" and "custom operator symbols". If there are enough operator symbols available, and there usually are, you just choose them and def dunders. That's hardly excessive legwork, although sometimes the analogy of the arithmetic operators to the operations of your class are somewhat strained.
Folks look how to extend Python to do what they want (without necessary stuffing it all back in the core).
Sure. That's the whole point of a programming language, of course: extending your software's capabilities without creating a monolith. The conflict here is that you want to take advantage of the well- structured Python language, then tweak it in ways that will be less readable to somebody who knows only Python-Dev Python. We in general don't want that -- we want a read-everywhere language. That's what the rejection rationale for PEP 511 says explicitly. We know about the advantages you're extolling, and we just don't want them, given the expected readability cost. If you want support from anybody who matters (I don't matter much :-), you need to provide convincing arguments that the facilities for customization we already have aren't enough. You keep asserting that they aren't, but without concrete evidence. By the way this:
Python politburo self-censorship,
in no way helps your case. Python-Dev is very opinionated, and that is reflected in the language. It is not, however, censorship in any sense. Regards, Steve

On Sun, Jun 14, 2020 at 02:39:49PM +0200, Sebastian M. Ernst wrote:
I wrote the statistics module in the stdlib, and the tests for that use a lot of approximate equality tests: https://github.com/python/cpython/blob/3.8/Lib/test/test_statistics.py which includes an approx_equal function that pre-dates math.isclose, and a unittest assertApproxEqual method. So I like to think I'm a heavy user of approximate comparisons.

I wouldn't use an approximate operator. Not even if it were written with the unicode ≈ ALMOST EQUAL TO symbol :-) The problem is that the operator can only have a single pre-defined tolerance (or a pair of tolerances, absolute and relative), which would not be very useful in practice. So I would have to swap from the operator to a function call, and it is highly unlikely that the operator tolerances would be what I need the majority of the time.

-- Steven

Perhaps a more versatile operator would be to introduce a +- operator that would return an object with an __eq__ method that checks for equality within the tolerance, i.e.

    a == b +- 0.5

Although I don't like this either, since you could achieve the same thing with something like this:

```python
class Tolerance:
    def __init__(self, upper, lower=None):
        self.upper = upper
        self.lower = upper if lower is None else lower

    def __add__(self, number):
        # b + Tolerance(0.5) -> the interval (b - 0.5, b + 0.5)
        return Tolerance(number + self.upper, number - self.lower)

    __radd__ = __add__  # so `b + Tolerance(0.5)` works, not just `Tolerance(0.5) + b`

    def __rsub__(self, number):
        # b - Tolerance(u, l) -> the interval (b - u, b + l)
        return Tolerance(number + self.lower, number - self.upper)

    def __eq__(self, number):
        return self.lower < number < self.upper

a == b + Tolerance(0.5)
```

So maybe it would be nice to have something like this built into math?

On Thu, 18 Jun 2020 at 17:56, Steven D'Aprano <steve@pearwood.info> wrote:
-- Notice: This email is confidential and may contain copyright material of members of the Ocado Group. Opinions and views expressed in this message may not necessarily reflect the opinions and views of the members of the Ocado Group. If you are not the intended recipient, please notify us immediately and delete all copies of this message. Please note that it is your responsibility to scan this message for viruses. References to the "Ocado Group" are to Ocado Group plc (registered in England and Wales with number 7098618) and its subsidiary undertakings (as that expression is defined in the Companies Act 2006) from time to time. The registered office of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way, Hatfield, Hertfordshire, AL10 9UL.

Well there you go, good point. I didn't really like it being an operator myself. But I can see having a math.tolerance class being useful. On Tue, 23 Jun 2020 at 13:53, Jonathan Goble <jcgoble3@gmail.com> wrote:

On Tue, Jun 23, 2020 at 9:08 AM Mathew Elman <mathew.elman@ocado.com> wrote:
A little bit out of the box, but what about: a == b +/- 0.5 ...or even: a == b +or- 0.5 --- Ricky. "I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler

On Tue, Jun 23, 2020 at 11:09 AM Rhodri James <rhodri@kynesim.co.uk> wrote:
TLDR: Of course it's not that hard to use. But the friction from "python code" to "mathematical calculations" is too large to make reaching for python as my go-to tool the first inclination. I'd really like to see this improved. I'm not strongly opinionated that this is a large, glaring need that simply *must* be rectified. Only that it would be VERY nice for those of us that use python primarily for mathematical calculations.

BORING LONGER VERSION, BIT OF A RANT, SORRY:

As a civil engineer, when I reach for a tool with which I am intending to do MATH, I generally do not reach for python right now-- even inside a Jupyter notebook. I reach for Mathcad or even Excel (blech). One of the biggest reasons is python code doesn't READ like real math in so many instances; this is a problem for me partly for my own reading, but also for my colleagues and clients. I would like to not have to spend a lot of time changing bits of the code to an output form for other non-programming people to read. My clients are not dumb, but if I were to print a jupyter notebook out and hand it to someone to read who doesn't know what python is, this:

```python
import math
math.isclose(a, b, abs_tol=0.5)
```

...is just a step too far for them to read. So I have to use something else, or spend a lot of time massaging things to look more presentable. It would be nice-- for ME-- if there were ways to write functional code that looked more like calculations in python code. And something like +/- fits that bill.

But I understand that not everyone-- perhaps not even close to a significant % of people-- has the same needs I do, spending their days focused on producing, writing/reading mostly mathematical calculations with explanations. And that not everyone has the difficulty of having to present the math they are performing with their code to other people who are expecting to be reading calculations, not computer code.
SIDEBAR: Another BIG pain point for the "gee it would be nice if python code could look more like mathematical calculations" problem is the function writing syntax-- I love python, but sometimes I want to write a simple mathematical-looking structural engineering function:

    ∆y(P, H, L, E, I) = H * L^4 * P / (384 * E * I)

...and not:

```python
def ∆y(P, H, L, E, I):
    return H * L**4 * P / (384 * E * I)
```

Boy would it be cool if we could use the walrus to write one-liner math functions!

    ∆y(P, H, L, E, I) := H * L^4 * P / (384 * E * I)

THAT right there would change my life.

--- Ricky. "I've never met a Kentucky man who wasn't either thinking about going home or actually going home." - Happy Chandler

∆y(P, H,L, E, I) := H * L^4 * P / (384 * E * I)
```python
Δy = lambda P, H, L, E, I: H * L**4 * P / (384 * E * I)
Δy
```

<function __main__.<lambda>(P, H, L, E, I)>

Is there a good way to redefine the '^' operator for {int, float, Decimal, Fraction, numbers.Number}? Why would it be dangerous to monkey-patch global types this way? Could a context manager redefine `float.__xor__ = float.__pow__` for just that scope? Why or why not?

https://docs.python.org/3.4/library/operator.html#operator.__pow__

As far as syntactical preferences, LaTeX may be considered to be the canonical non-executable syntax for mathematical expressions and equations. SymPy can parse LaTeX into symbolic expressions. Jupyter can render LaTeX embedded in Markdown. SymPy displays expressions as LaTeX in Jupyter notebooks.

```python
from sympy import symbols
P, H, L, E, I = symbols('P, H, L, E, I')
Δy = H * L**4 * P / (384 * E * I)
Δy
```

$\frac{H L^{4} P}{384 E I}$

```python
import sympy
sympy.sympify("H*L^4*P/(384*E_*I_)")
```

$\frac{H L^{4} P}{384 E_{} I_{}}$

https://docs.sympy.org/latest/modules/core.html#sympy.core.sympify.sympify calls parse_expr with `convert_xor=True`, by default. https://docs.sympy.org/latest/modules/parsing.html#sympy.parsing.sympy_parse... :
Treats XOR, ^, as exponentiation, **.
```python
from sympy.parsing.sympy_parser import parse_expr, standard_transformations, \
    convert_xor
trx = (standard_transformations + (convert_xor,))
parse_expr("H*L^4*P/(384*E_*I_)", transformations=trx)
```

$\frac{H L^{4} P}{384 E_{} I_{}}$

https://docs.sympy.org/latest/modules/parsing.html#experimental-mathrm-latex... :
```python
from sympy.parsing.latex import parse_latex
# parse_latex(r'\frac{H L^{4} P}{384 E_{} I_{}}')
parse_latex(r'\frac{H L^{4} P}{384 E I}')
```

$\displaystyle \frac{H L^{4} P}{384 E I}$

SageMath defines ^ as operator.pow. CoCalc supports Sage notebooks as well as Jupyter notebooks which import a CAS like SymPy, SymEngine, Diofant, SageMath.

https://ask.sagemath.org/question/49127/what-is-sage-equivalent-to-pow-in-sy... ...

https://www.lidavidm.me/blog/posts/2013-09-15-implicit-parsing-in-sympy.html
https://stackoverflow.com/questions/49284583/how-to-use-unicode-characters-a...

On Tue, Jun 23, 2020, 11:55 AM Ricky Teachey <ricky@teachey.org> wrote:
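On the monkey-patching question raised above: attributes of built-in types like float are read-only at the C level, so a context manager could not swap in `float.__xor__` even temporarily; a subclass is the workable route. A small check (the `F` subclass is just an illustration):

```python
import operator

# Built-in/extension types reject attribute assignment outright:
try:
    float.__xor__ = float.__pow__
except TypeError as exc:
    print(exc)  # e.g. "can't set attributes of built-in/extension type 'float'"

# A subclass can give '^' the meaning of operator.pow:
class F(float):
    def __xor__(self, other):
        return F(operator.pow(self, other))

print(F(2.0) ^ 3)  # 8.0
```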
participants (16)
- Alex Hall
- Chris Angelico
- Christopher Barker
- David Mertz
- Dominik Vilsmeier
- Greg Ewing
- Jonathan Goble
- Mathew Elman
- Oscar Benjamin
- Paul Sokolovsky
- Rhodri James
- Ricky Teachey
- Sebastian M. Ernst
- Stephen J. Turnbull
- Steven D'Aprano
- Wes Turner