pytest.approx
https://docs.pytest.org/en/stable/reference.html#pytest-approx
```
The ``approx`` class performs floating-point comparisons using a syntax
that's as intuitive as possible::
>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True
The same syntax also works for sequences of numbers::
>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True
Dictionary *values*::
>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True
``numpy`` arrays::
>>> import numpy as np # doctest: +SKIP
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6])) # doctest: +SKIP
True
And for a ``numpy`` array against a scalar::
>>> import numpy as np # doctest: +SKIP
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3) # doctest: +SKIP
True
By default, ``approx`` considers numbers within a relative tolerance of
``1e-6`` (i.e. one part in a million) of its expected value to be equal.
This treatment would lead to surprising results if the expected value was
``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``.
To handle this case less surprisingly, ``approx`` also considers numbers
within an absolute tolerance of ``1e-12`` of its expected value to be
equal. Infinity and NaN are special cases. Infinity is only considered
equal to itself, regardless of the relative tolerance. NaN is not
considered equal to anything by default, but you can make it be equal to
itself by setting the ``nan_ok`` argument to True. (This is meant to
facilitate comparing arrays that use NaN to mean "no data".)
Both the relative and absolute tolerances can be changed by passing
arguments to the ``approx`` constructor::
>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True
```
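For scalars, the default check described in the docs above amounts to testing the difference against the larger of the relative and absolute tolerances. A rough stdlib-only sketch of that rule (not pytest's actual implementation, which also handles sequences, dicts, arrays, and NaN):

```python
import math

def approx_eq(actual, expected, rel=1e-6, abs_tol=1e-12):
    """Sketch of pytest.approx's scalar rule: the difference must fall
    within the larger of the relative and absolute tolerances."""
    if math.isinf(expected) or math.isinf(actual):
        return actual == expected  # infinity is only equal to itself
    return abs(actual - expected) <= max(rel * abs(expected), abs_tol)

print(approx_eq(0.1 + 0.2, 0.3))       # True: rounding noise is absorbed
print(approx_eq(1.0001, 1))            # False: 1e-4 off, outside default 1e-6
print(approx_eq(1.0001, 1, rel=1e-3))  # True: within the loosened tolerance
```

The `abs_tol` floor is what keeps comparisons against an expected value of `0.0` from failing for every nonzero actual value, as the quoted docs explain.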
On Sun, Jun 14, 2020, 9:39 PM David Mertz wrote:

> On Sun, Jun 14, 2020 at 7:49 PM Oscar Benjamin wrote:
>>> I've had occasion to use math.isclose(), np.isclose(), and np.allclose() quite often.
>> Can you elaborate a bit on the kinds of things you use them for?
>
> I can't elaborate on David's use but in my own experience these functions are mostly useful for interactive checking or for something like unit tests. They can be used extensively in the testing code for projects with a lot of floating point functions.
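It's worth noting that the stdlib and NumPy variants ship with different defaults: math.isclose is symmetric (the relative tolerance is taken against the larger magnitude, with rel_tol=1e-9 and abs_tol=0.0), while np.isclose tests |a - b| <= atol + rtol*|b| with rtol=1e-5 and atol=1e-8. A quick stdlib-only illustration of why the zero abs_tol default matters in test code:

```python
import math

# Default math.isclose: rel_tol=1e-9 relative to the larger operand,
# and abs_tol=0.0, so nothing nonzero is "close" to 0.0 by default.
print(math.isclose(0.1 + 0.2, 0.3))            # True: rounding noise
print(math.isclose(1e-10, 0.0))                # False: no absolute floor
print(math.isclose(1e-10, 0.0, abs_tol=1e-9))  # True: explicit abs_tol
```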
At times I have computations which *should be* the same mathematically, but are carried out through a different sequence of specific computations. One common example is in parallel frameworks where the order of computation is indeterminate because multiple workers/threads/processes are each calculating portions to aggregate.
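A minimal stdlib illustration of that order sensitivity: the same three addends, associated differently, round to different doubles, yet still compare equal under a tolerance.

```python
import math

# Identical addends, different association order: the intermediate
# roundings differ, so the final doubles differ by one ulp.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = (0.3 + 0.2) + 0.1  # 0.6

print(left == right)             # False
print(math.isclose(left, right)) # True: equal within relative tolerance
```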
Another related case is when I call some library to do an operation, but I did not write the library, nor do I understand its guts well. For example, the tensor libraries used in neural networks that will calculate a loss function. Occasionally I'd like to be able to replicate (within a tolerance) a computation the library performs using something more general like NumPy. Having a few ulps difference is typical, but counts as validating the "same" answer.
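That kind of cross-check can be sketched without any particular library: below, a naive left-to-right dot product stands in for a hypothetical library result, and it is validated against an independent reference built on math.fsum, accepting ulp-level noise via a tight relative tolerance.

```python
import math

def naive_dot(xs, ys):
    # Stand-in for a result produced by an external library:
    # plain left-to-right accumulation.
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

def reference_dot(xs, ys):
    # Independent re-computation using correctly rounded summation.
    return math.fsum(x * y for x, y in zip(xs, ys))

xs = [0.1, 0.2, 0.3, 0.4]
ys = [4.0, 3.0, 2.0, 1.0]
# The two may differ by a few ulps but still validate as the "same" answer:
print(math.isclose(naive_dot(xs, ys), reference_dot(xs, ys), rel_tol=1e-12))
```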
Another occasion I encounter it is with data measurements. Some sort of instrument collects measurements with a small jitter. Two measurements that cannot be distinguished based on the precision of the instrument might nonetheless be stored as different floating point numbers. In that case, I probably want to be able to tweak the tolerances for the specific case.
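For instance (with made-up readings and a made-up 1e-6 instrument precision), an absolute tolerance expresses "indistinguishable to the instrument" directly, where the default relative tolerance would call the readings different:

```python
import math

# Two hypothetical readings of the same quantity, differing only by
# jitter below the instrument's assumed ~1e-6 precision:
reading_a = 1.0000003
reading_b = 1.0000001

print(math.isclose(reading_a, reading_b))                # False: the ~2e-7 gap
                                                         # exceeds rel_tol=1e-9
print(math.isclose(reading_a, reading_b, abs_tol=1e-6))  # True: within the
                                                         # instrument tolerance
```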
--
The dead increasingly dominate and strangle both the living and the not-yet born. Vampiric capital and undead corporate persons abuse the lives and control the thoughts of homo faber. Ideas, once born, become abortifacients against new conceptions.
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-leave@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/VJSIJ3...
Code of Conduct: http://python.org/psf/codeofconduct/