I am not sure about the business case for it, but I'd like to fish for some feedback here too.

I hit this sort of thing often when working with floating-point data, for example with a simple left-and-right multiplication of a symmetric matrix by some matrix (and its transpose, respectively), which should preserve symmetry:
>>> import numpy as np
>>> rng = np.random.RandomState(0)
>>> x = rng.randn(10, 10)
>>> y = x @ x.T  # symmetric
>>> np.array_equal(y, y.T)
True
>>> p = rng.randn(10, 10)
>>> z = p @ y @ p.T  # this should still be symmetric
>>> np.array_equal(z, z.T)
False
>>> np.allclose(z, z.T)  # it is symmetric to numerical precision
True
>>> z[0, 1]
7.912658519091485
>>> z[1, 0]
7.9126585190914875
The main issues I can see are that int types have no room for noise, and that adjusting the tolerance would be quite difficult to pull off, since deciding what counts as noise would depend on the magnitudes of the nonzero entries.
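
To make this concrete, here is a minimal sketch of what such a check could look like (the name `is_symmetric` is hypothetical, not an existing NumPy or SciPy function): exact dtypes get an exact comparison, while inexact ones get a relative comparison so the tolerance scales with the entries:

import numpy as np

def is_symmetric(a, rtol=1e-7, atol=0.0):
    # Hypothetical helper, not an existing NumPy/SciPy API.
    a = np.asarray(a)
    if a.ndim != 2 or a.shape[0] != a.shape[1]:
        return False
    if np.issubdtype(a.dtype, np.inexact):
        # Float/complex dtypes: allow rounding noise, scaled by the
        # entry magnitudes via rtol.
        return np.allclose(a, a.T, rtol=rtol, atol=atol)
    # Exact dtypes (ints, bools) have no room for noise: compare exactly.
    return np.array_equal(a, a.T)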

I would vote for / expect `atol=0, rtol=1e-7` keyword arguments (maybe with different defaults), to keep the API similar to `np.allclose`. Here I chose the defaults of `assert_allclose` because I have found in practice that `atol=1e-8` can be a bit dangerous and lead to silent bugs in people's code.
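
A quick illustration of that danger, assuming the `np.allclose` defaults (`rtol=1e-5, atol=1e-8`): when the data itself lives far below 1e-8, the absolute tolerance swallows differences that are enormous in relative terms:

>>> a = np.array([1e-10, 2e-10])
>>> b = np.array([5e-10, -3e-10])  # very different from a, relatively speaking
>>> np.allclose(a, b)  # default atol=1e-8 masks the difference
True
>>> np.allclose(a, b, atol=0)  # rtol alone catches it
False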

Finally, though I started this in SciPy, I still have some doubts about whether it should go into NumPy instead. But we can decide that any time later.

To me, NumPy would be a more natural fit for this sort of thing, and I would enjoy having `np.testing.*` variants of it too, for code development in other downstream libraries.
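
For illustration, such a testing-flavored variant (the name `assert_symmetric` is hypothetical, not an existing NumPy API) could be a thin wrapper around `np.testing.assert_allclose`, so that failures produce the usual informative mismatch report:

import numpy as np

def assert_symmetric(a, rtol=1e-7, atol=0.0):
    # Hypothetical np.testing-style variant, not an existing NumPy API:
    # raises AssertionError with a mismatch summary instead of returning False.
    a = np.asarray(a)
    np.testing.assert_allclose(a, a.T, rtol=rtol, atol=atol,
                               err_msg="array is not symmetric")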

My 2c,
Eric