I am not sure about the business case for it but I'd like to fish for some feedback here too.
I hit this sort of thing often when working with floating-point data, for example with a simple left-and-right multiplication by some matrix (and its transpose, respectively) that should preserve symmetry:...
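A minimal sketch of what I mean (names and sizes are just for illustration): the congruence transform `B @ S @ B.T` is symmetric in exact arithmetic, but floating-point rounding typically breaks bitwise equality of the off-diagonal pairs, so only a tolerance-based check passes.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
S = A + A.T                      # exactly symmetric by construction
B = rng.standard_normal((50, 50))

# Mathematically B @ S @ B.T is symmetric, but the two summation
# orders for C[i, j] and C[j, i] round differently in practice.
C = B @ S @ B.T
print(np.array_equal(C, C.T))    # typically False (exact comparison)
print(np.allclose(C, C.T))       # True within tolerance
```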
Yes, I think you convinced me about this (though for the array bandwidth I'm still not quite sure; it's very difficult to pull that off with noise present and still be fast).
The main issues I can see are that int types have no room for noise, and that the tolerance adjustment would be quite difficult to pull off, since what constitutes noise depends on the magnitudes of the nonzero entries.
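To illustrate the magnitude problem with a toy example (the matrices and cutoff here are made up): the same *relative* amount of below-diagonal noise is classified differently by a fixed absolute cutoff, depending only on the overall scale of the entries.

```python
import numpy as np

# Two "noisy upper-triangular" matrices with identical relative noise
# (~1e-12 of the entry scale) below the diagonal, at different scales.
big = np.triu(np.full((4, 4), 1e6))
big[2, 0] = 1e6 * 1e-12          # absolute size 1e-6

small = np.triu(np.full((4, 4), 1e-6))
small[2, 0] = 1e-6 * 1e-12       # absolute size 1e-18

atol = 1e-8
# A fixed absolute cutoff treats the same relative noise inconsistently:
print(abs(big[2, 0]) <= atol)    # False: this noise "breaks" triangularity
print(abs(small[2, 0]) <= atol)  # True: this noise is ignored
```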
I would vote for / expect `atol=0, rtol=1e-7` keyword arguments (maybe with different defaults) to give an API similar to `np.allclose`. I chose the `assert_allclose` defaults here because in practice I have found `atol=1e-8` to be a bit dangerous / prone to silent bugs in people's code.
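A quick demonstration of the danger with a nonzero `atol` default: for small-magnitude values the absolute term swamps the relative one, so clearly different numbers compare "equal".

```python
import numpy as np

# np.allclose's default atol=1e-8 makes two values that differ by a
# factor of two compare equal, because both are below the atol floor.
print(np.allclose(1e-9, 2e-9))                     # True (silently!)

# With atol=0 the comparison is purely relative and catches it:
print(np.allclose(1e-9, 2e-9, atol=0, rtol=1e-7))  # False
```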
This is precisely why I am hesitating. I am a big fan of "people should be allowed to shoot themselves in the foot", but I would suggest zero defaults for both, to delegate all responsibility to the user; if we pick the defaults ourselves, the bug reports are going to get quite messy.
Finally, though I started this in SciPy, I still have some doubts about whether this should go into NumPy instead. But we can decide that any time later.
To me NumPy would be a more natural fit for this sort of thing, and I would enjoy having `np.testing.*` variants of these checks, too, for code development in other downstream libraries.
Yes, that would be my preference too, but I can't crack the C code no matter how hard I try (and I have tried quite a few times). So I think they have to chime in.
Thanks!