I'm going to try to summarise what I got out of this discussion. Maybe it will help bring some focus to the topic.
I think there are two cases to consider.
# The most common case.
rel_is_good(actual, expected, delta) # value +- %delta.
# Testing for possible equivalence?
rel_is_close(value1, value2, delta) # %delta close to each other.
I don't think they are quite the same thing.
rel_is_good(9, 10, .1) --> True
rel_is_good(10, 9, .1) --> False
rel_is_close(9, 10, .1) --> True
rel_is_close(10, 9, .1) --> True
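To make the asymmetry concrete, here is one minimal sketch of the two functions that reproduces the results above. The names and signatures are the ones used in this thread; the exact formulas (scaling by the expected value versus the larger of the two values) are my assumption, not a settled definition.

```python
def rel_is_good(actual, expected, delta):
    # Asymmetric: is `actual` within delta (a fraction) of `expected`?
    # The tolerance band is centered on, and scaled by, `expected`.
    return abs(actual - expected) <= delta * abs(expected)

def rel_is_close(value1, value2, delta):
    # Symmetric: swapping the arguments cannot change the answer.
    # Here the tolerance is scaled by the larger magnitude (an assumption).
    return abs(value1 - value2) <= delta * max(abs(value1), abs(value2))

print(rel_is_good(9, 10, .1))   # True:  |9 - 10| <= 0.1 * 10
print(rel_is_good(10, 9, .1))   # False: |10 - 9| >  0.1 * 9
print(rel_is_close(9, 10, .1))  # True
print(rel_is_close(10, 9, .1))  # True
```

The asymmetry of rel_is_good comes entirely from which argument the delta is scaled by, which is why reversing the arguments flips the result.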
The next issue is where the numeric accuracy of the data (significant digits) and the language's accuracy (ULPs) come into the picture.
My intuition (I'd need to test the idea to make a firmer claim) is that in the case of is_good you want to exclude the uncertain parts, but with is_close you want to include the uncertain parts.
Two values "are close" if you can't tell one from the other with certainty. The is_close range includes any uncertainty.
This is where taking an absolute delta into consideration comes in. The minimum range for both is the uncertainty of the data, but is_close and is_good do different things with it.
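One way an absolute delta could act as that minimum range is as a floor on the allowed gap, so the test never demands more precision than the data has. This is only a sketch of the idea; the function name and the max()-based combination are my assumptions, not anything decided in this thread.

```python
def rel_is_close_abs(value1, value2, rel_delta, abs_delta):
    # Hypothetical combination: the allowed gap is the relative
    # tolerance, but never smaller than abs_delta (the data's
    # uncertainty), so near-zero values don't fail spuriously.
    tolerance = max(rel_delta * max(abs(value1), abs(value2)), abs_delta)
    return abs(value1 - value2) <= tolerance

# With a purely relative test, a tiny value compared against 0.0
# always fails, because 10% of "almost nothing" is almost nothing:
print(rel_is_close_abs(0.0, 1e-12, 0.1, 0.0))   # False
# An absolute floor reflecting the data's uncertainty fixes that:
print(rel_is_close_abs(0.0, 1e-12, 0.1, 1e-9))  # True
```

Whether is_good should use the same floor, or instead shrink its acceptance band by the uncertainty (excluding the uncertain parts, per the intuition above), is exactly the open question.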