Andrew Barnert wrote:
But float _already_ preserves the exact precision and value for the example you gave. The two strings are different string representations of the exact same float value, and will therefore give the exact same mathematical results with the exact same error bars. You’ve made it very clear that the exact precision and value isn’t what you’re interested in anyway, but the exact bytes of the text, so you can hash them.
The original text representation defines the value. The fact that the two numbers in my example both end up as the same value in the binary representation the JSON decoder uses does not mean that those two values are the same. Try to separate the JSON format (and the values it represents) from the internal representation the JSON decoder/encoder uses to process those values. The JSON format alone does not stipulate any particular internal representation, and using something other than IEEE-754 floating point for numbers is perfectly acceptable (e.g. decimal.Decimal). If you do that, you will realize that what I am asking for really is preserving the value (precision) and the text representation at the same time, because they actually define one another.
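A minimal sketch of the point, using Python's standard library (the pair "1.0"/"1.00" is illustrative, not the exact numbers from the earlier example): with the default float-based decoding the two texts collapse to one value, while decoding with parse_float=decimal.Decimal keeps the text, and hence the stated precision, intact.

```python
import json
from decimal import Decimal

a = "1.0"
b = "1.00"

# Default decoding: both texts map to the same binary64 float,
# so the distinction between them is lost.
assert json.loads(a) == json.loads(b)

# Decimal-based decoding: the decoder hands the original text to
# Decimal, which preserves it (including trailing zeros / precision).
da = json.loads(a, parse_float=Decimal)
db = json.loads(b, parse_float=Decimal)
assert str(da) == "1.0"
assert str(db) == "1.00"
```

Note that Decimal("1.0") and Decimal("1.00") compare numerically equal, yet their string forms differ, which is exactly the "value and text define one another" property being argued for.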