On Aug 9, 2019, at 17:16, Greg Ewing firstname.lastname@example.org wrote:
> I think it can be justified on the grounds that it allows all of the information in the JSON text to be preserved during both deserialisation and serialisation.
Except that it doesn’t allow that. Using Decimal doesn’t preserve the difference between 1.0000E+3 and 1000.0, or between 1e3 and 1E+3. Not to mention things like which characters in your strings (and even your object keys) were written with backslash escapes, or which of your lists have spaces after their commas.
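To make that concrete, here’s a quick stdlib-only sketch (using parse_float=Decimal, as proposed in this thread) showing two of those distinctions collapsing on the way in:

    import json
    from decimal import Decimal

    # Both spellings parse to the same Decimal, so anything that
    # re-serializes via str() will write "1000.0" for both -- the
    # E+3 notation is already gone before you ever serialize.
    a = json.loads('1.0000E+3', parse_float=Decimal)
    b = json.loads('1000.0', parse_float=Decimal)
    print(str(a), str(b), a == b)   # 1000.0 1000.0 True

    # String escapes are normalized on decode too:
    print(json.loads('"\\u0041"') == json.loads('"A"'))  # True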
JSON is not canonicalizable or round-trippable, by design: the spec deliberately treats whitespace, escape choices, and equivalent number spellings as insignificant. So I don’t think giving people the illusion of round-tripping their input is a good thing. Especially if smart people who’ve read this thread still don’t get that it’s an illusion.
People who actually need to preserve 100% of the JSON that gets thrown at them are going to think this does it (because the docs imply it, or some random guy on StackOverflow says so, or it passed the couple of unit tests they thought of), deploy code that relies on it, and only discover later that it’s broken and can’t be fixed without changing big chunks of their design.
I mean, the OP wants to use this for secure hashes; think about what kind of debugging nightmare that’s likely to lead to. (And I hope nobody actually tries to attack him while he’s debugging the secure hash failures that are just side effects of this bug…)
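Here’s the kind of failure I mean, with a made-up one-key document (I’m using plain float parsing here only because dumps can’t emit Decimal without a custom encoder; parse_float=Decimal collapses the two spellings exactly the same way, as shown above):

    import hashlib
    import json

    original = '{"amount": 1.0000E+3}'
    round_tripped = json.dumps(json.loads(original))
    print(round_tripped)   # {"amount": 1000.0}
    print(hashlib.sha256(original.encode()).hexdigest() ==
          hashlib.sha256(round_tripped.encode()).hexdigest())   # False

Two texts for the same value, two different hashes, and nothing in the parse or serialize step ever reports an error.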
The fact that a feature _can_ be misused isn’t a reason to reject it. But the fact that a feature will _almost always_* be misused is a different story.
* I say “almost” because there are presumably some cases where being able to preserve 98.8% of the input that gets thrown at you is qualitatively better than being able to preserve 97.8%, or where the only JSON docs you ever receive are just individual numbers, and they’re all between -0.1 and -0.2, so the only possible error that can arise is this one. And so on. But I doubt any of those is common.