Richard Musil writes:
 > After some thinking it seems that the float is only case where this
 > "loss of a precision" happen.
I don't think so. What's happening here is not restricted to "loss of precision". It can happen for any type that has multiple JSON representations of the same internal value: for such a type you cannot round-trip from a JSON representation back to the identical JSON representation in all cases. This is true of Unicode normalization forms as well as of floats.

According to Unicode, the normalization forms are entirely interchangeable. If it's convenient for a Unicode process to change normalization form, then according to Unicode you will get the same *characters* out that you fed in, but you will not necessarily get byte-for-byte equality. (This bites me all the time on the Mac, when I write to a file named in NFC and the Mac filesystem converts the name to NFD.) Python doesn't do any normalization by default, but it's an inherent feature of Unicode, and there's no (good) way for the codec to know that it's happening.

As far as I can see, any of the proposals requires the cooperating systems to coordinate on a nonstandard JSON dialect[1], and for them to be of practical use, they'll need to have similarly capable internal representations -- which means the programs have to be designed for this cooperation. Pick an internal representation for float (probably Decimal), pick an external (JSON) normalization form for Unicode (probably NFC), write the programs to ensure those representations -- using conformant JSON -- and hope there are no compound types with multiple JSON representations.

Footnotes:
[1] I write "nonstandard dialect" rather than "non-conformant" because you can serialize floats as '{"Decimal" : 1.0}', and the internal processor just needs to know that such a dict should be automatically converted to Decimal('1.0').
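To make the Unicode point concrete, here is a small sketch using the stdlib unicodedata module. The example character ("é") is my own choice for illustration; it shows that NFC and NFD carry the same characters per Unicode but are not byte-for-byte equal on the wire:

```python
import unicodedata

# One accented character: NFC uses a single precomposed code point,
# NFD uses a base letter plus a combining accent.
nfc = unicodedata.normalize("NFC", "\u00e9")        # 'é' as U+00E9
nfd = unicodedata.normalize("NFD", "\u00e9")        # 'e' + U+0301

# Unicode says these are the same characters...
same_characters = unicodedata.normalize("NFC", nfd) == nfc

# ...but the serialized bytes differ, so a byte-level round-trip
# comparison of JSON text can fail even though nothing was "lost".
byte_equal = nfc.encode("utf-8") == nfd.encode("utf-8")

print(same_characters)   # True
print(byte_equal)        # False
```

This is exactly the float situation in miniature: the value survives, the particular representation need not.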