Michael A. Smith writes:
> It seems to me that obj.encode("json") and str.decode("json"), for example, would be a powerful feature,
This idea comes up a lot in various forms. The most popular lately is an optional __json__ dunder, which really would avoid the complication of working with custom JSONEncoders. That hasn't got a lot of uptake, though. Perhaps we could broaden the appeal by generalizing it to obj.__serialize__(protocol='json'), but that looks like overengineering to me. I think tying it to the codecs registry is probably going to get a lot of pushback, at least when you get to the point where you're discussing it with the senior core devs. In Python, terms like "codec" and methods like .encode and .decode are very deliberately tied to character encodings. In Python 2 there were "transcodings" like gzip, rarely used, discarded in Python 3, and not missed.
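For concreteness, here is a minimal sketch of how the hypothetical __json__ dunder could be honored by a generic encoder whose default() hook defers to it. Nothing here is in the stdlib; DunderJSONEncoder and Point are made-up names for illustration only.

```python
import json

class DunderJSONEncoder(json.JSONEncoder):
    """Sketch: defer to a (hypothetical) __json__ dunder when present."""
    def default(self, o):
        method = getattr(type(o), "__json__", None)
        if method is not None:
            return method(o)
        return super().default(o)

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __json__(self):
        # Return something the stock encoder already understands.
        return {"x": self.x, "y": self.y}

print(json.dumps(Point(1, 2), cls=DunderJSONEncoder))
```

The appeal of the proposal is that the dispatch above would live in json itself, so callers wouldn't need to pass cls= at all.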
> Right now, if I want to json.dumps a MappingProxyType, I believe I have to pass a custom JSONEncoder to json.dumps explicitly every time I call it.
That's exactly the kind of thing we have 'def' for, though.
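That is, you write the custom encoder once and wrap it in a small module-level function, so no call site ever passes cls= explicitly. A sketch (ProxyEncoder and the wrapper name dumps are illustrative, not stdlib):

```python
import json
from types import MappingProxyType

class ProxyEncoder(json.JSONEncoder):
    """Convert read-only mapping proxies to plain dicts for serialization."""
    def default(self, o):
        if isinstance(o, MappingProxyType):
            return dict(o)
        return super().default(o)

# One small def, written once, instead of a cls= argument at every call site.
def dumps(obj, **kwargs):
    kwargs.setdefault("cls", ProxyEncoder)
    return json.dumps(obj, **kwargs)

proxy = MappingProxyType({"a": 1})
print(dumps({"outer": proxy}))
```

Because default() is consulted for every otherwise-unserializable object, this handles proxies nested anywhere in the structure, not just at the top level.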
> But I think I should be able to register one, and then just call thing.encode('json').
Think your MappingProxyType example through. It's going to be a lot more complicated than "register and just call", I think.
> I could call codecs.encode(thing, 'json'), but I think maybe I shouldn't have to import codecs or json into my modules to do this.
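For reference, registering such a codec is already possible today, and the sketch below shows roughly what it takes. Note how much CodecInfo plumbing is involved for machinery that was designed around text encodings; the helper names here are illustrative only, and the "json" codec name is an assumption, not anything the stdlib registers.

```python
import codecs
import json

def _json_encode(obj, errors="strict"):
    # CodecInfo encoders must return a (result, length-consumed) pair.
    s = json.dumps(obj)
    return s, len(s)

def _json_decode(data, errors="strict"):
    obj = json.loads(data)
    return obj, len(data)

def _search(name):
    if name == "json":
        return codecs.CodecInfo(_json_encode, _json_decode, name="json")
    return None

codecs.register(_search)
print(codecs.encode({"a": 1}, "json"))
```

Even so, this only gets you codecs.encode(thing, 'json'), not thing.encode('json'): arbitrary objects have no .encode method, and str.encode is deliberately restricted to text encodings.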
That will never fly, I think. Text encoding is privileged in the open builtin and on str and bytes because every single Python program must do it (the source is *always* bytes and *always* has to be decoded to text), and because *only* text and bytes need .encode and .decode respectively.