On Aug 14, 2019, at 09:57, Christopher Barker email@example.com wrote:
The other issue on the table is whether it's a goal of the json module to make it hard to create invalid JSON -- I think that would be nice, but others don't seem to think so.
A NOTE on that:
Maybe it would be good to have an optional "validate" flag for the encoder that would validate the generated JSON. False by default, so it wouldn't affect performance if not used. And users could turn it on only in their test code if desired. I would advocate for something like this if we do give users more control over the encoding.
(though I suppose simply trying to decode the generated JSON in your tests may be good enough)
This is trivial to implement in your own toolbox: just write a wrapper function that calls loads on the result before returning it.
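A minimal sketch of that wrapper (the name strict_dumps is mine, and the validation is just a round-trip through loads):

```python
import json

def strict_dumps(obj, **kwargs):
    """Serialize obj, then re-parse the result to catch invalid output."""
    result = json.dumps(obj, **kwargs)
    # loads raises json.JSONDecodeError (a ValueError) if the output
    # isn't valid JSON; we only care about the side effect, not the value.
    json.loads(result)
    return result
```

Any kwargs you normally pass to dumps pass straight through, so it drops into existing code unchanged.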
And I think users would actually be more often interested in “valid for the way this protocol defines JSON” than “valid according to the RFC”, and that’s just as easy to do in your own toolbox:
def dumps(obj):
    result = json.dumps(obj, my_usual_flags…)
    json.loads(result, …)
    if '\n' in result or '\r' in result:
        raise ValueError('JSON with newlines')
    if '\u2028' in result or '\u2029' in result:
        raise ValueError('JSON with Unicode separators that JS hates')
    return result
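Filled in with concrete settings purely for illustration (the ensure_ascii and separators choices here are stand-ins for "my_usual_flags", not anything a real protocol mandates), that sketch becomes runnable:

```python
import json

def protocol_dumps(obj):
    # Illustrative dump flags; substitute whatever your protocol requires.
    result = json.dumps(obj, ensure_ascii=False, separators=(',', ':'))
    # Round-trip check: raises if the output isn't valid JSON at all.
    json.loads(result)
    # Protocol-specific checks layered on top of RFC validity.
    if '\n' in result or '\r' in result:
        raise ValueError('JSON with newlines')
    if '\u2028' in result or '\u2029' in result:
        raise ValueError('JSON with Unicode separators that JS hates')
    return result
```

Note that U+2028 and U+2029 are perfectly valid JSON but were illegal inside JavaScript string literals before ES2019, which is exactly the kind of "valid per the RFC but broken for this consumer" case a toolbox wrapper can catch and a generic flag can't.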
… but it would be a pain to do with a mess of options to dump/dumps/JSONEncoder.
So I’m not sure this really needs to be added to the module even if some form of “raw” encode support is added. (To be clear, I agree with you that we probably don’t want to add that support in the first place. But, as you say, at least some people disagree, so…)