On Tue, 13 Aug 2019, 14:46 Paul Moore, <p.f.moore@gmail.com> wrote:
> > 1) One has to recognize the type in the input data,

> How do you recognise the type? For Decimal, you just check if it's an
> instance of the stdlib type. How does *your* proposal define which
> custom types are valid?

I expect it will be exactly the same for the custom type: calling isinstance(o, dump_as_float), where "dump_as_float" will be the user-specified type (class). Basically, it is the same way the encoder already handles the other (Python) types during encoding.
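As a sketch of what I mean (the dump_as_float keyword is the proposal, not an existing json parameter, and encode_number is just an illustrative name, not real encoder internals):

```python
from decimal import Decimal

def encode_number(obj, dump_as_float=None):
    # Hypothetical sketch of the proposed check: if the object is an
    # instance of the user-specified dump_as_float type, its str()
    # output is used verbatim as the JSON representation.
    if dump_as_float is not None and isinstance(obj, dump_as_float):
        return str(obj)
    raise TypeError(f"Object of type {type(obj).__name__} "
                    "is not JSON serializable")

print(encode_number(Decimal("1.1"), dump_as_float=Decimal))  # -> 1.1
```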

> > 2) Get the JSON representation, possibly by using obj.__str__().

> If you're using str() then this is easy enough, agreed. But what
> happens when someone raises an issue saying they want a type that has
> different str() and JSON formats? Are such types not allowed? Or do we
> need a "JSON representation" protocol, which defaults to str()?

I guess asking the custom type to use __str__() as the means of providing the JSON output is not unreasonable. It is an implicit contract on the same level as asking the same custom type to accept a JSON real number in its constructor (and not, for example, in a "from_json" function) when it is used in the decoder (specified via "parse_float").
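For reference, the decoder side of this contract already works this way today: parse_float hands the raw JSON number string to whatever constructor you give it.

```python
import json
from decimal import Decimal

# parse_float passes the unparsed JSON number string straight to the
# user-specified callable, relying on its constructor contract:
data = json.loads('{"price": 19.99}', parse_float=Decimal)
print(type(data["price"]).__name__, data["price"])  # Decimal 19.99
```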

> For Decimal, again it's easy, we just use str() and stop.

That is basically what we would do in my proposal too. 

> > 3) Check the output that it conforms JSON real number spec.

> For Decimal, there's no need as we know what the str() representation
> of Decimal looks like, so we don't need this step at all. For
> arbitrary types, we do.

There were some comments in this thread that Decimal allows non-JSON numbers (e.g. ". 3"). I am not sure what happens on serialization, or whether it is really a problem, but I believe Decimal does not officially claim full JSON compliance, and that is not its goal either. And even if it does comply now, assuming that it will behave this way in the future is still risky (unless the maintainer of Decimal is aware of your assumption and commits to honoring it over Decimal's lifetime).
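For the record (this example is mine, not from the earlier comments): Decimal can hold values whose str() form is not a valid JSON number at all, so type identity alone guarantees nothing about the output.

```python
from decimal import Decimal

# These are perfectly valid Decimal values, but neither str() form is
# a legal JSON number per RFC 8259:
print(str(Decimal("NaN")))       # NaN
print(str(Decimal("Infinity")))  # Infinity
```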

Then, even if Decimal always guaranteed JSON compliance, how would you handle it if someone decided to subclass their custom JSON real number type from Decimal?

Would you allow it, and then allow the custom output as well, or try to explicitly prevent it?

And what if the client code just imports Decimal and mocks its constructor and __str__() method?

We may question the usefulness of such approaches (though I believe the former is pretty legitimate), but unless you make sure they are really forbidden, they can easily produce "non-allowed" JSON output.
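A minimal illustration of the subclass loophole (the class name is hypothetical): the instance passes any isinstance check against Decimal, yet emits whatever it likes.

```python
from decimal import Decimal

class SneakyDecimal(Decimal):
    # Passes isinstance(x, Decimal), but its output is not
    # constrained to valid JSON numbers in any way.
    def __str__(self):
        return "not a number at all"

x = SneakyDecimal("1.5")
print(isinstance(x, Decimal), str(x))  # True not a number at all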

What I am saying is that basing the assumption of the type's JSON output validity on type identification is trickier than it may appear at first. And it is far more complex to do "right" (if that really concerns you) than running a regex on the type's output and not caring about the type at all.

It is easier and clearer both for the implementer, as the code is self-explanatory, and for the user, who only has to care that his type passes the check, not about its "right origin".

So even in your solution I would still vote for a regex sanity check instead of basing the validity of the JSON output on the identity of Decimal.
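A sketch of such a sanity check, using the number grammar from RFC 8259 (the regex and function name here are illustrative, not a concrete implementation proposal):

```python
import re

# JSON number grammar per RFC 8259: optional minus, integer part
# without leading zeros, optional fraction, optional exponent.
JSON_NUMBER = re.compile(r'-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][-+]?[0-9]+)?\Z')

def check_json_number(text):
    if JSON_NUMBER.match(text) is None:
        raise ValueError(f"{text!r} is not a valid JSON number")
    return text

check_json_number("-2.5e+10")   # passes
# check_json_number("NaN")      # would raise ValueError
```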

> > The only difference is that in one case you need a kw to specify the custom type, in the other, you need a Boolean kw to explicitly pull in decimal.Decimal (to avoid unconditional import of decimal).

> See above for all of the *other* differences. And it's arguable that
> with Decimal, which is a known stdlib type, we could just support it
> without needing a flag at all. We could defer the import by doing some
> heuristic checks when we see an unknown type (for example, if
> obj.__type__.__name__ != 'Decimal' we don't need to do an import and a
> full typecheck).

The purpose of the flag is to make explicit what you propose to do implicitly. You can come up with some heuristic, but eventually you would need to import the right Decimal in order to do the full typecheck.

I guess it is a matter of taste; I prefer having explicit control over whether the json module actually uses Decimal, but to each his own.

> But even if we do need a boolean flag to opt in,
> that's not such a big deal. Whereas with the custom type option,
> someone is bound to say they want a list of types to support in the
> same call, etc. More maintenance complexity.

With the custom type solution, support for a list of custom types comes for free (courtesy of the function isinstance, as Joao suggested earlier), because isinstance will be the only place where the "dump_as_float" value is used.
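Since isinstance already accepts a tuple of classes, supporting several types needs no extra machinery at all (Fraction here is purely illustrative):

```python
from decimal import Decimal
from fractions import Fraction

# isinstance takes a tuple of classes, so a "dump_as_float" value
# could name several types at once with no additional code:
allowed = (Decimal, Fraction)
print(isinstance(Decimal("1.5"), allowed))  # True
print(isinstance(Fraction(3, 4), allowed))  # True
print(isinstance(1.5, allowed))             # False
```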

> > Then, the decoder already allows custom type, in parse_float, so having the same option on the output seems to me better.

> The symmetry argument is valid, but IMO pretty weak without an actual
> example of a case that would benefit from the more general proposal.

Imagine this (my proposal):

json_dict = json.loads(json_in, parse_float=decimal.Decimal)
json_out = json.dumps(json_dict, dump_as_float=decimal.Decimal)

is functionally equivalent to yours:

json_dict = json.loads(json_in, parse_float=decimal.Decimal)
json_out = json.dumps(json_dict, use_decimal=True)

Now assume a custom type (my solution):

json_dict = json.loads(json_in, parse_float=MyFloat)
json_out = json.dumps(json_dict, dump_as_float=MyFloat)

and yours:

json_dict = json.loads(json_in, parse_float=MyFloat)
json_out = json.dumps(json_dict, use_decimal=True, cls=MyFloatEncoder)

where MyFloatEncoder is implemented along the lines of your proposal, plus the client code must also import and use Decimal, even if MyFloat technically does not need it.
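A sketch of what such a MyFloatEncoder might look like (MyFloat and the detour through Decimal are hypothetical, and use_decimal does not exist in the stdlib json module; this only illustrates the extra indirection):

```python
import json
from decimal import Decimal

class MyFloat(float):
    """Hypothetical custom real-number type."""

class MyFloatEncoder(json.JSONEncoder):
    # Under a use_decimal-style design, the custom type would first
    # have to be converted into a Decimal that the encoder knows how
    # to serialize -- indirection that dump_as_float would avoid.
    def default(self, o):
        if isinstance(o, MyFloat):
            return Decimal(repr(float(o)))
        return super().default(o)

print(MyFloatEncoder().default(MyFloat(1.5)))  # 1.5
```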

Apart from the JSON compliance check controversy, where we do not agree, I see no advantage in your proposal, while it makes the intuitive things more convoluted and tricky (both for the client code and for the implementation).

> We may just have to agree to
> differ, and I'll leave it to others to judge. If you write the code,

OK, let's agree to that.