My bad. I think I need to clarify my objective. I definitely understand the issues around JSON serialization/deserialization, i.e. decimals as strings, etc., and hooking in a default serializer function is easy enough. My question is really about why csv.writer and DictWriter don't provide similar serialization/deserialization hooks. IMO there is a wide gap between reaching for a tool like pandas, where maybe too much auto-magical parsing and guessing happens, and hand-wrapping that functionality around the csv module yourself. I was curious to see if anyone else had similar opinions, and if so, to start a conversation about which extended functionality would be most fruitful.
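To make the idea concrete, here is a rough sketch of the kind of hook I mean, modeled on json.dumps(default=...). The wrapper class and its default= keyword are purely illustrative, not an existing API:

import csv
import io
from datetime import date
from decimal import Decimal
from uuid import UUID

def convert(value):
    # Illustrative per-type converter, analogous to json.dumps's default= hook.
    if isinstance(value, date):
        return value.isoformat()
    if isinstance(value, (Decimal, UUID)):
        return str(value)
    return value

class ConvertingWriter:
    # Hypothetical wrapper: csv.writer plus a serialization hook.
    def __init__(self, stream, default=convert, **fmtparams):
        self._writer = csv.writer(stream, **fmtparams)
        self._default = default

    def writerow(self, row):
        self._writer.writerow([self._default(value) for value in row])

stream = io.StringIO()
writer = ConvertingWriter(stream)
writer.writerow([date(2018, 11, 2), Decimal("1.10"), UUID(int=0)])
print(stream.getvalue())
# 2018-11-02,1.10,00000000-0000-0000-0000-000000000000

Everyone writing richer types out to CSV today ends up hand-rolling some variant of this wrapper, which is the gap I was pointing at.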

On Fri, Nov 2, 2018 at 11:28 AM Calvin Spealman <cspealma@redhat.com> wrote:
First, this list is not appropriate. You should ask such a question in python-list.

Second, JSON is a specific serialization format that explicitly rejects datetime objects in *all* the languages with JSON libraries. You can only use date objects in JSON if you control or understand both serialization and deserialization ends and have an agreed representation.
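For example, a minimal sketch of such an agreed representation, assuming both ends settle on ISO 8601 strings and know which fields carry dates (the "when" key is just for illustration):

import json
from datetime import date

# Serializing end: explicitly map dates to the agreed representation.
payload = json.dumps({"when": date(2018, 11, 2)}, default=date.isoformat)

# Deserializing end: must know that "when" holds a date and parse it back.
decoded = json.loads(payload)
decoded["when"] = date.fromisoformat(decoded["when"])  # Python 3.7+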

On Fri, Nov 2, 2018 at 12:20 PM Philip Martin <philip.martin2007@gmail.com> wrote:
Is there any reason why date, datetime, and UUID objects are automatically serialized to their default string forms by the csv module, while json.dumps throws an error by default? i.e.

import csv
import json
import io
from datetime import date

stream = io.StringIO()
writer = csv.writer(stream)
writer.writerow([date(2018, 11, 2)])  # works: the date lands in the CSV as "2018-11-02"
# versus
json.dumps(date(2018, 11, 2))  # raises TypeError: date is not JSON serializable
