I'm aware of a couple of Python implementations of JSON-LD: pyld [1] and rdflib-jsonld [2]. But you don't need a JSON-LD parser to parse or produce JSON-LD: you can simply structure the data in the JSON document so that other tools can easily parse your necessarily-complex data types, which are stored using the limited set of primitive datatypes that JSON supports.
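For example, a JSON-LD document is just ordinary JSON with a few reserved keys (`@context`, `@id`, `@type`), so the stdlib json module can read and write it with no extra library. A minimal sketch (the URIs and names here are illustrative, not from any particular dataset):

```python
import json

# A JSON-LD document is plain JSON: any JSON parser can read it.
# The @context maps short keys to full URIs; @id and @type identify the node.
doc = {
    "@context": {
        "name": "https://schema.org/name",
        "homepage": {"@id": "https://schema.org/url", "@type": "@id"},
    },
    "@id": "https://example.org/people/alice",
    "@type": "https://schema.org/Person",
    "name": "Alice",
    "homepage": "https://example.org/alice",
}

serialized = json.dumps(doc)
roundtripped = json.loads(serialized)
assert roundtripped["name"] == "Alice"  # no JSON-LD library required
```

A JSON-LD-aware consumer can expand the short keys to full URIs via the @context; a plain JSON consumer just sees a dict with a few extra keys.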

We should welcome efforts to support linked data in Python. TimBL created the web on top of the internet so that we could share resources and collaborate on science. To collaborate on science, we need to be able to share, discover, merge, join, concatenate, analyze, and compare datasets. TimBL's 5-star Open Data plan [3] justifies the costs and benefits of sharing LOD (Linked Open Data):

   ★     make your stuff available on the Web (whatever format) under an open license

   ★★    make it available as structured data (e.g., Excel instead of an image scan of a table)

   ★★★   make it available in a non-proprietary open format (e.g., CSV instead of Excel)

   ★★★★  use URIs to denote things, so that people can point at your stuff

   ★★★★★ link your data to other data to provide context

JSON is ★★★ data. JSON-LD, RDFa, and Microformats are ★★★★ or ★★★★★ data.

We can link our data to other data with URIs in linked data formats. JSON-LD is one representation of RDF. RDF* (read as "RDF star") extends RDF for use with property graphs.
No one cares whether you believe that "the semantic web failed" or that "those standards are useless": being able to share, discover, merge, join, concatenate, analyze, and compare datasets is of significant value to the progress of science and the useful arts; so I think that we should support the linked data use case with at least:
1. a __json__(obj, spec=None) method and
2. a more-easily modifiable make_iterencode/iterencode implementation in the json module of the standard library.
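A minimal sketch of what (1) could look like today, using the existing `default=` hook of `json.dumps` to dispatch to a `__json__` method. Note that `__json__(obj, spec=None)` is the proposed protocol, not an existing stdlib API, and the `Event` class here is purely illustrative:

```python
import datetime
import json

class Event:
    def __init__(self, name, when):
        self.name = name
        self.when = when

    def __json__(self, spec=None):
        # Return a JSON-serializable representation; a real protocol
        # might use `spec` to select e.g. a JSON-LD frame or @context.
        return {"name": self.name, "when": self.when.isoformat()}

def default(obj):
    # Dispatch to __json__ if the object provides it.
    if hasattr(obj, "__json__"):
        return obj.__json__()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

event = Event("PyCon", datetime.date(2020, 4, 15))
print(json.dumps(event, default=default))
# {"name": "PyCon", "when": "2020-04-15"}
```

With stdlib support, the `default=` boilerplate would disappear: the encoder itself would look for `__json__`, the way `copy` looks for `__copy__`.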

[1] https://github.com/digitalbazaar/pyld
[2] https://github.com/RDFLib/rdflib-jsonld
[3] https://5stardata.info/en/

Here's a project that merges JSON-LD, SHACL, and JSON Schema; it has 3 stars. Validating JSON documents is indeed somewhat orthogonal to the __json__ / iterencode implementation details we're discussing; but specifying types in type annotations and then also maintaining a separate, complete data validation specification is not DRY.

re: generating JSON schema from type annotations

You can go from python:str to jsonschema:type:string easily enough;
but, again, going from python:str to jsonschema:format:email will require either extending the type annotation syntax or modifying a generated schema stub, and changes to that stub will then need to be ported back to the type annotations.
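A sketch of the problem, using a toy annotation-to-schema generator (an assumption for illustration, not a real library; real generators are far more complete):

```python
import dataclasses

@dataclasses.dataclass
class Contact:
    name: str
    email: str  # the annotation alone cannot express format: "email"

# Toy mapping from Python annotations to JSON Schema types.
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_schema(cls):
    fields = dataclasses.fields(cls)
    return {
        "type": "object",
        "properties": {f.name: {"type": TYPE_MAP[f.type]} for f in fields},
        "required": [f.name for f in fields],
    }

schema = to_schema(Contact)
# Both fields come out as plain "string": nothing in `email: str` says
# {"type": "string", "format": "email"}; that constraint must be patched
# into the generated stub by hand and then kept in sync manually.
assert schema["properties"]["email"] == {"type": "string"}
```

This is exactly the round-trip maintenance burden described above: the format/validation information lives only in the edited schema, not in the annotations it was generated from.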

I suppose that if all you're working with is data classes, generating part of the JSON schema from data class type annotations could be useful to you in your quest to develop a new subset of JSON that supports deserializing (and validating) complex types in at least Python and JS (when there are existing standards for doing so).

On Wed, Apr 8, 2020 at 3:08 PM Andrew Barnert <abarnert@yahoo.com> wrote:
On Apr 8, 2020, at 01:18, Wes Turner <wes.turner@gmail.com> wrote:
> I don't see the value in using JSON to round-trip from Python to the same Python code.
> External schema is far more useful than embedding part of an ad-hoc nested object schema in type annotations that can't also do or even specify data validations.

But dataclasses with type annotations can express a complete JSON Schema. Or, of course, an ad hoc schema published only in human-readable form, as most web APIs use today.

> You can already jsonpickle data classes. If you want to share or just publish data, external schema using a web standard is your best bet.

Sure, but you don’t need JSON-LD for that. Again, the fact that type annotations are insufficient to represent semantic triples is irrelevant. They are sufficient for the case of writing code to parse what YouTube gives you, or to provide a documented API that you design to be consumed by other people in JS, or to generate a JSON Schema from your legacy collection of classes that you can then publish, or to validate that a dataclass hierarchy matches a published schema, and so on. All of which are useful.