Later, if we decide to start providing a stable ABI for embedded Python,
we can still add a "version" or "struct_size" field to PyConfig
(for example in Python 3.9).
Thanks Victor, I think this is the right way for us to go, given the relative timing in the release cycle.
The idea of some day expanding the stable ABI to cover embedding applications, not just extension modules, is an intriguing one, but I'm genuinely unsure whether anyone would ever actually use it.
After merging your PR and closing mine, I had an idea for Python 3.9: what if we offered a separate public "int Py_CheckVersionCompatibility(uint64_t header_version)" call? (A 64-bit input rather than 32-bit, to allow for possible future changes to the version number formatting scheme.)
The basic variant of that API would be what I had in my PR: release candidates and final releases allow an inexact match, other releases require the hex version to match exactly.
If an embedding application called that before calling any PreConfig or Config APIs, they'd get the same easier debugging for version mismatches, without needing to change the config structs themselves.
Instead, we'd only need a new PreConfig field if we wanted to start having other runtime APIs change their behaviour based on the active nominal Python version.
As an added bonus, extension modules could also optionally call that compatibility checking API early in their init function.
If we wanted to get more exotic with the internal design, we could maintain a table of beta versions containing known ABI *breaks* (e.g. public structs changing size), and also permit inexact matches for beta releases, as long as the given header version was newer than the last ABI break, but older than the runtime version.
That table could be reset to empty when the ABI was frozen for a release series (thus causing a merge conflict if a backport was requested for a patch that changed that table).