On Mon., 30 Sep. 2019, 7:13 am Victor Stinner, <vstinner@python.org> wrote:
Hi Nick,

Le dim. 29 sept. 2019 à 08:47, Nick Coghlan <ncoghlan@gmail.com> a écrit :
> I don't quite understand the purpose of this change, as there's no
> stable ABI for applications embedding CPython.

Well, I would like to prepare Python to provide a stable ABI for
embedded Python. While it's not a design goal yet
(Include/cpython/initconfig.h is currently excluded from
Py_LIMITED_API), this change is a step towards that.

> As a result, updating
> to a new X.Y.0 release always requires rebuilding the entire
> application, not just building and relinking CPython.

In Python 3.8, C extensions are no longer linked to libpython, which
makes it possible to switch between a release build and a debug build
of Python.

Can we imagine the same idea for embedded Python? I checked vim on
Linux: it's linked to libpython3.7m.so.1.0, a specific Python version
built in release mode.

Switching between ABI compatible debug 3.8 and release 3.8 builds isn't the same as allowing switching between ABI incompatible release 3.8 and release 3.9 builds.

> I could understand a change to require passing in an expected Python
> version so we can fail more gracefully on a bad link where an
> application that intended to embed Python 3.8 is incorrectly linked
> against Python 3.9 (for example), but performing that kind of check
> would require passing in PY_VERSION_HEX, not the size of the config
> struct.

It seems simpler to me to pass the structure size rather than the
Python version. It avoids the risk of updating the structure without
updating the Python version. It also avoids having to change the
Python version immediately whenever PyConfig is modified.

We already change the Python version as soon as the maintenance branch gets created (master is 3.9.0a0, and has been pretty much since 3.8.0b1).

The main risk of
sizeof(PyConfig) comes if we *remove* a field and add a new field of
the same size: the structure size doesn't change... But in my
experience, we only add new ways to configure Python, we never remove
old ones :-D

The main risk I see is some *other* unversioned struct in the full C ABI changing size.

If the config APIs are only checking for config struct size changes, then changes to anything else will still segfault.

If they're instead checking "What version was the calling application built against?" then we can decide how to handle it on the interpreter side (e.g. require that the "X.Y" part of the version match, and report an init error otherwise).

The question is whether it's convenient to compute sizeof(PyConfig)
in programming languages other than C. Providing a "structure
version" or the structure size from a function call would not work:
the value must be known at compilation time, not at runtime. The
purpose is to compare the version/size between build time and runtime
(they must match).

You can also compare the build time value of a public integer macro to detect build discrepancies.

"sizeof(some_struct)" is just a straightforward way to define such an integer in a way that will always change when a new field is added to a particular struct.

In the first implementation of my PEP, I used an internal "config
version" provided by a macro. But it was said that macros are not
convenient.

The initialisation macros aren't necessarily convenient (e.g. if you're using a C++ struct rather than a C one). That's an issue with requiring C-style initialisation, though, and less with macros in general.

That said, I think the change to make the expected API/ABI version an explicit part of the config API rather than a hidden part of the struct is a good idea, since it lets us replace cryptic segfaults with explicit "Python version mismatch" errors.

PY_VERSION_HEX is provided as a macro, but we are now trying to avoid
macros in our C API, no? At least, it's what I understood from the PEP
587 discussion.

The issue with macros is that their behaviour gets locked in at build time, so you can't fix bugs in them or otherwise change their behaviour just by linking against a new version. Instead, you have to recompile the consumer application or module in addition to recompiling the API provider.

In this case though, that's exactly what we want, as the whole point would be to detect cases where the CPython runtime library had been recompiled, but the embedding application hadn't (or vice-versa).

> We don't support that - all our APIs that accept
> PyObject/PyTypeObject/etc require that the caller pass in structs of
> the correct size for the version of Python being used.

For PyConfig, it's allocated (on the stack or on the heap) by the
application. So the application requires access to the full
structure.

Objects (instances) are allocated by Python (on the heap).
Applications usually don't need to know/access the structure.

Python is far from perfect: static types are still supported, and
they are an issue for the stable ABI.

The config APIs aren't covered by the stable ABI.

If they (or a variant on them) were to be added to it some day though, then Py_VERSION_HEX would still work as a marker to select a specific config struct size, we'd just need to make the processing of any new fields conditional at runtime on the passed in "expected Python version".

The PEP 587 versions of the config APIs would still error out on a feature release version mismatch.

> The PyConfig
> and PyPreConfig structs are no different from PyObject in that regard:
> if there's a size mismatch, then the developers of the embedding
> application have somehow messed up their build process.

In short, PyConfig initialization works like that:

* The application allocates a PyConfig object on the stack
* Python calls memset(config, 0, sizeof(PyConfig))

If there is a size mismatch, Python triggers a buffer overflow which
is likely to cause issues.

Aye, that's why I agree adding some form of explicit "expected version" check is a good idea.

I prefer to have a clean API which makes buffer overflow impossible.

We'll never change the size of the config structs (or any other public struct) in a maintenance branch, so an "expected Python version" check would serve this purpose just as well as passing in the size of the config structs.

Embedding Python and handling different Python versions is not
trivial, especially if we start adding new fields to PyConfig
(which is very likely). I prefer to be extra careful.

As noted above, despite what I wrote on BPO, you no longer need to persuade me that the version check is desirable, only that a narrow check on specific struct sizes is preferable to a broad check on the expected API version.

Consider the difference in error messages:

"Application expected CPython 3.8, but is attempting to load CPython 3.9"


"Application provided a 256 byte config struct, CPython expected 264 bytes" (e.g. when a new pointer was added)

It wouldn't make sense to try to continue in either case due to the potential for other ABI incompatibilities, but the first offers a much clearer hint to the affected developer as to what is going on.

If we ever do create a stable config ABI, then inside the code we can create a range lookup table from different Python versions to different versions of the config struct, whereas if the caller is only passing in the expected size of the config struct, we can't infer an expected Python version from that.

I also expect bad surprises even in CPython with Programs/_testembed:
many tests use PyConfig. Depending on whether _testembed is properly
rebuilt or not, bad things will happen.

That's no worse than bad things happening after changing the compiler, or the ABI versioning scheme. Worst case we work around it with a clean rebuild, best case we figure out why the incremental build didn't do the right thing and fix it.


To implement my PEP 445 "Add new APIs to customize Python memory
allocators", I added a PyMemAllocator structure which is part of the
public C API.

Quickly, I had to add a new field to PyMemAllocator. But it wasn't
possible to detect whether a C extension used the old or the new
structure... So I decided to rename the structure to PyMemAllocatorEx
to ensure that the compilation of all C extensions using the API
would fail... :-(

I really dislike this solution. What will happen when we add yet
another field to the structure, like a new PyMem_Aligned() function
(similar to posix_memalign())? PyMem_Aligned() could be implemented
on top of an existing memory allocator which doesn't support it
natively. But the problem is again the API and the PyMemAllocatorEx
structure...

Passing PY_VERSION_HEX to the pre-init config APIs would address that for all structs in the public API, not just those that are included directly in the config structs.


Night gathers, and now my watch begins. It shall not end until my death.