I'm sorry for posting here rather than to python-announce; somehow I think
(perhaps naively) that this may be of interest to people who are
interested in Python development. At the very least, the creation of the
original package was (very likely; I didn't trace it back that far)
discussed on python-dev, and its removal was discussed on python-dev, so
why can't its revival be noted here?
Because, as it turns out, in the good old days there was a bytecode
compiler written in Python, and it was even part of the stdlib. Actually,
it's still there in the latest 2.7 release, with a proud notice: "Remove
in Python3". The point is that I've been with Python since the 1.5 days,
and I never knew about this package. I'd generally consider that
something to be ashamed of and hush up, but unfortunately I found it to
be a recurring pattern: people interested in playing with a Python
compiler discover "by chance" and "suddenly" that they need look no
further than the stdlib for their (usually, for starters, pretty simple)
needs - oftentimes after they have already started down a more effortful
path (I faithfully spent two days trying to extract a bytecode compiler
from PyPy first).
With that intro, here it is: a port of the Python 2 compiler package
(https://docs.python.org/2/library/compiler.html) to Python 3.
Currently, it generates bytecode compatible with CPython 3.5, and it is
able to compile CPython's entire Lib/ (which includes not just the
stdlib modules per se, but also the tests, and the real "teeth-cracking"
stuff is there, of course), except for test_super.py, for which the
current implementation's teeth are indeed still too weak.
Now that it passes the compile-the-stdlib test, the idea is to refactor
it into something that can be easily played with and extended. We'll
see how it goes.
As one example of what's intended to be easily done with it: I started
on this while updating the codegen for Python 3, but it also shows the
intended purpose. It would be easy to analyze whether an except handler
body contains "del exc" and, if not, skip generating the superfluous
bytecode.
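A sketch of that kind of analysis, done here with the stdlib ast module rather than the ported package (the source snippet and the helper name are made up for illustration):

```python
# Detect whether an except handler's body already contains "del exc",
# using the stdlib ast module for illustration.
import ast

src = """
try:
    work()
except Exception as exc:
    log(exc)
    del exc
"""

def deletes_name(body, name):
    """Return True if any statement in `body` is a `del` of `name`."""
    for stmt in body:
        if isinstance(stmt, ast.Delete):
            if any(isinstance(t, ast.Name) and t.id == name
                   for t in stmt.targets):
                return True
    return False

handler = ast.parse(src).body[0].handlers[0]  # the except clause
print(deletes_name(handler.body, handler.name))  # True for this snippet
```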
I am new to the list and arriving with a concrete problem that I'd
like to fix myself.
I am embedding Python (3.6) into my C++ application and I would like
to run Python scripts isolated from each other using sub-interpreters.
I am not using threads; everything is supposed to run in the
application's main thread.
I noticed that if I create an interpreter, switch to it and execute
code that imports numpy (1.13), my application will hang.
> python36.dll!_PyCOND_WAIT_MS(_PyCOND_T * cv=0x00000000748a67a0, _RTL_CRITICAL_SECTION * cs=0x00000000748a6778, unsigned long ms=5) Line 245 C
> [Inline Frame] python36.dll!PyCOND_TIMEDWAIT(_PyCOND_T *) Line 275 C
> python36.dll!take_gil(_ts * tstate=0x0000023251cbc260) Line 224 C
> python36.dll!PyEval_RestoreThread(_ts * tstate=0x0000023251cbc260) Line 370 C
> python36.dll!PyGILState_Ensure() Line 855 C
> [Inline Frame] python36.dll!_PyObject_FastCallDict(_object *) Line 2316 C
> [Inline Frame] python36.dll!_PyObject_FastCallKeywords(_object *) Line 2480 C
> python36.dll!call_function(_object * * * pp_stack=0x00000048be5f5e40, __int64 oparg, _object * kwnames) Line
Numpy's umath extension module calls PyGILState_Ensure(), which in turn
calls PyEval_RestoreThread() on the (auto) thread state of the main
interpreter. And that's wrong: we are already holding the GIL with the
thread state of our current sub-interpreter, so there's no need to
switch.
I know that the GIL API is not fully compatible with sub-interpreters,
as issues #10915 and #15751 illustrate.
But since I need to support calls to PyGILState_Ensure() (numpy is the
best example), I am trying to improve the situation here:
That change may be naive, but it does the trick for my use case. If it
is totally wrong, I don't mind pursuing another alley.
Essentially, I'd like to ask for some guidance on how to tackle this
problem while keeping the current GIL API unchanged (to avoid breaking
existing code).
I am also wondering how I can test any changes I am proposing. Is
there a test suite for interpreters, for example?
Thank you very much,
I'm currently seeing a behaviour where every time I run "make html",
all 474 source files get rebuilt.
For now, I'm assuming I've messed something up with my local docs
build setup, but figured I'd ask if anyone else was seeing this, in
case it was actually broken at the build level (CI wouldn't pick this
up, since it always builds from scratch anyway).
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I recently modified the signatures of two functions, PyNode_AddChild() and
These two functions are not listed in the C API docs on docs.python.org,
and are not included in Python.h. However, their names look like they may
be part of the C API. So the question arises: what is the source of truth
for the C API? Is there an official list, or is it just the contents of
Python.h?
I have almost completely lost the sight in my right eye (and the loss
is progressing quickly), and the sight in my left eye is weak. That is
why my activity as a core developer has decreased significantly
recently. My apologies to those who are waiting for my review. I will do it
Disclaimer: I'm not a ctypes expert, so I might have this completely
wrong. If so, I apologise for the noise.
The id() function is documented as returning an abstract ID number. In
CPython, that happens to have been implemented as the address of the
object.
I understand that the only way to pass the address of an object to
ctypes is to use that id. Is that intentional?
As I see it, there is a conflict between two facts:
- that id() returns a memory address is an implementation detail; as
such, users should not rely on it, as the implementation could (in
principle) change without notice;
- but users using ctypes have no choice but to rely on id() returning
the object's memory address, as if it were an official part of the API.
Implementations like PyPy, which emulate ctypes but whose objects don't
have fixed memory locations, will surely have a problem here. I don't
know how PyPy solves this.
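A minimal demonstration of the coupling in question, valid on CPython only (the list contents are arbitrary):

```python
# On CPython, id(obj) happens to be obj's memory address, so ctypes can
# cast it back into a reference to the very same object. This is exactly
# the implementation detail that ctypes users end up depending on.
import ctypes

obj = ["some", "mutable", "list"]
addr = id(obj)  # CPython implementation detail: the object's address

recovered = ctypes.cast(addr, ctypes.py_object).value
print(recovered is obj)  # True on CPython; not guaranteed elsewhere
```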
Have I misunderstood something here?
For the record, the pickle5 backport (PEP 574) was updated to include
the latest pickle changes from CPython git master.
pickle5 is available for Python 3.6 and 3.7.
Many thanks to all who assisted with feedback. Back at the start of
August my make tests had a nasty block like this:
29 tests failed:
test__xxsubinterpreters test_array test_asyncio test_cmath
test_compile test_complex test_ctypes test_distutils test_embed
test_float test_fractions test_getargs2 test_httplib
test_httpservers test_imaplib test_importlib test_math test_poplib
test_shutil test_signal test_socket test_ssl test_statistics
test_strtod test_struct test_subprocess test_time test_timeout
And now that is down to:
4 tests failed:
test_eintr test_importlib test_multiprocessing_forkserver
While I am still trying to figure out where the "multiprocessing" errors
come from, there is only one "test" left from the original list, and
that one, plus test_eintr, have PRs waiting for your approval.
I could not have gotten this far without help!
Michael aka aixtools