Right now, there are 14 open issues with "test_asyncio" in the title.
Many test_asyncio tests have race conditions. I'm trying to fix them
one by one, but it takes time, and then new tests are added with new
race conditions :-( For example, the following new test is failing
randomly on Windows:
"Windows: test_asyncio: test_huge_content_recvinto() fails randomly
with ProactorEventLoop"; it has been failing randomly for 6 months.
test_asyncio uses more and more functional tests, which is a good
thing. In the early days of asyncio, most tests mocked more than half
of asyncio to really be "unit tests". But in the end, the tests tested
more mocks than asyncio... The problem with functional tests is that
it's hard to design them properly to avoid all race conditions,
especially when you consider multiple platforms (Windows, macOS, Linux,
etc.).
It would help me if someone could try to investigate these issues,
provide a reliable way to reproduce them, and propose a fix. (Simply
saying that you can reproduce the failure and that you would like to work
on an issue doesn't really help, sorry.)
Recently, I started to experiment with "./python -m test [options] -F
-j100" to attempt to reproduce some tricky race conditions: -j100
spawns 100 worker processes in parallel and -F stands for --forever
(run the tests in a loop and stop at the first failure). I was surprised
that my Fedora 30 didn't burn in flames. In fact, the GNOME desktop
remains responsive even with a system load higher than 100. The Linux
kernel (5.2) is impressive! Under such high system load (my laptop has
8 logical CPUs), race conditions are way more likely.
The problem with test_asyncio is that it's made of 2160 tests; see:
./python -m test test_asyncio --list-cases
You may want to only run a single test case (class) or even a single
test method: see the --match option, which can be used multiple times to
only run selected test classes or selected test methods. See also
--matchfile, which is similar but reads the patterns from a file. Example:
$ ./python -m test test_asyncio --list-cases > cases
# edit cases
$ ./python -m test test_asyncio --matchfile=cases
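The --match option can also be combined with -F and -j100 to hammer a
single test method in a loop; for example (the method name here is just
taken from the issue title above, adjust it to the test you want to
reproduce):
$ ./python -m test test_asyncio -F -j100 --match=test_huge_content_recvinto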
test_asyncio is one of the most unstable tests: I'm getting more and
more buildbot-status emails about test_asyncio... likely because we
fixed most of the other race conditions, which is a good thing ;-)
Some issues look to be specific to Windows, but it should be possible
to reproduce most issues on Linux as well. Sometimes, it's just that
some specific Windows buildbot workers are slower than other buildbot
workers.
Good luck ;-)
Night gathers, and now my watch begins. It shall not end until my death.
I have been working on a feature of a deterministic profiler
(github.com/sumerc/yappi). The feature is about getting arguments for a
given set of function names. For example: you can define something like
"foo 1,3,arg_name", and when the foo function is called, the profiler will
simply collect the first and third positional arguments from *args and the
named argument arg_name from **kwargs.
For Python functions I am using the following approach (please note that
the code below executes in the PyTrace_CALL event of the profiler, on the
C side): look at co_argcount and co_varnames to determine the names of the
arguments, and then use these names to retrieve the values from f_locals.
It seems to be working fine for now. My first question is: is there a
better way to do this?
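To make the idea concrete, here is a minimal pure-Python sketch of that
approach using sys.setprofile (the real code runs in C inside the
PyTrace_CALL handler; the function name "foo" and the argument
specification below are made-up examples):

import sys

# Made-up configuration: for functions named "foo", collect the 1st and
# 3rd positional arguments plus the argument named "arg_name".
WANTED = {"foo": {"positions": [0, 2], "names": ["arg_name"]}}

def profiler(frame, event, arg):
    if event != "call":
        return
    code = frame.f_code
    spec = WANTED.get(code.co_name)
    if spec is None:
        return
    # The first co_argcount entries of co_varnames are the argument names.
    argnames = code.co_varnames[:code.co_argcount]
    collected = {}
    for pos in spec["positions"]:
        if pos < len(argnames):
            name = argnames[pos]
            collected[name] = frame.f_locals.get(name)
    for name in spec["names"]:
        if name in frame.f_locals:
            collected[name] = frame.f_locals[name]
    print(code.co_name, collected)

def foo(a, b, c, arg_name=None):
    return a

sys.setprofile(profiler)
foo(1, 2, 3, arg_name="x")  # prints: foo {'a': 1, 'c': 3, 'arg_name': 'x'}
sys.setprofile(None)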
And for C functions, I am totally in the dark. I have played with
f_valuestack and retrieved some values from there, but the indexes seem to
change from Python version to version, and I also think there is no way of
getting named arguments that way.
I have been dealing with this for a while, so there might be unclear points
in my explanation. I would really appreciate it if anyone could point me in
the right direction on this.
I wrote a PR to fix the following urllib security vulnerability:
"urlparse of urllib returns wrong hostname"
While writing my fix, I found another issue about "[" and "]"
characters in the user:password section of a URL:
"urllib IPv6 parsing fails with special characters in passwords"
My PR tries to validate the "scope" part of
"http://[IPv6%scope]/...": reject "%", "[" and "]" in the scope. But I'm
not sure that Python should really support a scope in a URL. Should
we just reject URLs with "%scope"? Or, if we allow it, which characters
should be allowed and/or rejected?
It seems like Firefox and Chromium don't support an IPv6 address with a
scope: when I type http://[::1%1]/, they open a Google search on this
URL instead.
I tested Python urllib.request.urlopen() with my PR:
http://[::1%1]:8080/ works as expected: it opens a connection to the
IPv6 localhost on the loopback interface (TCP port 8080).
Currently, my PR allows "%scope" but it rejects "%", "[" and "]"
characters in the scope.
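For illustration, here is a rough Python sketch of that rule (the helper
name validate_ipv6_scope is hypothetical, not the actual code of the PR):
accept a "%scope" suffix in a bracketed IPv6 host, but reject "%", "[" and
"]" inside the scope itself.

def validate_ipv6_scope(host):
    # host is the bracketed form from the URL, e.g. "[::1%1]" or "[fe80::1%eth0]"
    if not (host.startswith("[") and host.endswith("]")):
        raise ValueError("not a bracketed IPv6 host")
    inner = host[1:-1]
    if "%" not in inner:
        return  # no zone identifier, nothing to check
    address, _, scope = inner.partition("%")
    if not scope or any(c in scope for c in "%[]"):
        raise ValueError("invalid IPv6 scope: %r" % scope)

validate_ipv6_scope("[::1%1]")         # accepted
validate_ipv6_scope("[fe80::1%eth0]")  # accepted
# validate_ipv6_scope("[::1%x]y]")     # rejected: "]" in the scope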
I'll let you go through these two RFCs about the IPv6 scope / "zone identifier":
Night gathers, and now my watch begins. It shall not end until my death.
Since we have two competing proposals for the Steering Council to
consider when it comes to the target release date for Python 3.9.0,
Steve, Łukasz and I put together a short-ish overview of the common
set of problems that the two PEPs are both attempting to address, as
well as the common risks that they're attempting to mitigate.
If folks have questions or suggestions regarding the specific
proposals, those discussion threads can be found on Discourse here:
PEP 602 (12 month release cadence):
PEP 605 (24 month traditional release cadence with a rolling beta stream):
There's also a thread for the new overview PEP:
I expect most comments will be best targeted at the threads for the
specific proposals, but the PEP 607 thread should still be useful if
folks have questions or comments about the overview PEP itself.
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I have been compiling Python 3.8 from source and have had a really difficult time
getting _ctypes to compile. I see that libffi is no longer distributed with the
Python source code, in preference to what is on the system. I searched for a
PEP that describes the rationale behind this, but my Google-fu must be weak.
I have also seen requests that a patch be committed that makes configuring the
use of libffi easier, but as far as I can tell, these have not been committed. It is
something I would like to see, as I am in a situation where I cannot depend on the
system libffi - we support older Linux distributions that don't have libffi - and so I
am building a static libffi to be linked in.
Any guidance on this issue would be helpful.
I'm trying to upgrade the pydevd debugger to support the latest version of
CPython (3.8); however, I'm having some issues accessing
`PyInterpreterState.eval_frame` when compiling, so I'd like to ask if
someone can point me in the right direction.
What I'm trying to do is compile something like:
PyThreadState *ts = PyThreadState_Get();
PyInterpreterState *interp = ts->interp;
interp->eval_frame = my_frame_eval_func;
and the error I'm having is:
_pydevd_frame_eval/pydevd_frame_evaluator.c(7534): error C2037: left of
'eval_frame' specifies undefined struct/union '_is'
So, it seems that "pystate.h" now only has a forward declaration of "_is" and
a typedef of "_is" to "PyInterpreterState", while "_is" itself is defined in
"include/internal/pycore_pystate.h", which doesn't seem like something I should
be including (in fact, if I try to include it, I get an error saying that I
would need to define Py_BUILD_CORE)... so, can someone point me to the
proper way to set the frame evaluation function on CPython 3.8?
I want Homebrew to use the `--enable-optimizations` and `--with-lto` options
when building Python. But the maintainer said:
> Given this is not a default option, probably not, unless it is done in upstream (“official”) binaries.
Are these options used for the official macOS binaries?
Is there official information about how the official binaries are built?
Inada Naoki <songofacandy(a)gmail.com>
I cannot get Python 3.8.0 installed on Linux (RHEL 8 / CentOS 8).
It's not available in any package repo. When I try to build from source, there are three missing dependencies that I cannot find anywhere.
More info here: (I did not want to write this up twice)
The latest version of Python 3 available to me on Linux was released over three years ago (Python 3.6.0); I don't understand why.