Dear Sir or Madam,
I'm Lasan Nishshanka, CEO and founder of Clenontec. We are creating
an IDE called Bacend Studio for the Windows operating system. It is
designed to support tasks such as app development, software
development, game development, machine learning, and artificial
intelligence.
We are writing to you to ask permission to add the Python
programming language to our IDE. If you can grant this permission,
please let us know. We look forward to a speedy reply.
Company Web Site : https://www.clenontec.com/
E Mail : Clenontec(a)gmail.com
Today I discovered that this struct
const char* name;
unsigned int flags;
PyType_Slot *slots; /* terminated by slot==0. */
with "PyType_Slot *slots" on line 190 of object.h, causes a problem when compiled together with code that brings in Qt, because Qt has macro definitions of "slots".
With a cursory preprocessing of the file I was working with, using the
handy gcc options -dM -E, I found that "slots" was defined to nothing,
and hence caused problems when object.h was brought into the mix.
I will try to make a simple reproducer tomorrow. I know this probably could be solved by header file inclusion re-ordering,
or in some cases #undef'ing slots before including Python.h, but I also thought the Python dev team would like to know
about this issue.
Right now, there are 14 open issues with "test_asyncio" in the title.
Many test_asyncio tests have race conditions. I'm trying to fix them
one by one, but it takes time, and then new tests are added with new
race conditions :-( For example, the following new test is failing
randomly on Windows:
"Windows: test_asyncio: test_huge_content_recvinto() fails randomly
with ProactorEventLoop" has been failing randomly for six months:
test_asyncio uses more and more functional tests, which is a good
thing. In the early days of asyncio, most tests mocked more than half
of asyncio to really be "unit tests". But in the end, the tests tested
more mocks than asyncio... The problem with functional tests is that
it's hard to design them properly to avoid all race conditions,
especially when you consider multiple platforms (Windows, macOS, Linux,
etc.).
It would help me if someone could try to investigate these issues,
provide a reliable way to reproduce them, and propose a fix. (Simply
saying that you can reproduce the test and that you would like to work
on an issue doesn't really help, sorry.)
Recently, I started experimenting with "./python -m test [options] -F
-j100" to attempt to reproduce some tricky race conditions: -j100
spawns 100 worker processes in parallel and -F stands for --forever
(run tests in loop and stop at the first failure). I was surprised
that my Fedora 30 didn't burst into flames. In fact, the GNOME desktop
remains responsive even with a system load higher than 100. The Linux
kernel (5.2) is impressive! Under such high system load (my laptop has
8 logical CPUs), race conditions are way more likely.
The problem of test_asyncio is that it's made of 2160 tests, see:
./python -m test test_asyncio --list-cases
You may want to only run a single test case (class) or even a single
test method: see --match option which can be used multiple times to
only run selected test classes or selected test methods. See also
--matchfile which is similar but uses a file. Example:
$ ./python -m test test_asyncio --list-cases > cases
# edit cases
$ ./python -m test test_asyncio --matchfile=cases
test_asyncio is one of the most unstable tests: I'm getting more and
more buildbot-status emails about test_asyncio... likely because we
fixed most of the other race conditions which is a good thing ;-)
Some issues look to be specific to Windows, but it should be possible
to reproduce most issues on Linux as well. Sometimes, it's just that
some specific Windows buildbot workers are slower than other buildbot
workers.
Good luck ;-)
Night gathers, and now my watch begins. It shall not end until my death.
I have been working on a feature of a deterministic profiler
(github.com/sumerc/yappi). The feature is about collecting arguments for
a given set of function names. For example: you can define something like
"foo 1,3,arg_name", and when the foo function is called, the profiler will
simply collect the first and third arguments from *args and the named
argument arg_name from **kwargs.
For Python functions I am using the following approach (please note that
this code executes in the PyTrace_CALL event of the profiler, on the C
side): look at co_argcount and co_varnames to determine the names of the
arguments, and then use these names to retrieve values from f_locals. It
seems to be working fine for now. My first question is: is there a better
way to do this?
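A pure-Python sketch of that approach (argument names from co_argcount/co_varnames, values from f_locals); the C PyTrace_CALL handler does the equivalent through the frame and code object APIs. The function name "foo" and the collected dict are illustrative:

```python
import sys

collected = []

def tracer(frame, event, arg):
    # On a "call" event, the first co_argcount entries of co_varnames
    # are the argument names; f_locals already holds their bound values.
    if event == "call" and frame.f_code.co_name == "foo":
        code = frame.f_code
        names = code.co_varnames[:code.co_argcount]
        collected.append({name: frame.f_locals[name] for name in names})
    return None

def foo(a, b, arg_name=None):
    return a + b

sys.settrace(tracer)
foo(1, 2, arg_name="x")
sys.settrace(None)

print(collected)  # [{'a': 1, 'b': 2, 'arg_name': 'x'}]
```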
And for C functions, I am totally in the dark. I have played with
f_valuestack and retrieved some values from there, but the indexes seem
to change from Python version to version, and I also think there is no
way of getting named arguments that way.
I have been dealing with this for a while, so there might be unclear
points in my explanation. I would really appreciate it if anyone could
point me in the right direction.
I wrote a PR to fix the following urllib security vulnerability:
"urlparse of urllib returns wrong hostname"
While writing my fix, I found another issue about "[" and "]"
characters in the user:password section of a URL:
"urllib IPv6 parsing fails with special characters in passwords"
My PR tries to validate the "scope" part of
"http://[IPv6%scope]/...": reject "%", "[" and "]" in scope. But I'm
not sure that Python should really support the scope in a URL. Should
we just reject URLs with "%scope"? Or if we allow it, which characters
should be allowed and/or rejected?
It seems like Firefox and Chromium don't support an IPv6 address with a
scope: when I type http://[::1%1]/, they open a Google search on this
URL.
I tested Python urllib.request.urlopen() with my PR:
http://[::1%1]:8080/ works as expected: it opens a connection to the
IPv6 localhost in the loopback interface (TCP port 8080).
Currently, my PR allows "%scope" but it rejects "%", "[" and "]"
characters in the scope.
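For reference, here is what plain urlsplit() does with such a URL (a small sketch of the current splitting behavior; my PR only changes the validation, not this basic parsing):

```python
from urllib.parse import urlsplit

# The bracketed host is extracted verbatim, scope included; today no
# character validation is applied to the "%scope" part of the host.
parts = urlsplit("http://[::1%1]:8080/")
print(parts.hostname)  # ::1%1
print(parts.port)      # 8080
```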
I'll let you go through these two RFCs about the IPv6 scope / "zone identifier":
In early November this year I'll be leading the first ever Python
'EnHackathon' at the Ensoft Cisco office - we anticipate having ~10
first-time contributors, all with experience writing C and/or Python (and
training in both). We each have up to 5 days (per year) to contribute, with
my intention being to focus on contributing to CPython. We will be blogging
about our experience (probably using GitHub pages) - I'll send out the URL
when it's been set up.
Having spoken to people about this at PyCon UK this year and emailed on the
core-mentorship mailing list, I'm posting here looking for any core devs
who would be happy to provide us with some guidance. I'm wary of PR reviews
being a blocker, and not wanting my co-contributors to feel disheartened by
issues they're working on not reaching a resolution.
We're open to working on almost any area of CPython, although particular
areas of interest/familiarity might be: CPython core, re, unittest,
subprocess, asyncio, ctypes, typing. There would be scope for us to work
in small teams on more substantial issues if that is seen as a useful
way to contribute; otherwise we can start with some of the easier issues
on the bug tracker.
Would anyone here be willing to offer some support to help us reach our
full potential? Please don't hesitate to contact me if you're interested in
any way, or if you have any advice.
If this year is a success there's a high probability we would look to do a
similar thing in future years (with the experience from this year already
in the bag)!
Since we have two competing proposals for the Steering Council to
consider when it comes to the target release date for Python 3.9.0,
Steve, Łukasz and I put together a short-ish overview of the common
set of problems that the two PEPs are both attempting to address, as
well as the common risks that they're attempting to mitigate.
If folks have questions or suggestions regarding the specific
proposals, those discussion threads can be found on Discourse here:
PEP 602 (12 month release cadence):
PEP 605 (24 month traditional release cadence with a rolling beta stream):
There's also a thread for the new overview PEP:
I expect most comments will be best targeted at the threads for the
specific proposals, but the PEP 607 thread should still be useful if
folks have questions or comments about the overview PEP itself.
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
I have been compiling Python 3.8 from source and have had a really difficult time
getting _ctypes to compile. I see that libffi is no longer distributed with the
Python source code, in preference for what is on the system. I searched for a
PEP that describes the rationale behind this, but my Google-fu must be weak.
I have also seen requests that a patch be committed to make configuring the
use of libffi easier, but as far as I can tell, these have not been committed. It is
something I would like to see, as I am in a situation where I cannot depend on the
system libffi - we support older Linux distributions that don't have libffi - and so I
am building a static libffi to be linked in.
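A rough sketch of one way to do this (paths are illustrative; this relies on the CPython build picking up -I/-L directories from CPPFLAGS and LDFLAGS when it searches for ffi.h):

```shell
# Build a static, position-independent libffi into a private prefix.
cd libffi
./configure --prefix=/opt/libffi --disable-shared --with-pic
make && make install

# Point the CPython build at it before configuring.
cd ../cpython
CPPFLAGS="-I/opt/libffi/include" LDFLAGS="-L/opt/libffi/lib" ./configure
make
```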
Any guidance on this issue would be helpful.