Since my latest benchmark for PEP 580 involved SageMath, which is
quite a big project, I instead propose a much simpler benchmark:
mistune is a Markdown parser implemented in Python. It optionally
allows Cython compilation. It doesn't use any kind of optimization
beyond that, but I created a branch to use extension types instead
of Python classes.
Cython can either use built-in functions/methods or a custom class
(which is not optimized now, but which would be with PEP 580).
I benchmarked mistune with custom classes (binding=True, the
default) and with built-in functions/methods (binding=False). This
is the median time of 5 runs:
So this shows again that PEP 580 improves performance in actual
real-world use cases.
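The measurement method described above can be sketched as follows. This is an illustrative harness, not the exact one used for the numbers in this post; the stand-in renderer keeps it self-contained, and with mistune installed you would pass mistune.markdown instead.

```python
import timeit
from statistics import median

# Time a render function repeatedly and report the median of 5 runs,
# as the benchmark above does.
def bench(render, text, runs=5, number=100):
    times = timeit.repeat(lambda: render(text), repeat=runs, number=number)
    return median(times)

# Stand-in renderer so this sketch runs without mistune installed.
def fake_render(text):
    return "<p>%s</p>" % text

elapsed = bench(fake_render, "hello *world*")
```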
Hello Python list,
I intend to cross-compile Python v3.6.6 to the Threos ( https://threos.io )
operating system. Threos supports a fairly large subset of POSIX and
C89/C99. Unfortunately, Threos lacks fork(2), but provides
posix_spawn(3) instead. I already made some local changes in
posixmodule.c to get it to compile, because some features are detected
as present but are actually not supported, like HAVE_FORK -- I blame
autotools for this :-). I don't know, however, whether Python is
expected to cross-compile out of the box.
My question is: can _posixsubprocess.c be adapted to use
posix_spawn(3) instead of fork(2)? Maybe the UNIX/Linux version could
also benefit from it; see:
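For illustration, here is a minimal fork-free child launch. CPython later grew a direct binding for this call, os.posix_spawn (added in 3.8, so after the 3.6.6 in question); on a platform like Threos that has posix_spawn(3) but no fork(2), _posixsubprocess would need to take a similar path instead of fork+exec.

```python
import os
import sys

# Spawn a child process without fork(2): posix_spawn combines the
# fork+exec pair into a single call.
pid = os.posix_spawn(
    sys.executable,
    [sys.executable, "-c", "import sys; sys.exit(0)"],
    os.environ,
)
# Reap the child and collect its exit status.
_, status = os.waitpid(pid, 0)
```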
On Thu, Jul 26, 2018 at 2:05 PM, Tim Golden <webhook-mailer(a)python.org> wrote:
> commit: 6a62e1d365934de82ff7c634981b3fbf218b4d5f
> branch: master
> author: Tim Golden <mail(a)timgolden.me.uk>
> committer: GitHub <noreply(a)github.com>
> date: 2018-07-26T22:05:00+01:00
> bpo-34239: Convert test_bz2 to use tempfile (#8485)
> * bpo-34239: Convert test_bz2 to use tempfile
> test_bz2 currently uses the test.support.TESTFN functionality which creates a temporary file local to the test directory named around the pid.
> This can give rise to race conditions where tests are competing with each other to delete and recreate the file.
Per the other thread --
this seems like a mis-statement of the problem, as the tests do
properly clean up after themselves. The leading hypothesis is that
unrelated Windows processes (e.g. virus scanners) are delaying the
deletion.
> This change converts the tests to use tempfile.mkstemp which gives a different file every time from the system's temp area
> M Lib/test/test_bz2.py
> diff --git a/Lib/test/test_bz2.py b/Lib/test/test_bz2.py
> index 003497f28b16..e62729a5a2f8 100644
> --- a/Lib/test/test_bz2.py
> +++ b/Lib/test/test_bz2.py
> @@ -6,6 +6,7 @@
> import os
> import pickle
> import glob
> +import tempfile
> import pathlib
> import random
> import shutil
> @@ -76,11 +77,14 @@ class BaseTest(unittest.TestCase):
> BIG_DATA = bz2.compress(BIG_TEXT, compresslevel=1)
> def setUp(self):
> - self.filename = support.TESTFN
> + fd, self.filename = tempfile.mkstemp()
> + os.close(fd)
> def tearDown(self):
> - if os.path.isfile(self.filename):
> - os.unlink(self.filename)
> + try:
> + os.unlink(self.filename)
> + except FileNotFoundError:
> + pass
> class BZ2FileTest(BaseTest):
> Python-checkins mailing list
My apologies if this is not the appropriate list for a question essentially concerning the AIX port of Python.
The current port of Python for AIX includes composing an export file (/lib/python2.7/config/python.exp) in which there are a number of functions starting "Py_" or "_Py_".
The Vim package for AIX is built referencing the python.exp file and, unfortunately, when functions are removed from libpython, even ones that are never called, the vim command detects missing symbols.
In the most recent case (May 2017), the functions _Py_hgidentity, _Py_hgversion and _Py_svnversion were replaced/removed; see "bpo-27593: Get SCM build info from git instead of hg (#1327)".
Is it correct to assume that the "_Py_" functions are internal (private name space) and should/must not be called by or made visible to application code?
Could you indicate a URL to the authoritative API documentation?
Thanks for your replies.
Is PEP 572 implemented? If not, who is working on it? Is there a WIP
pull request? An open issue?
One month ago, I tried Chris Angelico's implementation, but it
implemented an old version of the PEP, which has evolved in the meanwhile.
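For reference, the syntax the PEP proposes in its accepted form looks like this (the snippet itself needs an interpreter that already implements assignment expressions, which is exactly what the question above is about):

```python
import re

# An assignment expression binds m inside the comprehension's filter,
# so the match object can be reused in the output expression.
data = ["alpha", "beta", ""]
matches = [m.group(0) for s in data if (m := re.match(r"\w+", s))]
```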
[trying again; sorry if it shows up twice]
I'm just easing back into core development work by trying to get a
stable testing environment for Python development on Windows.
One problem is that certain tests use support.TESTFN (a local filename
constructed from the pid) for output files etc. However, this can cause
issues on Windows when recreating the folders/files for multiple
tests, especially when running in parallel.
Here's an example on my laptop deliberately running 3 tests with -j0
which I know will generate an error about one time in three:
C:\work-in-progress\cpython>python -mtest -j0 test_urllib2 test_bz2
Running Debug|Win32 interpreter...
Run tests in parallel using 6 child processes
0:00:23 [1/3/1] test_urllib2 failed
test test_urllib2 failed -- Traceback (most recent call last):
File "C:\work-in-progress\cpython\lib\test\test_urllib2.py", line
821, in test_file
f = open(TESTFN, "wb")
PermissionError: [Errno 13] Permission denied: '@test_15564_tmp'
Although these errors are both intermittent and fairly easily spotted,
the effect is that I rarely get a clean test run when I'm applying a patch.
I started to address this years ago but things stalled. I'm happy to
pick it up again and have another go, but I wanted to ask first
whether there is any objection to my converting the tests to use
tempfile functions, which should avoid the problem.
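The proposed conversion can be sketched like this: instead of the shared support.TESTFN name, each test takes a fresh file from the system temp area, so parallel test processes cannot collide, and teardown tolerates the file already being gone.

```python
import os
import tempfile

# setUp equivalent: a unique file from the system temp area.
fd, filename = tempfile.mkstemp()
os.close(fd)

# The test body writes to its private file.
with open(filename, "wb") as f:
    f.write(b"some test data")

# tearDown equivalent: remove it, ignoring a missing file.
try:
    os.remove(filename)
except FileNotFoundError:
    pass
```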
I finally managed to get some real-life benchmarks for why we need a
faster C calling protocol (see PEPs 576, 579, 580).
I focused on the Cython compilation of SageMath. By default, a function
in Cython is an instance of builtin_function_or_method (analogously,
method_descriptor for a method), which has special optimizations in the
CPython interpreter. But the option "binding=True" changes those to a
custom class which is NOT optimized.
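The distinction can be seen from plain Python, independent of Cython: a builtin_function_or_method gets a fast path in the interpreter, while an equivalent pure-Python wrapper is an ordinary function object going through the generic call machinery. This only illustrates the two call paths, not the Cython binding option itself.

```python
import timeit
from types import BuiltinFunctionType, FunctionType

# A pure-Python wrapper around the builtin len.
def py_len(x):
    return len(x)

# len is a builtin_function_or_method; py_len is a plain function.
assert isinstance(len, BuiltinFunctionType)
assert isinstance(py_len, FunctionType)

# Compare the call overhead of the two (absolute numbers vary by machine).
t_builtin = timeit.timeit("len(s)", setup="s = 'x' * 100", number=50000)
t_wrapped = timeit.timeit("py_len(s)", setup="s = 'x' * 100",
                          globals={"py_len": py_len}, number=50000)
```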
I ran the full SageMath testsuite several times without and with
binding=True to look for significant differences. The most dramatic
difference is multiplication of generic matrices. More precisely, with
the following command:
python -m timeit -s "from sage.all import MatrixSpace, GF; M =
MatrixSpace(GF(9), 200).random_element()" "M * M"
With binding=False, I got
10 loops, best of 3: 692 msec per loop
With binding=True, I got
10 loops, best of 3: 1.16 sec per loop
This is a big regression which should be gone completely with PEP 580.
I should mention that this was done on Python 2.7.15 (SageMath is not
yet ported to Python 3) but I see no reason why the conclusions
shouldn't be valid for newer Python versions. I used SageMath 8.3.rc1
and Cython 0.28.4.
I hope that this finally shows that the problems mentioned in PEP 579 are real.
I encountered a problem with the Windows packaging of gPodder: the
basic libraries (zlib, openssl) that Python's platform-specific
modules depend on are loaded in this order of preference:
1. from lib-dynload (where they are not)
2. from the Windows directory (can be any version)
3. from the binary directory, next to gpodder.exe (where they are)
So an old zlib1.dll installed by another application in C:\Windows was
loaded; it was incompatible with libpng, and gPodder couldn't start.
I don't know what the best approach to solve it is:
- copy those libraries to lib\pythonxx\lib-dynload (works)
- preload them in my main script before they are loaded by the module
- patch something in Python (dynload_win.c?) to search the
executable directory first (not tried)
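The second option (preloading) could look roughly like this; the DLL names here are illustrative. Loading the copies shipped next to the executable first means later imports resolve against the already-loaded modules rather than a stale copy from C:\Windows. On a non-Windows machine the loop simply finds no files and does nothing.

```python
import ctypes
import os
import sys

# Load the DLLs that ship next to the executable before any extension
# module triggers the default Windows search order.
exe_dir = os.path.dirname(os.path.abspath(sys.executable))
preloaded = []
for name in ("zlib1.dll", "ssleay32.dll", "libeay32.dll"):
    path = os.path.join(exe_dir, name)
    if os.path.exists(path):
        preloaded.append(ctypes.CDLL(path))
```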
Please can you provide me with insight on this?
Details in the issue: