Hi Peter,
Thank you for your bug report. The bug is a recent regression in the
perf module. I just fixed it with the newly released perf 0.9.3.
http://perf.readthedocs.io/en/latest/changelog.html#version-0-9-3-2017-01-16
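For context: on Windows the failure comes from perf passing pass_fds to
subprocess.Popen when spawning its worker processes, which subprocess does
not support there (see the traceback below). Here is a minimal sketch of the
kind of guard that is needed, with made-up names; it is not the actual perf
patch:

import subprocess
import sys

def spawn_worker(cmd, env, pipe_fd):
    # Hypothetical helper, not the real perf code: pass_fds is only
    # supported by subprocess.Popen on POSIX, so it has to be guarded.
    kw = {}
    if sys.platform != 'win32':
        kw['pass_fds'] = [pipe_fd]
    # On Windows the pipe would have to be handed to the worker some other
    # way (for example through an inheritable handle); that part is out of
    # scope for this sketch.
    return subprocess.Popen(cmd, env=env, **kw)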
I also released performance 0.5.1, which now uses perf 0.9.3. By the
way, I also fixed the perf and performance unit tests on Windows.
Changes in performance 0.5.1:
* Fix Windows support (upgrade perf from 0.9.0 to 0.9.3)
* Upgrade requirements:
- Chameleon: 2.25 => 3.0
- Django: 1.10.3 => 1.10.5
- docutils: 0.12 => 0.13.1
- dulwich: 0.15.0 => 0.16.3
- mercurial: 4.0.0 => 4.0.2
- perf: 0.9.0 => 0.9.3
- psutil: 5.0.0 => 5.0.1
Victor
Hi all,
I'd like to run pyperformance (https://github.com/python/performance)
to explore Python 3.6 x64 speed on Windows. Apparently, "Windows is
now supported" since version 0.1.2, but I'm seeing errors such as the
one below when trying to run pyperformance, which at first glance seem
pretty fatal for doing anything on Windows - am I missing something?
-----
Execute: venv\cpython3.6-68187c45ff81\Scripts\python.exe -m performance run --inside-venv
Python benchmark suite 0.5.0
INFO:root:Skipping Python2-only benchmark pyflate; not compatible with Python sys.version_info(major=3, minor=6, micro=0, releaselevel='final', serial=0)
INFO:root:Skipping Python2-only benchmark spambayes; not compatible with Python sys.version_info(major=3, minor=6, micro=0, releaselevel='final', serial=0)
INFO:root:Skipping Python2-only benchmark hg_startup; not compatible with Python sys.version_info(major=3, minor=6, micro=0, releaselevel='final', serial=0)
[ 1/51] 2to3...
INFO:root:Running `c:\program files\python36\venv\cpython3.6-68187c45ff81\Scripts\python.exe c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\benchmarks\bm_2to3.py --output C:\Users\Peter\AppData\Local\Temp\tmpwc1mjawc`
Traceback (most recent call last):
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\benchmarks\bm_2to3.py", line 30, in <module>
    runner.bench_func('2to3', bench_2to3, command, devnull_out)
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py", line 573, in bench_func
    return self._main(name, sample_func, inner_loops)
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py", line 496, in _main
    bench = self._master()
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py", line 735, in _master
    bench = self._spawn_workers()
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py", line 698, in _spawn_workers
    worker_bench = self._spawn_worker_bench(calibrate)
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py", line 631, in _spawn_worker_bench
    suite = self._spawn_worker_suite(calibrate)
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py", line 617, in _spawn_worker_suite
    proc = subprocess.Popen(cmd, env=env, **kw)
  File "c:\program files\python36\lib\subprocess.py", line 707, in __init__
    restore_signals, start_new_session)
  File "c:\program files\python36\lib\subprocess.py", line 961, in _execute_child
    assert not pass_fds, "pass_fds not supported on Windows."
AssertionError: pass_fds not supported on Windows.
ERROR: Benchmark 2to3 failed: Benchmark died
Traceback (most recent call last):
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\run.py", line 126, in run_benchmarks
    bench = func(cmd_prefix, options)
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\benchmarks\__init__.py", line 121, in BM_2to3
    return run_perf_script(python, options, "2to3")
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\run.py", line 92, in run_perf_script
    run_command(cmd, hide_stderr=not options.verbose)
  File "c:\program files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\run.py", line 61, in run_command
    raise RuntimeError("Benchmark died")
RuntimeError: Benchmark died
-----
Thanks,
Peter
2017-01-03 0:56 GMT+01:00 Victor Stinner <victor.stinner(a)gmail.com>:
> I'm now trying to run benchmarks one more time... ;-)
Good news: results are slowly being uploaded to https://speed.python.org/
It takes about 50 minutes to benchmark one revision: first compile
Python, run the test suite to train the compiler, recompile using the
profile data, then run the benchmarks.
I have a list of 32 revisions. Running benchmarks on these revisions
will take longer than one day (26 hours).
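For those checking the math, the estimate is just:

>>> 32 * 50 / 60.0   # 32 revisions x 50 minutes each, in hours
26.666666666666668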
Victor
Hi,
tl;dr I'm trying to run benchmarks with PGO, but I get new errors, so
speed.python.org is currently broken.
Last December, I upgraded the speed-python server (the server used to run
benchmarks for CPython) from Ubuntu 14.04 (LTS) to 16.04 (LTS) to be
able to compile Python with PGO. On Ubuntu 14.04, GCC failed with an
internal error. Sadly, I lost my sudo permission during the upgrade: it
took almost one month to get it back.
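For reference, the build is driven from a script and looks roughly like the
sketch below; the exact configure options used on the speed-python server may
differ (this is an assumption, not the actual script):

import subprocess

def build_cpython(src_dir):
    # Sketch of an LTO+PGO build; flags are assumptions, not necessarily
    # the exact speed-python configuration.
    subprocess.check_call(['./configure', '--with-lto',
                           '--enable-optimizations'], cwd=src_dir)
    # With --enable-optimizations, "make" runs the PGO training workload
    # and then recompiles Python using the collected profiles.
    subprocess.check_call(['make'], cwd=src_dir)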
I removed all results from speed.python.org so that I could start publishing
new benchmark results from LTO+PGO builds. Sadly, I ran into new and not
very funny errors.
The first benchmark on the 2.7 branch failed because of a recent
regression specific to the 2.7 branch (now fixed):
http://bugs.python.org/issue28871#msg284145
I fixed my configuration file to only test a list of revisions instead
of starting by testing all branches. Once again, the benchmarks failed:
this time, "python3 -m performance venv recreate" failed while creating
the virtual environment:
* It seems that "python -m ensurepip --verbose" failed, whereas it
usually works fine. I couldn't figure out how or why the command failed.
* Running get-pip.py (https://bootstrap.pypa.io/get-pip.py) also
failed, for an unknown reason.
* virtualenv also failed, probably because the system virtualenv
command was outdated (virtualenv didn't copy _collections_abc into the
venv, which is required by Python 3.6).
performance has multiple functions to create the venv, but all failed.
I'm really unlucky.
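For the curious, the fallback chain looks roughly like this simplified
sketch; the names are made up, it is not the actual performance code, and it
assumes get-pip.py has already been downloaded into the current directory:

import subprocess
import sys

def create_venv(venv_dir):
    # Each fallback tries to produce a virtual environment with a working
    # pip; stop at the first method that succeeds.
    venv_python = venv_dir + '/bin/python'   # Scripts\python.exe on Windows

    # 1) stdlib venv, which bootstraps pip through ensurepip
    if subprocess.call([sys.executable, '-m', 'venv', venv_dir]) == 0:
        return True

    # 2) stdlib venv without pip, then bootstrap pip with get-pip.py
    if (subprocess.call([sys.executable, '-m', 'venv',
                         '--without-pip', venv_dir]) == 0
            and subprocess.call([venv_python, 'get-pip.py']) == 0):
        return True

    # 3) last resort: the external virtualenv command
    return subprocess.call([sys.executable, '-m', 'virtualenv', venv_dir]) == 0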
I modified my scripts that run the benchmarks (the scripts/ directory of
the performance project) to handle errors more gracefully: for example,
they no longer fail immediately, but log errors and continue.
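Roughly, the runner loop now looks like this (a simplified sketch, not the
actual scripts; benchmark_revision stands for whatever callable runs the
suite for one revision):

import logging

def run_all(revisions, benchmark_revision):
    # Log failures and move on to the next revision instead of aborting
    # the whole run on the first broken revision.
    for rev in revisions:
        try:
            benchmark_revision(rev)
        except Exception:
            logging.exception("Benchmarks failed for revision %s", rev)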
I also modified performance to flush stdout and stderr before running a
subprocess, so that the logs are not garbled when stdout and/or stderr
are redirected.
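In other words, something along these lines around each subprocess call (a
sketch of the idea, not the exact patch):

import subprocess
import sys

def run_cmd(cmd):
    # Flush our own buffered output before the child starts writing, so
    # that when stdout/stderr are redirected to a file the parent's and
    # the child's output do not end up interleaved out of order.
    sys.stdout.flush()
    sys.stderr.flush()
    return subprocess.call(cmd)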
I'm now trying to run benchmarks one more time... ;-)
Victor