Starting today, while reviewing the "Summary of Python tracker issues",
I get the following for about half the clicks.
"""
Secure Connection Failed
An error occurred during a connection to bugs.python.org. A PKCS #11
module returned CKR_DEVICE_ERROR, indicating that a problem has occurred
with the token or slot. Error code: SEC_ERROR_PKCS11_DEVICE_ERROR
The page you are trying to view cannot be shown because the
authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem.
[ Try Again ]
"""
So far, the Try Again button has always worked.
I believe I upgraded Firefox (to 50.0) just last night, or possibly the
night before.
--
Terry Jan Reedy
When I added Python 3.6 support to coverage.py, I posted a Mac wheel to
PyPI: https://pypi.python.org/pypi/coverage/ That wheel was built
against 3.6a3, the latest version at the time. When I use it now on
3.6b3, it doesn't work right. The reason is that the co_firstlineno
field in PyCodeObject moved in September, as part of the PEP 523 work:
https://github.com/python/cpython/commit/f1e6d88b3ca2b56d51d87b6b38ea1870c5…
The docs say that PyCodeObject can change at any time, but I don't see
why the field had to move in the first place. Was this needed?
Am I doing the wrong thing by using PyCodeObject fields directly in the
coverage.py C trace function? It seems like this was an unnecessary
breaking change, even if it is a non-guaranteed interface.
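For comparison, the attribute surface of code objects is documented and stable, so going through attribute lookup (even from C, via PyObject_GetAttrString) avoids depending on the struct layout. A minimal Python sketch of the stable route:

```python
# Sketch: co_firstlineno is part of the documented code-object attribute
# surface, so attribute access keeps working when the C struct layout moves.
def first_line(func):
    return func.__code__.co_firstlineno

ns = {}
exec("def g():\n    pass", ns)
assert first_line(ns["g"]) == 1  # g was defined on line 1 of the source
```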
BTW, there are some comments that should be updated based on that
commit; I can submit a patch to fix them.
--Ned.
(I assume that this list is appropriate for this topic, but if it
isn't, I will be grateful for being redirected to the appropriate
place.)
It seems that warnings behave differently in Python 2 and Python
3. Consider the following script.
****************
import warnings

def func():
    warnings.warn("Blabla", RuntimeWarning)

warnings.simplefilter("ignore")
func()
warnings.simplefilter("always")
func()
****************
When run with CPython 2.7, there is no output; when run with
CPython 3.5, there is the following output:
****************
/tmp/test.py:4: RuntimeWarning: Blabla
warnings.warn("Blabla", RuntimeWarning)
****************
Was there indeed a change in how warnings behave in Python 3? I
searched, but couldn't find any documentation for this.
Understanding how warnings work is especially important when
testing for them.
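One way to make tests independent of this behavioral difference (a sketch, assuming only the standard warnings module) is to capture warnings explicitly rather than relying on the interpreter's default once-per-location suppression:

```python
import warnings

def func():
    warnings.warn("Blabla", RuntimeWarning)

# Record warnings in a list instead of relying on the interpreter's
# per-module warning registry, whose invalidation rules appear to differ
# between Python 2 and Python 3.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # report every occurrence
    func()
    func()

assert len(caught) == 2
assert issubclass(caught[0].category, RuntimeWarning)
```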
This is how I stumbled across this, in case anyone is interested:
https://gitlab.kwant-project.org/kwant/kwant/issues/1#note_2608
I also note that the current documentation still uses
DeprecationWarning in the example of how to suppress a warning:
https://docs.python.org/dev/library/warnings.html#temporarily-suppressing-w….
Since DeprecationWarning is disabled by default in modern Python,
perhaps a different warning would provide a better example?
Hi Python-Dev,
I'm trying to get my head around what's accepted in f-strings --
https://www.python.org/dev/peps/pep-0498/ seems very light on the details
of what it accepts as an expression and how things should actually be
parsed (and the current implementation still doesn't seem to be in a state
for a final release, so I thought asking on python-dev would be a
reasonable option).
I was thinking there'd be some grammar for it (something like
https://docs.python.org/3.6/reference/grammar.html), but all I could find
related to this is a quote saying that f-strings should look like:
f ' <text> { <expression> <optional !s, !r, or !a> <optional : format
specifier> } <text>
So, given that, is it safe to assume that <expression> would be equal to
the "test" node from the official grammar?
I initially thought it obviously would be, but the PEP says that using a
lambda inside the expression would conflict because of the colon (which
wouldn't happen if a proper grammar were actually used for this parsing,
as there'd be no conflict: the lambda would properly consume the colon).
So, I guess some pre-parsing step takes place to separate the expression
before it is parsed, and I'm interested in knowing how exactly that should
work when the implementation is finished -- lots of plus points if there's
actually a grammar to back it up :)
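For what it's worth, the CPython 3.6 implementation appears to reject a bare lambda (its colon reads as the start of a format specifier), while a parenthesized lambda works; a quick sketch:

```python
# A bare lambda is rejected: in f"{lambda x: x}" the colon is taken as the
# start of the format specifier, so CPython raises a SyntaxError.
# Parenthesizing keeps the colon inside the parentheses:
value = f"{(lambda x: x * 2)(21)}"
assert value == "42"
```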
Thanks,
Fabio
Hi,
Antoine Pitrou asked me to compare Python 3.6 to Python 3.5. Here are
results on the speed-python server using LTO compilation (but not
PGO):
$ python3 -m perf compare_to 2016-11-03_15-37-3.5-89f7386104e2.json
2016-11-03_15-38-3.6-c4319c0d0131.json -G --min-speed=5
Slower (17):
- call_method_slots: 16.7 ms +- 0.2 ms -> 28.3 ms +- 0.7 ms: 1.70x slower
- call_method: 16.9 ms +- 0.2 ms -> 28.6 ms +- 0.8 ms: 1.69x slower
- call_method_unknown: 18.5 ms +- 0.2 ms -> 30.8 ms +- 0.8 ms: 1.67x slower
- python_startup: 20.3 ms +- 0.7 ms -> 26.9 ms +- 0.6 ms: 1.33x slower
- regex_compile: 397 ms +- 4 ms -> 482 ms +- 6 ms: 1.21x slower
- mako: 42.2 ms +- 0.5 ms -> 49.7 ms +- 2.5 ms: 1.18x slower
- deltablue: 17.8 ms +- 0.2 ms -> 20.1 ms +- 0.6 ms: 1.13x slower
- chameleon: 29.1 ms +- 0.4 ms -> 31.9 ms +- 0.4 ms: 1.10x slower
- genshi_text: 89.8 ms +- 4.2 ms -> 97.8 ms +- 1.1 ms: 1.09x slower
- pickle_pure_python: 1.30 ms +- 0.02 ms -> 1.41 ms +- 0.03 ms: 1.08x slower
- logging_simple: 35.8 us +- 0.7 us -> 38.4 us +- 0.7 us: 1.07x slower
- sqlalchemy_declarative: 331 ms +- 3 ms -> 354 ms +- 6 ms: 1.07x slower
- logging_format: 42.6 us +- 0.5 us -> 45.5 us +- 0.8 us: 1.07x slower
- call_simple: 13.1 ms +- 0.4 ms -> 13.9 ms +- 0.2 ms: 1.06x slower
- genshi_xml: 197 ms +- 2 ms -> 209 ms +- 2 ms: 1.06x slower
- richards: 190 ms +- 5 ms -> 201 ms +- 6 ms: 1.06x slower
- go: 607 ms +- 19 ms -> 640 ms +- 26 ms: 1.05x slower
Faster (15):
- xml_etree_iterparse: 498 ms +- 11 ms -> 230 ms +- 5 ms: 2.17x faster
- unpickle_list: 10.6 us +- 0.3 us -> 7.86 us +- 0.16 us: 1.35x faster
- xml_etree_parse: 345 ms +- 6 ms -> 298 ms +- 8 ms: 1.16x faster
- xml_etree_generate: 362 ms +- 5 ms -> 320 ms +- 8 ms: 1.13x faster
- scimark_lu: 589 ms +- 24 ms -> 523 ms +- 18 ms: 1.13x faster
- regex_effbot: 5.88 ms +- 0.06 ms -> 5.23 ms +- 0.05 ms: 1.13x faster
- sympy_sum: 269 ms +- 7 ms -> 244 ms +- 7 ms: 1.10x faster
- spectral_norm: 317 ms +- 6 ms -> 287 ms +- 2 ms: 1.10x faster
- sympy_expand: 1.25 sec +- 0.04 sec -> 1.15 sec +- 0.03 sec: 1.09x faster
- xml_etree_process: 291 ms +- 5 ms -> 268 ms +- 14 ms: 1.09x faster
- float: 336 ms +- 8 ms -> 310 ms +- 7 ms: 1.08x faster
- regex_dna: 323 ms +- 3 ms -> 298 ms +- 3 ms: 1.08x faster
- unpickle: 38.4 us +- 0.6 us -> 35.5 us +- 1.2 us: 1.08x faster
- crypto_pyaes: 267 ms +- 3 ms -> 249 ms +- 2 ms: 1.07x faster
- nbody: 262 ms +- 3 ms -> 246 ms +- 3 ms: 1.07x faster
Benchmark hidden because not significant (31): 2to3, chaos, (...)
---
call_method_* slowdown: http://bugs.python.org/issue28618
python_startup slowdown: http://bugs.python.org/issue28637
I didn't analyze results yet.
Victor
(Re-post without the two attached files of 100 KB each.)
Hi,
You may know that I'm working on benchmarks. I regenerated all
benchmark results of speed.python.org using performance 0.3.2
(benchmark suite). I started to analyze results.
All results are available online on the website:
https://speed.python.org/
To communicate on my work on benchmarks, I tweeted two pictures:
"sympy benchmarks: Python 3.6 is between 8% and 48% faster than Python
2.7 #python #benchmark":
https://twitter.com/VictorStinner/status/794289596683210760
"Python 3.6 is between 25% and 54% slower than Python 2.7 in the
following benchmarks":
https://twitter.com/VictorStinner/status/794305065708376069
Many people were disappointed that Python 3.6 can be up to 54% slower
than Python 2.7. In fact, I know many reasons which explain that, but
it's hard to summarize them in 140 characters ;-)
For example, Python 3.6 is 54% slower than Python 2.7 on the
crypto_pyaes benchmark. This benchmark tests a pure Python implementation
of the AES crypto cipher. You may know that CPython is slow for
CPU-intensive functions, especially on integer and floating-point
arithmetic.
"int" in Python 3 is now "long integers" by default, which is known to
be a little bit slower than "short int" of Python 2. On a more
realistic benchmark (see other benchmarks), the overhead of Python 3
"long int" is negligible.
AES is a typical integer-stressing example. For me, it's a dummy
benchmark: it doesn't make sense to use Python for AES, since modern CPUs
have a *hardware* implementation which is super fast.
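The int unification mentioned above can be seen directly; a small illustration (nothing assumed beyond Python 3 itself):

```python
# Python 3 has a single arbitrary-precision int type; Python 2 split a fast
# machine-word "int" from an arbitrary-precision "long". The unified type
# costs a little on small-integer-heavy code such as a pure-Python AES.
n = 2 ** 64  # larger than a 64-bit machine word
assert isinstance(n, int)
assert n + 1 - 1 == n  # arithmetic stays exact at any size
```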
Well, I didn't have time to analyze individual benchmarks in depth. If
you want to help me, here is the source code of the benchmarks:
https://github.com/python/performance/blob/master/performance/benchmarks/
Raw results of Python 3.6 compared to Python 2.7:
-------------------
$ python3 -m perf compare_to 2016-11-03_15-36-2.7-91f024fc9b3a.json.gz
2016-11-03_15-38-3.6-c4319c0d0131.json.gz -G --min-speed=5
Slower (40):
- python_startup: 7.74 ms +- 0.28 ms -> 26.9 ms +- 0.6 ms: 3.47x slower
- python_startup_no_site: 4.43 ms +- 0.08 ms -> 10.4 ms +- 0.4 ms: 2.36x slower
- unpickle_pure_python: 417 us +- 3 us -> 918 us +- 14 us: 2.20x slower
- call_method: 16.3 ms +- 0.2 ms -> 28.6 ms +- 0.8 ms: 1.76x slower
- call_method_slots: 16.2 ms +- 0.4 ms -> 28.3 ms +- 0.7 ms: 1.75x slower
- call_method_unknown: 18.4 ms +- 0.2 ms -> 30.8 ms +- 0.8 ms: 1.67x slower
- crypto_pyaes: 161 ms +- 2 ms -> 249 ms +- 2 ms: 1.54x slower
- xml_etree_parse: 201 ms +- 5 ms -> 298 ms +- 8 ms: 1.49x slower
- logging_simple: 26.4 us +- 0.3 us -> 38.4 us +- 0.7 us: 1.46x slower
- logging_format: 31.3 us +- 0.4 us -> 45.5 us +- 0.8 us: 1.45x slower
- pickle_pure_python: 986 us +- 9 us -> 1.41 ms +- 0.03 ms: 1.43x slower
- spectral_norm: 208 ms +- 2 ms -> 287 ms +- 2 ms: 1.38x slower
- logging_silent: 660 ns +- 7 ns -> 865 ns +- 31 ns: 1.31x slower
- chaos: 240 ms +- 2 ms -> 314 ms +- 4 ms: 1.31x slower
- go: 490 ms +- 2 ms -> 640 ms +- 26 ms: 1.31x slower
- xml_etree_iterparse: 178 ms +- 2 ms -> 230 ms +- 5 ms: 1.29x slower
- sqlite_synth: 8.29 us +- 0.16 us -> 10.6 us +- 0.2 us: 1.28x slower
- xml_etree_process: 210 ms +- 6 ms -> 268 ms +- 14 ms: 1.28x slower
- django_template: 387 ms +- 4 ms -> 484 ms +- 5 ms: 1.25x slower
- fannkuch: 830 ms +- 32 ms -> 1.04 sec +- 0.03 sec: 1.25x slower
- hexiom: 20.2 ms +- 0.1 ms -> 24.7 ms +- 0.2 ms: 1.22x slower
- chameleon: 26.1 ms +- 0.2 ms -> 31.9 ms +- 0.4 ms: 1.22x slower
- regex_compile: 395 ms +- 2 ms -> 482 ms +- 6 ms: 1.22x slower
- json_dumps: 25.8 ms +- 0.2 ms -> 31.0 ms +- 0.5 ms: 1.20x slower
- nqueens: 229 ms +- 2 ms -> 274 ms +- 2 ms: 1.20x slower
- genshi_text: 81.9 ms +- 0.6 ms -> 97.8 ms +- 1.1 ms: 1.19x slower
- raytrace: 1.17 sec +- 0.03 sec -> 1.39 sec +- 0.03 sec: 1.19x slower
- scimark_monte_carlo: 240 ms +- 7 ms -> 282 ms +- 10 ms: 1.17x slower
- scimark_sor: 441 ms +- 8 ms -> 517 ms +- 12 ms: 1.17x slower
- deltablue: 17.4 ms +- 0.1 ms -> 20.1 ms +- 0.6 ms: 1.16x slower
- sqlalchemy_declarative: 310 ms +- 3 ms -> 354 ms +- 6 ms: 1.14x slower
- call_simple: 12.2 ms +- 0.2 ms -> 13.9 ms +- 0.2 ms: 1.14x slower
- scimark_fft: 613 ms +- 19 ms -> 694 ms +- 23 ms: 1.13x slower
- meteor_contest: 191 ms +- 1 ms -> 215 ms +- 2 ms: 1.13x slower
- pathlib: 46.9 ms +- 0.4 ms -> 52.6 ms +- 0.9 ms: 1.12x slower
- richards: 181 ms +- 1 ms -> 201 ms +- 6 ms: 1.11x slower
- genshi_xml: 191 ms +- 2 ms -> 209 ms +- 2 ms: 1.10x slower
- float: 290 ms +- 5 ms -> 310 ms +- 7 ms: 1.07x slower
- scimark_sparse_mat_mult: 8.19 ms +- 0.22 ms -> 8.74 ms +- 0.15 ms:
1.07x slower
- xml_etree_generate: 302 ms +- 3 ms -> 320 ms +- 8 ms: 1.06x slower
Faster (15):
- telco: 707 ms +- 22 ms -> 22.1 ms +- 0.4 ms: 32.04x faster
- unpickle_list: 15.0 us +- 0.3 us -> 7.86 us +- 0.16 us: 1.90x faster
- pickle_list: 14.7 us +- 0.2 us -> 9.12 us +- 0.38 us: 1.61x faster
- json_loads: 98.7 us +- 2.3 us -> 62.3 us +- 0.7 us: 1.58x faster
- pickle: 40.4 us +- 0.6 us -> 27.1 us +- 0.5 us: 1.49x faster
- sympy_sum: 361 ms +- 10 ms -> 244 ms +- 7 ms: 1.48x faster
- sympy_expand: 1.68 sec +- 0.02 sec -> 1.15 sec +- 0.03 sec: 1.47x faster
- regex_v8: 62.0 ms +- 0.5 ms -> 47.2 ms +- 0.6 ms: 1.31x faster
- sympy_str: 699 ms +- 22 ms -> 537 ms +- 15 ms: 1.30x faster
- regex_effbot: 6.67 ms +- 0.04 ms -> 5.23 ms +- 0.05 ms: 1.28x faster
- mako: 61.5 ms +- 0.7 ms -> 49.7 ms +- 2.5 ms: 1.24x faster
- html5lib: 298 ms +- 7 ms -> 261 ms +- 6 ms: 1.14x faster
- sympy_integrate: 55.9 ms +- 0.3 ms -> 51.8 ms +- 1.0 ms: 1.08x faster
- pickle_dict: 69.4 us +- 0.9 us -> 65.2 us +- 3.2 us: 1.06x faster
- scimark_lu: 551 ms +- 26 ms -> 523 ms +- 18 ms: 1.05x faster
Benchmark hidden because not significant (8): 2to3, dulwich_log,
nbody, pidigits, regex_dna, tornado_http, unpack_sequence, unpickle
Ignored benchmarks (3) of 2016-11-03_15-36-2.7-91f024fc9b3a.json:
hg_startup, pyflate, spambayes
-------------------
Please ignore the call_method, call_method_slots and call_method_unknown
benchmarks: it seems like I had an issue on the benchmark server. I
was unable to reproduce the 70% slowdown on my laptop.
JSON files if you want to analyze them yourself:
http://www.haypocalc.com/tmp/2016-11-03_15-36-2.7-91f024fc9b3a.json.gz
http://www.haypocalc.com/tmp/2016-11-03_15-38-3.6-c4319c0d0131.json.gz
I hope that my work on benchmarks will motivate some developers to look
closer at Python 3 performance to find interesting optimizations ;-)
Victor
Hi all
Those of you who follow me on Twitter (@zooba) may have noticed that I
posted one of my rare blog posts over the weekend about the increasing
range of Python installers available for Windows.
I figured I'd draw some attention here in case others are interested in
my rationale for why the main installer is how it is, and why I'm also
experimenting with other forms of installer or package:
http://stevedower.id.au/blog/why-so-many-python-installers/
Arguably this would make a good section in the docs or a PEP, though
it's very point-in-time content, so a blog seems fine for now. There's
also not really anything in the docs about how Linux distributors
package up CPython, so maybe it's best as entirely standalone content.
Cheers,
Steve
(In case anyone is suspicious, I earn no income from my blog and there
are no ads. So I hope I'm not promoting the link for selfish reasons :) )