[Speed] performance: remove "pure python" pickle benchmarks?

Victor Stinner victor.stinner at gmail.com
Tue Apr 4 07:43:06 EDT 2017


2017-04-04 12:06 GMT+02:00 Serhiy Storchaka <storchaka at gmail.com>:
> I consider it a benchmark of the Python interpreter itself.

Don't we have enough benchmarks to test the Python interpreter?

I would prefer to have more realistic use cases than "reimplement
pickle in pure Python".

"unpickle_pure_python" name can be misleading as well to users
exploring speed.python.org data, no?

By the way, I'm open to adding new benchmarks ;-)


> Unfortunately the Python code is different in different versions. Maybe
> write a single cross-version pickle implementation and use it with these
> benchmarks?

That's another good reason to remove the benchmark :-)

Other benchmarks use pinned versions of dependencies (see
performance/requirements.txt), so the code should be more or less the
same on all Python versions. (Except for code differences between
Python 2 and Python 3.)
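
To illustrate what I mean by pinning (made-up lines for illustration
only, not the actual contents of performance/requirements.txt):

    # hypothetical pins: each dependency locked to one exact version
    django==1.11
    chameleon==3.1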


> And indeed, they
> show progress from 3.5 to 3.7. A slow regression in unpickling in 3.5 may
> be related to adding stronger validation (but may not be related).

3.7 is faster than 3.5. Are you running benchmarks with LTO + PGO +
perf system tune?
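
For reference, this is the kind of setup I mean (just a sketch; the
exact configure option names depend on the CPython version you build):

    # build CPython with PGO + LTO
    ./configure --enable-optimizations --with-lto
    make
    # reduce system jitter before running benchmarks
    python3 -m perf system tune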

https://speed.python.org/comparison/?exe=12%2BL%2Bmaster%2C12%2BL%2B3.5&ben=647%2C675&env=1&hor=true&bas=12%2BL%2B3.5&chart=normal+bars

Victor

