Hello, All,

We have been tracking Python performance over the last 1.5 years, and results (along with other languages) are published daily at this site: https://languagesperformance.intel.com/

The general regression trend discussed here matches what we observed. The Python sources are pulled and built, and the results are published daily, following exactly the same process, on exactly the same hardware, running exactly the same operating system image.

Take Django_v2 as an example, with 2.7:

Default build: comparing the 2/10/2017 commit 54c93e0fe79b0ec7c9acccc35dabae2ffa4d563a with the 8/27/2015 commit 514f5d6101752f10758c5b89e20941bc3d13008a, the regression is 2.5%.
PGO build: comparing the 2/10/2017 commit 54c93e0fe79b0ec7c9acccc35dabae2ffa4d563a with the 8/27/2015 commit 514f5d6101752f10758c5b89e20941bc3d13008a, the regression is 0.47%. (A short worked sketch of this arithmetic appears at the very end of this thread.)

We turned off hyperthreading, turbo, and ASLR, and pinned the CPU frequency to a constant value to mitigate run-to-run variation.

Currently we are only running a limited number of micro-benchmarks, but we plan to run a broader range of benchmarks/workloads. The one under consideration to start with is the Python benchmark suite (all of it): https://github.com/python/performance

We'd love to hear feedback on how to best monitor Python code changes and performance, and how to present (look and feel, charts, etc.) and communicate the results.

Thanks,

Peter

-----Original Message-----
From: Python-Dev [mailto:python-dev-bounces+peter.xihong.wang=intel.com@python.org] On Behalf Of Louis Bouchard
Sent: Friday, March 03, 2017 7:27 AM
To: Victor Stinner <victor.stinner@gmail.com>
Cc: Barry Warsaw <barry@python.org>; Nick Coghlan <ncoghlan@gmail.com>; Python-Dev <python-dev@python.org>
Subject: Re: [Python-Dev] Help requested with Python 2.7 performance regression

Hello,

On 03/03/2017 at 15:37, Louis Bouchard wrote:
Hello,
On 03/03/2017 at 15:31, Victor Stinner wrote:
Out of curiosity, I ran the set of benchmarks in two LXC containers running CentOS 7 (2.7.5 + gcc 4.8.5) and Fedora 25 (2.7.13 + gcc 6.3.x). The benchmarks run faster in 18 cases, slower in 12, and the difference is insignificant for the rest (~33, from memory).
"faster" or "slower" is relative: I would like to see the ?.??x faster/slower or percent value. Can you please share the result? I don't know what is the best output: python3 -m performance compare centos.json fedora.json or the new: python3 -m perf compare_to centos.json fedora.json --table --quiet
Victor
All the results, including the latest, are in the spreadsheet here (cited in the analysis document):
https://docs.google.com/spreadsheets/d/1pKCOpyu4HUyw9YtJugn6jzVGa_zeDmBVNzqmXHtM6gM/edit#gid=1548436297
The third column is the ?.??x value that you are looking for, taken directly from the 'pyperformance analyze' results.
I didn't know about the new options; I'll give them a spin and see if I can get a better format.
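For anyone who wants to slice the numbers differently than the command-line output, the same comparison can also be done programmatically. This is only a rough sketch, assuming the perf module's Python API (the project is distributed as pyperf nowadays); the file names are the same placeholder names as in the commands above:

    # Rough sketch, not a drop-in replacement for "python3 -m perf compare_to".
    # Assumes the pyperf API (the module was simply called 'perf' back then).
    import pyperf

    ref = pyperf.BenchmarkSuite.load("centos.json")      # reference run
    changed = pyperf.BenchmarkSuite.load("fedora.json")  # run being compared

    changed_by_name = {b.get_name(): b for b in changed.get_benchmarks()}

    for bench in ref.get_benchmarks():
        other = changed_by_name.get(bench.get_name())
        if other is None:
            continue  # benchmark only present in one file
        ratio = other.mean() / bench.mean()  # > 1.0 means slower than the reference
        print("%-30s %.2fx (%+.1f%%)" % (bench.get_name(), ratio, (ratio - 1.0) * 100.0))

compare_to essentially does this, plus significance checks, so the command-line output remains the more trustworthy summary.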
All the benchmark data using the new format have been uploaded to the spreadsheet. Each sheet is prefixed with pct_.

HTH,

Kind regards,

...Louis

--
Louis Bouchard
Software engineer, Cloud & Sustaining eng.
Canonical Ltd
Ubuntu developer / Debian Maintainer
GPG : 429D 7A3B DD05 B6F8 AF63 B9C4 8B3D 867C 823E 7A61
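Worked example referenced in the first message above: the regression percentages quoted for Django_v2 presumably boil down to the relative change in mean run time between the two builds. A minimal sketch of that arithmetic; the timing values below are invented for illustration, not measurements from languagesperformance.intel.com:

    # Regression percentage between two builds of the same benchmark,
    # assuming "regression" means the relative change in mean run time.
    # The timings below are hypothetical, for illustration only.
    def regression_pct(old_mean, new_mean):
        """Positive result = the newer build is slower (a regression)."""
        return (new_mean - old_mean) / old_mean * 100.0

    old_mean = 0.400   # hypothetical mean timing of the 8/27/2015 build (seconds)
    new_mean = 0.410   # hypothetical mean timing of the 2/10/2017 build (seconds)
    print("regression: %.2f%%" % regression_pct(old_mean, new_mean))  # -> regression: 2.50%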