[Python-checkins] UGLY Benchmark Results for Python 2.7 2016-02-12

lp_benchmark_robot at intel.com
Fri Feb 12 07:29:35 EST 2016


No new revisions. Here are the previous results:

Results for project Python 2.7, build date 2016-02-12 03:59:30 +0000
commit:		5715a6d9ff12
previous commit:	8c7a8c7a02b9
revision date:	2016-02-10 12:44:29 +0000
environment:	Haswell-EP
	cpu:		Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz 2x18 cores, stepping 2, LLC 45 MB
	mem:		128 GB
	os:		CentOS 7.1
	kernel:	Linux 3.10.0-229.4.2.el7.x86_64

Baseline results were generated using release v2.7.10, with hash 15c95b7d81dc
from 2015-05-23 16:02:14+00:00

----------------------------------------------------------------------------------
              benchmark   relative   change since   change since   current rev run
                          std_dev*       last run       baseline          with PGO
----------------------------------------------------------------------------------
:-)           django_v2      0.12%          1.72%          4.51%             4.24%
:-)             pybench      0.11%          0.32%          6.25%             3.97%
:-(            regex_v8      0.78%         -0.28%         -2.60%            10.92%
:-)               nbody      0.26%         -2.74%          4.43%             5.19%
:-)        json_dump_v2      0.18%         -0.86%          4.44%            10.07%
:-(      normal_startup      1.87%          0.33%         -5.55%             2.04%
:-|             ssbench      0.21%         -0.64%          1.54%             2.41%
----------------------------------------------------------------------------------
* Relative Standard Deviation (Standard Deviation/Average)
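The relative standard deviation in the first column can be computed as in this minimal sketch (the function name and sample timings are illustrative, not from the lab's tooling):

```python
import statistics

def relative_std_dev(samples):
    """Relative Standard Deviation: standard deviation divided by the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical benchmark timings in seconds
runs = [10.2, 10.1, 10.3, 10.2]
print(f"{relative_std_dev(runs):.2%}")  # a small RSD means stable measurements
```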

If this is not displayed properly, please visit our results page here: http://languagesperformance.intel.com/ugly-benchmark-results-for-python-2-7-2016-02-12/

Note: Benchmark results for ssbench are measured in requests/second, while all
others are measured in seconds.

Subject Label Legend:
Labels are assigned based on how each workload's performance changed relative
to the previous measurement iteration.
NEUTRAL: performance did not change by more than 1% for any workload
GOOD: performance improved by more than 1% for at least one workload and there
is no regression greater than 1%
BAD: performance dropped by more than 1% for at least one workload and there is
no improvement greater than 1%
UGLY: performance improved by more than 1% for at least one workload and also
dropped by more than 1% for at least one workload
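The legend above amounts to a small classification rule; a sketch of it might look like the following (the function name is illustrative, and the sign convention — positive change means improvement — is an assumption mirroring the table):

```python
def subject_label(changes, threshold=0.01):
    """Classify a run from per-workload fractional changes vs. the previous run.

    Assumption: positive values are improvements, negative values are
    regressions, matching the "change since last run" column.
    """
    improved = any(c > threshold for c in changes)
    regressed = any(c < -threshold for c in changes)
    if improved and regressed:
        return "UGLY"
    if improved:
        return "GOOD"
    if regressed:
        return "BAD"
    return "NEUTRAL"

# The django_v2 improvement (+1.72%) together with the nbody drop (-2.74%)
# is what makes this run's subject "UGLY".
print(subject_label([0.0172, 0.0032, -0.0028, -0.0274, -0.0086, 0.0033, -0.0064]))
```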


Our lab does a nightly source pull and build of the Python project and measures
performance changes against the previous stable version and the previous
nightly measurement. This is provided as a service to the community so that
quality issues on current hardware can be identified quickly.

Intel technologies' features and benefits depend on system configuration and may
require enabled hardware, software or service activation. Performance varies
depending on system configuration.

