Release early, release often: performance 0.1.2 has been released! The
first version supporting Windows. I renamed the GitHub project from
python/benchmarks to python/performance.
All changes:
* Windows is now supported
* Add a new ``venv`` command to show, create, recreate or remove the
  virtual environment (see the examples below).
* Fix pybench benchmark (update to perf 0.7.4 API)
* performance now tries to install the ``psutil`` module on CPython for
better system metrics in metadata and CPU pinning on Python 2.
* The creation of the virtual environment now also tries the ``virtualenv``
  and ``venv`` Python modules, not only the ``virtualenv`` command.
* The development version of performance now installs performance with
"pip install -e <path_to_performance>"
* The GitHub project was renamed from ``python/benchmarks``
to ``python/performance``.
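For example, the new command can be run through the ``pyperformance`` script
(assuming the subcommand names follow the changelog entry above):

pyperformance venv create    # create the virtual environment
pyperformance venv show      # display which virtual environment would be used
pyperformance venv recreate  # remove the virtual environment and create it again
pyperformance venv remove    # delete the virtual environment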
Victor
On Thu, 25 Aug 2016 at 15:08 Victor Stinner <victor.stinner(a)gmail.com>
wrote:
> Hi,
>
> For the first release of the "new" benchmark suite, I chose the name
> "performance", since "benchmark" and "benchmarks" names were already
> reserved on PyPI. It's the name of the Python module, but also of the
> command line tool: "pyperformance".
>
> Since there is an "old" benchmark suite
> (https://hg.python.org/benchmarks), PyPy has its benchmark suite, etc.
> I propose to rename the GitHub project to "performance" to avoid
> confusion.
>
> What do you think?
>
If you want to do it then go ahead, but I don't think it will be a big issue
in the grand scheme of things.
>
> Note: I'm not a big fan of the "performance" name, but I don't think
> it matters much. The name only needs to be unique and available on
> PyPI :-D
>
> By the way, I don't know if it's worth it to have a "pyperformance"
> command line tool. You can already use "python3 -m performance ..."
> syntax. But you have to recall the Python version used to install the
> module. "python2 -m performance ..." doesn't work if you only
> installed performance for Python 3!
>
As Antoine pointed out, if it doesn't matter which interpreter the script is
installed under when running the benchmarks against another interpreter, then
a standalone script makes sense (but do keep it available through -m).
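A minimal sketch of how the console script and the -m entry point could share
a single implementation (the module layout and function name here are
assumptions for illustration, not necessarily how performance is organized):

    # performance/__main__.py -- hypothetical layout, shown for illustration only
    import sys

    from performance.cli import main  # assumed location of the shared CLI entry point

    if __name__ == "__main__":
        sys.exit(main())

The matching setup.py would then declare a console_scripts entry point such as
"pyperformance = performance.cli:main", so both spellings run the same code.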
On Fri, 26 Aug 2016 00:07:53 +0200
Victor Stinner <victor.stinner(a)gmail.com>
wrote:
>
> By the way, I don't know if it's worth it to have a "pyperformance"
> command line tool. You can already use "python3 -m performance ..."
> syntax. But you have to recall the Python version used to install the
> module. "python2 -m performance ..." doesn't work if you only
> installed performance for Python 3!
Also, you may have several Python 3s installed (the system 3.4, a
custom 3.4, a custom 3.5, a custom 3.6...) so a CLI script is much
easier to use.
Regards
Antoine.
Hi,
For the first release of the "new" benchmark suite, I chose the name
"performance", since "benchmark" and "benchmarks" names were already
reserved on PyPI. It's the name of the Python module, but also of the
command line tool: "pyperformance".
Since there is an "old" benchmark suite
(https://hg.python.org/benchmarks), PyPy has its benchmark suite, etc.
I propose to rename the GitHub project to "performance" to avoid
confusion.
What do you think?
Note: I'm not a big fan of the "performance" name, but I don't think
it matters much. The name only needs to be unique and available on
PyPI :-D
By the way, I don't know if it's worth it to have a "pyperformance"
command line tool. You can already use "python3 -m performance ..."
syntax. But you have to recall the Python version used to install the
module. "python2 -m performance ..." doesn't work if you only
installed performance for Python 3!
Victor
2016-08-24 17:38 GMT+02:00 Victor Stinner <victor.stinner(a)gmail.com>:
> Now the development version always installs performance 0.1.1 (see
> performance/requirements.txt). I should fix this to install the
> development version of performance/ when it is run from the source
> code (when setup.py is available in the parent directory?).
FYI it's now fixed: the development version of the benchmark suite now
installs the performance module using "pip install -e <path>" in
development mode, but "pip install performance==x.y.z" in release
mode.
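Roughly, the decision can be sketched like this (the function and argument
names are made up for illustration; this is not the actual performance code):

    # Sketch only: pick between a development install and a pinned release install.
    import os.path
    import subprocess

    def install_performance(venv_python, src_dir, version="0.1.2"):
        if os.path.exists(os.path.join(src_dir, "setup.py")):
            # Running from a source checkout: install performance in development mode.
            args = [venv_python, "-m", "pip", "install", "-e", src_dir]
        else:
            # Release mode: pin the exact version from PyPI.
            args = [venv_python, "-m", "pip", "install", "performance==%s" % version]
        subprocess.check_call(args)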
Victor
Hi,
I released a first version of the Python benchmark suite. (Quickly
followed by a 0.1.1 bugfix ;-))
It is now possible to install it using pip:
python3 -m pip install performance
And run it using:
pyperformance run --python=python2 --rigorous -b all -o py2.json
pyperformance run --python=python3 --rigorous -b all -o py3.json
pyperformance compare py2.json py3.json
Note: the "python3 -m performance ..." syntax works too.
It creates virtual environments in the ./venv/ subdirectory. (I may add an
option to choose where to create them.)
The performance 0.1.1 version works well on Linux with CPython. There are
some known issues on Windows:
https://github.com/python/benchmarks/issues/5
I don't consider the PyPy support as stable yet.
I used the "performance" name on PyPI, because "benchmark" and
"benchmarks" are already reserved. The Python module is also named
"performance" and comes with "pyperformance" script.
I made a subtle bugfix: requirements.txt now uses fixed versions rather
than ">=min_version". For example, "perf>=0.7.4" became "perf==0.7.4".
I expect to get more reproducible benchmark results with fixed
versions.
Before a release, we should not forget to update dependencies to test
the most recent versions of Python modules and applications.
Now the development version always installs performance 0.1.1 (see
performance/requirements.txt). I should fix this to install the
development version of performance/ when it is run from the source
code (when setup.py is available in the parent directory?).
Victor
Done: I renamed the "django" benchmark to "django_template":
https://github.com/python/benchmarks/commit/d674a99e3a9a10a29c44349b2916740…
Victor
2016-08-21 19:36 GMT+02:00 Victor Stinner <victor.stinner(a)gmail.com>:
> Le 21 août 2016 11:02 AM, "Maciej Fijalkowski" <fijall(a)gmail.com> a écrit :
>> Let's not have it called "bzr" though because it gives the wrong
>> impression.
>
> The benchmark was called "bzr_startup", but I agree that "django" must be
> renamed to "django_template". There is an HTTP benchmark, tornado_http,
> using the local link (server and client run in the same process if I recall
> correctly).
>
> Victor
Le 21 août 2016 11:02 AM, "Maciej Fijalkowski" <fijall(a)gmail.com> a écrit :
> Let's not have it called "bzr" though because it gives the wrong
> impression.
The benchmark was called "bzr_startup", but I agree that "django" must be
renamed to "django_template". There is an HTTP benchmark, tornado_http,
using the local link (server and client run in the same process if I recall
correctly).
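For reference, a rough sketch of that kind of in-process HTTP round trip with
Tornado (this is not the actual tornado_http benchmark; the handler and port
number are made up):

    # Rough sketch: Tornado server and client sharing one process and one event loop.
    import tornado.ioloop
    import tornado.web
    from tornado.httpclient import AsyncHTTPClient

    class HelloHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello, World!")

    app = tornado.web.Application([(r"/", HelloHandler)])
    app.listen(8888)  # arbitrary local port
    client = AsyncHTTPClient()

    def fetch_once():
        # Drive one request/response cycle through the shared IOLoop.
        return tornado.ioloop.IOLoop.current().run_sync(
            lambda: client.fetch("http://localhost:8888/"))

    print(fetch_once().body)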
Victor
On 21 August 2016 at 19:02, Maciej Fijalkowski <fijall(a)gmail.com> wrote:
> Let's not have it called "bzr" though because it gives the wrong
> impression. The same way unladen swallow added a benchmark and called
> it "django" while not representing django very well. That said, likely
> not very many people use bzr, but still, it would be good if it's called
> bzr-pyc or, simpler, to have a benchmark that imports a whole bunch of
> pyc files from a big project (e.g. pypy :-)
Yeah, I assume the use of bzr for this purpose is mainly an accident
of history - adopting a non-trivial cross-platform Python command line
application, rather than putting together a synthetic benchmark like
Tools/importbench that may not be particularly representative of real
workloads (which can use techniques like lazy module imports to defer
startup costs until those features are actually needed).
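For example, a command line tool can defer an expensive import into the code
path that actually needs it, so printing the help message stays cheap
(illustrative sketch only):

    # Illustrative only: defer an import until the command that needs it runs.
    import sys

    def main(argv):
        if not argv[1:] or argv[1:] == ["--help"]:
            print("usage: mytool [--help] report")  # fast path: no heavy imports
            return 0
        # Deferred import: the cost is only paid when "report" is actually used.
        import json
        print(json.dumps({"command": argv[1:]}))
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv))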
Perhaps call the benchmark "bzr-startup" to emphasize the aim is to
measure how long it takes bzr to start in general, more so than how
long it takes to print the help message?
Cheers,
Nick.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
On Sun, Aug 21, 2016 at 7:38 AM, Nick Coghlan <ncoghlan(a)gmail.com> wrote:
> On 20 August 2016 at 02:50, Maciej Fijalkowski <fijall(a)gmail.com> wrote:
>> Very likely just pyc import time
>
> As one of the import system maintainers, that's a number I consider
> quite interesting and worth benchmarking :)
>
> It's also one of the key numbers for Linux distro Python usage, since
> it impacts how responsive the system shell feels to developers and
> administrators - an end user can't readily tell the difference between
> "this shell is slow" and "this particular command I am running is
> using a language interpreter with a long startup time", but an
> interpreter benchmark suite can.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
Fair point, let's have such a benchmark.
Let's not have it called "bzr" though because it gives the wrong
impression. The same way unladen swallow added a benchmark and called
it "django" while not representing django very well. That said, likely
not very many people use bzr, but still, it would be good if it's called
bzr-pyc or, simpler, to have a benchmark that imports a whole bunch of
pyc files from a big project (e.g. pypy :-)
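A crude sketch of what such a benchmark could look like: spawn a fresh
interpreter, import a batch of modules, and time the whole run (the module
list and repeat count below are stand-ins, not a real project's modules):

    # Crude sketch: time how long a fresh interpreter takes to import a set of modules.
    import subprocess
    import sys
    import time

    MODULES = ["email", "json", "logging", "unittest"]  # stand-ins for a big project

    def bench_imports(python=sys.executable, repeat=10):
        code = ";".join("import %s" % name for name in MODULES)
        timings = []
        for _ in range(repeat):
            start = time.perf_counter()
            subprocess.check_call([python, "-c", code])
            timings.append(time.perf_counter() - start)
        return min(timings)

    if __name__ == "__main__":
        print("best of 10 runs: %.3f sec" % bench_imports())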
Cheers,
fijal