Hi Matti,
I'll gladly help set up the speed site.
On Wed, 31 Jan 2018 at 13:47, Victor Stinner <victor.stinner(a)gmail.com>
wrote:
> Hi,
>
> I tried but failed to find someone in PyPy to adjust the performance
> benchmarks for PyPy. Currently, the JIT is not properly warmed up, and
> the results can be dishonest or unreliable.
>
> My latest attempt to support PyPy is:
> http://vstinner.readthedocs.io/pypy_warmups.html
>
> IMHO we need to develop a statistical methodology in perf to compute
> when values become stable. That's hard to define, and it has been shown
> that "performance stability" doesn't always exist (see the "Virtual
> Machine Warmup Blows Hot and Cold" paper).
>
> Fijal from PyPy would like to use a hardcoded configuration for the
> number of warmup values. I like the idea, but nobody has implemented it
> yet.
>
> Victor
>
> 2018-01-30 21:37 GMT+01:00 Matti Picus <matti.picus(a)gmail.com>:
> > I am cross-posting to both speed and pypy-dev to ask what needs to be
> > done to get pypy2 and pypy3 benchmark runners onto speed.python.org.
> > I am willing to be the contact person from the PyPy side; who do we need
> > to talk to on the speed maintainers' side?
> > Instead of spamming the lists again, we could discuss this off-line, and
> > report back with a plan, required resources, and a timeline.
> >
> > Thanks,
> > Matti Picus
> > _______________________________________________
> > Speed mailing list
> > Speed(a)python.org
> > https://mail.python.org/mailman/listinfo/speed
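A statistical warm-up detector like the one described above could start from something as simple as comparing the means of consecutive windows of timings. A naive sketch of that heuristic (this is not perf's actual API, and the window size and tolerance are made-up illustration values):

```python
import statistics

def stable_after(values, window=10, rel_tol=0.05):
    """Return the first index where the means of two consecutive windows
    agree within rel_tol, i.e. where timings look stable -- or None if
    they never do. A naive heuristic, not perf's actual API."""
    for i in range(window, len(values) - window + 1):
        before = statistics.mean(values[i - window:i])
        after = statistics.mean(values[i:i + window])
        if abs(before - after) / max(before, after) < rel_tol:
            return i
    return None

# Simulated JIT warmup: a few slow runs, then a noisy steady state.
timings = [5.0, 3.0, 2.0, 1.5] + [1.0 + 0.01 * (i % 3) for i in range(40)]
print(stable_after(timings))
```

As the "Warmup Blows Hot and Cold" paper shows, some workloads never satisfy such a criterion, so the `None` case has to be handled explicitly.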
On 19 January 2018 at 20:39, Victor Stinner <victor.stinner(a)gmail.com>
wrote:
> 2018-01-19 11:28 GMT+01:00 Stefan Behnel <stefan_ml(a)behnel.de>:
> > That suggests adding Django 2 as a new Py3-only benchmark.
>
> Again, the practical issue is installing Django 2 and Django 1.11 in
> the same virtual environment; I'm not sure that's doable.
>
> I would prefer not to have to create a separate virtualenv for
> Python 3-only dependencies.
>
> I needed to quickly publish a bugfix release to fix --track-memory, a
> feature requested by Xiang Zhang, so I released performance 0.6.1, which
> only updated Django from 1.11.3 to 1.11.9.
>
> Or we need to redesign how performance installs its dependencies, but
> that's a larger project :-)
>
It may be worth looking at using pew to set up a separate virtual
environment for each benchmark, but then use `pew add` to share common
components (like perf itself) between them. That way you won't have
conflicting dependencies between benchmarks (since they'll be in separate
venvs), without having to duplicate *all* the common components.
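For what it's worth, the sharing pew enables boils down to putting a common directory on each venv's `sys.path`. A minimal stdlib-only sketch of that mechanism (the directory layout and module name are made up for illustration; this is not pew's code):

```python
import subprocess, tempfile, venv
from pathlib import Path

base = Path(tempfile.mkdtemp())
shared = base / "shared"            # stands in for a common perf checkout
shared.mkdir()
(shared / "common_helper.py").write_text("VERSION = '1.0'\n")

for name in ("bench_django", "bench_tornado"):
    env_dir = base / name
    venv.create(env_dir, with_pip=False)
    # Drop a .pth file into the venv's site-packages so the shared
    # directory is importable from inside that venv.
    site_packages = next(env_dir.glob("lib/python*/site-packages"))
    (site_packages / "shared.pth").write_text(f"{shared}\n")

# Each venv now sees the shared module without duplicating it.
python = base / "bench_django" / "bin" / "python"
result = subprocess.run(
    [str(python), "-c", "import common_helper; print(common_helper.VERSION)"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # 1.0
```

Conflicting benchmark dependencies still live in their own venvs; only the deliberately shared pieces cross the boundary.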
Cheers,
Nick.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia
INADA Naoki wrote on 16.01.2018 at 12:37:
>> Even though it's not good for comparing interpreter performance, it's good
>> for people comparing Python 2 and 3.
>>
>> If Django 2.0 on Python 3.7 is much faster than Django 1.11 on Python 2.7,
>> it's a nice carrot for people to move forward.
>>
>
> FYI, Django 2 is about 2x faster than 1.11 on the django_template benchmark!
> That's because Django 1.11 calls force_text() many times for Python 2
> compatibility.
>
> https://github.com/django/django/blob/419705bbe84e27c3d5be85f198a0352a672...
> https://github.com/django/django/blob/419705bbe84e27c3d5be85f198a0352a672...
>
> Actually, dropping Python 2 support makes Django faster. That's nice news!
That suggests adding Django 2 as a new Py3-only benchmark.
Stefan
On Thu, 11 Jan 2018 10:36:16 +1000
Nick Coghlan <ncoghlan(a)gmail.com> wrote:
> Hi folks,
>
> Reading https://medium.com/implodinggradients/meltdown-c24a9d5e254e
> prompts me to ask: are speed.python.org benchmark results produced now
> actually going to be comparable with those executed last year?
>
> Or will the old results need to be backfilled again with the new
> baseline OS performance?
AFAIU the Meltdown patches will only significantly affect workloads that
do a lot of system calls. *Ideally* our benchmark suite doesn't, since
it's meant to benchmark Python, not the outside environment :-)
Spectre seems to be more uncertain, as it can affect any code
(userspace code as well as kernel code), and will apparently require
careful auditing of sensitive routines to avoid potential information
leaks. This all depends on which particular routines (e.g. something in
glibc) get patched.
In any case we can only tell with actual numbers :-)
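One crude way to get a feel for those numbers, and for how syscall-heavy a given loop is (and hence how exposed it is to the Meltdown mitigations), is a micro-timing along these lines; the absolute values are machine-dependent and illustrative only, and real comparisons should go through perf:

```python
import os
import time

def bench(func, n=100_000):
    """Time n calls of func; absolute values are machine-dependent."""
    start = time.perf_counter()
    for _ in range(n):
        func()
    return time.perf_counter() - start

syscall_bound = bench(os.getpid)    # enters the kernel on every call
pure_python = bench(lambda: 1 + 1)  # never leaves user space
print(f"getpid: {syscall_bound:.4f}s  pure Python: {pure_python:.4f}s")
```

If the syscall-bound number moves much more than the pure-Python one across a kernel upgrade, the workload is exposed to the mitigation overhead.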
Regards
Antoine.
That's not a great comparison: they've gone from a 2+ year old, somewhat customised kernel to a bleeding-edge vanilla upstream kernel.
It's worth pointing out that Ubuntu has currently opted to ship only the Meltdown patches in the kernel they released yesterday; the Spectre fixes are still to come (https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown). So that benchmark probably won't reflect what anyone actually sees anyway.
The impact is going to be highly variable depending on what is being benchmarked. I've been doing a lot of benchmarking at work as part of some broad investigations of the impact for our customers, leveraging a few different resources.
I haven't grabbed and run the speed.python.org benchmarks, but I may be able to spin up a bare-metal CentOS instance tomorrow and run through the paces with pre- and post-Spectre/Meltdown-patched kernels, if it's straightforward to do. (Are there any documents people could point me to with instructions?)
I'd consider the patches here to largely invalidate any historical performance benchmarks from a comparison perspective. Even on a CPU with PCID capabilities there is definitely a hit on certain types of operations.
We're also very much in the "patch to make it secure" phase, with an optimisation phase still to come, and it's not certain what Intel, AMD, and/or ARM will push out by way of updates that may mitigate the impact. There's going to be a lot of focus on improving the performance situation over the next while, so the odds are reasonable that the full historical benchmarks will need to be re-run repeatedly.
Paul
On 10 January 2018 at 02:57, Victor Stinner <victor.stinner(a)gmail.com> wrote:
> I would prefer not to have to install Django 1.11 *and* Django 2.0 in
> the same virtual environment (I'm not sure that it's technically
> possible). It means that the Django 1.11 benchmark would be specific
> to Python 2, whereas the Django 2.0 benchmark would be specific to
> Python 3. So with a hypothetical future performance version, it would
> become impossible to compare the django_template benchmark between
> Python 2 and Python 3. I'm not sure that the Python 2 benchmark would
> be useful.
The point of the benchmark is to test non-trivial string manipulation,
not to test the latest version of Django's template engine.
Just lock the dependency at Django 1.11, and explain why.
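Concretely, the pin could be a single entry in the benchmark's requirements file, with the rationale recorded next to it (a hypothetical excerpt, not performance's actual file):

```
django==1.11.9  # pinned deliberately: the benchmark exercises non-trivial
                # string manipulation, and 1.11 is the last Django line
                # that runs on both Python 2 and Python 3
```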
Cheers,
Nick.
--
Nick Coghlan | ncoghlan(a)gmail.com | Brisbane, Australia