That's not a great comparison: they've taken a 2+ year old, somewhat customised kernel and compared it to a bleeding-edge vanilla upstream kernel.
It's worth pointing out that Ubuntu has currently opted to include only Meltdown patches in the kernel they released yesterday; the Spectre mitigations are still to come (https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAndMeltdown), so that benchmark probably won't reflect what anyone actually sees anyway.
Impact is going to be highly variable depending on what is being benchmarked. I've been doing a lot of benchmarking at work as part of a broad investigation into the impact for our customers, leveraging a few different resources. I haven't grabbed and run the speed.python.org benchmarks, but I may be able to spin up a bare-metal CentOS instance tomorrow and put it through its paces with a pre- and post-Spectre/Meltdown-patched kernel, if it's straightforward to do. (Are there any documents people could point me to with instructions?)
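In case it's useful, roughly what I'd sketch for that is below. This assumes the pyperformance package's CLI (pip install pyperformance) is how the speed.python.org suite is driven; the file naming is just something I made up.

#!/usr/bin/env python3
"""Rough sketch only: run the benchmark suite once per kernel and tag the
output file with the running kernel version, so the pre-patch and
post-patch runs can be compared afterwards. Assumes the pyperformance
package is installed; the file naming is hypothetical."""
import platform
import subprocess

def run_suite():
    kernel = platform.release()          # e.g. "3.10.0-693.el7.x86_64"
    out_file = "results-{}.json".format(kernel)
    # Run the full suite and write JSON results for later comparison
    subprocess.run(["pyperformance", "run", "-o", out_file], check=True)
    return out_file

if __name__ == "__main__":
    print("wrote", run_suite())

Then presumably a compare step over the two JSON files afterwards; I assume the suite ships some compare tooling for that, but if there's a documented workflow I'm happy to follow it instead.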
I'd consider these patches to largely invalidate any historical performance benchmarks from a comparison perspective. Even on a CPU with PCID support there is definitely a hit on certain types of operations.
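If it helps, something like the following could be used to record whether PCID is even available on the box being benchmarked. A minimal sketch: the "pcid" flag in /proc/cpuinfo is standard, but the "pti" check is an assumption about how a patched kernel exposes page-table isolation, so don't rely on it.

#!/usr/bin/env python3
"""Sketch: record relevant CPU flags alongside a benchmark run.
'pcid' is a standard /proc/cpuinfo flag; whether a 'pti' flag is exported
depends on the kernel build, so treat that check as an assumption."""

def cpu_flags():
    # Parse the flags line of the first CPU entry in /proc/cpuinfo
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("pcid available:", "pcid" in flags)  # cheaper page-table isolation if present
print("pti reported:", "pti" in flags)     # only on some patched kernel builds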
We're also very much in the "patch to make it secure" phase, with an optimisation phase still to come, and it's not yet certain what Intel, AMD, and/or ARM are going to push out by way of updates that may mitigate the impact. There's going to be a lot of focus on improving the performance situation over the coming months, so the odds are reasonable that the full historical benchmarks will need to be re-run repeatedly, I guess?
Paul
On January 11, 2018 12:36:16 AM UTC, Nick Coghlan <ncoghlan@gmail.com> wrote:

Hi folks,
Reading https://medium.com/implodinggradients/meltdown-c24a9d5e254e prompts me to ask: are speed.python.org benchmark results produced now actually going to be comparable with those executed last year?
Or will the old results need to be backfilled again with the new baseline OS performance?
Cheers, Nick.