<p dir="ltr"><br>
On Aug 24, 2015 3:51 PM, "Stewart, David C" <<a href="mailto:david.c.stewart@intel.com">david.c.stewart@intel.com</a>> wrote:<br>
><br>
> (Sorry about the format here - I honestly just subscribed to Python-dev so<br>
> be gentle ...)</p>
<p dir="ltr">:)</p>
<p dir="ltr">><br>
> > Date: Sat, 22 Aug 2015 11:25:59 -0600<br>
> > From: Eric Snow <<a href="mailto:ericsnowcurrently@gmail.com">ericsnowcurrently@gmail.com</a>><br>
><br>
> >On Aug 22, 2015 9:02 AM, "Patrascu, Alecsandru" <alecsandru.patrascu at<br>
> ><a href="http://intel.com">intel.com</a>> wrote:<br>
> >[snip]<br>
> > For instance, as shown from attached sample performance results from<br>
> >the Grand Unified Python Benchmark, >20% speed up was observed.<br>
> ><br>
> ><br>
><br>
> Eric, I'm the manager of Intel's server scripting language optimization<br>
> team, so I'll answer from that perspective.</p>
<p dir="ltr">Thanks, David!</p>
<p dir="ltr">><br>
> >Are you referring to the tests in the benchmarks repo? [1] How does the<br>
> >real-world performance improvement compare with other languages you are<br>
> >targeting for optimization?<br>
><br>
> Yes, we're using [1].<br>
><br>
> We're seeing up to 10% improvement on Swift (a project in OpenStack) on<br>
> some architectures using the ssbench workload, which is as close to<br>
> real-world as we can get.</p>
<p dir="ltr">Cool.</p>
<p dir="ltr">> Relative to other languages we target, this is<br>
> quite good actually. For example, Java's Hotspot JIT is driven by<br>
> profiling at its core, so it's hard to distinguish the value profiling<br>
> alone brings.</p>
<p dir="ltr">Interesting. So PyPy (with its profiling JIT) would be in a similar boat, potentially.</p>
<p dir="ltr">> We have seen a nice boost on PHP running Wordpress using<br>
> PGO, but not as impressive as Python and Swift.</p>
<p dir="ltr">Nice. Presumably this reflects some of the choices we've made on the level of complexity in the interpreter source.</p>
<p dir="ltr">><br>
> By the way, I think letting the compiler optimize the code is a good<br>
> strategy. Not the only strategy we want to use, but it seems like one we<br>
> could do more of.<br>
><br>
> > And thanks for working on this! I have several more questions: What<br>
> >sorts of future changes in CPython's code might interfere with your<br>
> >optimizations?<br>
> ><br>
> ><br>
><br>
> We're also looking at other source-level optimizations, like the CGOTO<br>
> patch Vamsi submitted in June. Some of these may reduce the value of PGO,<br>
> but in general it's nice to let the compiler do some optimization for you.<br>
><br>
> > What future additions might stand to benefit?<br>
> ><br>
><br>
> It's a good question. Our intent is to continue to evaluate and measure<br>
> different training workloads for improvement. In other words, as with any<br>
> good open source project, this patch should improve things a lot and<br>
> should be accepted upstream, but we will continue to make it better.<br>
><br>
> > What changes in existing code might improve optimization opportunities?<br>
> ><br>
> ><br>
><br>
> We intend to continue to work on source-level optimizations and measuring<br>
> them against GUPB and Swift.</p>
<p dir="ltr">Thanks! These sorts of contributions have far-reaching positive effects.</p>
<p dir="ltr">><br>
> > What is the added maintenance burden of the optimizations on CPython,<br>
> >if any?<br>
> ><br>
> ><br>
><br>
> I think the answer is none. Our goal was to introduce performance<br>
> improvements without adding to maintenance effort.<br>
><br>
> >What is the performance impact on non-Intel architectures? What<br>
> >about older Intel architectures? ...and future ones?<br>
> ><br>
> ><br>
><br>
> We should modify the patch to make it for Intel only, since we're not<br>
> evaluating non-Intel architectures. Unfortunately for us, I suspect that<br>
> older Intel CPUs might benefit more than current and future ones. Future<br>
> architectures will benefit from other enabling work we're planning.</p>
<p dir="ltr">That's fine though. At the least you're setting the stage for future work, including building a relationship here. :)</p>
<p dir="ltr">><br>
> > What is Intel's commitment to supporting these (or other) optimizations<br>
> >in the future? How is the practical EOL of the optimizations managed?<br>
> ><br>
> ><br>
><br>
> As with any corporation's budgeting process, it's hard to know exactly<br>
> what my managers will let me spend money on. :-) But we're definitely<br>
> convinced of the value of dynamic languages for servers and the need to<br>
> work on optimization. As far as I have visibility, it appears to be<br>
> holding true.</p>
<p dir="ltr">Sounds good.</p>
<p dir="ltr">><br>
> > Finally, +1 on adding an opt-in Makefile target rather than enabling<br>
> >the optimizations by default.<br>
> ><br>
> ><br>
><br>
> Frankly since Ubuntu has been running this way for past two years, I think<br>
> it's fine to make it opt-in, but eventually I hope it can be the default<br>
> once we're happy with it.</p>
<p dir="ltr">Given the reaction here that sounds reasonable.</p>
<p dir="ltr">Thanks for answering these questions and to your team for getting involved!</p>
<p dir="ltr">-eric</p>