On Wed, 22 Jul 2020 12:46:40 +0900 Inada Naoki email@example.com wrote:
On Wed, Jul 22, 2020 at 3:43 AM Mark Shannon firstname.lastname@example.org wrote:
On 18/07/2020 9:20 am, Inada Naoki wrote:
It seems like a great improvement, but I am worried about performance.
Adding more attributes to the code object will increase memory usage and import time. Is there any estimate of the overhead?
Zero overhead (approximately). We are just replacing one compressed table with another at the C level. The other attributes are computed.
And I am worried that precise tracing will block future advanced bytecode optimizations. Can we omit precise tracing and line number information when optimization (`-O`) is enabled?
I don't think that is a good idea. Performing any worthwhile performance optimization requires that we can reason about the behavior of programs. Consistent behavior makes that much easier. Inconsistent "micro optimizations" make real optimizations harder.
Is tracing output included in the program's behavior?
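(A tracer can observe the line number of every executed line, so debuggers and coverage tools do see this as program behavior. A minimal sketch; the function `f` and the tracer are illustrative, not from the thread:)

```python
import sys

def f(a):
    if a == 1:
        x = "one"
    elif a == 2:
        x = "two"
    return x

observed = []

def tracer(frame, event, arg):
    # Record the line number (relative to the def line) of
    # every 'line' event that fires inside f.
    if event == "line" and frame.f_code is f.__code__:
        observed.append(frame.f_lineno - f.__code__.co_firstlineno)
    return tracer

sys.settrace(tracer)
f(2)
sys.settrace(None)
print(observed)  # the lines a debugger or coverage tool would see
```

If two branches were merged into one block of bytecode, a tracer like this could no longer report which source line actually ran.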
For example, if two code blocks are completely identical:

    if a == 1:
        <very very long code block>
    elif a == 2:
        <very very long code block>
This code could be translated into something like this (pseudocode):

    if a == 1: goto block1
    if a == 2: goto block1
    block1:
        <very very long code block>
But if we merge the two identical code blocks, we cannot produce precise line numbers, can we? Is this the kind of inconsistent micro-optimization that makes real optimizations harder? Must this optimization be prohibited in future Python?
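(The tension is visible in the line table itself: the standard `dis` module shows that two byte-for-byte identical blocks still carry distinct line numbers, which is exactly the information a block-merging optimizer would have to discard. A small sketch; the function `g` is illustrative:)

```python
import dis

def g(a):
    if a == 1:
        r = a * 10
    elif a == 2:
        r = a * 10  # identical to the block above, different source line
    return r

# dis.findlinestarts maps bytecode offsets to the source line that
# starts there; compute lines relative to the def line.
first = g.__code__.co_firstlineno
starts = sorted({ln - first for _, ln in dis.findlinestarts(g.__code__)
                 if ln is not None})
print(starts)
```

Both identical assignments appear at their own relative lines (2 and 4), so merging them would force one block to report the wrong line to any tracer.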
All attempts to improve Python performance through compile-time bytecode optimizations have more or less failed (the latter was Victor's, AFAIR). Is there still interest in pursuing that avenue?