On Thu, 26 Apr 2018 19:19:05 +1000 Chris Angelico <rosuav@gmail.com> wrote:
> > If such were the need, you could very well make it part of the language specification. We are talking about a trivial optimization that any runtime could easily implement (e.g. if a sequence `DUP_TOP, STORE_FAST, POP_TOP` occurs, replace it with `STORE_FAST`).
> Not at the REPL, no. At the REPL, you need to actually print out that value. It's semantically different.
The REPL compiles expressions in a different mode than regular modules, so that's entirely a strawman. The REPL doesn't care that it will spend a fraction of a microsecond executing two additional bytecodes before presenting something to the user.
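(If you want to see it for yourself, here is a quick sketch using `dis`; the exact opcodes vary a bit between CPython versions, but the point is that 'single' mode, which is what the interactive prompt uses, routes the value through a print-the-result step instead of silently discarding it:)

import dis

src = "x"

# Compiled like a module ('exec' mode): the expression statement's
# value is simply discarded.
dis.dis(compile(src, "<module>", "exec"))

# Compiled like the REPL does ('single' mode): the value goes through
# a PRINT_EXPR-style step so it can be displayed to the user.
dis.dis(compile(src, "<stdin>", "single"))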
> > Any runtime already has to implement a set of performance properties that's far less trivial than that. For example, any decent runtime is expected to provide amortized O(1) list append or dict insertion. You are breaking user expectations if you don't.
> You assume that, but it isn't always the case.
Yeah, so what?
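(For what it's worth, the amortized-O(1) append expectation is trivially checkable; a rough timing sketch, numbers obviously machine-dependent, the point being that the per-append cost stays roughly flat as N grows:)

import timeit

# Rough empirical check of amortized O(1) list append in CPython:
# total time for N appends should grow roughly linearly with N,
# i.e. the per-append cost stays roughly constant.
for n in (10_000, 100_000, 1_000_000):
    t = timeit.timeit("lst.append(0)", setup="lst = []", number=n)
    print(f"{n:>9} appends: {t:.4f}s total, ~{t / n * 1e9:.0f} ns each")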
> Did you know, for instance, that string subscripting (an O(1) operation in CPython) is allowed to be O(n) in other Python implementations?
Why would I give a sh*t? Here, we are not talking about a non-trivial design decision such as how to represent string values internally (UTF-8 vs. fixed-width, etc.). We are talking about optimizing a trivial bytecode sequence into an even more trivial one. CPython can easily do it. PyPy can easily do it. Other runtimes can easily do it.

If some implementation is significantly slower because it can't optimize away pointless DUP_TOPs *and* it implements DUP_TOP inefficiently enough to have a user-noticeable effect, then it's either 1) a deliberate toy, a proof-of-concept not meant for serious use, or 2) a pile of crap produced by incompetent people. So there's zero reason to worry about efficiency issues here.

Regards

Antoine.
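P.S. For concreteness, the rewrite under discussion is an ordinary peephole transformation. Here is a minimal sketch over a symbolic list of (opcode, argument) pairs; purely illustrative, not CPython's actual peephole pass, which operates on real code objects:

def peephole(ops):
    """Collapse DUP_TOP / STORE_FAST / POP_TOP into a bare STORE_FAST."""
    out = []
    i = 0
    while i < len(ops):
        if (i + 2 < len(ops)
                and ops[i][0] == "DUP_TOP"
                and ops[i + 1][0] == "STORE_FAST"
                and ops[i + 2][0] == "POP_TOP"):
            out.append(ops[i + 1])   # the STORE_FAST alone has the same net effect
            i += 3
        else:
            out.append(ops[i])
            i += 1
    return out

before = [("LOAD_CONST", 1), ("DUP_TOP", None),
          ("STORE_FAST", "x"), ("POP_TOP", None)]
print(peephole(before))   # [('LOAD_CONST', 1), ('STORE_FAST', 'x')]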