On 2016-02-05 6:14 PM, Andrew Barnert wrote:
On Friday, February 5, 2016 2:26 PM, Yury Selivanov <yselivanov.ml@gmail.com> wrote:
I also had this idea (don't know if it's good or not):
1. have 16 bits per opcode
2. first 8 bits encode the opcode number
3. next 7 bits encode the arg (most args don't need more than 7 bits anyway)
4. if the 16th bit is 1, then the next opcode is an EXTENDED_ARG
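A minimal sketch of the scheme described above, assuming one natural reading: each chained EXTENDED_ARG word has the same 16-bit layout and so contributes 7 more argument bits (the proposal doesn't pin this down). The `pack`/`unpack` helper names are hypothetical, not anything from CPython:

```python
EXTENDED_ARG = 144  # CPython's EXTENDED_ARG opcode number

def pack(opcode, arg):
    """Encode (opcode, arg) as a list of 16-bit words.

    Layout per word: bits 0-7 hold the opcode, bits 8-14 hold a
    7-bit argument chunk, and bit 15 is set when another
    EXTENDED_ARG word follows (per the proposal, the EXTENDED_ARG
    comes *after* the real opcode).
    """
    chunks = []
    while True:
        chunks.append(arg & 0x7F)  # peel off 7 bits at a time
        arg >>= 7
        if not arg:
            break
    ops = [opcode] + [EXTENDED_ARG] * (len(chunks) - 1)
    words = []
    for i, (op, chunk) in enumerate(zip(ops, chunks)):
        cont = 0x8000 if i < len(chunks) - 1 else 0  # the "16th bit"
        words.append(op | (chunk << 8) | cont)
    return words

def unpack(words):
    """Decode a list of 16-bit words back into (opcode, arg)."""
    opcode = words[0] & 0xFF
    arg = 0
    for shift, w in enumerate(words):
        arg |= ((w >> 8) & 0x7F) << (7 * shift)
    return opcode, arg
```

With this layout, an argument in [0, 127] fits in a single 2-byte word, while anything larger spills into additional 2-byte EXTENDED_ARG words.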
Why? The advantage of Serhiy's idea (or my original one) is that it makes it much easier to write bytecode processors (like the peephole optimizer) when everything is fixed width, because jump target and lnotab fixups become trivial. Your idea leaves jump target and lnotab fixups just as hard as they are today, so we get the cost of having to transform back and forth between two representations, without any benefit.
Yes, this was related to Serhiy's original proposal on how we could pack opcodes. It's completely unrelated to the peephole optimizer. Sorry for the confusion.
If you were instead suggesting that as an alternative to the version of wordcode I proposed and prototyped in the other thread, to be actually used by the interpreter, then there _is_ a small benefit to your version: arguments in range [2**17, 2**23) need 4 bytes instead of 6. But I think the cost of having arguments in range [2**7, 2**8) take 4 bytes instead of 2, and the extra complexity and CPU cost in the fetch and peek at the core of the eval loop, and being a bigger change from today, make it a lot less attractive.
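The size trade-off being argued here can be sketched with two small helpers. The wordcode figures follow Andrew's prototype (fixed 2-byte units, each EXTENDED_ARG prefix contributing 8 more argument bits); for the 7-bit scheme I again assume each chained word contributes 7 payload bits, so the exact crossover ranges depend on that assumption and may differ from Andrew's arithmetic:

```python
def wordcode_size(arg):
    """Bytes for one instruction in the fixed 8-bit-arg wordcode."""
    words = 1
    while arg >= 1 << (8 * words):  # each extra word adds 8 arg bits
        words += 1
    return 2 * words

def sevenbit_size(arg):
    """Bytes for one instruction in the 7-bit-arg scheme (assumed chaining)."""
    words = 1
    while arg >= 1 << (7 * words):  # each extra word adds 7 arg bits
        words += 1
    return 2 * words
```

For example, an argument of 200 (in [2**7, 2**8)) costs 2 bytes in wordcode but 4 bytes in the 7-bit scheme, which is the cost Andrew points out.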
I'm sorry, I can't parse what you said there... My idea is based on the fact that most opcode arguments are in the [0, 127] range. EXTENDED_ARG isn't used very often, but when it is, it packs more efficiently in my scheme.

Yury