
On Thursday, February 4, 2016 6:57 PM, Chris Angelico <rosuav@gmail.com> wrote:
> On Fri, Feb 5, 2016 at 9:22 AM, Andrew Barnert via Python-ideas <python-ideas@python.org> wrote:
>> You may be able to import some of the "spirit" of the first one into the second, but I'm not sure it's worth it. One of the minor problems with wordcode is that args in range [256, 65536) now take 4 bytes instead of 3. If you cram a few bits into the opcode byte, you can push that boundary farther back. I'm guessing it's pretty rare to do LOAD_CONST 256, but giving, say, 5 bits to JUMP_ABSOLUTE, and 2 bits to each of the various relative jumps, would cut down the need for EXTENDED_ARG dramatically.
>
> At the cost of complexity, possibly including less readable code. This change can always be done later, if and only if it proves to make enough difference in performance.

Definitely. And, of course, the extra complexity may actually slow things down. I'm just saying it _might_ be worth prototyping and testing if the wordcode experiment turns out to be worth pursuing[^1] and someone produces profiles showing that jumps are now a bigger bottleneck than they used to be.

[^1]: We already knew that wordcode was the starting point of a 2.6 fork that claimed to be faster than stock CPython, but that fork had lots of other optimizations, and I don't know of any serious benchmarking or profiling results from it anyway. We now also know that changing to wordcode is probably easier than it looks. That may add up to "worth a few more hours to build a complete prototype", but it doesn't add up to "do it and merge it and start planning further changes around it" just yet. :)
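
For anyone who wants to play with the numbers before prototyping, here's a back-of-the-envelope sketch (made-up helper names, not actual CPython code) of how many 2-byte code units an argument needs under plain wordcode versus a hypothetical scheme that steals a few spare bits from the opcode byte:

    def wordcode_units(arg):
        # Each unit is a 2-byte (opcode, arg) pair; every extra byte of
        # argument costs one EXTENDED_ARG prefix unit.
        units = 1
        while arg > 0xFF:
            arg >>= 8
            units += 1
        return units

    def wordcode_units_with_spare_bits(arg, spare_bits):
        # Hypothetical variant: `spare_bits` high bits of the arg ride in
        # unused bits of the opcode byte, so the first unit holds
        # 8 + spare_bits bits of argument before EXTENDED_ARG kicks in.
        units = 1
        arg >>= 8 + spare_bits
        while arg:
            arg >>= 8
            units += 1
        return units

    # A jump target of 300 needs 2 units (4 bytes) in plain wordcode...
    assert wordcode_units(300) == 2
    # ...but fits in 1 unit (2 bytes) if the opcode byte donates 2 bits.
    assert wordcode_units_with_spare_bits(300, 2) == 1

That's just the size arithmetic, of course; whether the extra masking in the eval loop costs more than the EXTENDED_ARGs it saves is exactly the kind of thing those profiles would have to show.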