
Raymond Hettinger wrote:
This is true. I used this in my optimization from two years ago: I moved the oparg preparation into the opcode cases, doing no test at all and just fetching the argument where it is needed.
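Roughly, the two dispatch shapes look like this. This is a minimal toy sketch, not the real ceval.c: HAS_ARG and NEXTARG mirror the old CPython macros, but the opcodes and the run() loop around them are made up for illustration. Building with -DCENTRAL_ARG_FETCH gives the classic shape with one HAS_ARG test per executed instruction; the default build fetches the argument only inside the cases that take one.

#include <stdio.h>

enum { OP_NOP = 0, OP_STOP = 1, OP_LOAD_CONST = 100 };
#define HAVE_ARGUMENT 90                /* opcodes >= this take a 2-byte arg */
#define HAS_ARG(op)   ((op) >= HAVE_ARGUMENT)
#define NEXTOP()      (*ip++)
#define NEXTARG()     (ip += 2, (ip[-1] << 8) + ip[-2])

static int run(const unsigned char *ip)
{
    int opcode, oparg = 0, acc = 0;
    for (;;) {
        opcode = NEXTOP();
#ifdef CENTRAL_ARG_FETCH
        /* classic shape: test HAS_ARG for every executed instruction */
        if (HAS_ARG(opcode))
            oparg = NEXTARG();
#endif
        switch (opcode) {
        case OP_NOP:
            continue;
        case OP_LOAD_CONST:
#ifndef CENTRAL_ARG_FETCH
            oparg = NEXTARG();          /* fetch only where it is needed */
#endif
            acc += oparg;
            continue;
        case OP_STOP:
            return acc;
        }
    }
}

int main(void)
{
    /* toy bytecode: LOAD_CONST 7, NOP, LOAD_CONST 300, STOP */
    const unsigned char code[] = {
        OP_LOAD_CONST, 7, 0, OP_NOP, OP_LOAD_CONST, 44, 1, OP_STOP
    };
    printf("%d\n", run(code));          /* prints 307 either way */
    return 0;
}

The point of the second shape is simply that argument-less opcodes never pay for the HAS_ARG test.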
What happened to the optimization? Is it not in the current code?
No. At that time, ceval speedups were not popular, so I used the optimization just to make Stackless appear faster than CPython, although CPython would have been *even* faster with it.
I also turned this into macros that advance the instruction pointer only once. Incredible but true: most of the win came from changing the oparg access to read the argument as a short int through a cast, instead of OR-ing two bytes together.
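The byte-level difference is easy to show in isolation. Below is a minimal sketch with illustrative names (nothing here is copied from ceval.c): it contrasts OR-ing the two argument bytes with a single 16-bit load, which only agrees with the OR version on a little-endian machine. memcpy is used to sidestep alignment and aliasing trouble; the original trick reportedly just cast the pointer.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* one instruction: opcode byte followed by a 2-byte arg, low byte first */
    const unsigned char instr[] = { 100, 0x2C, 0x01 };   /* arg = 300 */
    const unsigned char *p = instr + 1;                  /* points at the arg */

    /* portable version: OR/shift the two bytes together */
    int oparg_portable = p[0] | (p[1] << 8);

    /* "read it as a short" version: a single 16-bit load */
    uint16_t oparg_fast;
    memcpy(&oparg_fast, p, sizeof oparg_fast);

    printf("portable=%d fast=%u\n", oparg_portable, (unsigned)oparg_fast);
    /* on x86 (little-endian) both are 300; on a big-endian machine the
     * "fast" value would come out as 0x2C01 = 11265 instead */
    return 0;
}

The compiler turns the second form into one load instead of two loads plus a shift and an OR, which is where the speedup comes from, and also why it is endian-dependent, as the next exchange points out.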
I skipped over that one because I thought that it would fail on a big-endian computer.
Sure it would fail. But on little-endian machines like x86 it produced much faster code, so I only defined it for certain platforms.

ciao - chris

--
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Johannes-Niemeyer-Weg 9a     :    *Starship* http://starship.python.net/
14109 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
work +49 30 89 09 53 34  home +49 30 802 86 56  pager +49 173 24 18 776
PGP 0x57F3BF04       9064 F4E1 D754 C2FF 1619  305B C09C 5A3B 57F3 BF04
      whom do you want to sponsor today?   http://www.stackless.com/