[Python-Dev] Re: opcode performance measurements
Skip Montanaro
skip@pobox.com
Thu, 31 Jan 2002 14:27:59 -0600
Jeff> Won't there be code that this slows down? For instance, the code
Jeff> generated by ...
Sure, there will be code that slows down. That's why I said what I am
working on is a proof of concept. Right now, the optimizer converts each
function it operates on to the equivalent of:
    TRACK_GLOBAL x.foo
    TRACK_GLOBAL y.bar
    TRACK_GLOBAL z
    try:
        original function body
        using x.foo, y.bar and z
    finally:
        UNTRACK_GLOBAL z
        UNTRACK_GLOBAL y.bar
        UNTRACK_GLOBAL x.foo
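In source-level terms, the TRACK_GLOBAL/UNTRACK_GLOBAL pair is meant to
automate the familiar hand-caching idiom sketched below (my sketch, not
Skip's code; `math.sqrt` and the function are just illustrative):

```python
import math

def distances(points):
    # Roughly what TRACK_GLOBAL math.sqrt buys you: resolve the global
    # and its attribute once, up front ...
    sqrt = math.sqrt
    result = []
    for x, y in points:
        # ... so each iteration is a fast local lookup instead of a
        # LOAD_GLOBAL followed by a LOAD_ATTR.
        result.append(sqrt(x * x + y * y))
    return result
```

The opcode approach would do this transparently, without the programmer
having to clutter the function with cached names.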
There are no checks for obvious potential problems at the moment.
Safeguards I have in mind include (but are not limited to):
* Only track globals that are accessed in loops. This would eliminate
your corner case and should be easily handled (only work between
SETUP_LOOP and its jump target).
* Only track globals when there are <= 256 globals (half an oparg - the
other half being an index into the fastlocals array). This would also
cure your problem.
* Only track globals that are valid at the start of function execution,
or defer tracking setup until they are. This can generally be avoided
by not tracking globals that are written during the function's
execution, but other safeguards will probably be necessary to ensure
that it works properly.
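All three safeguards start from the same analysis step: enumerating
which globals a function actually loads. A rough sketch of that step
using today's `dis` module (the patch itself works at the bytecode
level; the helper name is mine, and restricting the scan to code
between a loop setup and its jump target is omitted here):

```python
import dis

def tracked_global_candidates(func):
    """Return the set of names func loads via LOAD_GLOBAL."""
    return {
        ins.argval
        for ins in dis.get_instructions(func)
        if ins.opname == "LOAD_GLOBAL"
    }

def f(seq):
    total = 0
    for s in seq:
        total += len(s)   # `len` is resolved as a global each pass
    return total
```

A real implementation would also have to cross-check this set against
STORE_GLOBAL targets to honor the third safeguard.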
Jeff> ... how do you make sure that this optimization is never a
Jeff> pessimization ...
I expect that in the majority of cases either my idea or Jeremy's will be
a net win, especially after seeing his timing data. I'm willing to accept
that in some situations the code will run slower. I'm confident those
cases will be a small minority.
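The net-win claim is easy to spot-check on any interpreter with
`timeit`, comparing an uncached dotted-global lookup against a locally
cached one (exact numbers vary by version and machine):

```python
import timeit

# Resolve math.sqrt from scratch on every call ...
uncached = timeit.timeit("math.sqrt(2.0)",
                         setup="import math", number=100_000)
# ... versus binding it to a local name once, as TRACK_GLOBAL would.
cached = timeit.timeit("sqrt(2.0)",
                       setup="import math; sqrt = math.sqrt",
                       number=100_000)
print(f"uncached {uncached:.4f}s  cached {cached:.4f}s")
```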
Tim Peters can construct cases where dicts perform badly. Does that mean
Python shouldn't have dicts? ;-)
Skip