Bytecode optimisation
William Tanksley
wtanksle at dolphin.openprojects.net
Sat May 22 17:27:51 EDT 1999
On Wed, 19 May 1999 00:58:37 GMT, Christian Tismer wrote:
>Corran Webster wrote:
>> Michael Hudson <mwh21 at cam.ac.uk> wrote:
>You can save lookups by semantic changes, as
>the bytecodehacks do already.
>But the majority of the time sits in the C API, in the interpreter
>itself, and in malloc (on Windows, at least).
Is that true? I keep on hearing that name lookups are a major slowdown.
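
To make the lookup question concrete, here's the classic by-hand trick
that the bytecodehacks automate (a minimal sketch, in modern Python
syntax):

    # Hoisting a builtin lookup out of a hot loop. Each bare reference
    # to "abs" inside the loop costs a globals-then-builtins dict probe;
    # a local name is a fast slot access.
    def slow(data):
        total = 0
        for x in data:
            total = total + abs(x)   # name lookup on every iteration
        return total

    def fast(data):
        _abs = abs                   # one lookup, bound to a local
        total = 0
        for x in data:
            total = total + _abs(x)  # LOAD_FAST instead of LOAD_GLOBAL
        return total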
>> >Have you looked at Skip Montanaro's optimizer:
>> >http://www.automatrix.com/~skip/python/spam7/optimizer.html
>I could whine about all the good work he has put into it,
>compared to the results.
...but if you did, you'd be missing the #1 point about open source
software: if someone else does it, it doesn't take you ANY work.
(Corollary: if it's stupid, but it works, it's not stupid.)
>> Or as I mentioned in my reply to Mark-Andre Lemburg, there may be a case
>> for an optimisation level which assumes sanity for many standard things,
>> so that
>>     for x in range(1000):
>>         for y in range(1000):
>>             do_stuff()
>> could assume that do_stuff() doesn't change the meaning of range and so
>> this could be optimised to:
>>     r = range(1000)
>>     for x in r:
>>         for y in r:
>>             do_stuff()
>Looks promising in the first place. But you quickly realize
>that this is no real optimization since you will do it
>by hand, anyway, or use xrange.
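
(For reference, here's the by-hand version Christian means, with 1.5-era
xrange so the list is never materialized at all; do_stuff is just the
placeholder from the quoted example, and in today's Python, range itself
is lazy:)

    r = xrange(1000)      # lazy index object, reusable across loops
    for x in r:
        for y in r:
            do_stuff()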
But _why_? If I become really interested in speed, I don't want to slave
away making my beautiful Python code look like C. I would rather write
in C.
And I would rather have _less_ incentive to write in C, and zero incentive
to write baroque Python (for speed). An automated optimizer certainly
could handle this.
>The loop opcode will continue to call a sequence_getitem anyway,
>whether it knows the type or not.
Ah, another place for optimization, eh?
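
You can watch that generic dispatch yourself with the dis module (a
quick sketch; opcode names vary by interpreter version):

    import dis

    def f(seq):
        for x in seq:
            pass

    dis.dis(f)
    # 1.5.2 shows SETUP_LOOP / FOR_LOOP, which goes through the generic
    # sequence-getitem protocol no matter what "seq" really is; newer
    # interpreters show GET_ITER / FOR_ITER -- equally type-blind.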
>> >if-you-get-better-than-twenty-percent-speed-up-I'll-be-impressed-ly y'rs
>> >Michael
>Guys, you are at the wrong end. This is another complete waste
>of time. The whole interpreter overhead is bound to an average
>of about 30 percent, when everything is C-code. I'd wonder if
>bytecode optimization would give more than 15 percent on
>average, when semantics are not changed.
15% is a KILLER speedup for an optimization that requires no algorithm
changes. Personally, I would consider that a vindication -- and even
something less would not obviously be a loss.
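
(Back-of-envelope, taking Christian's numbers at face value and assuming
the optimizer removes half the interpreter overhead -- both figures are
assumptions, not measurements:)

    total = 1.0                         # normalized runtime
    overhead = 0.30                     # Christian's interpreter share
    optimized = total - overhead * 0.5  # halve the overhead -> 0.85
    print("speedup: %.2fx" % (total / optimized))  # ~1.18x, i.e. ~15% off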
>You need to invent special new bytecodes for the things
>you want to optimize. If you can find and implement these,
>then it makes sense to optimize with type inference.
This is one reason why I brought up token-threading. Think of it as
16-bit "byte"codes: a 64K instruction space, or maybe even 4G with
32-bit codes. Enough to assign a code to every user function in a
program (for use only when we're certain, of course).
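
Here's a toy of what I mean (all opcode numbers invented for
illustration; my_func stands in for a user function the compiler has
proven safe to bind directly):

    # Token-threaded dispatch: codes below 256 are generic bytecodes; the
    # 16-bit space above that is handed out to specific, known-safe user
    # functions, so the dispatch jumps straight to them -- no name lookup.
    PUSH_CONST, ADD, CALL_TOKEN = 1, 2, 3

    def my_func(a, b):                 # hypothetical "known" user function
        return a * b

    token_table = {0x4000: my_func}    # 16-bit token -> function, directly

    def run(code, consts):
        stack = []
        for op, arg in code:
            if op == PUSH_CONST:
                stack.append(consts[arg])
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == CALL_TOKEN:
                b, a = stack.pop(), stack.pop()
                stack.append(token_table[arg](a, b))
        return stack.pop()

    print(run([(PUSH_CONST, 0), (PUSH_CONST, 1), (CALL_TOKEN, 0x4000)],
              [6, 7]))                 # -> 42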
And Anton has finally published his stack-code optimizer at
http://www.complang.tuwien.ac.at/projects/rafts.html. It isn't
complete, and he hasn't published the source yet (though it's based on a
GPLed product), but it already compiles Forth to as much as 80% of C speed
(in a special case, admittedly).
>no-longer-optimizing-for-less-than-50-percent-ly y'rs - chris
That's not a wrong attitude. You'll accept no lesser result than
innovation. However, incremental gains are also valuable, and you
shouldn't reject or even discourage those who attempt to make them.
I wonder whether Python 2 will be incremental or revolutionary. I'm kind
of hoping for revolutionary, but it sure is a hard choice.
>Christian Tismer :^) <mailto:tismer at appliedbiometrics.com>
--
-William "Billy" Tanksley
"But you shall not escape my iambics."
-- Gaius Valerius Catullus