[Python-Dev] Re: Magic number needs upgrade
Guido van Rossum
Tue, 22 Apr 2003 20:49:03 -0400
> > What makes Raymond's changes different?
> * They are thoroughly tested.
> * They are decoupled from the surrounding code and
> will survive changes to ceval.c and newcompile.c.
> * They provide some benefits without hurting anything else.
What are the benefits? I see zero improvement. And more code hurts.
> * They provide a framework for others to build upon.
> The scanning loop and basic block tester make it
> a piece of cake to add/change/remove new code transformations.
> CALL_ATTR ought to go in when it is ready.
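[Editorial aside: the kind of transformation framework described above — scan the bytecode, respect basic-block boundaries, rewrite short opcode sequences — can be illustrated with a toy pass over symbolic instructions. This is a sketch, not Raymond's actual code; CPython's real optimizer operates on the raw code string in the compiler.]

```python
# Toy peephole pass over a symbolic instruction list (illustrative only;
# the real 2.3-era optimizer rewrites raw bytecode in the compiler).
def peephole(instructions):
    """Fold LOAD_CONST x; UNARY_NEGATIVE into LOAD_CONST -x."""
    out = []
    i = 0
    while i < len(instructions):
        op, arg = instructions[i]
        nxt = instructions[i + 1] if i + 1 < len(instructions) else None
        if op == "LOAD_CONST" and nxt == ("UNARY_NEGATIVE", None):
            # Fold the negation at compile time instead of executing it.
            out.append(("LOAD_CONST", -arg))
            i += 2
        else:
            out.append((op, arg))
            i += 1
    return out

code = [("LOAD_CONST", 5), ("UNARY_NEGATIVE", None), ("RETURN_VALUE", None)]
print(peephole(code))  # [('LOAD_CONST', -5), ('RETURN_VALUE', None)]
```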
No, only if it really makes a difference. We can't expect to beat
Parrot by accumulating an endless string of theoretical improvements
that each contribute 0.1% speedup to the average application.
> It certainly provides measurable speed-up in the targeted behavior.
> It just needs more polish so that it doesn't slow down other
> pathways. The benefit is real, but in real programs it is being
> offset by reduced performance in non-targeted behavior. With some
> more work, it ought to be a real gem. Unfortunately, it is tightly
> coupled to the implementation of new and old-style class. Still, it
> looks like a winner.
That's what I thought, until I benchmarked it. It's possible that it
can be saved. It's also possible that we've pretty much reached a
point where any optimization we think of is somehow undone by the
effect of more code and hence less code locality.
> What we're seeing is a consequence of Amdahl's law and Python's
> broad scope. Instead of a single hotspot, Python exercises many
> different types of code and each needs to be optimized separately.
> People have taken on many of these and collectively they are having
> a great effect. The proposals by Ping, Aahz, Brett, and Thomas
> are important steps to address untouched areas.
Possibly. Or possibly we need to step back and redesign the
interpreter from scratch. Or put more effort in e.g. Psyco.
> I took on the task of making sure that the basic pure python code
> slithers along quickly. The basics like "while", "for", "if", "not"
> have all been improved. Lowering the cost of those constructs
> will result in less effort spent bypassing them with vectorized
> code (map, etc). Code in something like sets.py won't show much
> benefit because so much effort had been directed at using filter,
> map, dict.update, and other high-volume C-coded functions and
And I'm happy that Python 2.3 is significantly faster than 2.2 (15% in
> Any one person's optimizations will likely help by a few percent
> at most. But, taken together, they will be a big win.
Yet, I expect that we're reaching a limit, or at least crawling up
> > I also wonder why this is done unconditionally, rather than only with
> > -O.
> Neal, Brett, and I had discussed this a bit and I came to the conclusion
> that these code transformations are like the ones already built into the
> compiler -- they have some benefit, but cost almost nothing (two passes
> over the code string at compile time). The -O option makes sense for
> optimizations that have a high time overhead, throw-away debugging
> information, change semantics, or reduce feature access. IOW, -O is
> for when you're trading something away in return for a bit of speed
> in production code.
Yeah, but right now -O does *nothing* except remove asserts. We might
as well get rid of it.
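[Editorial aside: the assert-stripping behavior of -O can be checked directly; it still works the same way in modern CPython. A small sketch, run in subprocesses so the parent interpreter's own flags don't interfere:]

```python
# -O strips asserts by compiling with __debug__ set to False.
import subprocess
import sys

prog = "assert False, 'stripped under -O'; print(__debug__)"
normal = subprocess.run([sys.executable, "-c", prog],
                        capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", prog],
                           capture_output=True, text=True)

print(normal.returncode != 0)    # True: the assert fires without -O
print(optimized.stdout.strip())  # 'False': assert removed, __debug__ is False
```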
> There is essentially no benefit to not using the optimized bytecode.
Of course not, if you keep putting all optimizations in the default
If we had only optimized unary minus followed by a constant in -O
mode, the (several!) bugs in that optimization would have been caught
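[Editorial aside: the unary-minus-on-a-constant optimization discussed here did land in the default compiler, and in today's CPython it is done at compile time — a literal like -5 reaches the code object as a single folded constant rather than LOAD_CONST 5 followed by UNARY_NEGATIVE:]

```python
# The folded constant appears directly in the code object's constant table.
code = compile("x = -5", "<example>", "exec")
print(-5 in code.co_consts)  # True: negation already applied at compile time
```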
PS, Raymond, can I ask you to look at the following bugs and patches
that are assigned to you: bugs 549151 (!), 557704 (!), 665835, 678519,
patches 708374, 685051, 658316, 562501. The (!) ones have priority.
It's okay if you don't have time, but in that case say so, so I can
find another way to get them addressed.
--Guido van Rossum (home page: http://www.python.org/~guido/)