Which happens first?
Tim Peters
tim.one at home.com
Sun Apr 8 19:00:02 EDT 2001
[Carlos Alberto Reis Ribeiro]
> Just curious, but is it so hard to change the grammar to detect
> negative integer literals?
I never took the time to look, and probably never will.
> ...
> On the other hand, I can see that this is not a big issue the vast
> majority of time (it would save only a few bytes of bytecode, and
> almost nothing on time...)
Bingo. If it interests *you*, we're not hiding the source code <wink>. The
only real reason in my mind to pursue it would be to stop this old (but rare)
surprise, here using Python on a box with 32-bit ints:
>>> int("-2147483648") # string->int accepts one literal ...
-2147483648
>>> eval("-2147483648") # ... that the compiler can't handle
Traceback (most recent call last):
File "<stdin>", line 1, in ?
OverflowError: integer literal too large
>>>
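The usual dodge, for anyone bitten by this, is the same one C programmers
use for INT_MIN: build the value without ever writing the out-of-range
positive literal. A one-line sketch:

    most_negative = -2147483647 - 1   # both literals fit in a 32-bit int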
> ...
> Ok. So let's change my conclusion as follows: "Avoid any calculation
> on literals inside loops in Python, as the compiler will not optimize
> anything. Put explicitly out of the loop everything that you can.
It's not a rule I follow, because it rarely makes a difference worth the
bother. It's good advice for loops whose speed you care about a lot, but
identifying those is best put off until profiling reveals the hot spots.
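Concretely, the hand-hoisting being advised looks like this (a toy sketch,
all names invented):

    # recomputed on every trip around the loop:
    total = 0
    for t in timestamps:
        total = total + t % (60 * 60 * 24)

    # hoisted by hand, since the compiler won't fold 60*60*24 for you:
    secs_per_day = 60 * 60 * 24
    total = 0
    for t in timestamps:
        total = total + t % secs_per_day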
For example, this appears inside a doubly-nested loop in a program I happened
to be working on yesterday:
    lnbot = ln(bot)
    diff = bot - (top * below) % bot
    if diff:
        lg2 = (ln(diff) - lnbot)/_log(2)
        if lg2 <= smallest_lg:
            k = ((top * b2nshow * below)>>1) / bot % (b2nshow * b2)
            k = tostr(k, b2)
            k = '0' * (NSHOW + 1 - len(k)) + k
            k = k[0] + "." + k[1:]
            msg(" below", tostr_exp(below, b1, e),
                round(lg2, 2), k)
            if lg2 < smallest_lg:
                smallest_lg = lg2
                smallest_e2 = e2
                msg("^" * 60)
    diff = (top * above) % bot
    if diff:
        lg2 = (ln(diff) - lnbot)/_log(2)
        if lg2 <= smallest_lg:
            k = ((top * b2nshow * above)>>1) / bot % (b2nshow * b2)
            ... much the same as the above ...
These are mostly long (unbounded) int operations, so repeating computations
like "top * b2nshow" and "b2nshow * b2" can be very expensive, and I could
pee away a lot of time factoring them out by hand. But profiling showed that
less than 1% of 1% of the runtime was being spent in these two sections:
almost all of it was being consumed by the call before them:
below, above = modmin(top, 0, bot, thislow, thishigh)
Replacing a key algorithm in the guts of modmin with a better approach took
about 15 minutes and slashed the total runtime by a factor of 3; making 10
ugly changes to the rest of the loop code could have consumed an hour of my
time and *would* have bought nothing worth getting even if it came for free.
BTW, it was a *surprise* to me that these sections weren't accounting for
much more of the runtime than they did, so if I had gone with "my intuition"
I would have wasted time spiffing them up. As is, I no longer like the
approach here at all, and will throw it away.
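Checking is cheap, too: the stdlib profiler is all it takes to see where
the time really goes before touching anything. A minimal sketch, assuming
the program's entry point is a function named main:

    import profile
    profile.run('main()')   # prints per-function call counts and times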
> This kind of code is optimized in some other languages/compilers,
> such as C or Pascal, but not in Python."
Curiously, in my experience the percentage of time I spend optimizing a
program "by hand" is lower in Python than in C (etc), and I get better
speedups despite spending less time on it. A prime reason is that you're
*never* going to wring low-level machine-limit efficiency out of Python code,
so you spend more of your time dreaming up better *algorithms* than in
endlessly tweaking a poor algorithm that's gotten too complicated to throw
away because of all the obscure tweaking you've already done <0.7 wink>.
> ...
> In my case, I tend to use lots of small Python scripts to parse log
> files, some of them in binary format. So I end up coding stuff like
> this sometimes, to quickly get some information from inside complex
> records or structures.
"Quickly get" is ambiguous: are you primarily interested in minimizing the
time it takes *you* to paste together code sufficient to extract the info of
interest, or in minimizing the time it takes the *computer* to run your code?
If the former, Python is an excellent choice; if the latter, it may or may
not be sufficient for your needs.
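FWIW, for picking fields out of fixed-layout binary records, the standard
struct module is the usual tool. A tiny sketch, with a wholly invented
record layout (4-byte timestamp, 2-byte event id, 2-byte status, all
little-endian):

    import struct

    REC_FMT = "<IHH"
    REC_SIZE = struct.calcsize(REC_FMT)   # == 8 for this layout

    def parse_records(data):
        # assumes len(data) is an exact multiple of REC_SIZE
        result = []
        for off in range(0, len(data), REC_SIZE):
            result.append(struct.unpack(REC_FMT, data[off:off + REC_SIZE]))
        return result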
Over time, I expect you'll find yourself organizing those "lots of small
Python scripts" into reusable classes and modules, and then reorganizing
*those* over and over again as your ambitions grow. Such continual
refactoring is par for the course in Python, unlike in C or Pascal,
because it's relatively *easy* in Python. In the end, one of two things
often happens: your ambitions stop growing, and with a now-solid design you
can *start* worrying about speed (if indeed that's still "an issue" -- often
it's not); or you lose interest in the whole area, and then it's a lot easier
on the soul to throw away 500 lines of Python than 10,000 lines of C.
There's a danger here: "optimizing" code by hand *before* you've endured
those dozen rounds of refactoring makes rethinking your approach harder to
implement, which in turn makes it more likely you'll fall back to C-mode
endless tweaking of (what very often is) a poor initial approach. In the
end, your code will be better, and usually faster too, if you delight in
Python's expressivity rather than worrying over low-level tricks.
As a hypothetical and possibly senseless example, let's say it finally occurs
to you that you could design a pleasant little language for *describing* the
structure of all those log files you're interested in, and a language for
identifying the parts you care about. Then maybe you'll write very general
and pig-slow "interpreters" for those little languages, just to see how it
works. Say it works great. Then you may want to redesign your interpreter
as a compiler that spits out a Python program (as a giant string to be
exec'ed) tailored for each
log file type. Since at this point you haven't wasted any of your time
optimizing, tweaking or fiddling the now-doomed interpreter, you *get* to
this point a lot sooner. Say the machine-generated Python programs run a lot
faster, but still not fast enough (both quite possible). Then you look at
what they're doing, and rethink the problem again. Perhaps you'll care to
generate C code instead, or perhaps you'll identify some key
string-processing tasks you simply can't do quickly enough in pure Python,
and write a C extension module for those parts. Whatever, again because you
haven't burned time optimizing, you again get to this point much quicker. In
the end you come up with a perfectly general solution that runs 100x faster
than anything else on the market, you sell it to Oracle for 500 million
dollars, and retire. Or maybe in the end you decide it was a fool's errand,
and just live with the pig-slow interpreters. See? It's win-win <wink>.
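For the curious, the "giant string to be exec'ed" step can be smaller than
it sounds. A toy sketch, with a wholly invented fixed-width log layout:
given the field widths, generate a parser specialized to them as Python
source, then exec it into existence:

    def make_parser(field_widths):
        # emit the source for a parser hard-wired to these exact fields
        src = ["def parse(line):"]
        parts = []
        pos = 0
        for name, width in field_widths:
            parts.append("%r: line[%d:%d]" % (name, pos, pos + width))
            pos = pos + width
        src.append("    return {%s}" % ", ".join(parts))
        namespace = {}
        exec("\n".join(src), namespace)
        return namespace["parse"]

    parse = make_parser([("host", 8), ("code", 3), ("nbytes", 10)])
    # parse("web01   2040000001234") ->
    #     {'host': 'web01   ', 'code': '204', 'nbytes': '0000001234'}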
treat-python-like-c-and-you'll-bring-c's-problems-with-you-ly y'rs - tim