optimization question

Andrew Koenig ark at research.att.com
Mon Aug 12 15:57:25 CEST 2002


Peter> Why is it a problem?  Search and replace!

Peter> Anyway, if you truly have so many that you can't replace them
Peter> easily, it's likely the design is very poor, isn't it?  Why 
Peter> would you have so much duplication without already having 
Peter> refactored, not for reasons of performance but simply to 
Peter> make the code more manageable along the way.

Peter> Maybe I'm spoiled by XP and having so many unit tests that
Peter> I am willing to refactor aggressively like this without any
Peter> qualms...

Maybe you haven't been bitten badly enough yet.

As I understand it, one of the tenets of XP is that once the tests
pass, you're done.  The trouble with that notion is that I have seen
too many cases in which programs have bugs that no amount of testing
can ever reveal with certainty.

Such bugs are often associated with semantically unsafe languages
(such as C or C++) or language features (such as threading), but not
always.  In fact, the hardest such bug that I can remember
encountering in my own code was in a program written in a semantically
safe language.

The program in question was a data-compression algorithm that was
intended ultimately to go on each end of a relatively slow transmission
line connecting a computer and a bitmapped display terminal.  The
final version of that algorithm was in C, but I had written the first
version of it in a language that checked all operations for validity
(as Python does) before executing them.

The algorithm was one from the Ziv-Lempel family of compression
algorithms.  The idea was to keep a collection of recently encountered
strings, together with indices representing them, and then send the
indices rather than the strings themselves.  To do so, the program on
each side of the transmission line maintained a binary-tree structure,
with the two structures being maintained in lock step.  The program
also maintained an LRU (least-recently-used) queue of strings
corresponding to the leaves of the tree.  In order to ensure that the
string indices could be packed into 12 bits, the tree was carefully
limited to having exactly 4096 leaves, by the stratagem of recycling
the oldest leaves each time new leaves were needed.
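For concreteness, here is a minimal Python sketch of the general shape
of such a scheme: a 12-bit LZW-style string table capped at 4096
entries, recycling the least-recently-used multi-byte entry when the
table is full.  This is not the original program; the flat table and
the eviction policy are illustrative stand-ins for the binary tree and
leaf queue described above.

```python
from collections import OrderedDict

MAX_CODES = 1 << 12          # 4096: every index fits in 12 bits

def compress(data: bytes) -> list[int]:
    """LZW-style compression with a fixed 4096-entry string table.

    When the table is full, the least-recently-used multi-byte
    entry is recycled to make room (an illustrative policy -- the
    actual program recycled tree leaves, and both ends of the line
    had to apply the identical policy in lock step).
    """
    # Seed with all single bytes; these entries are never recycled.
    table = OrderedDict((bytes([i]), i) for i in range(256))
    free_codes = list(range(256, MAX_CODES))
    out: list[int] = []
    current = b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            table.move_to_end(candidate)   # mark as recently used
            current = candidate
            continue
        out.append(table[current])
        table.move_to_end(current)
        if free_codes:
            code = free_codes.pop()
        else:
            # Recycle the code of the oldest multi-byte entry.
            victim = next(k for k in table if len(k) > 1)
            code = table.pop(victim)
        table[candidate] = code
        current = bytes([byte])
    if current:
        out.append(table[current])
    return out
```

A decompressor on the other end would have to mirror this eviction
policy exactly, which is precisely the kind of lock-step invariant
where a subtle desynchronization can hide.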

Obviously, it was easy to test that the code did the right thing when
it tried to grow the tree beyond 4096 leaves (in which case
there would be no leaves available for recycling).  Nevertheless, there
turned out to be a sequence of input strings that could cause a leaf
to be recycled while it was still being used--I forget exactly how.

What is important here is that the circumstances under which it could
happen were obscure--so obscure that the problem did not show up until
the program had been running for several days on random strings, and I
do not believe that any hand-generated tests would have had a
significant probability of finding it.

After that experience, I am reluctant in general to make large-scale
transformations in programs without advance evidence that doing so
will not introduce subtle errors.  I understand that such
transformations (in the guise of ``refactoring'') are popular these
days, but I've been in this field long enough to see many popular ideas
come and go, and will reserve my judgment on this one until it has
had ample time to prove itself.

-- 
Andrew Koenig, ark at research.att.com, http://www.research.att.com/info/ark
