tdelaney at avaya.com
Mon Aug 12 09:53:34 CEST 2002
> From: Matt Gerrans [mailto:mgerrans at mindspring.com]
> > All of which are exactly the reasons to make it *work*, then
> > make it fast. Starting with s[i:j] == t without any
> > optimization is a fine way to begin... then refactor when
> > the tests show it's all working perfectly.
> I concur. In fact, call me crazy, but I think it is fun to first get
> something working, then chisel away at the slow parts (if necessary) and
> watch as the performance improves. The profile module makes this a
> pleasure. If you start out with all the optimizations in the first place,
> you could miss out on all this fun (assuming you got them right in the
> first place, of course).
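As an aside, the "make it work, then profile it" workflow praised above might look something like this in modern Python, using `cProfile` (the faster successor to the `profile` module); the deliberately slow function is an invented example, not code from this thread:

```python
import cProfile
import io
import pstats


def slow_concat(n):
    # Works correctly, but is quadratic: each += copies the whole string.
    # Get it right first; the profiler will point at this later.
    s = ""
    for i in range(n):
        s += str(i)
    return s


# Profile the working-but-slow version to find the hot spots.
profiler = cProfile.Profile()
profiler.enable()
result = slow_concat(1000)
profiler.disable()

# Print the top entries sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The report names `slow_concat` among the costliest calls, which is the cue to refactor (for instance, to `"".join(...)`) and re-profile.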
Not wrong. I'm currently writing my own Python coverage tool (for various
reasons trace.py and coverage.py were not suitable, and it turned out to be
easier to rewrite than to extend ;)
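For context, the core of such a tool can be built on `sys.settrace`, the same hook `trace.py` uses. This is only a minimal sketch of the general technique, not the tool described in the post; the `Coverage` class and `branchy` function are invented for illustration:

```python
import sys


class Coverage:
    """Minimal line-coverage tracer: records every (filename, lineno) executed."""

    def __init__(self):
        self.lines = set()

    def _trace(self, frame, event, arg):
        # The global trace fires on 'call'; returning self._trace installs it
        # as the local trace, which then receives 'line' events for the frame.
        if event == "line":
            self.lines.add((frame.f_code.co_filename, frame.f_lineno))
        return self._trace

    def run(self, func, *args):
        sys.settrace(self._trace)
        try:
            return func(*args)
        finally:
            # Always uninstall the hook, even if func raises.
            sys.settrace(None)


def branchy(x):
    if x > 0:
        return "positive"
    return "non-positive"


cov = Coverage()
result = cov.run(branchy, 5)
# Only the lines actually executed are recorded; the untaken
# 'return "non-positive"' branch is absent from cov.lines.
print(result, sorted(cov.lines))
```

Tracing every line this way is exactly where the slowness comes from: the trace function runs on each executed line, so filtering out uninteresting files (such as the standard library) inside `_trace` as early as possible is one of the big wins.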
My first efforts were *slow*. Even when excluding the standard library.
My current version is two orders of magnitude faster. Oh sorry - I'm not
excluding the standard library with it at this point ;) That'll probably
improve the speed on my main test five-fold.
I keep getting ideas for what would speed it up. Most are tiny speedups, but
a few have been huge. I like huge speedups :)
I hope once I'm done that I will be able to make it available to all
(perhaps even make it into the standard library ;)