program surgery vs. type safety
donn at u.washington.edu
Thu Nov 13 01:47:56 CET 2003
In article <9a6d7d9d.0311120822.553cd347 at posting.google.com>,
aaron at reportlab.com (Aaron Watters) wrote:
> I'm doing a heart/lung bypass procedure on a largish Python
> program at the moment and it prompted the thought that the
> methodology I'm using would be absolutely impossible with a
> more "type safe" environment like C++, C#, java, ML etcetera.
> Basically I'm ripping apart the organs and sewing them back
> together, testing all the while and the majority of the program
> at the moment makes no sense in a type safe world... Nevertheless,
> since I've done this many times before I'm confident that it
> will rapidly all get fixed and I will ultimately come up with
> something that could be transliterated into a type safe system
> (with some effort). It's the intermediate development stage
> which would be impossible without being able to "cheat". A type
> conscious compiler would go apoplectic attempting to make sense of
> the program in its present form.
> If I were forced to do the transformation in a type safe way
> I would not be able to do as much experimentation and backtracking
> because each step between type safe snapshots that could be tested
> would be too painful and expensive to throw away and repeat.
> This musing is something of a relief for me because I've lately
> been evolving towards the view that type safety is much more
> important in software development than I have pretended in the past.
It's interesting that you lump ML in with the rest of those
languages. There are at least a few people around who reject
any thinking on type safety if it's cast in the context of
C++ or Java, because the strict static typing, type inference
and other tools you don't get with either of those languages
make them poor representatives. But ML has that stuff.
I have the sources here for a largish Python program. We have
been using it here in production for some months, and I have
a collection of changes to adapt it to our environment. A lot
of changes, by my standards - 4560 lines of context diff, plus
some new modules and programs. I have a minor upgrade from
the author, and this afternoon I finished patching in our changes.
That is, I have run the context diffs through patch, and hand
patched what it couldn't deal with. So I have one automated
structural analysis tool helping me out here - patch. I will
also be able to run them through the "compiler" to verify that
they're still syntactically correct, but that won't help much
here - patch already noticed the kind of local changes that would
make for syntactical breakage. I'm more concerned about non-local
changes - some other function that now behaves differently than
it did when we wrote a change around it.
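For what it's worth, that syntax check amounts to byte-compiling the patched modules without importing them. A minimal sketch (the module source here is made up) shows both what it catches and what it can't: a stale caller that is still syntactically fine sails right through.

```python
import os
import py_compile
import tempfile

# A freshly patched module that is syntactically fine, even though
# lookup() may have changed shape upstream - exactly the kind of
# non-local break a syntax check cannot see.
src = "def caller(key):\n    return lookup(key).upper()\n"

path = os.path.join(tempfile.mkdtemp(), "patched.py")
with open(path, "w") as f:
    f.write(src)

# doraise=True turns syntax errors into py_compile.PyCompileError;
# this module byte-compiles cleanly, so no error is raised here.
py_compile.compile(path, doraise=True)
print("byte-compiles cleanly")
```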
There's no guarantee that if this program were written in ML
instead, I'd find every upgrade error, but it would be a hell
of a lot better than patch.
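The sort of non-local break in question might look like this in Python - a hypothetical sketch with made-up function names, of an upgrade that patch and the byte-compiler both wave through:

```python
# Hypothetical upgraded function: it used to return a plain string,
# and the upgrade changed it to return a list of strings.
def lookup(key):
    return [key.upper()]

# A local change written against the old, string-returning version.
# Nothing here is syntactically wrong, so neither patch nor the
# byte-compiler objects; it only breaks when it actually runs.
def caller(key):
    return lookup(key).upper()

try:
    caller("x")
except AttributeError as err:
    print("caught only at run time:", err)
```

An ML-family compiler would reject the stale caller at compile time, because lookup's result type no longer supports what the caller asks of it.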
If I were as confident as you that ``it will rapidly all get
fixed,'' then I guess it would not be an issue. But my
experience is that too much of it won't get fixed until it
breaks in production, and I hate to mess with it for that
reason. I find Haskell and Objective CAML kind of liberating
in this way - I can go in and really tear it up, and the
compiler won't let me call it finished until all the boards
are back, wires and fixtures re-connected - stuff that I can't
see but it can.
Donn Cave, donn at u.washington.edu