preemptive OOP?

Mark Elston m.elston at
Wed Oct 4 19:55:32 CEST 2006

* Kent Johnson wrote (on 10/4/2006 10:04 AM):
> Mark Elston wrote:
>> ...
>> Without this prior planning, any expansion (not to mention bug fixing)
>> becomes more difficult and makes the resulting code more brittle.  While
>> not all planning for the future requires OO, this is one mechanism that
>> can be employed effectively *because* it is generally well understood
>> and can be readily grasped *if* it is planned and documented well.
> Unfortunately prior planning is an attempt to predict the future.
> Correctly planning for future requirements is difficult. It is possible
> to expand code without making it brittle.


I work in an environment where we plan our 'feature' implementation
several revisions in the future.  We know where we are going because
we have a backlog of end user requests for new features and a limited
pool of people to implement them.

Knowing what will have to change in the future makes this kind of
planning *much* simpler.

However, even without explicit requests, I find planning for change a
lot easier than you seem to suggest.  I have worked in the
field for over 10 years now and I have a pretty good idea of the
kinds of things our users will need.  I also have a pretty good idea
of the kinds of things we can introduce that users will find useful.
Planning for the introduction of these things is pretty useful when
we have short turn-around time between revisions and have to implement
any number of new features during these iterations.

And adding new features and handling new requests in a well-thought-out
manner helps us keep the system from becoming brittle.

BTW, the kind of SW we develop is for large pieces of test equipment
used in semiconductor test.  This SW covers the range from embedded and
driver level supporting custom hardware to end-user applications.  There
is simply no way we could survive if we tried to develop software in any
kind of an ad hoc manner.

> Robert Martin has a great rule of thumb - first, do the simplest thing
> that meets the current requirements. When the requirements change,
> change the code so it will accommodate future changes of the same type.
> Rather than try to anticipate all future changes, make the code easy to
> change.
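To make that rule concrete, here's a small hypothetical Python sketch
(the report/formatter names are mine, not from the thread): start with
the simplest code that meets today's requirement, and when a second
requirement of the same kind arrives, generalize along the axis that
actually changed rather than guessing at every possible axis up front.

```python
# First pass: the simplest thing that works -- one hardcoded format.
def export_report(records):
    return "\n".join(",".join(str(v) for v in r) for r in records)

# Later, a second output format is requested.  Rather than bolting on
# an if/else, we generalize along the one axis that changed: the way a
# row is formatted.  Adding a third format is now a one-line entry.
FORMATTERS = {
    "csv": lambda r: ",".join(str(v) for v in r),
    "tsv": lambda r: "\t".join(str(v) for v in r),
}

def export_report_v2(records, fmt="csv"):
    line = FORMATTERS[fmt]
    return "\n".join(line(r) for r in records)
```

The point is that the dispatch table was introduced only after the
second requirement appeared, so it is known to model a real variation
rather than a speculative one.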

In our case the requirements don't really change.  They do get
augmented, however.  That is, the users will want to continue to do the
things they already can - in pretty much the same ways.  However, they
will also want to do additional things.

Robert's rule of thumb is applied here as well.  We may have a pretty
good idea of where we are going, but we can get by for now implementing
a minimal subset.  However, if we don't anticipate where we are going
to be in the next revision or two it is very likely to entail some
substantial rewrite of existing code when we get there.  That results in
an unacceptable cost of development - both in terms of $$ and time.  Not
only is the code obsoleted, but so are all the components that depend on
the rewritten code and all of the tests (unit and integration) for all
the affected components.

>> There is certainly a *lot* of 'Gratuitous OOP' (GOOP?) out there.  This
>> isn't a good thing.  However, that doesn't mean that the use of OOP in
>> any given project is bad.  It may be inappropriate.
> In my experience a lot of GOOP results exactly from trying to anticipate 
> future requirements, thus introducing unneeded interfaces, factories, etc.

While we do our best to avoid this, it *does* sometimes happen.
However, it is not as much of a problem as the reverse.  If an interface
is developed that turns out not to be very useful, we can always remove
it with very little cost.  If we have to *replace* or substantially
modify an existing mechanism to support a new feature the ripple effect
can cripple our schedule for several release iterations.
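A contrived Python sketch of the asymmetry (the logger example is my
own, purely for illustration): the speculative version carries an
abstract interface and a factory that only ever produce one concrete
class, and stripping those layers out is cheap because callers change
in one obvious way.

```python
from abc import ABC, abstractmethod

# Speculative version: an interface and a factory for a class that
# only ever has a single implementation -- classic GOOP.
class Logger(ABC):
    @abstractmethod
    def log(self, msg):
        ...

class FileLogger(Logger):
    def log(self, msg):
        return "[log] " + msg

class LoggerFactory:
    @staticmethod
    def create():
        return FileLogger()

# Removing the unneeded layers is a local, mechanical change: callers
# swap LoggerFactory.create() for a direct constructor call, and the
# rest of the code is untouched.
class SimpleLogger:
    def log(self, msg):
        return "[log] " + msg
```

Replacing or substantially reworking a mechanism runs the other way:
every dependent component and its tests must change, which is the
ripple effect described above.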

OTOH, your point is a good one:  If we really didn't know where we
wanted to be in the future then making a wild guess and heading off
in some random direction isn't likely to be very beneficial either.
In fact it is likely to be more costly in the long run.

That strikes me as "Crap Shoot Development" (a relative of POOP? :) ).


"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it."

  -- Brian Kernighan of C

More information about the Python-list mailing list