preemptive OOP?

Kent Johnson kent at kentsjohnson.com
Wed Oct 4 13:04:47 EDT 2006


Mark Elston wrote:
> * Kent Johnson wrote (on 9/30/2006 2:04 PM):
>> John Salerno wrote:
>>> So my question in general is, is it a good idea to default to an OOP 
>>> design like my second example when you aren't even sure you will need 
>>> it? I know it won't hurt, and is probably smart to do sometimes, but 
>>> maybe it also just adds unnecessary code to the program.
>> In general, no. I'm a strong believer in You Aren't Going to Need It 
>> (YAGNI):
>> http://c2.com/xp/YouArentGonnaNeedIt.html
>>
>> because it *does* hurt
>> - you have to write the code in the first place
>> - every time you see a reference to MyNotebook you have to remind 
>> yourself that it's just a wx.Notebook
>> - anyone else looking at the code has to figure out that MyNotebook is 
>> just wx.Notebook, and then wonder if they are missing something subtle 
>> because you must have had a reason to create a new class...
>>
>> and so on... Putting in extra complexity because you think you will need 
>> it later leads to code bloat. It's usually a bad idea.
>>
>> Possible exceptions are
>> - If you are really, really, really sure you are going to need it 
>> really, really soon and it would be much, much easier to add it now than 
>> after the next three features go in, then you might consider adding it 
>> now. But are you really that good at predicting the future?
>> - When you are working in a domain that you are very familiar with and 
>> the last six times you did this job, you needed this code, and you have 
>> no reason to think this time is any different.
>>
>> You struck a nerve here; at work I have seen very clearly the difference 
>> between projects that practice YAGNI and those designed to meet every 
>> possible contingency. It's the difference between running in running 
>> shoes and running in wet, muddy boots.
>>
>> Kent
> 
> I have only caught the tail of this thread so far so I may have missed
> some important info.  However, Kent's response is, I think, a bit of
> an oversimplification.
> 
> The answer to the original question, as quoted above, is ... it depends.
> On several things, actually.

Of course.

> However, when an application (or library) is designed to provide a more
> 'general purpose' solution to one or more problems and is likely to have
> a lifetime beyond the 'short term' (whatever that may mean to you), then
> OO can start to pay off.  In these kinds of applications you see the
> need for future maintenance and a likely need to expand on the existing
> solution to add new features or cover new ground.  This is made easier
> when the mechanism for this expansion is planned for in advance.

I am a fan of OOP and use it all the time. I was just arguing against 
using it when it is not called for.
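
To make that concrete, here is a minimal sketch (assuming wxPython; the
names are only illustrative) of the kind of empty wrapper I was arguing
against, next to the plain alternative:

import wx

class MyNotebook(wx.Notebook):
    # Adds nothing of its own -- anyone reading the code still has to
    # check the class body to confirm it really is just a wx.Notebook.
    pass

# versus simply using the library class where it is needed:
# notebook = wx.Notebook(parent)
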
> 
> Without this prior planning, any expansion (not to mention bug fixing)
> becomes more difficult and makes the resulting code more brittle.  While
> not all planning for the future requires OO, this is one mechanism that
> can be employed effectively *because* it is generally well understood
> and can be readily grasped *if* it is planned and documented well.

Unfortunately, prior planning is an attempt to predict the future, and
correctly anticipating future requirements is difficult. It is also
possible to expand code without making it brittle.

Robert Martin has a great rule of thumb: first, do the simplest thing
that meets the current requirements. When the requirements change, change
the code so that it will accommodate future changes of the same type.
Rather than trying to anticipate all future changes, make the code easy
to change.
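
For example (a made-up sketch, not from any real project): suppose the
current requirement is only to report sizes in bytes.

def format_size(n_bytes):
    # Simplest thing that meets the current requirement.
    return "%d bytes" % n_bytes

When a real requirement arrives to also report kilobytes, restructure so
that further changes of the same kind are cheap, instead of guessing at
every possible unit up front:

UNITS = {"bytes": 1, "KB": 1024}

def format_size(n_bytes, unit="bytes"):
    # Adding another unit is now a one-line change to UNITS.
    return "%.1f %s" % (n_bytes / float(UNITS[unit]), unit)
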

> 
> There is certainly a *lot* of 'Gratuitous OOP' (GOOP?) out there.  This
> isn't a good thing.  However, that doesn't mean that the use of OOP in
> any given project is bad.  It may be inappropriate.

In my experience a lot of GOOP results exactly from trying to anticipate 
future requirements, thus introducing unneeded interfaces, factories, etc.
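
A typical pattern looks something like this (a hypothetical sketch, not
taken from any particular project):

class AbstractParser(object):
    # An "interface" with exactly one implementation, created just in case.
    def parse(self, text):
        raise NotImplementedError

class CsvParser(AbstractParser):
    def parse(self, text):
        return [line.split(",") for line in text.splitlines()]

class ParserFactory(object):
    # A factory that can only ever produce one kind of parser.
    def create(self, kind="csv"):
        return CsvParser()

when the simple, direct version meets the actual requirement just as well:

def parse_csv(text):
    return [line.split(",") for line in text.splitlines()]
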

Kent


