Code correctness and testing strategies

David wizzardx at
Sun Jun 8 15:20:01 CEST 2008

Thanks for your informative reply.

On Sun, Jun 8, 2008 at 12:28 PM, Ben Finney
<bignose+hates-spam at> wrote:
> David <wizzardx at> writes:


>> My problem is that I haven't run the app once yet during development
>> :-/
> That might be an artifact of doing bottom-up implementation
> exclusively, leading to a system with working parts that are only
> integrated into a whole late in the process.

I did do it in a mostly top-down way, but I never paused the BDD
process to actually run the app :-)

It sounds like what you are suggesting is something like this:

1) Following BDD, get a skeleton app working

Then, your BDD process gets a few extra steps:

Old steps:

1) Write a test which fails for [new feature]
2) Write code for [new feature] to pass the test
3) Refactor if needed

New steps:

4) Run the app as an end-user would, and see that the [new feature] works
5) Write an automated test which does (4), and verifies that the [new
feature] is working correctly
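In other words, something like this minimal sketch of steps (1)-(2)
(using unittest; the `slugify` function is just an invented stand-in
for a [new feature]):

```python
import unittest

# Step 2: the minimal code written to make the step-1 test pass.
def slugify(title):
    """Turn a document title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Step 1: this test is written first, and fails until slugify() exists.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("My New Feature"), "my-new-feature")
```

(run with `python -m unittest <module>`)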

Does this mean that you leave out the formal 'integration' and
'system' testing steps? By actually running the app you are more or
less covering those anyway.

Could you also leave out the unit tests, and just write automated
acceptance tests? I guess that would cause problems if you wanted to
re-use code in other apps. Also, when an acceptance test breaks, it's
harder to see which code is causing the problem.

Also, if you had to implement a few "user stories" to get your app
into a skeleton state, do you need to go back and write all the
missing acceptance tests?

I have a few problems understanding how to write automated acceptance
tests. Perhaps you can reply with a few URLs where I can read more
about this :-)

1) Services

If your app starts and then keeps running indefinitely, how do you
write acceptance tests for it? Do your acceptance tests need to
interact with it from the outside, by manipulating databases, the
system time, restarting the service, etc?

I presume also that acceptance tests need to treat your app as a black
box, so they can only check your app's output (log files, database
changes, etc), and not the state of objects directly.
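To make the question concrete, here's the sort of black-box check I
have in mind (a minimal sketch; the inline "service" here is just a
stand-in for the real installed daemon, and the heartbeat output is
invented for illustration):

```python
import subprocess
import sys
import tempfile

# Stand-in for a long-running service: it writes a heartbeat line on
# each loop. A real acceptance test would start the actual daemon.
SERVICE = (
    "import time\n"
    "for _ in range(3):\n"
    "    print('heartbeat', flush=True)\n"
    "    time.sleep(0.1)\n"
)

def run_acceptance_check():
    """Black-box check: start the service, inspect only its output."""
    with tempfile.TemporaryFile(mode="w+") as log:
        proc = subprocess.Popen([sys.executable, "-c", SERVICE],
                                stdout=log, stderr=subprocess.STDOUT)
        proc.wait(timeout=10)
        log.seek(0)
        lines = log.read().splitlines()
    # Only externally observable behaviour (the log) is checked --
    # no reaching into the process's internal object state.
    return lines.count("heartbeat")

if __name__ == "__main__":
    print("heartbeats seen:", run_acceptance_check())
```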

2) User interfaces

How do you write an acceptance test for a user interface? For unit
tests you can mock wx or gtk, but for the 'real' app that has to be
harder. Would you use specialised testing frameworks that understand X
events, or drive the app through accessibility/DCOP/etc interfaces?

3) Hard-to-reproduce cases

How do you write acceptance tests for hard-to-reproduce cases where
you had to use mock objects for your unit tests?
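For reference, here's the kind of mock-based unit test I mean, where
the failure is simulated rather than reproduced for real (a minimal
sketch using unittest.mock; `upstream_is_up` is an invented example):

```python
import socket
import unittest
from unittest import mock

# Hypothetical code under test: reports whether an upstream server is
# reachable. A real network outage is hard to reproduce on demand.
def upstream_is_up(host="example.invalid", port=80):
    try:
        socket.create_connection((host, port), timeout=1).close()
        return True
    except OSError:
        return False

class TestUpstreamOutage(unittest.TestCase):
    def test_outage_is_reported(self):
        # Simulate the hard-to-reproduce failure by forcing the socket
        # call to raise, instead of unplugging a real network cable.
        with mock.patch("socket.create_connection",
                        side_effect=OSError("no route to host")):
            self.assertFalse(upstream_is_up())
```

The open question is what the corresponding *acceptance* test looks
like, since by definition it shouldn't be allowed to patch the app's
internals like this.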


In cases like the above, would you instead:

- Have a doc with instructions for yourself/testers/qa to manually
check features that can't be automatically tested
- Use 'top-down' integration tests, where you mock parts of the system
so that the features can be automatically tested
- Some combination of the above


>> Is it worth the time to write integration tests for small apps, or
>> should I leave that for larger apps?
> There is a threshold below which setting up automated build
> infrastructure is too much overhead for the value of the system being
> tested.

There is no 'build' process (yet), since the app is 100% Python. But I
will be making a Debian installer a bit later.

My current 'build' setup is something like this:

1) Make an app (usually C++, shell-script, Python, or mixed)

2) Debianise it (add a debian subdirectory, with control files so
Debian build tools know how to build binaries from my source, and how
they should be installed & uninstalled).

3) When there are new versions, manually test the new version, build a
binary Debian installer (usually in a Debian Stable chroot with Debian
tools) on my Debian Unstable dev box, and upload the deb file (and
updated Debian repo listing files) to a 'development' or 'unstable'
branch on our internal Debian mirror.

4) Install the new app on a Debian Stable testing box, run it, and
manually check that the new logic works

5) Move the new version to our Debian repo's live release, from where
it will be installed into production.

If I adopt BDD, my plan is to use it during app development and
maintenance, but not for later testing. Do you suggest that after
building a .deb in the chroot, the app should also be automatically
installed into a chroot and have its acceptance tests run on my dev
machine? Or should I package the acceptance tests along with the app,
so that they can be run (manually) on test servers before going into
production? Or both?

I've considered setting up a centralised build server at work, but
currently I'm the only dev who actually builds & packages software, so
it wouldn't be very useful. We do have other devs (PHP mostly), but
they don't even use version control :-/. When they have new versions
(on their shared PHP dev & testing servers), I copy the code into my
version control, confirm the changed files with them, build an
installer, and upload it onto our mirror, so it can be installed onto
other boxes.


More information about the Python-list mailing list