[Chicago] The current state of Testing Stuff

Mike Steder steder at gmail.com
Sat Dec 11 16:36:14 CET 2010


>
> Date: Fri, 10 Dec 2010 10:48:20 -0600
> From: Brian Ray <brianhray at gmail.com>
> To: The Chicago Python Users Group <chicago at python.org>
> Subject: [Chicago] The current state of Testing Stuff
>
> This is an intentionally vague topic regarding testing, from
> acceptance tests to unit tests. I am suggesting things have changed and
> perhaps it is time to review testing methodologies. I am sure many of
> you have dealt with this change already. At the least, I hope this
> could be an interesting topic for some.
>
> I recall in 2005 ChiPy had a plethora of talks on unit testing, mock
> objects, and Fitnesse testing. Later we talked a lot about nose tests
> and runner tools. More recently we had talks on great tools like tox,
> though even that still relies on things like Hudson to run. Somewhere
> in between we talked about Selenium. There was a link (oh, here it is)
> that lists tools: http://pycheesecake.org/wiki/PythonTestingToolsTaxonomy
> I am not sure this list is exhaustive.
>
> So here comes my question... What did we learn from these different
> efforts in testing? What has changed in testing? What is the best
> modern method to implement large-scale testing that covers the whole
> stack? How much testing in development should be made available for
> testing in QA and acceptance?
>

This is a pretty open-ended question.  I think the only real answer is
that there are enough options to test any and all aspects of your
application, and that there are simply more tools now.  I'm not sure there
are "better" tools than PyUnit runners, Fitnesse, Selenium, or Hudson, but
there are certainly more options.

I'd say one of the big new trends is BDD, which aims to share more of the
effort of creating tests between development teams and the business
analysts who own the requirements.  Fitnesse is similar in spirit but
clunkier than something like Lettuce; a sketch of what Lettuce looks like
follows.
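
For anyone who hasn't seen Lettuce, here's a minimal, made-up example: a
plain-English feature file the analyst can read, plus the Python step
definitions that make it executable (the scenario and all the names are
invented for illustration, not from a real project):

    # features/withdrawal.feature
    Feature: Account withdrawal
      Scenario: Withdraw less than the balance
        Given an account with a balance of 100 dollars
        When I withdraw 30 dollars
        Then the balance should be 70 dollars

    # features/steps.py
    from lettuce import step, world

    @step('an account with a balance of (\d+) dollars')
    def given_an_account(step, amount):
        world.balance = int(amount)

    @step('I withdraw (\d+) dollars')
    def when_i_withdraw(step, amount):
        world.balance -= int(amount)

    @step('the balance should be (\d+) dollars')
    def then_the_balance(step, expected):
        assert world.balance == int(expected), \
            "balance is %d" % world.balance

Run "lettuce" in the project directory and it matches each English step to
a decorated function by regex.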

Anyway, I think it's important to realize that every organization's
appetite for production failures, and its budget for letting developers
write tests, is different.  Without some sense of what that balance is,
it's hard to say what the correct tools and mix will be for that
organization.  I've worked in environments where the complete automated
test suite was a series of about 1,000 nose tests that generated HTTP
requests against our REST backbone service, with few if any unit tests or
browser-driven tests.
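
To make that concrete, a coarse-grained test in that style might look
something like the following (the URL and payload are invented for
illustration; nose collects any module-level function named test_*):

    import json
    import urllib2

    BASE_URL = "http://localhost:8000"  # hypothetical service under test

    def test_get_user_returns_json():
        # Exercises the whole stack: routing, handlers, database, the lot.
        response = urllib2.urlopen(BASE_URL + "/users/42")
        assert response.code == 200
        data = json.loads(response.read())
        assert data["id"] == 42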

The positive thing about something like that is that the tests give you
some reasonable confidence that the system works.  The negative thing
about that coarse-grained approach is that those tests can break for any
number of reasons, are slower than molasses in January, and can require
extensive debugging to fix.

Personally, I think that testing should support the development process.
So while functional test suites and benchmarking tools like Grinder are
great for coverage and for confirming that everything integrates and works
together, unit tests are there to support a developer workflow based
around the so-called "virtuous cycle".

That is: write a failing test, write some code, make that test pass,
repeat.  In unittest terms the first step looks something like the sketch
below.
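
Here the module and function names are purely hypothetical -- the point is
that the test is written, and fails, before the code exists:

    import unittest

    # This import fails until slugify is written -- which is the point.
    from myapp.text import slugify  # hypothetical module under test

    class TestSlugify(unittest.TestCase):
        def test_replaces_spaces_with_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    if __name__ == "__main__":
        unittest.main()

You watch it fail, write just enough of slugify to make it pass, and then
move on to the next failing test.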
The complaint many people have is that unit tests can feel completely
specific to the brain of the developer who worked on the feature, sort of
like how not everyone can read "that guy's" code; there's always that one
person whose testing aesthetic is different from everyone else's.


However, this is where your agile coach friend comes in and says, "Well,
why aren't you pairing on code and tests?" and you kind of just blubber
about how no one can understand your snowflake-like genius...  *cough*  My
point is simply that expecting any testing methodology to be useful to
everyone requires team consensus, and that pair programming can really
help get you there.


> What is the point of writing a test on something that will never fail?
> Have testing tools changed to become more RESTful? Has anyone ever had
> a test failure from an automated suite that actually pointed to
> something useful? It seems testing the smaller, lower-level items can
> be covered well with unit tests. Higher- and middle-level items perhaps
> are covered well by Fitnesse and maybe at the browser level? How do we
> automate useful tests? How does one approach testing more complicated
> things like events and threads? Now things are becoming so mashed up...
> what are people doing to test integration with things like web
> services that someone else maintains?
>

A test should always start out failing.  If a test can never fail, it is a
bad test and should be deleted.  I'd do an svn blame on a test that never
fails and ask why that test is there.  This is why folks suggest "test
first": it makes it easier to ensure that the test fails before you've
written any code.  Of course, I have run into cases where there might be
one sanity test just to confirm that a testing framework like Fitnesse is
correctly installed, since tests can also fail due to environmental issues
like PYTHONPATH settings.
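
A sanity test in that spirit can be as trivial as this (package name
invented; it exists only to separate environment problems from real
failures):

    import os

    def test_environment_sanity():
        # If PYTHONPATH or the install is broken, this fails first
        # and loudly, before any real test confuses the issue.
        import myapp  # hypothetical package under test
        assert os.path.exists(myapp.__file__)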


> I guess I am also looking for life cycle configuration ideas.  I could
> see someone saying something like this: "We write unit tests we run
> from nose for TDD; for acceptance in development we use urllib2 to
> test RESTful stuff; and then on top of that we use Selenium for
> browser testing..."
>
> --
>
> Brian Ray
>

Is this a hypothetical question about starting from a greenfield, or about
adding tests to a system that doesn't have them?

In new projects I think a "3-tier" approach like the one you're suggesting
is very reasonable, although I would argue that there's room for a fourth.
For example, unit tests are great for developers, in my opinion, but poor
for the rest of the organization.  So while your code does exactly what
you want it to do, the business analyst three desks over may not have a
very easy time communicating with you or reasoning about the system in
terms of those tests.  This is where a BDD tool like Lettuce or Fitnesse
may come in and help the business analyst actually write tests in a form
they can read and you can make executable.  Beyond that, I think it is a
very powerful approach to write HTTP-level tests against any web service,
and ideally jsUnit, Selenium, Windmill, etc. will be used to cover the UI
and JavaScript across your supported browsers; a browser-tier sketch
follows.
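
For the browser tier, a test using the WebDriver-style Selenium bindings
might look roughly like this (the URL and page title are invented, and
this assumes Firefox plus the selenium package are installed):

    from selenium import webdriver

    def test_login_page_loads():
        driver = webdriver.Firefox()
        try:
            driver.get("http://localhost:8000/login")  # hypothetical app
            assert "Log in" in driver.title
        finally:
            driver.quit()  # always close the browser, pass or fail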

Anyway, just contributing more cents.

~Mike