TextTest 3.12.1 and PyUseCase 1.4.2 released!
geoff.bache at jeppesen.com
Wed Aug 27 14:49:40 CEST 2008
I combine these notices as both are bugfix releases, and they are
released simultaneously because one depends on the other.
See release notes in the downloads for details.
About TextTest (see http://www.texttest.org for more details):
TextTest is a tool for automatic text-based functional testing. This
means running a batch-mode executable in lots of different
ways from the command line, and using the contents of the text
files it produces as a means of verifying the behaviour of that application.
It is written in Python using PyGTK for its user interfaces, and is
supported on POSIX-based systems and Windows (2000, XP, Vista).
- Filters output to avoid false failures
- Manages test data and isolation from global effects
- Automatic organisation and grouping of test failures
- “Nightjob website” to get a view of test progress over time
- Performance testing
- Integrates with Sun Grid Engine for parallel testing (and LSF)
- Various “data mining” tools for automatic log interpretation (includes
integration with bug trackers)
- Interception techniques to automatically “mock out” third-party
components (command line and network traffic).
- Integrates with xUseCase tools for GUI testing (e.g. PyUseCase below)
About PyUseCase:
PyUseCase is a record/replay layer for Python GUIs. It consists of two
modules: usecase.py, which is a generic framework for all Python GUIs
(or even non-GUI programs) and gtkusecase.py, which is specific to PyGTK
GUIs. See www.pygtk.org for more info on PyGTK.
The aim is only to simulate the interactive actions of a user, not to
verify correctness of a program. Essentially it allows an interactive
program to be run in batch mode. Another tool is needed for verification
of behaviour, for example TextTest, also available from SourceForge.
The idea of a "use-case" recorder is described in some detail in a paper.
To summarise, the motivation for it is that traditional record/replay
tools, besides being expensive, tend to record very low-level scripts
that are a nightmare to maintain and can only be read by developers.
This is in large part because they record the GUI mechanics rather than
the intent behind the test (even though the mechanics are usually
expressed in terms of widgets rather than pixels these days).
Use-case recorders like PyUseCase are built around the idea of recording
in a domain language: the developer sets up a mapping between the
actions that can be performed in the UI and names that describe the
intent of those actions. This incurs an extra setup cost of
course, but it has the dual benefit of making the tests much more
readable and much more resilient to future UI changes than if they are
recorded in a more programming-language-like script.
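The mapping idea can be sketched in plain Python. This is a toy
illustration only: the class and method names below are invented for
the example and are not PyUseCase's actual API.

```python
# Toy sketch of the domain-language mapping idea: the developer names
# each UI action once, and the recorder stores intent, not mechanics.
# All names here are hypothetical, not PyUseCase's real interface.

class UseCaseRecorder:
    """Records high-level action names instead of raw widget events."""

    def __init__(self):
        self.mapping = {}   # (widget, signal) -> descriptive action name
        self.script = []    # recorded domain-language steps

    def connect(self, widget, signal, action_name):
        # One-off setup cost: give each UI action a readable name.
        self.mapping[(widget, signal)] = action_name

    def record(self, widget, signal, argument=None):
        # When an event fires, record what the user meant, not how
        # the widget delivered it.
        name = self.mapping[(widget, signal)]
        self.script.append(name + (" " + argument if argument else ""))


recorder = UseCaseRecorder()
recorder.connect("flight_list", "row-activated", "select flight")
recorder.connect("book_button", "clicked", "proceed to book seats")

# Simulated user session:
recorder.record("flight_list", "row-activated", "SA004")
recorder.record("book_button", "clicked")

print("\n".join(recorder.script))
# select flight SA004
# proceed to book seats
```

Because the script stores only action names, renaming or restyling the
widgets later does not invalidate recorded tests; only the one mapping
needs updating.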
Another key advantage is that, because we instrument the code anyway to
create the above mapping, it is easy to tell PyUseCase where the script
will need to wait, thus allowing it to record "wait" statements without
the test writer having to worry about it. This is otherwise a common
headache for recorded tests: most other tools require you to explicitly
synchronise the test when writing it (external to the recording).
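The synchronisation idea can be sketched in the same spirit: the
instrumented application announces named "application events" when
asynchronous work completes, and the recorder turns these into "wait"
lines automatically. Again, the names below are illustrative
assumptions, not PyUseCase's real interface.

```python
# Hedged sketch of automatic synchronisation: since the application is
# instrumented anyway, it can announce named events, and the recorder
# emits "wait for ..." lines without the test writer's involvement.

class EventRecorder:
    def __init__(self):
        self.script = []

    def application_event(self, name):
        # Called from instrumented application code when something
        # asynchronous finishes (e.g. data loading completes).
        self.script.append("wait for " + name)

    def user_action(self, description):
        # Called when a mapped UI action is performed by the user.
        self.script.append(description)


rec = EventRecorder()
rec.application_event("flight information to load")  # emitted automatically
rec.user_action("select flight SA004")               # recorded from the UI

print("\n".join(rec.script))
# wait for flight information to load
# select flight SA004
```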
Example recorded usecase ("test script") for a flight booking system:
wait for flight information to load
select flight SA004
proceed to book seats
# SA004 is full...
accept error message
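On replay, a script like the one above can be dispatched line by line to
handlers registered for each action name. The following is a minimal
sketch of that idea under assumed names; it is not how PyUseCase itself
is implemented.

```python
# Minimal sketch of the replay side: each line of a recorded usecase is
# matched against registered action-name prefixes and dispatched.
# The handler registry below is hypothetical.

def make_replayer(handlers):
    def replay(script_text):
        for line in script_text.strip().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            for prefix, handler in handlers.items():
                if line.startswith(prefix):
                    handler(line[len(prefix):].strip())
                    break
    return replay


log = []
replay = make_replayer({
    "wait for": lambda arg: log.append(("wait", arg)),
    "select flight": lambda arg: log.append(("select", arg)),
    "proceed to book seats": lambda arg: log.append(("book", arg)),
    "accept error message": lambda arg: log.append(("accept", arg)),
})

replay("""
wait for flight information to load
select flight SA004
proceed to book seats
# SA004 is full...
accept error message
""")

print(log)
```

Each handler would drive the real UI (or wait on the named event); here
they just log the dispatched steps.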