[pytest-dev] pytest-2.7.0 release package

Peter p-santoro at sbcglobal.net
Sat Mar 28 14:18:58 CET 2015


Aside from issue 701, which I reported just prior to the 2.7.0 release, 
I found no regressions in my usage of pytest (which I believe differs 
from the norm - see below).

I use Python with pytest to check for observational equivalence between 
actual and expected results of a business-critical C#.NET application. 
My conftest.py script customizes py.test with fixtures that automate the 
running of selected tests and their subsequent analysis. This includes 
driving and coordinating CLI and GUI .NET applications that were not 
designed to support automated testing.  This automated test facility 
also includes a package to extend pytest's functionality.  For example:

- some fields (e.g. the processing date) change with each run of the 
.NET application, so certain fields in the actual results and the 
previously generated expected results must be masked before the 
built-in pytest assertions are applied; I created a package to 
declaratively specify which fields are masked for various versioned 
record structures (a minimal sketch of this idea follows this list)

- the record structures for this .NET application are required by 
legacy downstream systems and are at least 5000 bytes long; if there 
are only one or two differences between the actual and expected results 
and the differences are close together, the built-in pytest assertions 
produce a nice report; but if there are multiple differences far from 
each other, the built-in assertion report isn't very helpful; I added 
functionality to produce a list of the indices where values differ, 
which greatly simplifies troubleshooting (a sketch of this helper also 
follows this list)
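
To make the masking idea concrete, here is a minimal sketch of what a 
declarative field-mask table might look like. The record layouts, the 
offsets, and the names (FIELD_MASKS, mask_record) are hypothetical 
illustrations, not the actual package:

    FIELD_MASKS = {
        # record version -> (offset, length) spans to blank out
        "v1": [(0, 8)],             # e.g. an 8-byte processing date
        "v2": [(0, 8), (120, 4)],   # e.g. a run counter added in v2
    }

    def mask_record(record, version, fill=b"*"):
        """Blank the volatile spans so the stable fields compare equal."""
        masked = bytearray(record)
        for offset, length in FIELD_MASKS[version]:
            masked[offset:offset + length] = fill * length
        return bytes(masked)

    def test_record_matches_expected():
        actual   = b"20150328" + b"A" * 4992   # stand-in 5000-byte records
        expected = b"20140101" + b"A" * 4992
        # mask first, then lean on pytest's built-in assertion report
        assert mask_record(actual, "v1") == mask_record(expected, "v1")

The benefit of the declarative table is that supporting a new record 
version only means adding an entry, not writing new comparison code.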
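
And here is a sketch of the index-diff helper: it reports every offset 
at which two equal-length records disagree, since the built-in diff 
gets unwieldy when mismatches are far apart in a 5000-byte record. 
Again, diff_indices is an illustrative name, not the actual API:

    def diff_indices(actual, expected):
        """Return the offsets where the two records differ."""
        assert len(actual) == len(expected), "records must be equal length"
        return [i for i, (a, e) in enumerate(zip(actual, expected))
                if a != e]

    def test_diff_indices_pinpoints_changes():
        actual = b"X" * 5000
        expected = bytearray(actual)
        expected[10] = ord("Y")      # two widely separated differences
        expected[4800] = ord("Z")
        assert diff_indices(actual, bytes(expected)) == [10, 4800]

In an actual regression failure, the list of offsets can be attached to 
the assertion message, so the report points straight at the differing 
fields.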

The automated test facility that I built also includes a Python 
application that automatically generates the test repository content 
for each test (i.e. test data, expected results, and the required 
py.test scripts).

The benefits of using pytest in this manner are as follows:

1) Creating good unit tests requires additional software development 
work - including testing the unit tests themselves.  That's fine if you 
have sufficiently competent developers and the necessary time to do so. 
The automated test facility that I built doesn't require any additional 
programming (beyond what I've already done); therefore, even business 
users and less experienced developers can create test data for use by 
the automated test system.

2) It supports rapid regression testing while refactoring the existing 
.NET code base.

3) Manual testing and analysis that often took days (sometimes weeks) 
can now be done in minutes.

4) Troubleshooting regression failures has been greatly simplified.


Thank you for producing a truly great piece of software!


Peter Santoro

