[Python-bugs-list] Andrew Dalke: [Fwd: bug in regrtest] (PR#42)

guido@CNRI.Reston.VA.US guido@CNRI.Reston.VA.US
Fri, 30 Jul 1999 13:51:47 -0400 (EDT)

------- Forwarded Message

Date:    Thu, 22 Jul 1999 00:59:54 -0600
From:    Andrew Dalke <dalke@bioreason.com>
To:      guido@cnri.reston.va.us
Subject: [Fwd: bug in regrtest]


  In the last couple of days there were no replies in comp.lang.python
to this post (meaning no one verified that this is or isn't a bug) so
I'm passing it up to you directly.

						Andrew Dalke

Message-ID: <3793F7B6.482F02A3@bioreason.com>
From: Andrew Dalke <dalke@bioreason.com>
Organization: Bioreason, Inc.
Newsgroups: comp.lang.python
Subject: bug in regrtest
Date: Mon, 19 Jul 1999 22:14:46 -0600


  We use a modified version of regrtest (Lib/test/regrtest.py)
for internal regression tests.  I just tracked down a bug in
our version which seems to be present in the original one.

  The regression run is compared to golden output from a
known, correct file.  This is done in "runtest" using an instance
of the Compare class, which checks each piece of output written
during the run against the corresponding data from the standard.
If they differ, an exception is raised.

  The problem arises when the regression run finishes while
golden data still remains unread.  There is a `close' method in the
Compare class which can check for this sort of error and raise
an exception, but it is never called.
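
The shape of the check can be sketched like this (a hypothetical
modern-Python rendering, not the actual Compare class from regrtest;
the class name and messages are invented for illustration):

    # Hypothetical sketch of a Compare-style checker: it is installed
    # as sys.stdout, and every write() is checked against the next
    # slice of the golden output.

    class GoldenCompare:
        def __init__(self, expected):
            # In regrtest, `expected` would be read from the output file.
            self.expected = expected
            self.pos = 0

        def write(self, data):
            end = self.pos + len(data)
            if self.expected[self.pos:end] != data:
                raise AssertionError("output mismatch at offset %d" % self.pos)
            self.pos = end

        def close(self):
            # The bug: regrtest never calls this, so leftover golden
            # data (output the test never produced) goes unnoticed.
            if self.pos != len(self.expected):
                raise AssertionError("%d characters of expected output "
                                     "never produced"
                                     % (len(self.expected) - self.pos))

With this shape, a run whose output is a strict prefix of the golden
data passes every write() check; only close() can detect that expected
output never appeared, and that is exactly the call regrtest omits.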

  The relevant code from regrtest.runtest looks like (with some
editing for clarity):

    if generate:
        cfp = open(outputfile, "w")
    elif verbose:
        cfp = sys.stdout
    else:
        cfp = Compare(outputfile)
    try:
        save_stdout = sys.stdout
        try:
            if cfp:
                sys.stdout = cfp
                print test              # Output file starts with test name
            __import__(test, globals(), locals(), [])
        finally:
            sys.stdout = save_stdout
    except ....
        return 0
    return 1

It looks like the easiest, correct fix is to add the following:

            if cfp and not generate and not verbose:
                cfp.close()

just after the __import__ statement, so the Compare instance is
closed (and checked for leftover golden data) whenever it was used.

I tested against the Python 1.5.1 distribution, which has a
slightly older version of regrtest than 1.5.2, although the latest
CVS version still has the flaw.  With the close() call added, two
tests now fail the regression suite: test_re and test_select.

I looked more closely at test_select, since it is smaller than
test_re.  The output/test_select file contains output which
agrees with the output of
  python regrtest.py -v test_select.py

*However*, if you look at test_select.py, the proper output will
only be generated when the verbose flag is set, because of
constructs like:

>                 if verbose:
>                         print 'timeout =', tout

In normal tests, test_select.py should generate no output.  This
can be verified by using -g to generate a new output/test_select
file, which will contain only the test name.

So this output difference was masked by the Compare problem
mentioned above.
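
The masking can be shown with a toy modern-Python illustration (not
regrtest code; the golden string here is invented in the spirit of
test_select's verbose output):

    # A truncated run is compared chunk by chunk against longer golden
    # data.  Every chunk the run *does* produce matches the golden
    # prefix, so the comparison alone never fails.

    golden = "test_select\ntimeout = 1\ntimeout = 2\n"  # verbose-mode golden data
    actual_chunks = ["test_select\n"]                   # what a non-verbose run prints

    pos = 0
    for chunk in actual_chunks:
        # This is all the comparison performs: no mismatch is ever seen.
        assert golden[pos:pos + len(chunk)] == chunk
        pos += len(chunk)

    # Only an end-of-run check (the uncalled close()) exposes the gap.
    leftover = len(golden) - pos   # characters of expected output never produced

Since the run stops while `leftover` is still positive, only a final
end-of-data check can report the truncation.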

Looking at the CVS logs, it has been that way for a long time.

						Andrew Dalke


------- End of Forwarded Message