New tests in stable versions
Hello everyone,

I’ve seen recent commits in the default branch (3.3) that improve test coverage (for example, for logging) or add new test files (cgitb, committed by Brian). Do we have a policy of not adding new test files to stable branches? For existing test files that gain more tests, is committing to stable branches left to the committer’s discretion, or should the stable versions get the new tests too?

Regards
On 20/07/2011 17:58, Éric Araujo wrote:
Do we have a policy of not adding new test files to stable branches?

The new logging tests failed for several weeks. If we add new tests, we may also break some stable buildbots. I don't think we need to add these new tests to a stable version.

Victor
On 7/20/2011 12:25 PM, Victor Stinner wrote:
On 20/07/2011 17:58, Éric Araujo wrote:
Do we have a policy of not adding new test files to stable branches?

The new logging tests failed for several weeks. If we add new tests, we may also break some stable buildbots. I don't think we need to add these new tests to a stable version.
When bugs are fixed in stable branches, they are usually accompanied by tests that fail without the bugfix. I have understood the policy to be that new tests go into stable branches. A failure indicates a bug in either the not-really-so-stable branch or the test itself. In the latter case, remove the test everywhere until it is fixed. In the former case, either fix the bug in the stable branch immediately, or open an issue and attach the test code (skipping the "test needed" stage), or just disable the test and note on the issue that the fix patch should re-enable it. The logging tests may have been exceptional in some way.

--
Terry Jan Reedy
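As a minimal sketch of the "just disable it and note on the issue" option described above (the class name, test name, skip message, and placeholder assertion are invented for illustration, not taken from the actual logging tests):

    import unittest

    class PendingFixTest(unittest.TestCase):

        @unittest.skip("disabled pending a bugfix on this branch; the issue "
                       "should note that the fix patch re-enables this test")
        def test_known_bug(self):
            # Placeholder for a real check that currently fails on the stable
            # branch; skipping rather than deleting it keeps the test in the
            # file, so the fix patch only has to remove the decorator.
            self.assertEqual(1 + 1, 3)  # deliberately failing placeholder

    if __name__ == "__main__":
        unittest.main()

Running this file reports the test as skipped (with the reason shown) rather than as a failure, so stable buildbots stay green while the issue remains open.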
On Wed, Jul 20, 2011 at 11:48, Terry Reedy <tjreedy@udel.edu> wrote:
On 7/20/2011 12:25 PM, Victor Stinner wrote:
On 20/07/2011 17:58, Éric Araujo wrote:
Do we have a policy of not adding new test files to stable branches?
The new logging tests failed for several weeks. If we add new tests, we may also break some stable buildbots. I don't think we need to add these new tests to a stable version.
When bugs are fixed in stable branches, they are usually accompanied by tests that fail without the bugfix. I have understood the policy to be that new tests go into stable branches. A failure indicates a bug in either the not-really-so-stable branch or the test itself. In the latter case, remove the test everywhere until it is fixed. In the former case, either fix the bug in the stable branch immediately, or open an issue and attach the test code (skipping the "test needed" stage), or just disable the test and note on the issue that the fix patch should re-enable it. The logging tests may have been exceptional in some way.
Right, but Eric is asking about new tests that do nothing more than improve test coverage, not exercise a fix for a bug. I say don't add new tests to stable branches just for the sake of improving coverage. Tests for bugfixes are practically required.
On Thu, Jul 21, 2011 at 8:16 AM, Brett Cannon <brett@python.org> wrote:
I say don't add new tests to stable branches just for the sake of improving coverage. Tests for bugfixes are practically required.
I don't *object* to enhanced tests going into maintenance branches, but the workflow of committing directly to trunk is so much simpler that I personally would only apply such changes if the new tests actually uncovered implementation bugs. Then backporting the tests would be useful, either as part of fixing the same bugs or else to demonstrate that the maintenance branch did not have the problem.

So, slightly more relaxed than Brett's view:
- definitely apply bug fixes and their tests to affected maintenance branches
- *optionally* apply coverage enhancements to maintenance branches, but don't feel obliged to do so

I see it as a productivity trade-off: time spent improving coverage on multiple branches is time not spent on other things. I'm more than willing to sacrifice some test coverage on the maintenance branches to get even better test coverage and appropriate new features on trunk, and more bug fixes on maintenance branches.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
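For the first bullet above, the kind of test that travels with a bugfix is typically a small regression check that fails without the fix and passes once it is applied. A minimal sketch (the behaviour checked and the test name are hypothetical, not taken from a real issue):

    import unittest

    class RegressionSketchTest(unittest.TestCase):

        def test_strip_of_empty_string(self):
            # Hypothetical regression test: committed together with its
            # bugfix on trunk and backported to the affected maintenance
            # branches, where it fails without the corresponding fix.
            self.assertEqual("".strip(), "")

    if __name__ == "__main__":
        unittest.main()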
On 21/07/2011 01:54, Nick Coghlan wrote:
[...] So, slightly more relaxed than Brett's view:
- definitely apply bug fixes and their tests to affected maintenance branches
- *optionally* apply coverage enhancements to maintenance branches, but don't feel obliged to do so

I see it as a productivity trade-off: time spent improving coverage on multiple branches is time not spent on other things. [...]
Thanks, all! Nick’s viewpoint and the broadly concurring replies from others are clear and make sense.

Cheers
On Jul 20, 2011, at 3:16 PM, Brett Cannon wrote:
Right, but Eric is asking about new tests that do nothing more than improve test coverage, not exercise a fix for a bug.
I say don't add new tests to stable branches just for the sake of improving coverage. Tests for bugfixes are practically required.
I concur with Brett. Nothing good will come from backporting tests that aren't aimed at a specific bugfix.

Raymond
On 7/21/2011 2:58 AM, Raymond Hettinger wrote:
I concur with Brett. Nothing good will come from backporting tests that aren't aimed at a specific bugfix.
They could catch regressions that would otherwise go uncaught. This mainly applies to 2.7. It would not be an issue for 3.2 if all fixes are forward-ported to 3.3 and tested there (before pushing), where there are tests not present in 3.2. But if people fix in 3.2, test, commit, and push, and just assume the result is OK in 3.3, the new tests will not do any good until someone else runs them against the fix.

--
Terry Jan Reedy
On Fri, Jul 22, 2011 at 5:20 AM, Terry Reedy <tjreedy@udel.edu> wrote:
On 7/21/2011 2:58 AM, Raymond Hettinger wrote:
I concur with Brett. Nothing good will come from backporting tests that aren't aimed at a specific bugfix.
They could catch regressions that would otherwise go uncaught. This mainly applies to 2.7. It would not be an issue for 3.2 if all fixes are forward-ported to 3.3 and tested there (before pushing), where there are tests not present in 3.2. But if people fix in 3.2, test, commit, and push, and just assume the result is OK in 3.3, the new tests will not do any good until someone else runs them against the fix.
None of that contradicts what Raymond and Brett said. Backporting test improvements that aren't targeting specific known bugs does not make efficient use of limited development resources. Forward-porting any changes made to maintenance branches (or explicitly blocking them as irrelevant), OTOH, is mandatory.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
Victor Stinner <victor.stinner <at> haypocalc.com> writes:
The new logging tests failed for several weeks. If we add new tests, we may also break some stable buildbots. I don't think we need to add these new tests to a stable version.
Just for my information, which logging test failures are you referring to? Regards, Vinay Sajip
On 21/07/2011 00:07, Vinay Sajip wrote:
Victor Stinner <victor.stinner <at> haypocalc.com> writes:
The new logging tests failed for several weeks. If we add new tests, we may also break some stable buildbots. I don't think we need to add these new tests to a stable version.

Just for my information, which logging test failures are you referring to?

Sorry, I don't remember the details; I just remember that some new tests added to test_logging were failing for some days or weeks.
Victor
participants (7)
- Brett Cannon
- Nick Coghlan
- Raymond Hettinger
- Terry Reedy
- Victor Stinner
- Vinay Sajip
- Éric Araujo