[Python-Dev] Checking in a broken test was: Re: [Python-checkins] r41940 - python/trunk/Lib/test/test_compiler.py

Neal Norwitz nnorwitz at gmail.com
Sat Jan 7 21:45:27 CET 2006


[moving to python-dev]

> On 1/7/06, Reinhold Birkenfeld <reinhold-birkenfeld-nospam at wolke7.net> wrote:
> > Well, it is not the test that's broken... it's compiler.

[In reference to:
http://mail.python.org/pipermail/python-checkins/2006-January/048715.html]

In the past, we haven't checked in tests which are known to be broken.
There are several good reasons for this.  I would prefer that you
either 1) also fix the code so the test doesn't fail, 2) revert the
change (there's still a bug report open, right?), or 3) generalize
tests for known bugs.

I strongly prefer #1, but have been thinking about adding #3.  There
are many open bug reports that fall into two broad categories:
incorrect behaviour and crashers.  I've been thinking about adding two
test files which incorporate these bugs as a way of consolidating
where the known problems are.  It also means we have test cases ready
to move to the proper place once the fix has been checked in.

I'm proposing something like adding two files to Lib/test:
outstanding_bugs.py and outstanding_crashes.py.  Both would be normal
test files with info about the bug report and the code that causes
the problem.

This test in test_compiler should be moved to outstanding_bugs.py.
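
To make that concrete, a rough sketch of what outstanding_bugs.py
could look like (the class name and placeholder test are just
illustrative; it follows the usual test_support.run_unittest pattern):

    # Lib/test/outstanding_bugs.py -- sketch only
    import unittest
    from test import test_support

    class OutstandingCompilerBugs(unittest.TestCase):
        def test_compiler_bug(self):
            # A comment here would point at the open bug report, and
            # the body would hold the code that currently fails or
            # behaves incorrectly.  Once the fix is checked in, the
            # test case moves to test_compiler.py.
            pass

    def test_main():
        test_support.run_unittest(OutstandingCompilerBugs)

    if __name__ == '__main__':
        test_main()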

And for a different discussion:

On 1/7/06, Jim Jewett <jimjjewett at gmail.com> wrote:
> Maybe.  Guido's statement (maybe short of a pronouncement)
> was that keyword-only arguments were OK in principle, and
> that *args could follow keywords.  It wasn't true yet because
> no one had put in the work, but it would be an acceptable
> change.
>
> I interpret this to mean that
>
>     def f(a=1, b): pass
>
> should not necessarily raise an error, but I would like to see what
> it does to
>
>     def f(a=1, b):
>         print a,b
>     f(b=7)
>
> before saying that it is OK.
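
For reference, that spelling is a SyntaxError under the current
grammar, so the open question is what a relaxed grammar should make
it mean.  A quick way to see the current behaviour (2.x; just a
sketch):

    # Confirm that the quoted spelling is rejected at compile time today.
    try:
        compile("def f(a=1, b): pass", "<test>", "exec")
    except SyntaxError, e:
        print "currently a SyntaxError:", e

My guess is that the natural reading would be for f(b=7) to bind b by
keyword and leave a at its default of 1, but as Jim says, that needs
to be pinned down before anyone signs off on it.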

