[Python-Dev] Nasty trunk test failures
Tim Peters
tim.peters at gmail.com
Thu Mar 30 22:59:06 CEST 2006
test_tokenize started failing on all the trunk buildbots immediately
after this seemingly unrelated checkin:
> Author: armin.rigo
> Date: Thu Mar 30 16:04:02 2006
> New Revision: 43458
>
> Modified:
> python/trunk/Lib/test/test_index.py
> python/trunk/Objects/abstract.c
> python/trunk/Objects/typeobject.c
> Log:
> Minor bugs in the __index__ code (PEP 357), with tests.
I haven't gotten anywhere with this. test_tokenize doesn't fail in
isolation on my box, and at one point didn't even fail after using -f
to reproduce all the steps up to and including test_tokenize from a
failing -r run.
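For the record, the -f replay is just regrtest's fromfile option
pointed at a file listing the test names from the failing -r run, one
per line; something along these lines (the file name is purely
illustrative):
C:\Code\python\PCbuild>python_d ../Lib/test/regrtest.py -uall -f failing_order.txt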
I do have a long sequence of tests now ending with test_tokenize that
fails on my box provided I use -uall (and I should note that
test_index is not in this sequence). I'm using a debug build here, and
don't know whether a release build behaves the same. Alas, it takes a
long time to
run, so progress this way is slow. Worse, since this is so flaky I
doubt it's going to lead to something interesting anyway.
More eyeballs? I didn't see anything obviously wrong in Armin's
checkin, but maybe someone else will. Or maybe it's just managing to
provoke a pre-existing problem in the C code.
...
Eww! test_tokenize _does_ fail in isolation here, but only if I use -uall(!):
C:\Code\python\PCbuild>python_d ../Lib/test/regrtest.py test_tokenize
test_tokenize
1 test OK.
[25131 refs]
C:\Code\python\PCbuild>python_d ../Lib/test/regrtest.py -uall test_tokenize
test_tokenize
test test_tokenize crashed -- <class 'exceptions.AssertionError'>:
1 test failed:
test_tokenize
[15147 refs]
OK, test_tokenize does a hell of a lot more work when -uall is
specified: with the 'compiler' resource enabled it processes every
test*.py in Lib/test instead of a random sample of 10 of them, which
is presumably why it never fails without -uall (a sample of 10 rarely
picks up the new test_index.py). After applying this patch,
test_tokenize fails very quickly (although you still need -uall):
"""
Index: test_tokenize.py
===================================================================
--- test_tokenize.py (revision 43462)
+++ test_tokenize.py (working copy)
@@ -34,6 +34,7 @@
 testdir = os.path.dirname(f) or os.curdir
 testfiles = glob.glob(testdir + os.sep + 'test*.py')
+testfiles = [testdir + os.sep + 'test_index.py']
 if not is_resource_enabled('compiler'):
     testfiles = random.sample(testfiles, 10)
"""
IOW, it's specifically the new test_index.py that test_tokenize fails
on. I'm not sure what test_tokenize is doing, but if someone else is,
that seems like a good hint ;-)
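If the failing check is test_tokenize's tokenize/untokenize round
trip, then a rough standalone sketch along these lines (not the test's
exact code; the path is relative to PCbuild, as in the runs above)
should reproduce the assertion directly against test_index.py, with
regrtest out of the picture:
"""
import tokenize

def roundtrip(path):
    # Tokenize the file, untokenize the (type, string) pairs,
    # re-tokenize the result, and check that the pairs survive
    # the round trip.
    fp = open(path)
    try:
        t1 = [tok[:2] for tok in tokenize.generate_tokens(fp.readline)]
    finally:
        fp.close()
    newtext = tokenize.untokenize(t1)
    readline = iter(newtext.splitlines(1)).next
    t2 = [tok[:2] for tok in tokenize.generate_tokens(readline)]
    assert t1 == t2

roundtrip('../Lib/test/test_index.py')  # adjust path for your checkout
"""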