Hi there,

I get a number of failing tests when I check out Py and run py.test on it. If I read the output right, the failing tests are:

/home/faassen/working/dist-py2/py/misc/test_initpkg.test_importing_all_implementations, line 21
/home/faassen/working/dist-py2/py/test/test/import_test/package/test_import.test_import, line 18

Note that the test.py output is rather intimidating, especially if you do not know the codebase you're testing. This is a realistic scenario for any newbie to the codebase, like myself in this case, or possibly for deployment scenarios where someone is asked to run the test suite just to answer the question "which tests are failing?". Perhaps a 'non-verbose' mode would be nice, or a 'sysadmin mode' where only a bit of output is seen directly by the sysadmin, and the rest is dumped into a file that can be emailed to the developers.

Anyway, I also seem to have a lot of recorded stdout like this:

- - - - - - - - - - - - - - - recorded stdout - - - - - - - - - - - - - - -
These modules should be identical:
shared_lib: <module '/home/faassen/working/dist-py2/py/test/test/import_test/package/shared_lib/py' from '/home/faassen/working/dist-py2/py/test/test/import_test/package/shared_lib.py'>
shared_lib2: <module 'package.shared_lib' from '/home/faassen/working/dist-py2/py/test/test/import_test/package/shared_lib.pyc'>

This is all rather mysterious to me as well.

Regards,

Martijn
Hi Martijn! [Martijn Faassen Fri, Nov 26, 2004 at 09:38:27PM +0100]
Hi there,
I get a number of failing tests when I check out Py and run py.test on it. If I read the output right, the failing tests are:
/home/faassen/working/dist-py2/py/misc/test_initpkg.test_importing_all_implementations, line 21
/home/faassen/working/dist-py2/py/test/test/import_test/package/test_import.test_import, line 18
Yes, Ian Bicking checked these tests in to point to a certain import problem. I just renamed the file to "bugtest_import.py" so it doesn't get automatically collected any more. Of course, you can still run the tests by directly specifying the file with py.test FILENAME.
Note that the test.py output is rather intimidating, especially if you do not know the codebase you're testing.
What exactly do you find intimidating? Note that there is a "failure" demo file in src/example/test/failure_demo.py which if run by py.test gives failure output examples for some 40 tests.
This is a realistic scenario for any newbie to the codebase like myself in this case, or possibly for deployment scenarios where someone is asked to run the test suite just to answer the question "which tests are failing?".
Yeah, well, probably at least partly because it relates to imports :-) More seriously, though, I tried to produce good output to show which test is failing at which line (including an according call trace). It shows function context so that you don't necessarily need to read the test file in your editor to understand the failure.
Perhaps a 'non-verbose' mode would be nice, or a 'sysadmin mode' where only a bit of output is seen directly by the sysadmin, and the rest is dumped into a file that can be emailed to the developers.
What would you think about a dense summary at the end with one line for each failing test, e.g. "filename:lineno XXXError" or so?
Anyway, I also seem to have a lot of recorded stdout like this:
Yip, the test produces this via print statements. For successful tests the output is not shown, though. Do you find this questionable? I have used it with nice results, because you only get to see isolated print output for the failing tests. This allows you to use print statements even in central places and still get meaningful test-related stdout. (Otherwise you get swamped by all the print output and don't know which output exactly relates to the failure you want to debug.)

cheers,

holger
Hey Holger,

holger krekel wrote:
[Martijn Faassen Fri, Nov 26, 2004 at 09:38:27PM +0100]
Hi there,
I get a number of failing tests when I check out Py and run py.test on it. If I read the output right, the failing tests are: [snip]
Yes, Ian Bicking checked these tests in to point to a certain import problem.
Ah, I do recall seeing some discussion on the list, but I hadn't paid a lot of attention.
I just renamed it to "bugtest_import.py" so it doesn't get automatically collected any more.
Great, I just saw that when I did an "svn up".
Of course, you can still run the test by directly specifying the file with py.test FILENAME.
Right. This is now a bit more clear in the documentation. :)
Note that the test.py output is rather intimidating, especially if you do not know the codebase you're testing.
What exactly do you find intimidating?
There was a lot of it in the case of py, showing unfamiliar code, and then there was the collected output. Just the quantity of unfamiliar, complicated-looking things is intimidating. :) This is of course usually fine and desirable if you're debugging a familiar codebase, but below I tried to describe usage scenarios where you want to avoid such intimidation.
Note that there is a "failure" demo file in src/example/test/failure_demo.py which if run by py.test gives failure output examples for some 40 tests.
Right, I went through the documentation and ran into that failure demo. Actually the failure demo output with some extra text would make for good documentation by itself, I suspect.
This is a realistic scenario for any newbie to the codebase like myself in this case, or possibly for deployment scenarios where someone is asked to run the test suite just to answer the question "which tests are failing?".
Yeah, well, probably at least partly because it relates to imports :-) More seriously, though, I tried to produce good output to show which test is failing at which line (including an according call trace). It shows function context so that you don't necessarily need to read the test file in your editor to understand the failure.
Yes, and for a developer this is fine. For a developer less familiar with the codebase, or someone who is running the tests for other purposes, a more minimalistic output option would be nice on occasion.

Of course part of the problem is that I modified some documentation text and, just to make sure, wanted to run the test suite before checking it in, even though to my knowledge py.test doesn't do doctests. I was presented with rather scary detailed test failures, something I did not expect at that point. I was expecting "everything is okay". :)
Perhaps a 'non-verbose' mode would be nice, or a 'sysadmin mode' where only a bit of output is seen directly by the sysadmin, and the rest is dumped into a file that can be emailed to the developers.
What would you think about a dense summary at the end with one line for each failing test, e.g. "filename:lineno XXXError" or so?
I think that would be useful. And then an option to suppress any output but the summary and the nice dots before it. :)
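The dense summary being discussed could be rendered along these lines. A hedged sketch only: the failure tuples and error names below are illustrative inventions, not py.test data structures (the two filenames echo the failing tests from the first message, but the actual error types were never stated in this thread):

```python
# Hypothetical one-line-per-failure summary in "filename:lineno ErrorName"
# form.  The failure list is made up for illustration.
failures = [
    ("py/misc/test_initpkg.py", 21, "ImportError"),
    ("py/test/test/import_test/package/test_import.py", 18, "AssertionError"),
]

summary_lines = ["%s:%d %s" % (fn, lineno, err) for fn, lineno, err in failures]
print("\n".join(summary_lines))
```

One line per failure keeps the "which tests are failing?" answer scannable even when the full tracebacks above it run to hundreds of lines.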
Anyway, I also seem to have a lot of recorded stdout like this:
Yip, the test produces this via print statements. For successful tests the output is not shown, though. Do you find this questionable? I have used it with nice results, because you only get to see isolated print output for the failing tests. This allows you to use print statements even in central places and still get meaningful test-related stdout. (Otherwise you get swamped by all the print output and don't know which output exactly relates to the failure you want to debug.)
I do not find this a questionable idea at all. I thought it was an intriguing idea when I read it in the documentation. I've edited the documentation somewhat to make the idea more clear to beginners.

One thing I realized after editing the documentation is that I'm now free to use 'print' in whatever test code I like, and even check it in, without causing this output to be shown except when I really need it -- in case of test failure. Of course this still doesn't leave me free to do the same with application code. :)

Is that correct? If so, I shall edit the documentation accordingly.

I hope you like the edits I made to doc/test.txt.

Regards,

Martijn
Hi Martijn, [Martijn Faassen Fri, Nov 26, 2004 at 10:32:33PM +0100]
holger krekel wrote:
[Martijn Faassen Fri, Nov 26, 2004 at 09:38:27PM +0100]

...

Right, I went through the documentation and ran into that failure demo. Actually the failure demo output with some extra text would make for good documentation by itself, I suspect.
Yes, I agree.
Yeah, well, probably at least partly because it relates to imports :-) More seriously, though, I tried to produce good output to show which test is failing at which line (including an according call trace). It shows function context so that you don't necessarily need to read the test file in your editor to understand the failure.
Yes, and for a developer this is fine. For a developer less familiar with the codebase, or someone who is running the tests for other purposes, a more minimalistic output option would be nice on occasion.
It seems reasonable to add an option. Although this might not help a complete newbie as he wouldn't know about the option.
Of course part of the problem is that I modified some documentation text and, just to make sure, wanted to run the test suite before checking it in, even though to my knowledge py.test doesn't do doctests. I was presented with rather scary detailed test failures, something I did not expect at that point. I was expecting "everything is okay". :)
Yes, and rightfully so. I should have thought about this earlier. Somehow I didn't think that somebody would get irritated by it in the last days, and I also was too consumed by the pypy sprint :-)
What would you think about a dense summary at the end with one line for each failing test, e.g. "filename:lineno XXXError" or so?
I think that would be useful. And then an option to suppress any output but the summary and the nice dots before it. :)
something like "--kiss", because it's a good occasion to mention this pythonic principle :-) Or maybe just "--terseoutput" or "-t" or something. This option would be tied to the TextReporter, though. For generating html-output (definitely a plan for the near future) the mileage will vary, I suppose.
Yip, the test produces this via print statements. For successful tests the output is not shown, though. Do you find this questionable?
I do not find this a questionable idea at all. I thought it was an intriguing idea when I read it in the documentation. I've edited the documentation somewhat to make the idea more clear to beginners.

One thing I realized after editing the documentation is that I'm now free to use 'print' in whatever test code I like, and even check it in, without causing this output to be shown except when I really need it -- in case of test failure. Of course this still doesn't leave me free to do the same with application code. :)
Is that correct? If so, I shall edit the documentation accordingly.
That is correct. There is a small glitch with this otherwise nice feature, though: I sometimes spread print statements over all my code and work with the failing test-output. Then the tests pass and I think "cool, checkin time", but of course I still have to get rid of the print statements in my app code. Currently py.test doesn't help with this task, and I am not sure yet how to do it (except for some fragile hacks I would envision to reside in the 'magic' part of the py lib :-).
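The simplest non-magical approach to the leftover-print problem holger mentions would be a grep-style pre-checkin scan. A naive sketch under stated assumptions: nothing like this exists in the py lib, and `find_prints` is a made-up helper purely for illustration:

```python
# Naive pre-checkin scan for leftover print statements in app code.
# Hypothetical helper -- not part of the py lib or py.test.
import re

def find_prints(source, filename="<string>"):
    """Return (filename, lineno, line) for lines that look like prints."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.match(r"\s*print\b", line):
            hits.append((filename, lineno, line.strip()))
    return hits

sample = "x = 1\nprint x  # leftover debug output\ny = x + 1\n"
hits = find_prints(sample, "app.py")
```

A real tool would need to skip test files (where prints are fine, as discussed above) and strings or comments containing the word "print", which is where the fragile-hack territory begins.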
I hope you like the edits I made to doc/test.txt.
They are great! I saw you introduced some references to unittest.py, which I usually avoided so as not to confuse people who don't know anything about testing so far. But by now I guess (and Ian said this earlier already) it's probably better to mention it every now and then, for developers coming from unittest.py.

cheers,

holger
holger krekel wrote:
I hope you like the edits I made to doc/test.txt.
They are great!
Thanks!
I saw you introduced some references to unittest.py, which I usually avoided so as not to confuse people who don't know anything about testing so far. But by now I guess (and Ian said this earlier already) it's probably better to mention it every now and then, for developers coming from unittest.py.
I know one reference, somewhat obscurely, was already there, and I improved it. Then I probably added one or two more. I think it makes sense to reference unittest.py, as naturally many new people will be coming from it. A separate section for people coming from 'unittest' would be good too, actually. I don't think the references to unittest in the main text right now harm readability for newbies either.

By the way, I don't know what procedure was used to regenerate test.html, so I didn't do that. I also don't know what your procedure is for updating the website. :)

Regards,

Martijn
Hi Martijn, [Martijn Faassen Sat, Nov 27, 2004 at 02:21:42PM +0100]
holger krekel wrote:
I saw you introduced some references to unittest.py, which I usually avoided so as not to confuse people who don't know anything about testing so far. But by now I guess (and Ian said this earlier already) it's probably better to mention it every now and then, for developers coming from unittest.py.
I know one reference, somewhat obscurely, was already there, and I improved it. Then I probably added one or two more. I think it makes sense to reference unittest.py as naturally many new people will be coming from that.
Yes.
By the way, I don't know what procedure was used to regenerate the test.html so I didn't do that. I also don't know what your procedure is for updating the website. :)
It's done automatically on checkin :-) You can check that you didn't add any problems by running py.test on the doc directory (it contains a test which does the rest-translation and fails if there are warnings or errors). Alternatively you can run .../src/tool/rest.py (or simply 'rest.py' if you "installed" the py lib per a bashrc-source-in of py/env.py) on the doc directory or a specific rest-file, and it will process it so you can view it in a browser by pointing to that directory.

cheers,

holger