From ianb at colorstudy.com Tue Sep 28 01:58:44 2004
From: ianb at colorstudy.com (Ian Bicking)
Date: Mon, 27 Sep 2004 18:58:44 -0500
Subject: [py-dev] utest thoughts
Message-ID: <4158A934.4040800@colorstudy.com>

Here are some things I'd like to do with utest. Maybe some of them are
possible now. This is kind of a brainstorming list of features, I guess.

* Specify tests to run within a module. The only way to select a module
that I see now is by filename. Selecting by package name would also be
nice. Wildcards could also be useful, e.g., utest
modulename.'*transaction*'. I think regular expressions are
unnecessarily complex. Maybe a wildcard character other than * would be
nice, to keep it from conflicting with shell expansion. A setting, or an
optional alternative character? Maybe % (like in SQL).

* Data-driven tests, where the same code is tested with many different
sets of data. Naturally this is often done in a for loop, but it's
better if the data turns into multiple tests, each of which is
addressable. There's something called a "unit" in there, I think, that
relates to this...? But not the same thing as unittest; I think I saw
unittest compatibility code as well.

Anyway, with unittest I could provide values to __init__, creating
multiple tests that differed only in their data, but then the runner
became fairly useless. I'm hoping that will be easier with utest.

* Specifying an option to the runner that gets passed through to the
tests. It seems like the options are fixed now. I'd like to do something
like -Ddatabase=mysql. I can do this with environment variables now, but
that's a little crude. It's easiest if it's just generic, like -D for
compilers, but of course it would be nicer if there were specific
options. Maybe this could best be achieved by specializing utest and
distributing my own runner with the project.

* I'm not clear how doctest would fit in. I guess I could turn the
doctest into a unittest TestCase, then test that.
Of course, it would be nice if this were streamlined. I have also
fiddled with doctest to use my own comparison functions when testing
whether we get the expected output. That's not really in the scope of
utest -- that should really go in doctest. Anyway, I thought I'd note
its existence.

* Code coverage tracking. This should be fairly straightforward to add.

The last time I looked around at test runners, Zope3's seemed the best.
Well, it would have been better if I could have gotten it to do
something. But it *seemed* best. Mining it for features:

* Different levels of tests (-a --at-level or --all; the default level
is 1, which doesn't run all tests). They have lots of tests, so I'm
guessing they like to avoid running tests which are unlikely to fail.

* A distinction between unit and functional tests (as in acceptance or
system tests). This doesn't seem very generic -- these definitions are
loose and not well agreed upon. There's not even any common language for
them. I'm not sure how this fits in with levels, but some sort of
internal categorization of tests seems useful.

* A whole build process. I think they run out of the build/ directory
that distutils generates. It is a little unclear how paths work out with
utest, depending on where you run it from. Setting PYTHONPATH to include
your development code seems the easiest way to resolve these issues with
utest. I don't have anything with complicated builds, so maybe there are
issues I'm unaware of.

* A pychecker option (-c --pychecker).

* A pdb option (-D --debug). I was able to add this to utest with fairly
small modifications (at least, if I did it correctly).

* An option to control garbage collection (-g --gc-threshold). I guess
they encounter GC bugs sometimes.

* Run tests in a loop (-L --loop). Also for checking memory leaks. I've
thought that running loops of tests in separate threads could also be a
useful test, for code that actually was supposed to be used with
threads.
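That looped, threaded mode could be sketched roughly like this; `run_looped` and its parameters are invented names for illustration, not actual utest options:

```python
# Sketch: a --loop style option combined with threads, to shake out
# memory leaks and thread-safety problems.  run_looped and its
# parameters are made-up names, not real utest options.
import threading

def run_looped(test_func, loops=100, num_threads=4):
    errors = []  # exceptions raised in any thread, in arrival order
    def worker():
        for _ in range(loops):
            try:
                test_func()
            except Exception as exc:
                errors.append(exc)
    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return errors
```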
That might be another place for specializing the runner.

* Keep bytecode (-k --keepbytecode). Not interesting in itself, but it
implies that they don't normally keep bytecode. I expect this is to deal
with code where the .py file has been deleted, but the .pyc file is
still around. I've wasted time because of that before, so I can imagine
its usefulness.

* Profiling (-P --profile). Displays the top 50 items, by time and by
number of calls.

* Report only the first doctest failure (-1
--report-only-first-doctest-failure).

* Time the tests and show the slowest 50 tests (-t --top-fifty). I first
thought this was just a bad way of doing profiling, but now that I think
about it, this is to diagnose problems with the tests running slowly.

That's all the interesting options, I think. There are also options to
select which tests you display, but these seem too complex, while still
not all that powerful.

From hpk at trillke.net Tue Sep 28 10:39:39 2004
From: hpk at trillke.net (holger krekel)
Date: Tue, 28 Sep 2004 10:39:39 +0200
Subject: [py-dev] utest thoughts
In-Reply-To: <4158A934.4040800@colorstudy.com>
References: <4158A934.4040800@colorstudy.com>
Message-ID: <20040928083939.GK19356@solar.trillke.net>

Hi Ian,

thanks a lot for your input! I will get back to each of your suggestions
but this week is very busy for me so it might take a while. This
shouldn't keep anyone else from replying, just so you know.

cheers,

    holger

[Ian Bicking Mon, Sep 27, 2004 at 06:58:44PM -0500]
> Here's some things I'd like to do with utest. Maybe some of them are
> possible now. This is kind of a brainstorming list of features, I guess.
> [...]
> _______________________________________________
> py-dev mailing list
> py-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/py-dev

From ianb at colorstudy.com Thu Sep 30 02:46:45 2004
From: ianb at colorstudy.com (Ian Bicking)
Date: Wed, 29 Sep 2004 19:46:45 -0500
Subject: [py-dev] Re: [pypy-dev] utest, development and discussion
In-Reply-To: <200409260821.i8Q8LqOU032224@ratthing-b246.strakt.com>
References: <415672F3.9060808@colorstudy.com> <200409260821.i8Q8LqOU032224@ratthing-b246.strakt.com>
Message-ID: <415B5775.3040402@colorstudy.com>

Laura Creighton wrote:
>> Anyway, I've started working with it some. I've added one small feature
>> (dropping into pdb when an exception occurs), and there's sure to be
>> some more, particularly documentation. Where should I send patches?
>> Where should discussion occur? And maybe a website?
>
> Discussion belongs here. Patches too, unless you want to get a project
> login, a process that Holger handles. There isn't a separate part of
> the pypy wiki, or of the website, for utest. Probably that should
> change. And documentation belongs here:
> http://codespeak.net/pypy/index.cgi?doc which we generate out of files
> in pypy/trunk/doc . You write your docs in ReST and a daemon comes
> along and makes html out of them for you.

Since std (I guess to be named py) isn't under pypy, I assume
documentation should go in std/trunk/doc? Can it be set up that this is
also turned into HTML?
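As an aside, the "dropping into pdb when an exception occurs" feature mentioned above could look roughly like this; this is a sketch for illustration, not the actual patch, and the `debugger` hook is an invented parameter:

```python
# Sketch of a "drop into pdb when an exception occurs" feature; this
# is an illustration, not the actual utest patch.  The `debugger`
# hook defaults to pdb's post-mortem debugger.
import pdb
import sys

def run_with_pdb(test_func, debugger=pdb.post_mortem):
    try:
        test_func()
    except Exception:
        # start the debugger on the traceback of the failing frame, so
        # the local variables at the point of failure can be inspected
        debugger(sys.exc_info()[2])
```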
--
Ian Bicking  /  ianb at colorstudy.com  /  http://blog.ianbicking.org

From hpk at trillke.net Thu Sep 30 09:15:27 2004
From: hpk at trillke.net (holger krekel)
Date: Thu, 30 Sep 2004 09:15:27 +0200
Subject: [py-dev] Re: [pypy-dev] utest, development and discussion
In-Reply-To: <415B5775.3040402@colorstudy.com>
References: <415672F3.9060808@colorstudy.com> <200409260821.i8Q8LqOU032224@ratthing-b246.strakt.com> <415B5775.3040402@colorstudy.com>
Message-ID: <20040930071527.GK19356@solar.trillke.net>

[Ian Bicking Wed, Sep 29, 2004 at 07:46:45PM -0500]
> Since std (I guess to be named py) isn't under pypy, I assume
> documentation should go in std/trunk/doc? Can it be set up that this is
> also turned into HTML?

yes, this is certainly possible, although some of the html generation is
loosely tied to pypy. however, i want std/py to be the first project
hosted by 'trac' on the new codespeak setup.

    holger
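Returning to the data-driven tests idea from the first message in this thread: with plain unittest, one way to make each data row an individually addressable test is to generate named test methods, which is roughly the effect Ian describes wanting. The test class and the data rows here are invented for illustration:

```python
# Sketch: turning data rows into individually addressable unittest
# tests.  The AddTests class and ADD_CASES data are made up.
import unittest

ADD_CASES = [              # (name, a, b, expected)
    ("small", 1, 2, 3),
    ("zero", 0, 0, 0),
    ("negative", -1, 1, 0),
]

class AddTests(unittest.TestCase):
    pass

def _make_test(a, b, expected):
    def test(self):
        self.assertEqual(a + b, expected)
    return test

# attach one named method per data row, so each row shows up (and can
# be selected) as its own test, rather than as one loop in one test
for name, a, b, expected in ADD_CASES:
    setattr(AddTests, "test_add_%s" % name, _make_test(a, b, expected))
```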