Socket servers in the test suite
I've been recently trying to improve the test coverage for the logging package, and have got to a not unreasonable point:

logging/__init__.py  99% (96%)
logging/config.py    89% (85%)
logging/handlers.py  60% (54%)

where the figures in parentheses include branch coverage measurements.

I'm at the point where, to appreciably increase coverage, I'd need to write some test servers to exercise client code in SocketHandler, DatagramHandler and HTTPHandler.

I notice there are no utility classes in test.support to help with this kind of thing - would there be any mileage in adding such things? Of course I could add test server code just to test_logging (which already contains some socket server code to exercise the configuration functionality), but rolling a test server involves boilerplate such as using a custom RequestHandler-derived class for each application. I had in mind a more streamlined approach where you can just pass a single callable to a server to handle requests, e.g. as outlined in

https://gist.github.com/945157

I'd be grateful for any comments about adding such functionality to e.g. test.support.

Regards,

Vinay Sajip
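The gist itself is not reproduced in this archive, but the "pass a single callable" idea might look something like the following sketch (illustrative only; the class and attribute names here are invented and may not match the gist):

```python
import socket
import socketserver
import threading

class CallableHandler(socketserver.StreamRequestHandler):
    """Request handler that simply delegates to a callable on the server."""
    def handle(self):
        self.server.handle_request_cb(self)

class EasyTCPServer(socketserver.ThreadingTCPServer):
    """TCP server configured with a single handler callable -- no
    per-application RequestHandler subclass needed."""
    allow_reuse_address = True

    def __init__(self, addr, handler_cb):
        super().__init__(addr, CallableHandler)
        self.handle_request_cb = handler_cb

# Usage: an upper-casing echo of one line, defined with a plain function.
received = []

def on_request(handler):
    data = handler.rfile.readline()
    received.append(data)
    handler.wfile.write(data.upper())

server = EasyTCPServer(("127.0.0.1", 0), on_request)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"ping\n")
    print(conn.makefile().readline().strip())  # prints: PING

server.shutdown()
server.server_close()
```

The test code supplies only the `on_request` function; all the RequestHandler plumbing stays in the helper.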
On Thu, Apr 28, 2011 at 7:23 AM, Vinay Sajip wrote:
I've been recently trying to improve the test coverage for the logging package, and have got to a not unreasonable point:
logging/__init__.py  99% (96%)
logging/config.py    89% (85%)
logging/handlers.py  60% (54%)
where the figures in parentheses include branch coverage measurements.
I'm at the point where to appreciably increase coverage, I'd need to write some test servers to exercise client code in SocketHandler, DatagramHandler and HTTPHandler.
I notice there are no utility classes in test.support to help with this kind of thing - would there be any mileage in adding such things? Of course I could add test server code just to test_logging (which already contains some socket server code to exercise the configuration functionality), but rolling a test server involves boilerplate such as using a custom RequestHandler-derived class for each application. I had in mind a more streamlined approach where you can just pass a single callable to a server to handle requests, e.g. as outlined in
https://gist.github.com/945157
I'd be grateful for any comments about adding such functionality to e.g. test.support.
If you poke around in the test directory a bit, you may find there is already some code along these lines in other tests (e.g. I'm pretty sure the urllib tests already fire up a local server). Starting down the path of standardisation of that test functionality would be good.

For larger components like this, it's also reasonable to add a dedicated helper module rather than using test.support directly. I started (and Antoine improved) something along those lines with the test.script_helper module for running Python subprocesses and checking their output, although it lacks documentation and there are lots of older tests that still use subprocess directly.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
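For context, here is the kind of subprocess boilerplate that a helper like test.script_helper exists to factor out (a sketch of doing it "by hand"; the module's own helpers, e.g. assert_python_ok(), may differ in detail between versions):

```python
import subprocess
import sys

# Spawn a child interpreter and check its output manually -- roughly
# the boilerplate that test.script_helper wraps up for tests.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
assert proc.returncode == 0, err
print(out.decode().strip())  # prints: hello from child
```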
Nick Coghlan writes:
If you poke around in the test directory a bit, you may find there is already some code along these lines in other tests (e.g. I'm pretty sure the urllib tests already fire up a local server). Starting down the path of standardisation of that test functionality would be good.
I have poked around, and each test module pretty much does its own thing. Perhaps that's unavoidable; I'll try and see if there are usable common patterns in the specific instances.
For larger components like this, it's also reasonable to add a dedicated helper module rather than using test.support directly. I started (and Antoine improved) something along those lines with the test.script_helper module for running Python subprocesses and checking their output, although it lacks documentation and there are lots of older tests that still use subprocess directly.
Yes, I thought perhaps it was too specialised for adding to test.support itself.

Thanks for the feedback,

Vinay
On Thu, 28 Apr 2011 07:23:43 +0000 (UTC), Vinay Sajip wrote:

Nick Coghlan writes:
If you poke around in the test directory a bit, you may find there is already some code along these lines in other tests (e.g. I'm pretty sure the urllib tests already fire up a local server). Starting down the path of standardisation of that test functionality would be good.
I have poked around, and each test module pretty much does its own thing. Perhaps that's unavoidable; I'll try and see if there are usable common patterns in the specific instances.
For larger components like this, it's also reasonable to add a dedicated helper module rather than using test.support directly. I started (and Antoine improved) something along those lines with the test.script_helper module for running Python subprocesses and checking their output, although it lacks documentation and there are lots of older tests that still use subprocess directly.
Yes, I thought perhaps it was too specialised for adding to test.support itself.
You can also take a look at Lib/test/ssl_servers.py.

Regards,

Antoine.
Hi,
I'm at the point where to appreciably increase coverage, I'd need to write some test servers to exercise client code in SocketHandler, DatagramHandler and HTTPHandler.
I notice there are no utility classes in test.support to help with this kind of thing - would there be any mileage in adding such things? Of course I could add test server code just to test_logging (which already contains some socket server code to exercise the configuration functionality), but rolling a test server involves boilerplate such as using a custom RequestHandler-derived class for each application. I had in mind a more streamlined approach where you can just pass a single callable to a server to handle requests,
A generic test helper to run a server for tests would be a great addition. In distutils/packaging (due to be merged into 3.3 Really Soon Now™), we also have a server to test PyPI-related functionality. It's a tested module providing a server class that runs in a thread, a SimpleHTTPRequestHandler subclass able to serve static files and reply to XML-RPC requests, and decorators to start and stop the server for one test method instead of a whole TestCase instance. I'm sure some common ground can be found and all these testing helpers factored out in one module.
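A per-method decorator of the kind described here might be sketched as follows (hypothetical names only; this is in the spirit of the distutils/packaging helpers, not their actual code):

```python
import functools
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

def with_http_server(test_method):
    """Start a throwaway HTTP server for the duration of one test
    method, pass it in, and shut it down afterwards -- so the server
    lives per-method rather than per-TestCase."""
    @functools.wraps(test_method)
    def wrapper(self, *args, **kwargs):
        server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
        thread = threading.Thread(target=server.serve_forever, daemon=True)
        thread.start()
        try:
            return test_method(self, server, *args, **kwargs)
        finally:
            server.shutdown()
            server.server_close()
    return wrapper
```

A method decorated this way receives the live server and can read its server_address to discover the ephemeral port it bound.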
For larger components like this, it's also reasonable to add a dedicated helper module rather than using test.support directly. I started (and Antoine improved) something along those lines with the test.script_helper module for running Python subprocesses and checking their output,
+1, script_helper is great. Cheers
Nick Coghlan writes:
(e.g. I'm pretty sure the urllib tests already fire up a local server). Starting down the path of standardisation of that test functionality would be good.
I've made a start with test_logging.py by implementing some potential server classes for use in tests: in the latest test_logging.py, the servers are between comments containing the text "server_helper".

The basic approach for implementing socket servers is traditionally to use a request handler class which implements the custom logic, but for some testing applications this is overkill - you just want to be able to pass a handling callable which is, say, a test case method. So the signatures of the servers are all like this:

__init__(self, listen_addr, handler, poll_interval ...)
    Initialise using the specified listen address and handler callable.
    Internally, a RequestHandler subclass will be used whose handle()
    delegates to the handler callable passed in. A zero port number can
    be passed in, and a port attribute will (after binding) have the
    actual port number used, so that clients can connect on that port.

start()
    Start the server on a separate thread, using the poll_interval
    specified in the underlying poll()/select() call. Before this is
    called, the request handler class could be replaced with a subclass
    if need be.

stop(timeout=None)
    Ask the server to stop and wait for the server thread to terminate.

The server also has a ready attribute which is a threading.Event, set just when the server is entering its service loop.

Typical mode of use would be:

    class ClientTestCase(unittest.TestCase):
        def setUp(self):
            self.server = TheAppropriateServerClass(('localhost', 0),
                                                    self.handle_request,
                                                    0.01, ...)
            self.server.start()
            self.server.ready.wait()
            self.handled = threading.Event()

        def tearDown(self):
            self.server.stop(1.0)  # wait up to 1 sec for thread to stop

        def handle_request(self, request):
            # Handle the request, e.g. by setting some attributes based on
            # what was received at the server
            # Set the flag to say we finished handling
            self.handled.set()

        def test_xxx(self):
            # set up client and send stuff to server
            # Wait for server to finish doing stuff
            self.handled.wait()
            # make assertions based on the attributes
            # set during request handling

The server classes provided are TestSMTPServer, TestTCPServer, TestUDPServer and TestHTTPServer. There are examples of actual usage in test_logging.py: SMTPHandlerTest, SocketHandlerTest, DatagramHandlerTest, SysLogHandlerTest, HTTPHandlerTest.

I'd like some comments on this suggested API. I have not yet looked at how to adapt other stdlib code than test_logging to use these classes, but the above usage mode seems convenient and sufficient for testing applications. No doubt people will be able to suggest problems with/improvements to the approach outlined above.

Regards,

Vinay Sajip
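For concreteness, a TCP server matching the signatures described above might be sketched like this (a guess at one possible implementation; the actual code lives between the "server_helper" comments in test_logging.py and will differ in detail):

```python
import socketserver
import threading

class SketchTCPServer(socketserver.ThreadingTCPServer):
    """One possible shape for the proposed helper: construct with a
    listen address and a handler callable, then start()/stop()."""
    allow_reuse_address = True

    def __init__(self, listen_addr, handler, poll_interval=0.5):
        class DelegatingHandler(socketserver.StreamRequestHandler):
            def handle(self):
                handler(self)  # delegate to the callable passed in
        super().__init__(listen_addr, DelegatingHandler)
        # A zero port in listen_addr means "pick a free port"; expose
        # the port actually bound so clients can connect to it.
        self.port = self.server_address[1]
        self.poll_interval = poll_interval
        self.ready = threading.Event()
        self._thread = None

    def serve_forever(self, poll_interval=0.5):
        self.ready.set()  # set just as the service loop is entered
        super().serve_forever(poll_interval)

    def start(self):
        self._thread = threading.Thread(
            target=self.serve_forever, args=(self.poll_interval,),
            daemon=True)
        self._thread.start()

    def stop(self, timeout=None):
        self.shutdown()
        self._thread.join(timeout)
        self.server_close()
```

With this shape, a test passes a bound method as `handler` and synchronises on `ready` before sending anything, exactly as in the usage example above.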
2011/4/27 Vinay Sajip
I've been recently trying to improve the test coverage for the logging package, and have got to a not unreasonable point:
logging/__init__.py  99% (96%)
logging/config.py    89% (85%)
logging/handlers.py  60% (54%)
where the figures in parentheses include branch coverage measurements.
I'm at the point where to appreciably increase coverage, I'd need to write some test servers to exercise client code in SocketHandler, DatagramHandler and HTTPHandler.
I notice there are no utility classes in test.support to help with this kind of thing - would there be any mileage in adding such things? Of course I could add test server code just to test_logging (which already contains some socket server code to exercise the configuration functionality), but rolling a test server involves boilerplate such as using a custom RequestHandler-derived class for each application. I had in mind a more streamlined approach where you can just pass a single callable to a server to handle requests, e.g. as outlined in
https://gist.github.com/945157
I'd be grateful for any comments about adding such functionality to e.g. test.support.
Regards,
Vinay Sajip
I agree having a standard server framework for tests would be useful, because it's something which appears quite often (e.g. when writing functional tests). See for example:

http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_os.py#l1316
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_ftplib.py#l211
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_ssl.py#l844
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_smtpd.py
http://hg.python.org/cpython/file/b452559eee71/Lib/test/test_poplib.py#l115

Regards

--- Giampaolo

http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/
On 27.04.2011 23:23, Vinay Sajip wrote:
I've been recently trying to improve the test coverage for the logging package, and have got to a not unreasonable point:
logging/__init__.py  99% (96%)
logging/config.py    89% (85%)
logging/handlers.py  60% (54%)
where the figures in parentheses include branch coverage measurements.
BTW, didn't we agree not to put "pragma" comments into the stdlib code?

Georg
On Fri, Apr 29, 2011 at 9:44 PM, Georg Brandl wrote:
On 27.04.2011 23:23, Vinay Sajip wrote:
I've been recently trying to improve the test coverage for the logging package, and have got to a not unreasonable point:
logging/__init__.py  99% (96%)
logging/config.py    89% (85%)
logging/handlers.py  60% (54%)
where the figures in parentheses include branch coverage measurements.
BTW, didn't we agree not to put "pragma" comments into the stdlib code?
I think some folks objected, but since they're essential to keeping track of progress in code coverage improvement efforts, there wasn't a consensus to leave them out. The pragmas themselves are easy enough to grep for, so it isn't like they don't leave a record of which lines may not be getting tested.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
[Georg]
BTW, didn't we agree not to put "pragma" comments into the stdlib code?
I'd be grateful for a link to the prior discussion - it must have passed me by originally, and I searched python-dev on gmane but couldn't find any threads about this.

[Nick]
I think some folks objected, but since they're essential to keeping track of progress in code coverage improvement efforts, there wasn't a consensus to leave them out. The pragmas themselves are easy enough to grep for, so it isn't like they don't leave a record of which lines may not be getting tested.
Yes - in theory the pragmas can give a false idea about coverage, but in practice they help increase the signal-to-noise ratio. As maintainer of a module, one'd only be kidding oneself by adding pragmas willy-nilly. The coverage reports are up-front about telling you how many lines were excluded, both in the summary HTML pages and the drill-down HTML pages for individual modules.

BTW, is there a public place somewhere showing stdlib coverage statistics? I looked on the buildbot pages as the likeliest home for them, but perhaps I missed them.

Regards,

Vinay Sajip
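For anyone unfamiliar with the pragmas under discussion: coverage.py's default exclusion marker is a `# pragma: no cover` comment, typically placed on defensive branches that tests deliberately never exercise, for example:

```python
def scale(value, factor=2):
    # Defensive check: excluded from coverage measurement because the
    # tests never pass None here on purpose.
    if value is None:  # pragma: no cover
        raise ValueError("value must not be None")
    return value * factor

print(scale(21))  # prints: 42
```

The excluded lines still show up in coverage.py's reports as an explicit "excluded" count, which is the visibility the discussion above relies on.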
On 4/29/2011 12:09 PM, Vinay Sajip wrote:
BTW, is there a public place somewhere showing stdlib coverage statistics? I looked on the buildbot pages as the likeliest home for them, but perhaps I missed them.
http://docs.python.org/devguide/coverage.html has a link to http://coverage.livinglogic.de/

--
Terry Jan Reedy
On 4/29/2011 3:11 PM, Terry Reedy wrote:
On 4/29/2011 12:09 PM, Vinay Sajip wrote:
BTW, is there a public place somewhere showing stdlib coverage statistics? I looked on the buildbot pages as the likeliest home for them, but perhaps I missed them.
http://docs.python.org/devguide/coverage.html has a link to http://coverage.livinglogic.de/
which, however, currently has nothing for *.py. Perhaps a glitch/bug, as there used to be such. Anyone who knows the page owner might ask about this.

--
Terry Jan Reedy
Terry Reedy
which, however, currently has nothing for *.py. Perhaps a glitch/bug, as there used to be such. Anyone who knows the page owner might ask about this.
Thanks for the pointer, nevertheless, Terry. Regards, Vinay Sajip
Hi,

On 29/04/2011 18:09, Vinay Sajip wrote:

[Georg]
BTW, didn't we agree not to put "pragma" comments into the stdlib code?

I'd be grateful for a link to the prior discussion - it must have passed me by originally, and I searched python-dev on gmane but couldn't find any threads about this.
I remember only this: http://bugs.python.org/issue11572#msg131139

Regards
participants (7)

- Antoine Pitrou
- Georg Brandl
- Giampaolo Rodolà
- Nick Coghlan
- Terry Reedy
- Vinay Sajip
- Éric Araujo