
So (wearing my maintainer hat for unittest) - very happy to consider proposals and patches; I'd very much like to fix some structural APIs in unittest, but I don't have the bandwidth to do so myself at this point. And what you're asking about is largely a structural issue, because of the interactions with test reporting and with class/module setup.

As Ned says, though, the specific question asked is best solved by using the context manager protocol and manually entering and exiting: addCleanup is ideal for managing that (it was literally designed for this). The fixtures library uses it to make consuming fixtures (which are merely enhanced context managers) trivial. We should add an adapter there, I think. If I get time I'll put this on Stack Exchange, but:

```
import unittest
import fixtures


class ContextFixture(fixtures.Fixture):
    def __init__(self, cm):
        super().__init__()
        self._cm = cm

    def _setUp(self):
        self._cm.__enter__()
        # Register the exit only after a successful __enter__.
        self.addCleanup(self._cm.__exit__, None, None, None)


class Example(fixtures.TestWithFixtures):
    def setUp(self):
        super().setUp()
        self._cm_reference_if_I_need_it = self.useFixture(
            ContextFixture(MyContextManager()))

    def test_fred(self):
        1 / 0
```

should (I haven't tested it :P) do the right thing.

I've written about maintainability in unittest previously [1] [2], and those experiments have worked very well. Your post has reminded me of the stalled work in this space. In particular, avoiding inheritance for code reuse has much better maintenance properties. I think we learnt enough to sensibly propose it as an evolution for core unittest, though some discussion is needed: for instance, the MIME attachment aspect weirds some folk out, though it's very, very useful in the cases where it matters and pretty ignorable in the cases where it doesn't.

Another related thing is getting the awkward bits of testresources fixed so that it becomes a joy to use - it's a much better approach than setUpClass and setUpModule, if for no other reason than that it is concurrency friendly [partition and execute], whereas what was put into unittest isn't, unless you also isolate the modules and class instances, which effectively requires processes.

Lastly, the broad overarching refactor I'd like to do is twofold:

- Completely separate the different users of TestCase: the test executor, the reporter, and the test author share the same namespace today, and it's super awkward. Adding a new executor-only interface at e.g. case._executor would give test authors much more freedom about what they do and don't override, without worrying about interactions with the test-running framework. Moving all the reporting back up to the executor as a thunk would decouple the reporting logic from the internals of the test case, allowing the elimination of the placeholder objects that currently glue different test systems together.

- Tweak the existing pseudo-streaming contract for the executor to be purely forward-flow, aware of concurrency, and more detailed. For example, a means for tests to emit metrics like "setting up this database took 10 seconds" - discarded, or captured if the reporter supports it - would be very useful in larger test systems. Right now everyone that does this does it in a bespoke fashion.

re: hamcrest - love it. That's what testtools.matchers were inspired by, though I think we go a bit further in useful ways.

Lastly, pytest - it's beautiful, great community, some bits that I will never see eye to eye on :).
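For completeness, the same addCleanup trick works in plain unittest with no extra dependency. A minimal sketch (untested, and using tempfile.TemporaryDirectory purely as a concrete stand-in for the MyContextManager placeholder above):

```
import tempfile
import unittest


class Example(unittest.TestCase):
    def setUp(self):
        super().setUp()
        # Any context manager works here; TemporaryDirectory is just a
        # concrete stand-in for the MyContextManager placeholder above.
        cm = tempfile.TemporaryDirectory()
        self.tmpdir = cm.__enter__()
        # Register the cleanup only after __enter__ succeeds, so __exit__
        # runs exactly once, even when the test itself fails.
        self.addCleanup(cm.__exit__, None, None, None)

    def test_fred(self):
        self.assertTrue(self.tmpdir)


if __name__ == "__main__":
    unittest.main()
```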
Use it and enjoy, or not - whatever works for you :)

-Rob

1: https://rbtcollins.wordpress.com/2010/05/10/maintainable-pyunit-test-suites/
2: https://rbtcollins.wordpress.com/2010/09/18/maintainable-pyunit-test-suites-...

On 24 August 2017 at 21:50, Neil Girdhar <mistersheik@gmail.com> wrote:
Makes sense. Thanks!
On Thu, Aug 24, 2017 at 5:20 AM Nick Coghlan <ncoghlan@gmail.com> wrote:
On 24 August 2017 at 08:20, Neil Girdhar <mistersheik@gmail.com> wrote:
On Wed, Aug 23, 2017 at 3:31 AM Nick Coghlan <ncoghlan@gmail.com> wrote:
However, PEP 550's execution contexts may provide a way to track the test state reliably that's independent of being a method on a test case instance, in which case it would become feasible to offer a more procedural interface in addition to the current visibly object-oriented one.
If you have time, could you expand on that a little bit?
unittest.TestCase provides a few different "config setting" type attributes that affect how failures are reported:
- self.maxDiff (length limit for rich diffs)
- self.failureException (exception used to report errors)
- self.longMessage (whether custom messages replace or supplement the default ones)
There are also introspection methods about the currently running test:
- self.id() (currently running test ID)
- self.shortDescription() (test description)
And some stateful utility functions:
- self.subTest() (tracks subtest results)
- self.addCleanup() (tracks resource cleanup requests)
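A small illustrative sketch (the test names are invented) of how all of these ride along on self today:

```
import unittest


class Example(unittest.TestCase):
    # Config settings that shape how failures are reported:
    maxDiff = None        # no length limit on rich diffs
    longMessage = True    # custom messages supplement the defaults

    def test_demo(self):
        print(self.id())                 # introspection: the running test's ID
        self.addCleanup(print, "done")   # stateful utility: cleanup request
        with self.subTest(case="one"):   # stateful utility: subtest tracking
            self.assertEqual(1, 1, "custom message")


if __name__ == "__main__":
    unittest.main()
```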
At the moment, these are all passed in to test methods as a piece of explicit context (the "self" attribute), and that's what makes it hard to refactor unittest to support standalone top-level test functions and standalone assertion functions: there's currently no way to make those settings and operations available implicitly instead.
That all changes if there's a robust way for the unittest module to track the "active test case" that owns the currently running test method without passing the test case reference around explicitly:
- existing assertion & helper methods can be wrapped with independently importable snake_case functions that look for the currently active test case and call the relevant methods on it
- new assertion functions can be added to separate modules rather than adding yet more methods to TestCase (see https://bugs.python.org/issue18054 for some discussion of that)
- given the above enhancements, the default test loader could usefully gain support for top-level function definitions (by wrapping them in autogenerated test case instances)
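A rough sketch of what that "active test case" tracking might look like - illustrative only, using contextvars (PEP 567, which is where the PEP 550 ideas eventually landed), and with invented names like _active_case and assert_equal:

```
import contextvars
import unittest

# Hypothetical module-level state: the test case that is currently running.
_active_case = contextvars.ContextVar("active_case")


def assert_equal(first, second, msg=None):
    # A snake_case wrapper that looks up the active case implicitly,
    # instead of being a method on it.
    _active_case.get().assertEqual(first, second, msg)


class TrackingTestCase(unittest.TestCase):
    # Records itself as the active case for the duration of run().
    def run(self, result=None):
        token = _active_case.set(self)
        try:
            return super().run(result)
        finally:
            _active_case.reset(token)


class Example(TrackingTestCase):
    def test_addition(self):
        assert_equal(1 + 1, 2)


if __name__ == "__main__":
    unittest.main()
```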
Cheers, Nick.
-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia