Re: [Python-ideas] Please consider adding context manager versions of setUp/tearDown to unittest.TestCase

TBH you're completely right. Every time I see someone using unittest andItsHorriblyUnpythonicNames, I want to kill a camel. Sometimes, though, I feel like part of the struggle is the alternative. If you dislike unittest, but pytest is too "magical" for you, what do you use? Many Python testing tools like nose are just test *runners*, so you still need something else. In the end, many just end up back at unittest, maybe with nose on top. As much as I hate JavaScript, their testing libraries are leagues above what Python has.

-- Ryan (ライアン)
Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else
http://refi64.com

On Aug 22, 2017 at 5:09 PM, Chris Barker <chris.barker@noaa.gov> wrote:

> ** Caution: cranky curmudgeonly opinionated comment ahead: **
>
> unittest is such an ugly Java-esque static mess of an API that there's really no point in trying to clean it up and make it more pythonic -- go off and use pytest and be happier.
>
> -CHB
>
> On Tue, Aug 22, 2017 at 5:42 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:

Knowing nothing about the JavaScript ecosystem (other than that leftpad is apparently not a joke and everything needs more jQuery), what are the leagues-above testing libraries?

On Tue, Aug 22, 2017 at 5:20 PM, rymg19@gmail.com <rymg19@gmail.com> wrote:

On Tue, Aug 22, 2017 at 06:20:50PM -0400, rymg19@gmail.com wrote:
> TBH you're completely right. Every time I see someone using unittest
> andItsHorriblyUnpythonicNames, I want to kill a camel.
If your only complaint about unittest is that you_miss_writing_underscores_between_all_the_words, then unittest must be pretty good.

-- Steve

Getting kind of OT, but:

> ... pytest is too "magical" for you,

I do get confused a bit sometimes, but for the most part, I simply don't use the magic -- pytest does a great job of making the simple things simple.

> what do you use? Many Python testing tools like nose are just test
> *runners*, so you still need something else.

nose did provide a number of utilities to make testing friendly, but it is apparently dead, and AFAICT, nose2 is mostly a test runner for unittest2 :-( I converted to pytest a while back, mostly inspired by its wonderful reporting of the details of test failures.

> If your only complaint about unittest is that
> you_miss_writing_underscores_between_all_the_words, then unittest must
> be pretty good.

For my part, I kinda liked StudlyCaps before I drank the pep8 kool-aid. What I dislike about unittest is that it is a pile of almost completely worthless boilerplate that you have to write. What the heck are all those assertThis methods for? I always thought they were ridiculous, but then I went in to write a new one (for math.isclose(), which was rejected, and one of these days I may add it to assertAlmostEqual ... and assertNotAlmostEqual!) -- lo and behold, the entire purpose of the assert methods is to create a nice message when the test fails. Really! This in a dynamic language with wonderful introspection capabilities. So that's most of the code in unittest -- completely worthless boilerplate that just makes you have to type more.

Then there is the fixture stuff -- not too bad, but still a lot clunkier than pytest fixtures. And no parameterized testing -- that's a killer feature (that nose provided as well).

Anyway, that's enough ranting.....

-CHB

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker@noaa.gov
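[A side note not from Chris's message: the two pytest features he points to look roughly like this. pytest rewrites bare assert statements so a failure report shows the evaluated expression and its operand values (no assertEqual-style wrapper needed), and parametrize runs one test per argument tuple. The function and values below are made up for illustration.]

```
import math

import pytest

@pytest.mark.parametrize(
    "a, b, expected",
    [
        (1.0, 2.0, 3.0),
        (0.1, 0.2, 0.3),
        (-1.5, 1.5, 0.0),
    ],
)
def test_addition(a, b, expected):
    # On failure, pytest's assertion rewriting reports the isclose() call
    # together with the actual argument values -- no custom message required.
    assert math.isclose(a + b, expected)
```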

On Tue, Aug 22, 2017 at 5:19 PM, Chris Barker <chris.barker@noaa.gov> wrote:
> anyway, that's enough ranting.....
Got carried away with the ranting, and didn't flesh out my point. My point is that unittest is a very static, not very pythonic framework -- if you are productive with it, great, but I don't think it's worth trying to add more pythonic niceties to it. Chances are pytest (or nose2?) may already have them, or, if not, the simpler structure of pytest tests makes them easier to write yourself.

-CHB

Like you, I used nose and then switched to pytest. The reason I proposed this for unittest is that pytest and nose and (I think) most of the other testing frameworks inherit from unittest, so improving unittest has downstream benefits. I may nevertheless propose this to the pytest people if this doesn't make it into unittest.

On Tue, Aug 22, 2017 at 8:26 PM Chris Barker <chris.barker@noaa.gov> wrote:

On Tue, Aug 22, 2017 at 7:05 PM, Neil Girdhar <mistersheik@gmail.com> wrote:
Not really -- they extend unittest, in the sense that their test runners can be used with unittest TestCases -- but they don't depend on unittest.

> so improving unittest has downstream benefits.

Only to those using unittest -- a lot of folks do use pytest or nose primarily as a test runner, so those folks would benefit.

> I may nevertheless propose this to the pytest people if this doesn't make
> it into unittest.

Anyway, I'm just being a curmudgeon -- if folks think it would be useful and not disruptive, then why not?

-CHB

On 23 August 2017 at 08:20, rymg19@gmail.com <rymg19@gmail.com> wrote:
A snake_case helper API for unittest that I personally like is hamcrest, since that also separates out the definition of testing assertions from being part of a test case: https://pypi.python.org/pypi/PyHamcrest

Introducing such a split natively into unittest is definitely attractive, but would currently be difficult due to the way that some features like self.maxDiff and self.subTest work.

However, PEP 550's execution contexts may provide a way to track the test state reliably that's independent of being a method on a test case instance, in which case it would become feasible to offer a more procedural interface in addition to the current visibly object-oriented one.

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
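[For readers who haven't seen it, a small example of the PyHamcrest style Nick mentions. assert_that raises a plain AssertionError with a descriptive message when a matcher fails, so it needs no TestCase at all; the values being tested are made up for illustration.]

```
# pip install PyHamcrest
from hamcrest import assert_that, calling, close_to, equal_to, raises

def test_arithmetic():
    assert_that(2 + 2, equal_to(4))
    assert_that(0.1 + 0.2, close_to(0.3, delta=1e-9))

def test_bad_int_literal():
    # Matchers compose: assert that calling int("x") raises ValueError.
    assert_that(calling(int).with_args("x"), raises(ValueError))
```

These are ordinary functions usable from pytest-style tests or from TestCase methods alike; only the failure messages come from the matcher library.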

On 24 August 2017 at 08:20, Neil Girdhar <mistersheik@gmail.com> wrote:
unittest.TestCase provides a few different "config setting" type attributes that affect how failures are reported:

- self.maxDiff (length limit for rich diffs)
- self.failureException (exception used to report errors)
- self.longMessage (whether custom messages replace or supplement the default ones)

There are also introspection methods about the currently running test:

- self.id() (currently running test ID)
- self.shortDescription() (test description)

And some stateful utility functions:

- self.addSubTest() (tracks subtest results)
- self.addCleanup() (tracks resource cleanup requests)

At the moment, these are all passed in to test methods as a piece of explicit context (the "self" attribute), and that's what makes it hard to refactor unittest to support standalone top-level test functions and standalone assertion functions: there's currently no way to make those settings and operations available implicitly instead.

That all changes if there's a robust way for the unittest module to track the "active test case" that owns the currently running test method without passing the test case reference around explicitly:

- existing assertion & helper methods can be wrapped with independently importable snake_case functions that look for the currently active test case and call the relevant methods on it
- new assertion functions can be added to separate modules rather than adding yet more methods to TestCase (see https://bugs.python.org/issue18054 for some discussion of that)
- given the above enhancements, the default test loader could usefully gain support for top level function definitions (by wrapping them in autogenerated test case instances)

Cheers, Nick.

-- Nick Coghlan | ncoghlan@gmail.com | Brisbane, Australia
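[To make the "active test case" idea concrete, here is a purely hypothetical sketch -- nothing below exists in unittest, and every helper name is invented. It uses contextvars (PEP 567, the simplified successor to PEP 550 that shipped in Python 3.7) to publish the running TestCase so that module-level snake_case helpers can delegate to it.]

```
# Hypothetical sketch only: none of these names are part of unittest.
import contextvars
import unittest

_active_case = contextvars.ContextVar("active_test_case", default=None)

def _current_case():
    case = _active_case.get()
    if case is None:
        raise RuntimeError("no test case is currently running")
    return case

# Independently importable snake_case wrappers that find the active case.
def assert_equal(first, second, msg=None):
    _current_case().assertEqual(first, second, msg)

def add_cleanup(func, *args, **kwargs):
    _current_case().addCleanup(func, *args, **kwargs)

class ContextTrackingTestCase(unittest.TestCase):
    """Publishes itself as the active case for the duration of each run."""

    def run(self, result=None):
        token = _active_case.set(self)
        try:
            return super().run(result)
        finally:
            _active_case.reset(token)

class Example(ContextTrackingTestCase):
    def test_addition(self):
        assert_equal(1 + 1, 2)  # no self. prefix needed
```

The loader changes Nick mentions (wrapping top-level functions in autogenerated test case instances) would sit on top of something like this, and are out of scope for the sketch.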

So (wearing my maintainer hat for unittest) - very happy to consider proposals and patches; I'd very much like to fix some structural APIs in unittest, but I don't have the bandwidth to do so myself at this point. And what you're asking about is largely a structural issue, because of the interactions with test reporting and with class/module setup.

As Ned says though, the specific question asked is best solved by using the context manager protocol and manually entering and exiting: addCleanup is ideal for managing that (it was literally designed for this). The fixtures library uses this to make use of fixtures (which are merely enhanced context managers) trivial. We should add an adapter there, I think. If I get time I'll put this on stackexchange, but:

```
import unittest
import fixtures

class ContextFixture(fixtures.Fixture):
    """Adapt an ordinary context manager into a fixtures.Fixture."""

    def __init__(self, cm):
        super().__init__()
        self._cm = cm

    def _setUp(self):
        # Register the exit first so it runs even if later setup fails.
        self.addCleanup(self._cm.__exit__, None, None, None)
        self._cm.__enter__()

class Example(fixtures.TestWithFixtures):
    def setUp(self):
        super().setUp()
        # MyContextManager is whatever context manager the test needs active.
        self._cm_reference_if_I_need_it = self.useFixture(
            ContextFixture(MyContextManager()))

    def test_fred(self):
        1 / 0
```

should (I haven't tested it :P) do the right thing.

I've written about maintainability in unittest previously [1] [2], and those experiments have worked very well. Your post has reminded me of the stalled work in this space. In particular, avoiding inheritance for code reuse has much better maintenance properties. I think we learnt enough to sensibly propose it as an evolution for core unittest, though some discussion is needed: for instance, the MIME attachment aspect weirds some folk out, though it's very, very, very useful in the cases where it matters, and pretty ignorable in the cases where it doesn't.

Another related thing is getting testresources' awkward bits fixed so that it becomes a joy to use - it's a much better approach than class setup and module setup, if for no other reason than that it is concurrency friendly [partition and execute], whereas what was put into unittest isn't unless you also isolate the modules and class instances, which effectively requires processes.

Lastly, the broad overarching refactor I'd like to do is twofold:

- Completely separate the different users of TestCase: the test executor, the report, and the test author are in the same namespace today, and it's super awkward. Adding a new executor-only interface at e.g. case._executor would allow test authors much more freedom about what they override and don't, without worrying about interactions with the test running framework. Moving all the reporting back up to the executor as a thunk would decouple the reporting logic from the internals of the test case, allowing for the elimination of the placeholder objects that currently glue different test systems together.

- Tweak the existing pseudo-streaming contracts for the executor to be more purely forward-flow only, aware of concurrency, and more detailed - e.g. providing a means for tests to emit metrics like 'setting up this database took 10 seconds', and have that discarded or captured depending on whether the reporter supports it, would be very useful in larger test systems. Right now everyone that does this does it in a bespoke fashion.

re: hamcrest - love it. That's what testtools.matchers were inspired by. But we go a bit further, I think, in useful ways.

Lastly, pytest - it's beautiful, great community, some bits that I will never see eye to eye on :). Use it and enjoy, or not - whatever works for you :)

-Rob

1: https://rbtcollins.wordpress.com/2010/05/10/maintainable-pyunit-test-suites/
2: https://rbtcollins.wordpress.com/2010/09/18/maintainable-pyunit-test-suites-...
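[For comparison, a standard-library-only sketch of the same addCleanup-based idea Robert describes, without the fixtures dependency; as in Robert's example, MyContextManager is just a placeholder for whatever context manager the test needs.]

```
import unittest
from contextlib import ExitStack

class Example(unittest.TestCase):
    def setUp(self):
        stack = ExitStack()
        # Close the stack (exiting everything entered on it) when the test
        # finishes, whether it passed or failed.
        self.addCleanup(stack.close)
        # MyContextManager is a placeholder for the context manager under discussion.
        self.resource = stack.enter_context(MyContextManager())

    def test_uses_resource(self):
        self.assertIsNotNone(self.resource)
```

For what it's worth, Python 3.11 later added TestCase.enterContext(), which packages up exactly this enter-now, clean-up-later pattern.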

participants (7)
- Chris Barker
- Neil Girdhar
- Nick Coghlan
- Nick Timkovich
- Robert Collins
- rymg19@gmail.com
- Steven D'Aprano