unittest.subTest, hypothesis @given + pytest

From https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests :
> When there are very small differences among your tests, for instance some parameters, unittest allows you to distinguish them inside the body of a test method using the subTest() context manager.
>
> For example, the following test:

```python
import unittest


class NumbersTest(unittest.TestCase):

    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)
```
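For comparison, the same check written with pytest's parametrize marker (the feature requested in the thread quoted at the end) might look like the sketch below; this is my own illustration, not from the unittest docs:

```python
import pytest


# Each value of i becomes its own reported test item, much like a subTest.
@pytest.mark.parametrize("i", range(6))
def test_even(i):
    assert i % 2 == 0
```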

https://hypothesis.readthedocs.io/en/latest/quickstart.html#writing-tests :

Think of a normal unit test as being something like the following:

1. Set up some data.
2. Perform some operations on the data.
3. Assert something about the result.

Hypothesis lets you write tests which instead look like this:

1. For all data matching some specification.
2. Perform some operations on the data.
3. Assert something about the result.

This is often called property-based testing, and was popularised by the Haskell library QuickCheck.

It works by generating arbitrary data matching your specification and checking that your guarantee still holds in that case. If it finds an example where it doesn’t, it takes that example and cuts it down to size, simplifying it until it finds a much smaller example that still causes the problem. It then saves that example for later, so that once it has found a problem with your code it will not forget it in the future.

Writing tests of this form usually consists of deciding on guarantees that your code should make - properties that should always hold true, regardless of what the world throws at you.
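To see the shrinking behaviour described above in action, here is a deliberately false property (my own sketch, not from the Hypothesis docs). Running it under pytest should make Hypothesis report a minimal counterexample, typically something as small as x=0, y=1:

```python
from hypothesis import given, strategies as st


# A false claim: addition and multiplication disagree for most inputs.
# Hypothesis finds a failing example, shrinks it toward a minimal one, and
# saves it in its example database so the failure is replayed on the next run.
@given(st.integers(), st.integers())
def test_addition_is_not_multiplication(x, y):
    assert x + y == x * y
```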

https://hypothesis.readthedocs.io/en/latest/details.html#test-statistics :

```bash
pytest --hypothesis-show-statistics
```

```
- during generate phase (0.06 seconds):
    - Typical runtimes: < 1ms, ~ 47% in data generation
    - 100 passing examples, 0 failing examples, 0 invalid examples
- Stopped because settings.max_examples=100
```
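The `Stopped because settings.max_examples=100` line refers to a per-test knob that can be changed with the settings decorator; a minimal sketch:

```python
from hypothesis import given, settings, strategies as st


# Raising max_examples explores more inputs at the cost of a longer run.
@settings(max_examples=500)
@given(st.integers())
def test_abs_is_non_negative(x):
    assert abs(x) >= 0
```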

```python
from hypothesis import given, strategies as st


@given(st.integers(), st.integers())
def test_ints_are_commutative(x, y):
    assert x + y == y + x


@given(x=st.integers(), y=st.integers())
def test_ints_cancel(x, y):
    assert (x + y) - y == x


@given(st.lists(st.integers()))
def test_reversing_twice_gives_same_list(xs):
    # This will generate lists of arbitrary length (usually between 0 and
    # 100 elements) whose elements are integers.
    ys = list(xs)
    ys.reverse()
    ys.reverse()
    assert xs == ys


@given(st.tuples(st.booleans(), st.text()))
def test_look_tuples_work_too(t):
    # A tuple is generated as the one you provided, with the corresponding
    # types in those positions.
    assert len(t) == 2
    assert isinstance(t[0], bool)
    assert isinstance(t[1], str)
```
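Since the section title pairs @given with unittest as well, the same decorator works on TestCase methods; a minimal sketch of my own, not from the quickstart:

```python
import unittest

from hypothesis import given, strategies as st


class TestArithmetic(unittest.TestCase):
    # Hypothesis supplies x and y after self, once per generated example.
    @given(st.integers(), st.integers())
    def test_addition_commutes(self, x, y):
        self.assertEqual(x + y, y + x)


if __name__ == "__main__":
    unittest.main()
```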

https://hypothesis.readthedocs.io/en/latest/details.html#defining-strategies :

> The type of object that is used to explore the examples given to your test function is called a `SearchStrategy`. These are created using the functions exposed in the `hypothesis.strategies` module.
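Strategies are ordinary values that can be transformed before being handed to @given; a small illustration using only documented strategies functions (the names are mine):

```python
from hypothesis import given, strategies as st

# .map() transforms each drawn value, so this strategy only yields even ints.
even_ints = st.integers().map(lambda n: n * 2)


@given(even_ints)
def test_even_ints_are_even(n):
    assert n % 2 == 0
```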

https://hypothesis.readthedocs.io/en/latest/details.html#defining-strategies :

> By default, Hypothesis will handle the global random and numpy.random random number generators for you, and you can register others:
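The registration hook referred to is hypothesis.register_random; a minimal sketch (the my_rng name is mine):

```python
import random

from hypothesis import register_random

# A module-level Random instance used by code under test. Registering it
# lets Hypothesis seed and restore it around each example, keeping
# failures reproducible even when this generator is involved.
my_rng = random.Random()
register_random(my_rng)
```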

https://hypothesis.readthedocs.io/en/latest/supported.html#testing-frameworks :

> In terms of what’s actually known to work:
>
> - Hypothesis integrates as smoothly with pytest and unittest as we can make it, and this is verified as part of the CI.
> - pytest fixtures work in the usual way for tests that have been decorated with @given - just avoid passing a strategy for each argument that will be supplied by a fixture. However, each fixture will run once for the whole function, not once per example. Decorating a fixture function with @given is meaningless.
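A sketch of the fixture interaction described in the second bullet (fixture and test names are mine): the fixture value is computed once per test function, while Hypothesis re-runs the body once per generated example:

```python
import pytest
from hypothesis import given, strategies as st


@pytest.fixture
def prefix():
    # Evaluated once for the whole test function, not once per example.
    return "id-"


# No strategy is passed for `prefix`, so pytest injects the fixture;
# Hypothesis only drives `suffix`.
@given(suffix=st.text())
def test_prefix_survives_concatenation(prefix, suffix):
    assert (prefix + suffix).startswith("id-")
```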

On Tue, Sep 7, 2021, 09:16 Filipe Laíns <lains@archlinux.org> wrote:
On Tue, 2021-09-07 at 02:31 +0000, Leonardo Freua wrote:
> When writing some unit tests with the standard unittest library, I missed being
> able to create parameterized tests. This functionality exists in PyTest
> (https://docs.pytest.org/en/6.2.x/parametrize.html) and there is also a library
> called *parameterized* (https://github.com/wolever/parameterized), which aims to
> add this functionality.
>
> However, I think it would be beneficial to have a decorator in Python's standard
> Unittest library.
>
> Note: If this functionality exists natively in Python, please put some example
> or documentation link below.
>
> Thanks in advance.

Hi Leonardo,

Please check out subtests, they allow you to achieve what you are looking for :)
https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests

Cheers,
Filipe Laíns