Using pytest_assertrepr_compare() for marked tests only?
I'm playing with the idea of returning a custom explanation for tests with a
particular marker but returning pytest's standard explanation for tests
without the marker. I was hoping to do something like this...

    # content of conftest.py

    def pytest_assertrepr_compare(config, op, left, right):
        if <current test uses "mymark">:
            return ['My custom explanation!']

    # content of test_mystuff.py

    import pytest

    def test_mystuff1():
        assert 100 == 105

    @pytest.mark.mymark
    def test_mystuff2():
        assert 100 == 105

    # Desired Output

    __________ test_mystuff1 __________

        def test_mystuff1():
    >       assert 100 == 105
    E       assert 100 == 105
    E         -100
    E         +105

    test_mystuff.py:4: AssertionError

    __________ test_mystuff2 __________

        @pytest.mark.mymark
        def test_mystuff2():
    >       assert 100 == 105
    E       assert My custom explanation!

    test_mystuff.py:8: AssertionError

I've looked through the plugin API but I don't see a straightforward way of
doing this. I was hoping I could get a reference to the current test through
the config (even if it was a convoluted reference) but I don't know if this
is possible. Digging deeper, I see some undesirable approaches I could
probably use to accomplish this (e.g., monkey-patch util._reprcompare or
possibly maintain a plugin-level global that points to the current test) but
nothing that seems to work with the grain of the plugin API.

So I have a few related questions I'm hoping someone can help me with:

Q: Is there a way to reference the current test from within
   pytest_assertrepr_compare()?

Q: If not, is there a decent way to implement something like this that
   isn't too hacky?
On Sun, May 12, 2019 at 07:44:01PM -0400, Shawn Brown wrote:
So I have a few related questions I'm hoping someone can help me with:
Q: Is there a way to reference the current test from within pytest_assertrepr_compare()?
Q: If not, is there a decent way to implement something like this that isn't too hacky?
In your own code, you probably can't do so easily. However, pytest itself
does have access to the item when calling pytest_assertrepr_compare - so you
might want to consider submitting a contribution to pytest to pass the item
to the hook as well! :)

Florian

--
https://www.qutebrowser.org | me@the-compiler.org (Mail/XMPP)
   GPG: 916E B0C8 FD55 A072 | https://the-compiler.org/pubkey.asc
         I love long mails! | https://email.is-not-s.ms/
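For what it's worth, the "plugin-level global that points to the current test" workaround mentioned in the original question could be sketched roughly as below. This is a hedged illustration, not an endorsed pattern: the `_current_item` name is invented, and it wraps `pytest_runtest_protocol` to track which item is running so the assertion hook can check its markers.

```python
# conftest.py -- hypothetical sketch of the "plugin-level global" workaround.
# The _current_item variable is invented for illustration; this is the hacky
# approach the original poster wanted to avoid.
import pytest

_current_item = None

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_protocol(item, nextitem):
    # Remember which test item is currently running, then clear it afterwards.
    global _current_item
    _current_item = item
    yield
    _current_item = None

def pytest_assertrepr_compare(config, op, left, right):
    # Only customize the explanation for tests carrying the "mymark" marker;
    # returning None falls back to pytest's standard explanation.
    if _current_item is not None and _current_item.get_closest_marker("mymark"):
        return ["My custom explanation!"]
    return None
```

Returning `None` from `pytest_assertrepr_compare` is what lets unmarked tests keep the default comparison output.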
Hi Shawn,

On Sun 12 May 2019 at 19:44 -0400, Shawn Brown wrote:
I'm playing with the idea of returning a custom explanation for tests with a particular marker but returning Pytest's standard explanation for tests without the marker.
Interesting! Would you mind describing your use case a bit more and why this makes sense? Not because I want to stop such a feature; I'm really just curious what information you want that you don't already have available in the hook.
I've looked through the plugin API but I don't see a straightforward way of doing this. I was hoping I could get a reference to the current test through the config (even if it was a convoluted reference) but I don't know if this is possible. Digging deeper, I see some undesirable approaches I could probably use to accomplish this (e.g., monkey-patch util._reprcompare or possibly maintain a plugin-level global that points to the current test) but nothing that seems to work with the grain of the plugin API.
So I have a few related questions I'm hoping someone can help me with:
Q: Is there a way to reference the current test from within pytest_assertrepr_compare()?
Q: If not, is there a decent way to implement something like this that isn't too hacky?
As Florian said, we didn't think of exposing the Item to this hook, but it
sounds reasonable to have. Such a change would be backwards compatible, so
it would be a great feature for you to contribute. IIRC the mechanism by
which the hook is provided should allow adding the item to the call.

If you're up for creating the feature and would like some guidance, I'd be
happy to help. Perhaps create an issue on GitHub to discuss further details
(and follow up on this thread with a link).

Kind regards,
Floris
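To make the proposal concrete, the change Florian and Floris describe might look something like the sketch below. This signature is hypothetical (it is the suggested contribution, not current pytest API): the hook would simply receive the item as an extra argument, making the marker check trivial.

```python
# Hypothetical hook signature -- NOT the current pytest API, just a sketch of
# what passing the item into the hook could enable.
def pytest_assertrepr_compare(config, op, left, right, item):
    # With direct access to the item, checking for a marker is one call.
    if item.get_closest_marker("mymark"):
        return ["My custom explanation!"]
    return None  # unmarked tests keep pytest's standard explanation
```

No plugin-level state would be needed; the item arrives alongside the comparison operands.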
Floris and Florian,

Thanks for the feedback. As I mentioned, this is something I was playing
with as an experiment. My specific use case would not be satisfied with this
change, but I thought I'd ask if it were possible since it was an idea I
stumbled across while investigating my use case. I will, however, detail my
use case just so you know what I'm working on.

I've been developing a package that repurposes software testing practices
for data validation (https://datatest.rtfd.io/). A simple example using
datatest's validate() function:

    from datatest import validate

    def test_somedata():
        data = {'A', 'B', 'C', 'D'}
        requirement = {'A', 'B'}
        validate(data, requirement)

The test above will fail with the following exception:

    ValidationError: does not satisfy set membership (2 differences): [
        Extra('C'),
        Extra('D'),
    ]

The ValidationError is a subclass of AssertionError. It contains one item
for each element in `data` that fails to satisfy the `requirement`.

I was playing around with the idea of a marker to implement something like
the following but still generate the ValidationError output via pytest's
plugin system:

    import pytest

    @pytest.mark.validate
    def test_somedata():
        data = {'A', 'B', 'C', 'D'}
        requirement = {'A', 'B'}
        assert data == requirement

The idea is to use an `assert` statement but still get the ValidationError
output (although there are many cases where using "==" is semantically
wrong, so the validate() function wouldn't go away in all cases). While I
could emulate this type of output with pytest_assertrepr_compare(), I need
to do more than emulate the error.

Here's where things get more involved: the datatest package also implements
a system of acceptances that operate on the items contained within the
ValidationError.
If a user decides that the "Extra" items are acceptable, they can use the
accepted() context manager:

    from datatest import validate, accepted, Extra

    def test_somedata():
        data = {'A', 'B', 'C', 'D'}
        requirement = {'A', 'B'}
        with accepted(Extra):
            validate(data, requirement)

In this case, the accepted() context manager accepts all of the items in the
error (which suppresses the error entirely). But if the error contained
other items that were not accepted, then an error would be re-raised with
the remainder of the items.

Below, I've added comments showing why the assertrepr comparison can't work
with the context manager:

    import pytest
    from datatest import validate, accepted, Extra

    @pytest.mark.validate
    def test_somedata():
        data = {'A', 'B', 'C', 'D'}
        requirement = {'A', 'B'}
        with accepted(Extra):            # <- Never receives an inspectable error!
            assert data == requirement   # <- Raises an assertion that cannot be inspected or changed!

To implement my dream "validate marker", I would need to manipulate the
assertion inside the scope of the context manager, which is what the
validate() function does. But doing this is far from what
pytest_assertrepr_compare() has access to. One solution I could see is an
entirely new hook:

    def pytest_do_assertion(item, op, left, right=None):
        """Perform comparison.

        Returns an AssertionError or None.
        """

But from what I can see, implementing a hook like `pytest_do_assertion`
could require unreasonably invasive changes to pytest. That said, I would be
very interested if something like `pytest_do_assertion()` were possible to
add, but I understand if it's not. Also, the hook name is just a throwaway
idea; it's probably not appropriate.

-Shawn
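To clarify why acceptances need an inspectable error, here is a conceptual sketch of how a context manager like accepted() could work. This is illustrative only, not datatest's actual implementation: the class bodies and the `differences` attribute name are invented to show the filter-and-re-raise pattern the message describes.

```python
# Conceptual sketch only -- NOT datatest's real implementation. It shows why
# the error must carry inspectable items: __exit__ filters them and decides
# whether to suppress or re-raise.

class Extra:
    """Stand-in for a difference item (illustrative)."""
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        return isinstance(other, Extra) and self.value == other.value
    def __repr__(self):
        return f"Extra({self.value!r})"

class ValidationError(AssertionError):
    """AssertionError subclass carrying one item per failing element."""
    def __init__(self, message, differences):
        super().__init__(message)
        self.differences = list(differences)

class accepted:
    """Suppress a ValidationError when every difference matches the given
    type; otherwise re-raise with only the remaining differences."""
    def __init__(self, diff_type):
        self.diff_type = diff_type
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        if exc_type is None or not issubclass(exc_type, ValidationError):
            return False  # nothing we can handle; let it propagate
        remaining = [d for d in exc.differences
                     if not isinstance(d, self.diff_type)]
        if remaining:
            # Some items were not accepted: re-raise with the remainder.
            raise ValidationError(str(exc), remaining)
        return True  # all differences accepted: suppress the error
```

A plain `assert data == requirement` raises a bare AssertionError with no `differences` attribute, so a context manager like this has nothing to filter, which is exactly the problem described above.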
participants (3)
- Florian Bruhin
- Floris Bruynooghe
- Shawn Brown