Hello pytesters,

We are putting together a small Indiegogo campaign for a pytest plugin which should significantly improve the experience of executing a long test suite. We'd be glad to hear any feedback before launching it. The draft is here: https://www.indiegogo.com/project/preview/4fe07d66

Also, is anybody interested in helping with promotion and/or development?

Cheers,
Tibor
This certainly looks interesting. The "last failures" part is implemented in pytest-cache; are you aware of that?

The reordering by speed sounds nice. Be aware that fixture setup/teardown may be influenced by reordering and thus change the overall test execution time. I discovered this while extending pytest-random.

I also thought about using something like coverage to only run affected tests. How much testing did you do on your prototype? Do you keep track of (pip/setuptools) installed packages (versions)? Did you try changes at various levels, like new files, modules, classes, metaclasses, mocks etc.? Python is very dynamic, as you most likely know :)

Regards,
Florian Schulze
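(For illustration: the reordering-by-speed idea above can be sketched in a few lines. The function name and the duration mapping here are hypothetical, not any plugin's actual format; a real plugin would record durations from previous runs and persist them between sessions.)

```python
def reorder_by_speed(test_ids, durations):
    """Run previously-fast tests first.

    `durations` maps test id -> seconds recorded in earlier runs.
    Tests with no recorded duration sort to the end and keep their
    relative order, since sorted() is stable.
    """
    return sorted(test_ids, key=lambda t: durations.get(t, float("inf")))

# Hypothetical durations from a previous run:
recorded = {"test_db": 4.2, "test_api": 0.3, "test_math": 0.01}
order = reorder_by_speed(
    ["test_db", "test_new", "test_api", "test_math"], recorded
)
print(order)  # fastest known tests first, unknown test last
```

As Florian notes, this only optimizes time-to-first-failure; if fixtures are shared between neighbouring tests, reordering can change how often expensive setup/teardown runs.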
Yes, this could be interesting, and I'd consider asking my company to donate if it has clear benefits over what's already out there.

As Florian mentioned, pytest-cache lets you rerun failed tests. pytest-sugar gives you a nice progress display and instant failures as they happen. pytest-xdist helps parallelize and speed up slow test runs.

I wonder if pytest-cache should be in pytest core? Personally I always forget to install and use it (which maybe says more about me than about pytest or pytest-cache), but _maybe_ if it were in core, it would be more likely to be discovered and used? Sorry, kind of a tangent.

Perhaps a demo would make it clearer what you're envisioning?

-Marc
http://marc-abramowitz.com
Sent from my iPhone 4S
On Fri, Dec 12, 2014 at 07:36 -0800, Marc Abramowitz wrote:
> I wonder if pytest-cache should be in pytest core? Personally I always forget to install and use it. (Which maybe says more about me than about pytest or pytest-cache) But _maybe_ if it was in core, it would be more likely to be discovered and used? Sorry, kind of a tangent.
FWIW, Ronny is working on pytest-cache-related refactorings, and the plan indeed is to put pytest-cache functionality into pytest core.

best,
holger
Hello Marc and Florian,

I realized earlier that the "last failures" part is already implemented in pytest-cache. Also, according to Holger it will move into core for pytest-2.7. Of course we'll use that, not implement it independently. I'll rewrite the campaign text so that the feature is promoted as part of the whole solution, but make it clear it's implemented in pytest-cache (or rather core).

As for the amount of testing done on the prototype, it was pretty minimal. In this regard it's quite a risky project, of course, and big compromises might be necessary in order to run substantial projects. I'll make that clear in the description (and report any further results from testing).

Thanks for the feedback!
Tibor
Hi Florian,

A while ago you raised some good points about selecting only the tests affected by recent changes as the basic functionality of our plugin, testmon. We did more testing and thinking about these difficult questions, and I think they will not be that bad.
> I also thought about using something like coverage to only run affected tests. How much testing did you do in your prototype?
E.g. pytest's own test suite runs pretty nicely with testmon.
> Do you keep track of (pip/setuptools) installed packages (versions)?
Not yet, but I'm pretty sure a simple detection of changes under sys.path that triggers a full test run would be easy to implement and quick enough.

> Did you try changes at various levels like new files, modules, classes, metaclasses, mocks etc? Python is very dynamic as you most likely know :)
* files: sure
* modules, classes, metaclasses, mocks: unless I'm missing something, this is not necessary. All of the dynamic constructs are created deterministically in the project's .py files. If none of the files involved in the execution of a test has changed, then all of the dynamic constructs will be the same in the next test run.

Of course I might be missing something, so if somebody has a good case, please let us know. These things cause trouble for coverage.py (the multiprocessing and gevent issues are being solved): http://nedbatchelder.com/code/coverage/trouble.html

These are my thoughts about the dependencies of a test: https://github.com/tarpas/testmon#thoughts

Thanks for the help,
Tibor

testmon - make your tests a breeze to execute
http://igg.me/at/testmon
Hello everyone,

the campaign is live now: http://www.indiegogo.com/at/testmon

We simply track the module-to-module dependencies for now, which is much simpler but still demonstrates the benefits nicely. We now kindly ask for your feedback and contributions.

Thanks!
Tibor
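(For illustration: the module-level tracking mentioned above can be sketched with the stdlib alone — parse each module's imports with `ast`, build a reverse dependency graph, and mark every module that transitively imports a changed one as affected. This is only a sketch of the idea, not testmon's actual code.)

```python
import ast

def imported_names(source):
    """Top-level module names imported by a piece of Python source."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def affected_modules(sources, changed):
    """sources: module name -> source text; changed: set of edited modules.
    Returns every project module whose behaviour may depend on a change."""
    importers = {}  # module -> set of modules that import it
    for mod, src in sources.items():
        for dep in imported_names(src) & sources.keys():
            importers.setdefault(dep, set()).add(mod)
    affected, queue = set(changed), list(changed)
    while queue:  # breadth-ish walk up the reverse dependency graph
        for mod in importers.get(queue.pop(), ()):
            if mod not in affected:
                affected.add(mod)
                queue.append(mod)
    return affected

# Hypothetical four-module project: editing util affects app and test_app,
# but test_misc (which only imports the stdlib) stays untouched.
sources = {
    "util":      "PI = 3.14\n",
    "app":       "import util\n",
    "test_app":  "import app\n",
    "test_misc": "import math\n",
}
print(sorted(affected_modules(sources, {"util"})))
```

Module-level granularity over-selects compared with per-test coverage data (any change to a module reruns every test in modules importing it), but it avoids the dynamic-construct questions Florian raised, which may be why it makes a reasonable first version.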
participants (4)
- Florian Schulze
- holger krekel
- Marc Abramowitz
- Tibor Arpas