sylvain.marie at se.com
Wed Nov 14 12:37:31 EST 2018
I am very glad to announce my new plugin pytest-harvest https://smarie.github.io/python-pytest-harvest/ , designed to collect selected fixtures and applicative test results so as to create synthesis tables.
A typical use-case (at least the reason why I built it and will use it :) ) is to easily create benchmarks for data science algorithms.
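To illustrate the kind of "synthesis table" this is about (a plain-Python sketch, not the plugin's actual API): results collected per test id can be rendered as one benchmark row per dataset.

```python
# Illustrative sketch only -- the names `collected` and
# `synthesis_table` are hypothetical, not part of pytest-harvest.
# Given applicative results collected under each test id, build the
# kind of synthesis table useful for data science benchmarks.

collected = {
    "test_algo[dataset1]": {"accuracy": 0.93, "n_samples": 120},
    "test_algo[dataset2]": {"accuracy": 0.87, "n_samples": 450},
}

def synthesis_table(results):
    """Render collected results as tab-separated rows, one per test id."""
    # union of all result keys, so rows with missing fields still align
    cols = sorted({k for row in results.values() for k in row})
    lines = ["\t".join(["test_id"] + cols)]
    for test_id, row in sorted(results.items()):
        lines.append("\t".join([test_id] + [str(row.get(c, "")) for c in cols]))
    return "\n".join(lines)

print(synthesis_table(collected))
```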
I now plan to work on a "pytest-patterns" project that will host reference examples for some design patterns. The example that I want to create first is (how surprising...) a combination of pytest-cases, pytest-steps and pytest-harvest.
Maybe at some point I will move the 4 projects under a dedicated "pytest-patterns" github organization, I don't know yet.
Best regards and happy pytest 4.x!
(and do not hesitate to provide feedback on the issues page!)
From: pytest-dev [mailto:pytest-dev-bounces+sylvain.marie=se.com at python.org] On behalf of Sylvain MARIE
Sent: Monday, November 5, 2018 22:26
To: pytest-dev <pytest-dev at python.org>
Subject: [pytest-dev] parameters and fixtures, storage and benchmarks: feedback appreciated
Dear pytest development team
In our data science team we develop a lot of machine learning/statistics components, and a recurrent need is benchmarking code against many reference datasets. By benchmarking we usually mean "applicative" benchmarking (such as algorithm accuracy), even if execution time may also be looked at as a secondary target. For this reason https://github.com/ionelmc/pytest-benchmark does not seem to suit our needs.
I developed several versions of a benchmarking toolkit over the past years for this purpose, and I have now managed to port it entirely to pytest, so as to leverage all the great built-in mechanisms (namely, parameters and fixtures).
It is mostly for this reason that I broke the problem down into smaller pieces and open-sourced each piece as a separate plugin. You already know pytest-cases (to separate the test logic from the test cases/datasets) and pytest-steps (to break the test logic down into pieces while keeping it quite readable); the last piece I completed is about storing test results, so as to get true "benchmark" functionality with synthesis reports at the end.
I broke the mechanism down into two parts and would like to have your opinion:
- one part will be a decorator for function-scoped fixtures, so as to say "please store all of this fixture's instances" (in a dictionary-like storage object, under key=test id)
- the other part will be a special fixture, decorated with the above, allowing people to easily inject "results bags" into their test functions, save results in them, and retrieve them at the end
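The two parts above can be sketched in plain Python (all names here -- `FIXTURE_STORE`, `saved_fixture`, `ResultsBag` -- are illustrative, not the actual plugin API): a session-wide store keyed by test id, a decorator that records each fixture instance, and an attribute-access dict serving as the results bag.

```python
# Hypothetical sketch of the two mechanisms described above.
# None of these names are claimed to be the real plugin API.

FIXTURE_STORE = {}  # {fixture_name: {test_id: instance}}

def saved_fixture(fixture_fn):
    """Wrap a function-scoped fixture so that every instance it
    creates is recorded in FIXTURE_STORE under the current test id."""
    def wrapper(test_id, *args, **kwargs):
        instance = fixture_fn(*args, **kwargs)
        FIXTURE_STORE.setdefault(fixture_fn.__name__, {})[test_id] = instance
        return instance
    return wrapper

class ResultsBag(dict):
    """A dict with attribute access, so a test can simply write
    `results_bag.accuracy = 0.93`."""
    def __setattr__(self, name, value):
        self[name] = value
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

@saved_fixture
def results_bag():
    return ResultsBag()

# Simulated test session: each "test" gets its own bag, stored by id
for test_id, acc in [("test_algo[dataset1]", 0.93),
                     ("test_algo[dataset2]", 0.87)]:
    bag = results_bag(test_id)
    bag.accuracy = acc

# Synthesis at the end of the session
print({tid: b.accuracy for tid, b in FIXTURE_STORE["results_bag"].items()})
```

In real pytest, the decorator would of course obtain the current test id from the requesting test rather than take it as an argument; this sketch only shows the storage shape.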
Do you think that this (in particular the first item, storing fixtures) is compliant with pytest's design? From investigating, I found that params are currently stored in the session but fixtures are not (deliberately). It therefore seems natural to propose a way for users to store very specific fixtures on a declarative basis.
Concerning where to store them (the global storage object), I currently allow users to rely either on a global variable or on a fixture. Is there another preferred way in pytest to store information across all tests?
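The two storage options mentioned above can be sketched as follows (illustrative names, not the plugin's actual API): a plain module-level global, versus a lazily-created session-wide singleton, which is essentially what a session-scoped pytest fixture provides.

```python
# Sketch of the two storage options; `STORAGE` and
# `get_session_store` are hypothetical names for illustration.

# Option 1: a plain module-level global -- simple, but created at
# import time and shared by anyone importing the module.
STORAGE = {}

# Option 2: a lazily-created session-wide singleton, mimicking what a
# session-scoped fixture (`@pytest.fixture(scope="session")`) gives
# you inside pytest: one instance, created on first use.
_session_store = None

def get_session_store():
    global _session_store
    if _session_store is None:
        _session_store = {}
    return _session_store

# Either way, every test ends up writing into the same object:
get_session_store()["test_a"] = {"accuracy": 0.9}
assert get_session_store() is get_session_store()
```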
Of course, compatibility with pytest-xdist will be a holy grail in the future; I do not want to aim for it directly yet, but rather make the "least bad" choices now.
Thanks in advance for your kind review and advice!