Local unit tests with lots of failures/warnings

Hello, I tried to run pytest (via "./configure && make test") on my local repo: 148 failed, 155 passed, 53 skipped, 21 warnings. OK, some tests are skipped, maybe because of a missing SSH setup. But this doesn't explain everything. Two questions about that: 1. Is anyone (except "Travis") able to get all tests to pass on a local repo? 2. Can I set up the make-pytest thing in a way that the huge output is logged to a file?

Have you really done these steps from the cloned project root folder?
1. cd common
2. ./configure
3. make
4. make test
And have you installed all dependencies before that?

Hi Jürgen, thanks for the reply. Are you able to get all tests to pass on your local machine? On 08.09.2022 15:52, python@altfeld-im.de wrote:
I missed point 3, which is not mentioned in the README.md (the PR is [1]). I had to "apt install gettext" (because of the missing "msgfmt").
And have you installed all dependencies before that?
I thought "configure" and/or "make" would take care of that and at least warn me about missing dependencies? I also assumed that all dependencies were there, because I also installed BiT via "apt install backintime-qt" on my Debian. But it seems not. Looking into the README, I additionally installed "sshfs" and "encfs". The rest from the README was present. I ignored the "Qt5 GUI" dependencies and recommends. There are still a lot of red lights. ;) If you can make them all green on your own system, I will dive deeper into why it doesn't work on mine. On the other hand, I would assume that this is "usual" because the test system is "hacked" to work with Travis and not with local machines. But I really need to get the tests passing locally. I can't do anything without that. Kind regards, Christian [1] -- <https://github.com/bit-team/backintime/pull/1285>

I tried to run pytest (via "./configure && make test") on my local repo
BiT uses "unittest", not "pytest"
I thought "configure" and/or "make" would take care of that and at least warn me about missing dependencies?
This is up to the developer of ./configure to check and to warn about during "make". Since BiT is Python-based (= an interpreted language), missing dependencies only pop up if you run the code line that imports the (missing) module, which provokes an error about the missing dependency. That is why code coverage of the unit tests is also helpful for discovering missing dependencies... On my local machine the unit tests pass with "make test" as well as "python -m unittest -b" (from within the common folder!!!):
python -m unittest -b .................................................................................................................................ss......................................................................s.......................s..........sssssssssssssssssssssssss...ssssss.........sssssssss.....s.......s..s......s...................s.s...ss...s.............
Ran 356 tests in 7.542s
OK (skipped=53)
/usr/lib/python3.8/subprocess.py:946: ResourceWarning: subprocess 379551 is still running
~/dev/backintime/common
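To give a concrete picture of the point above about import-time errors (my own illustrative one-liner, not part of the project):

    # a missing Python dependency only surfaces when the importing line actually runs, e.g.:
    python3 -c "import pyfakefs" || echo "pyfakefs is not installed"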
Can I set up the make-pytest thing in a way that the huge output is logged to a file?
"python -m unittest -b" creates a summarized output ("make test" calls every unit test separately which produces lengthy output)... You could redirect the result into a file

Hi Jürgen On 2022-09-08 16:22 python@altfeld-im.de wrote:
I would prefer unittest, too. But when I do "make test", these are the first lines, and I think it is pytest, isn't it?
$ make test
/usr/bin/py.test-3
========================== test session starts ==========================
platform linux -- Python 3.9.2, pytest-6.0.2, py-1.10.0, pluggy-0.13.0
rootdir: /home/user/ownCloud/my.work/bit/backintime/common
plugins: pyfakefs-4.6.3
Maybe the configure-make thing chooses pytest when it is installed? You understand make files better than me.
On my local machine the unit tests pass with "make test" as well as "python -m unittest -b" (from within the common folder !!!):
Good to know. That is a goal for me. ;) When I use this method, my results look similar to yours: FAILED (failures=1, errors=16, skipped=53)
"python -m unittest -b" creates a summarized output ("make test" calls every unit test separately which produces lengthy output)...
But something else seems very different between running "unittest" and "make test". What do you get when running "make test"?

When I use this method, my results look similar to yours: FAILED (failures=1, errors=16, skipped=53)
Looks good, the one failing unit test may be caused by https://github.com/bit-team/backintime/issues/1181 which I am fixing right now
Gotcha, excellent point! "./configure" chooses between the pytest and unittest modules (it prefers pytest if it is installed), so "make test" is using "unittest" on my machine but "pytest" on yours. You can see this if you open "common/Makefile" in a text editor and search for the "unittest:" make target (which is the same as "test"). I have:
unittest:
	python3 -m unittest -b test/test_applicationinstance.py
	python3 -m unittest -b test/test_argparser.py
	python3 -m unittest -b test/test_backintime.py
	...
You should see "pytest" instead of "unittest". If this is true, the big question is: why does pytest not work on the unit tests...
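A quick way to check which runner ./configure wrote into the Makefile without opening an editor (illustrative command, assuming the generated target is named "unittest:" as above):

    # print the unittest target and the three lines following it (illustrative)
    grep -n -A 3 '^unittest:' common/Makefile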

Dear Jürgen, thanks for holding my hand through this. ;) I could trace all my errors down to permission problems with external files in the test folder. There are "common/test/backintime" and "common/test/dummy_test_process.sh". I was using the git config option "core.filemode". This ignores file permissions, and because of that those files were not executable and a "Permission denied" occurred during the tests.
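For anyone else hitting this, a fix along these lines should work (illustrative commands, assuming a filesystem that keeps executable bits):

    # check whether git is currently ignoring file mode (executable bit) changes
    git config core.filemode
    # make git honor file modes again and restore the executable bits on the two test helpers
    git config core.filemode true
    chmod +x common/test/backintime common/test/dummy_test_process.sh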
I also tested other machines and removed all my pytest stuff from the current main machine. I assume pytest and unittest have the same results and don't differ much, except that one is more colorful. But something more general comes to my mind. As you said, "make test" executes each test suite separately. At the end there isn't only one summary but multiple ones. This is the same when "make test" uses pytest. This doesn't help the user. For users, the output of "python3 -m unittest -b" is much more helpful. So why do we need that make thing? Does Travis need it? About the README.md: shouldn't we point users to "python3 -m unittest" instead of the make thing? Or is there a good reason for users to use make for unit testing?

was using the git config option "core.filemode".
Great, so it works now
I assume pytest and unittest have the same results and don't differ much
pytest can normally process unittest test files, so are you able to run
cd common
pytest
successfully now? I am not; I get some errors.
For users the output of "python3 -m unittest -b" is much more helpful.
Yes, definitely
So why do we need that make thing? Does Travis need it?
"make" is the standard way to go, no developer is able to type each and every command and arguments out-of-memory. I suggest to change the make target "test" (= "unittest") to the summarized version of python3 -m unittest -b and the keep the verbose chatty target "test-v" (= "unittest-v") as-is since it executes each test separately...
Or is there a good reason for users to use make for unit testing?
Unit testing is for developers and package maintainers. Users only need it if something goes wrong (and it only works if the source code from Git is installed). Package maintainers do not include unit tests in production packages, only in the *-dev version...

I can use pytest and unittest both, with the same summarized result. Of course pytest is a bit more colorful. Regarding the "Contribute" section of the README.md: do we need to tell users to use "./configure && make && make test"? Or shouldn't we just tell them to use the "usual Python way", i.e. "python3 -m unittest" or "python3 -m pytest" if they want?
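Something like this could go into the "Contribute" section if we prefer the plain Python way (just a sketch of possible wording, not a finalized proposal):

    # run the test suite directly from a Git checkout
    cd common
    python3 -m unittest -b    # or, if you prefer: python3 -m pytest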

So why do we need that make thing? Does Travis need it?
I have just checked the Travis config: it uses "make unittest-v": https://github.com/bit-team/backintime/blob/master/.travis.yml#L58 This is perfect IMHO, since in case of build errors on Travis (which builds and tests on a matrix/combination of different architectures and Python versions, which I don't have at hand on my local machine) I cannot debug on Travis and need every piece of logged information I can get to diagnose the problem...

participants (2)
- c.buhtz@posteo.jp
- python@altfeld-im.de