From holger at merlinux.eu Thu Jan 2 21:14:38 2014
From: holger at merlinux.eu (holger krekel)
Date: Thu, 2 Jan 2014 20:14:38 +0000
Subject: [pytest-dev] How to determine if a test is skipped in the runtest_teardown hook?
In-Reply-To:
References:
Message-ID: <20140102201438.GB14184@merlinux.eu>

Hi Anton,

please check out the following example (and the one below it):

http://pytest.org/latest/example/simple.html#post-process-test-reports-failures

best,
holger

On Mon, Dec 30, 2013 at 10:36 +0200, Anton P wrote:
> Has anybody any ideas about this?
>
> Thank you!
> Anton
>
>
> On Tue, Dec 24, 2013 at 8:09 PM, Anton P wrote:
>
> > Hi All,
> >
> > Is there a possibility to determine if a test is skipped or not in the
> > pytest_runtest_teardown hook?
> > I can see the skipif marker in item.keywords, but I don't know its status.
> >
> > Thank you in advance!
> > Anton
>
> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev

From anton7811 at gmail.com Mon Jan 6 10:16:43 2014
From: anton7811 at gmail.com (Anton P)
Date: Mon, 6 Jan 2014 11:16:43 +0200
Subject: [pytest-dev] How to determine if a test is skipped in the runtest_teardown hook?
In-Reply-To: <20140102201438.GB14184@merlinux.eu>
References: <20140102201438.GB14184@merlinux.eu>
Message-ID:

Hi Holger,

Thank you for your reply! I need to know the test result in the teardown
hook specifically, to determine whether I need to perform some actions or
not. But I can use the makereport hook to set a flag on the pytest.config
object and verify it on teardown. As far as I found, makereport is called
before teardown.

Best regards,
Anton

On Thu, Jan 2, 2014 at 10:14 PM, holger krekel wrote:

> Hi Anton,
>
> please check out the following example (and the one below it):
>
> http://pytest.org/latest/example/simple.html#post-process-test-reports-failures
>
> best,
> holger
>
> On Mon, Dec 30, 2013 at 10:36 +0200, Anton P wrote:
> > Has anybody any ideas about this?
> >
> > Thank you!
> > Anton
> >
> >
> > On Tue, Dec 24, 2013 at 8:09 PM, Anton P wrote:
> >
> > > Hi All,
> > >
> > > Is there a possibility to determine if a test is skipped or not in the
> > > pytest_runtest_teardown hook?
> > > I can see the skipif marker in item.keywords, but I don't know its
> > > status.
> > >
> > > Thank you in advance!
> > > Anton
> >
> > _______________________________________________
> > Pytest-dev mailing list
> > Pytest-dev at python.org
> > https://mail.python.org/mailman/listinfo/pytest-dev
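One way to implement Anton's flag idea, using the same hook the linked
examples build on -- an untested sketch against the pytest 2.5-era hook
API; the "rep_*" attribute names are only for illustration, not an
official API:

# conftest.py
import pytest

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    # let the other hook implementations run and grab the report object
    rep = __multicall__.execute()
    # stash the report for each phase ("setup", "call", "teardown") on the item
    setattr(item, "rep_" + rep.when, rep)
    return rep

def pytest_runtest_teardown(item):
    # a skipif-triggered skip is raised during setup, so it shows up
    # on the setup report rather than the call report
    rep = getattr(item, "rep_setup", None)
    if rep is not None and rep.skipped:
        return  # test was skipped -- bypass the extra teardown actions

Stashing the report on the item (rather than a single flag on
pytest.config) keeps the information per-test, which matters once tests
are parametrized or reordered.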
From holger at merlinux.eu Mon Jan 6 15:24:30 2014
From: holger at merlinux.eu (holger krekel)
Date: Mon, 6 Jan 2014 14:24:30 +0000
Subject: [pytest-dev] Autocompletion on fixture objects as test function input arguments - PyCharm
In-Reply-To: <20131227234412.212940@gmx.com>
References: <20131227234412.212940@gmx.com>
Message-ID: <20140106142430.GL14184@merlinux.eu>

Hi Michelle,

On Fri, Dec 27, 2013 at 18:44 -0500, Michelle Chartier wrote:
> Hey all,
>
> I recently got the community edition PyCharm IDE and so far like a lot of
> things about it (especially the ability to run pytest tests from within
> it). HOWEVER, I am having a problem with auto-completion on fixture
> functions (I realize this may not be the place to ask this question but I
> was hoping someone had some info that could help me or point me
> somewhere :) )
>
> So, the problem: in my test function I am using a fixture object as an
> input argument and want to be able to see all available functions with
> autocompletion. To see autocompletion in Aptana, I imported the necessary
> fixture into the test module where my test function resides (the fixture
> is imported into a conftest in a package I have defined).
>
> ----------------------------------
>
> Example:
>
> # this test module resides in mypackage.mysubpackage.mysubsubpackage
>
> from mypackage.mysubpackage.conftest import myobj
>
> def test_mytest(myobj):
>     myobj. (<- this is where the autocompletion should show up)

Strictly speaking the import of "myobj" is unrelated to passing the
"myobj" parameter to the test function, as far as the IDE is concerned.
I don't know why/how Aptana manages to show something. Probably it uses
some heuristic?

I don't know how PyCharm finds completions internally but maybe we could
think about helping it somehow. If you can point to specific information
on how PyCharm does completion, let us know.

best,
holger

> ----------------------------------
>
>
> In Aptana myobj shows all the available functions with autocompletion as
> expected, but in PyCharm (with the exact same code/folders/folder
> structure) it says "No suggestions."
>
> Anyone have a similar problem or know of a solution? (I would prefer to
> stick with PyCharm... :) )
>
> Thanks!
> Michelle

> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev

From micheyvaert at gmail.com Tue Jan 7 11:09:25 2014
From: micheyvaert at gmail.com (Michael Heyvaert)
Date: Tue, 7 Jan 2014 11:09:25 +0100
Subject: [pytest-dev] Using py.test xdist --tx with custom ssh settings
Message-ID:

Hello,

is it possible to use the py.test xdist plugin with custom ssh settings?

My use case is to run tests against a vagrant box, which has the vagrant
user and a specific ssh key, independent of the actual vm.
Depending on the type of the vm, it is also possible that the ssh port is
forwarded.

I would like to set up a script that parses the vagrant ssh-config output
and generates --tx to pass to xdist.

Best regards,

Michaël

From holger at merlinux.eu Wed Jan 8 11:46:44 2014
From: holger at merlinux.eu (holger krekel)
Date: Wed, 8 Jan 2014 10:46:44 +0000
Subject: [pytest-dev] Using py.test xdist --tx with custom ssh settings
In-Reply-To:
References:
Message-ID: <20140108104644.GZ14184@merlinux.eu>

Hi Michael,

the prospective execnet-1.2 release will have an extended ssh syntax
to allow passing of command line args, for example:

    ssh=-p 5000 HOSTNAME

You can also pass in an ssh config file (IIRC -F).

best,
holger

On Tue, Jan 07, 2014 at 11:09 +0100, Michael Heyvaert wrote:
> Hello,
>
> is it possible to use the py.test xdist plugin with custom ssh settings?
>
> My use case is to run tests against a vagrant box, which has the vagrant
> user and a specific ssh key, independent of the actual vm.
> Depending on the type of the vm, it is also possible that the ssh port is
> forwarded.
>
> I would like to set up a script that parses the vagrant ssh-config output
> and generates --tx to pass to xdist.
>
> Best regards,
>
> Michaël

> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev
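The script Michael describes could look roughly like this -- a rough,
untested sketch that assumes the extended "ssh=<options> host" syntax
lands in execnet-1.2 as described above; the "//python=" part is xdist's
existing spec syntax, and the vagrant_tx.py file name is made up:

# vagrant_tx.py -- build an xdist --tx spec from ``vagrant ssh-config``
import subprocess

def vagrant_tx_spec():
    out = subprocess.check_output(["vagrant", "ssh-config"]).decode()
    cfg = {}
    for line in out.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2:
            cfg[parts[0]] = parts[1].strip()
    # e.g. "ssh=-p 2222 -i /path/key vagrant@127.0.0.1//python=python"
    ssh = "-p %s -i %s %s@%s" % (cfg["Port"], cfg["IdentityFile"],
                                 cfg["User"], cfg["HostName"])
    return "ssh=%s//python=python" % ssh

if __name__ == "__main__":
    print(vagrant_tx_spec())

# usage (the quoting matters because the spec contains spaces):
#   py.test --dist=load --tx "$(python vagrant_tx.py)"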
From micheyvaert at gmail.com Thu Jan 9 21:45:43 2014
From: micheyvaert at gmail.com (Michael Heyvaert)
Date: Thu, 9 Jan 2014 21:45:43 +0100
Subject: [pytest-dev] Using py.test xdist --tx with custom ssh settings
In-Reply-To: <20140108104644.GZ14184@merlinux.eu>
References: <20140108104644.GZ14184@merlinux.eu>
Message-ID:

Hi Holger,

great to hear, let me know if I can help with testing something before the
release!

Michaël

On Wed, Jan 8, 2014 at 11:46 AM, holger krekel wrote:

> Hi Michael,
>
> the prospective execnet-1.2 release will have an extended ssh syntax
> to allow passing of command line args, for example:
>
>     ssh=-p 5000 HOSTNAME
>
> You can also pass in an ssh config file (IIRC -F).
>
> best,
> holger
>
>
> On Tue, Jan 07, 2014 at 11:09 +0100, Michael Heyvaert wrote:
> > Hello,
> >
> > is it possible to use the py.test xdist plugin with custom ssh settings?
> >
> > My use case is to run tests against a vagrant box, which has the vagrant
> > user and a specific ssh key, independent of the actual vm.
> > Depending on the type of the vm, it is also possible that the ssh port is
> > forwarded.
> >
> > I would like to set up a script that parses the vagrant ssh-config output
> > and generates --tx to pass to xdist.
> >
> > Best regards,
> >
> > Michaël
> >
> > _______________________________________________
> > Pytest-dev mailing list
> > Pytest-dev at python.org
> > https://mail.python.org/mailman/listinfo/pytest-dev

From holger at merlinux.eu Fri Jan 10 11:03:45 2014
From: holger at merlinux.eu (holger krekel)
Date: Fri, 10 Jan 2014 10:03:45 +0000
Subject: [pytest-dev] pytest bugs / 2.6 roadmap / CI Hosting / GSoC
Message-ID: <20140110100345.GL14184@merlinux.eu>

Hi pytest contributors and users!

hope your initial exploratory "2014" testing worked out ok!

Since the pre-xmas release 2.5.1 we got a few more bugs reported:

https://bitbucket.org/hpk42/pytest/issues?status=new&status=open&kind=bug

Could some of you look into some of them, possibly commenting/owning?

Relatedly, I also opened a doodle for the first 2014 bug/issue day:

http://doodle.com/uph9qa6ydqaad3ay

Please respond quickly as I'd like to announce the final date on Monday.
During that day I'd also like to discuss a roadmap for 2.6 and what we
want included there. Currently I am thinking of this:

- introduce a little warning system so we can emit warnings from hooks
  through config.warn(...) for generic warnings and node.warn() for
  location-bound warnings. Like lint/pep8 checks, warning types should
  get a number (W01, W02, etc.) so they can be disabled selectively.
  They would be reported like skips, e.g. "5 passed, 3 warnings", and
  you can get details reported via "-rw". Warnings do not cause a
  non-zero return code. This system should help to resolve a couple of
  enhancement issues/address complaints, for example when test classes
  with an __init__ are skipped and someone took hours to find out why
  the test class is not collected. (These days it just produces a skip.)

- move traceback presentation logic from py lib to pytest and provide
  better Python3 support (nested tracebacks etc.) and maybe some
  "shorter" traceback options. This helps to address a number of
  traceback-related enhancement issues.
- (maybe) improve the marker mechanism to work similarly to the fixture
  one, e.g. a @pytest.marker decorator marks a marker function which can
  process any parameters (and bail out if parameters are bad).

- other issues selected from the issue tracker

Not sure yet, but we might go for a 2.5-maintenance branch and use
"default" for 2.6-targeted development. Comments welcome.

FYI, I got generously sponsored access to Rackspace hosting resources and
would appreciate some help in turning Windows/Linux hosts into something
we can use for pytest continuous integration/development. If anyone knows
how to automate Windows or Linux setups I can give out sub-accounts to
play with.

Lastly, if anyone is interested in doing a 2014 Summer of Code project
for pytest, let me/us know. I guess we should publish this via a
re-tweeted blog entry some time and list some potential projects.

cheers,
holger

From achartier at fastmail.fm Mon Jan 13 20:46:23 2014
From: achartier at fastmail.fm (achartier at fastmail.fm)
Date: Mon, 13 Jan 2014 11:46:23 -0800
Subject: [pytest-dev] Autocompletion on fixture objects as test function input arguments - PyCharm
In-Reply-To: <20140106142430.GL14184@merlinux.eu>
References: <20131227234412.212940@gmx.com> <20140106142430.GL14184@merlinux.eu>
Message-ID: <1389642383.19986.70248181.4DE2C211@webmail.messagingengine.com>

Hi Holger,

PyCharm has a page on type hinting that can be found here:

http://www.jetbrains.com/pycharm/webhelp/type-hinting-in-pycharm.html

My latest comment on that page (posted by fonzi337) provides further
information on the auto-completion issue we are encountering.

I hope this helps,

Alfonso

On Mon, Jan 6, 2014, at 06:24 AM, holger krekel wrote:
> Hi Michelle,
>
> On Fri, Dec 27, 2013 at 18:44 -0500, Michelle Chartier wrote:
> > Hey all,
> >
> > I recently got the community edition PyCharm IDE and so far like a lot
> > of things about it (especially the ability to run pytest tests from
> > within it). HOWEVER, I am having a problem with auto-completion on
> > fixture functions (I realize this may not be the place to ask this
> > question but I was hoping someone had some info that could help me or
> > point me somewhere :) )
> >
> > So, the problem: in my test function I am using a fixture object as an
> > input argument and want to be able to see all available functions with
> > autocompletion. To see autocompletion in Aptana, I imported the
> > necessary fixture into the test module where my test function resides
> > (the fixture is imported into a conftest in a package I have defined).
> >
> > ----------------------------------
> >
> > Example:
> >
> > # this test module resides in mypackage.mysubpackage.mysubsubpackage
> >
> > from mypackage.mysubpackage.conftest import myobj
> >
> > def test_mytest(myobj):
> >     myobj. (<- this is where the autocompletion should show up)
>
> Strictly speaking the import of "myobj" is unrelated to passing the
> "myobj" parameter to the test function, as far as the IDE is concerned.
> I don't know why/how Aptana manages to show something. Probably it uses
> some heuristic?
>
> I don't know how PyCharm finds completions internally but maybe we could
> think about helping it somehow. If you can point to specific information
> on how PyCharm does completion, let us know.
>
> best,
> holger
>
> > ----------------------------------
> >
> >
> > In Aptana myobj shows all the available functions with autocompletion
> > as expected, but in PyCharm (with the exact same code/folders/folder
> > structure) it says "No suggestions."
> >
> > Anyone have a similar problem or know of a solution?
> > (I would prefer to stick with PyCharm... :) )
> >
> > Thanks!
> > Michelle
> >
> > _______________________________________________
> > Pytest-dev mailing list
> > Pytest-dev at python.org
> > https://mail.python.org/mailman/listinfo/pytest-dev
>
> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev

From anton7811 at gmail.com Tue Jan 14 11:57:33 2014
From: anton7811 at gmail.com (Anton P)
Date: Tue, 14 Jan 2014 12:57:33 +0200
Subject: [pytest-dev] How to re-run a test function N times?
Message-ID:

Hi All,

We need to create a plugin that allows us to re-run some test cases N
times (or until failure).

We found 2 possible solutions:

1) Using the copy module, we copy items in the pytest_collection_modifyitems
hook. This works well (the -x option allows re-running until the first
failure), but it is very slow when there are a lot of items in the
collection, because filtering hasn't been performed yet.

2) There is also an already-developed plugin
(https://github.com/klrmn/pytest-rerunfailures/blob/master/rerunfailures/plugin.py),
but it doesn't work with test classes.

Does anybody have any ideas about a fast and universal solution?

Thank you in advance!
Anton

From holger at merlinux.eu Tue Jan 14 19:18:00 2014
From: holger at merlinux.eu (holger krekel)
Date: Tue, 14 Jan 2014 18:18:00 +0000
Subject: [pytest-dev] next bug day: wednesday 22nd jan 2014
Message-ID: <20140114181800.GA1760@merlinux.eu>

the doodle has closed and we are going to do the next bug/issue
day 22nd Jan 2014 (Wednesday). The following people plan to attend:

Andreas Pelme
Anatoly Bubenkov
Floris Bruynooghe
Holger Krekel
Ronny Pfannschmidt

If you can't come, please try to drop a note the day before.

If you didn't register with the doodle but want to come,
please feel free to show up on IRC.

cheers,
holger

From lklrmn at gmail.com Tue Jan 14 19:19:45 2014
From: lklrmn at gmail.com (Leah Klearman)
Date: Tue, 14 Jan 2014 10:19:45 -0800
Subject: [pytest-dev] How to re-run a test function N times?
In-Reply-To:
References:
Message-ID:

Anton -

I would be happy to accept patches on pytest-rerunfailures if you can
figure out how to make it go. I'm currently not using either pytest or
classSetUp, but I know you're not the only one interested.

-Leah

On Tue, Jan 14, 2014 at 2:57 AM, Anton P wrote:

> Hi All,
>
> We need to create a plugin that allows us to re-run some test cases N
> times (or until failure).
>
> We found 2 possible solutions:
>
> 1) Using the copy module, we copy items in the
> pytest_collection_modifyitems hook. This works well (the -x option allows
> re-running until the first failure), but it is very slow when there are a
> lot of items in the collection, because filtering hasn't been performed
> yet.
>
> 2) There is also an already-developed plugin (
> https://github.com/klrmn/pytest-rerunfailures/blob/master/rerunfailures/plugin.py),
> but it doesn't work with test classes.
>
> Does anybody have any ideas about a fast and universal solution?
>
> Thank you in advance!
> Anton
>
>
> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev
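For reference, approach 1) from Anton's mail could look roughly like this
in a conftest.py -- an untested sketch; the "rerun" marker name and the
repeat count N are made up for illustration, and whether a shallow copy of
a collected item is fully safe depends on pytest internals:

# conftest.py
import copy

N = 3  # total number of runs per marked test

def pytest_collection_modifyitems(items):
    repeated = []
    for item in items:
        repeated.append(item)
        if "rerun" in item.keywords:  # tests marked @pytest.mark.rerun
            # append N-1 shallow copies of the collected item; with -x
            # the session stops at the first failing copy
            repeated.extend(copy.copy(item) for _ in range(N - 1))
    items[:] = repeated

As Anton notes, doing this at collection time means the duplicates are
created before any keyword/mark filtering, which is where the slowness on
large collections comes from.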
From schettino72 at gmail.com Tue Jan 14 23:30:04 2014
From: schettino72 at gmail.com (Eduardo Schettino)
Date: Wed, 15 Jan 2014 11:30:04 +1300
Subject: [pytest-dev] next bug day: wednesday 22nd jan 2014
In-Reply-To: <20140114181800.GA1760@merlinux.eu>
References: <20140114181800.GA1760@merlinux.eu>
Message-ID:

On Wed, Jan 15, 2014 at 7:18 AM, holger krekel wrote:

>
> the doodle has closed and we are going to do the next bug/issue
> day 22nd Jan 2014 (Wednesday). The following people plan to attend:
>
> Andreas Pelme
> Anatoly Bubenkov
> Floris Bruynooghe
> Holger Krekel
> Ronny Pfannschmidt
>
> If you can't come, please try to drop a note the day before.
>
> If you didn't register with the doodle but want to come,
> please feel free to show up on IRC.
>
> cheers,
> holger
>

Hi Holger,

I might join in. Could you clarify the exact time / timezone it will take
place?

cheers,
Eduardo

From holger at merlinux.eu Tue Jan 14 23:46:05 2014
From: holger at merlinux.eu (holger krekel)
Date: Tue, 14 Jan 2014 22:46:05 +0000
Subject: [pytest-dev] next bug day: wednesday 22nd jan 2014
In-Reply-To:
References: <20140114181800.GA1760@merlinux.eu>
Message-ID: <20140114224605.GB1760@merlinux.eu>

On Wed, Jan 15, 2014 at 11:30 +1300, Eduardo Schettino wrote:
> On Wed, Jan 15, 2014 at 7:18 AM, holger krekel wrote:
>
> >
> > the doodle has closed and we are going to do the next bug/issue
> > day 22nd Jan 2014 (Wednesday). The following people plan to attend:
> >
> > Andreas Pelme
> > Anatoly Bubenkov
> > Floris Bruynooghe
> > Holger Krekel
> > Ronny Pfannschmidt
> >
> > If you can't come, please try to drop a note the day before.
> >
> > If you didn't register with the doodle but want to come,
> > please feel free to show up on IRC.
>
> Hi Holger,
>
> I might join in. Could you clarify the exact time / timezone it will take
> place?

great. We'll start around 10am UTC+1.

cheers,
holger

> cheers,
> Eduardo

From holger at merlinux.eu Mon Jan 20 20:06:19 2014
From: holger at merlinux.eu (holger krekel)
Date: Mon, 20 Jan 2014 19:06:19 +0000
Subject: [pytest-dev] unittest -> pytest differences / welcome from unittest
Message-ID: <20140120190619.GF1760@merlinux.eu>

Hi all,

if someone is rather fresh to pytest and previously used unittest: could
you write about what you experienced as the main differences, especially
stumbling blocks? Longer-time pytest users who remember things well may
also contribute, of course :)

I'd then like to make this into a little "welcome from unittest" page on
pytest.org.

best & thanks,
holger
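As a tiny illustration of the kind of difference such a page could cover
(nothing official, just the classic contrast between the two styles):

# unittest style: a class, inheritance, and assert* methods
import unittest

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

# pytest style: a plain function and a plain assert statement; on
# failure pytest re-evaluates the expression and shows the operands
def test_add():
    assert 1 + 1 == 2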
From holger at merlinux.eu Tue Jan 21 14:34:25 2014
From: holger at merlinux.eu (holger krekel)
Date: Tue, 21 Jan 2014 13:34:25 +0000
Subject: [pytest-dev] reminder: pytest bug day tomorrow (wednesday)
Message-ID: <20140121133425.GL32624@merlinux.eu>

Hi all,

bug day is tomorrow starting around 9-10AM UTC+1 on #pylib
irc.freenode.net.

see you,
holger

----- Forwarded message from holger krekel -----

Date: Tue, 14 Jan 2014 18:18:00 +0000
From: holger krekel
To: pytest-dev at python.org
Subject: [pytest-dev] next bug day: wednesday 22nd jan 2014
User-Agent: Mutt/1.5.20 (2009-06-14)

the doodle has closed and we are going to do the next bug/issue
day 22nd Jan 2014 (Wednesday). The following people plan to attend:

Andreas Pelme
Anatoly Bubenkov
Floris Bruynooghe
Holger Krekel
Ronny Pfannschmidt

If you can't come, please try to drop a note the day before.

If you didn't register with the doodle but want to come,
please feel free to show up on IRC.

cheers,
holger
_______________________________________________
Pytest-dev mailing list
Pytest-dev at python.org
https://mail.python.org/mailman/listinfo/pytest-dev

----- End forwarded message -----

From achartier at fastmail.fm Tue Jan 21 23:09:18 2014
From: achartier at fastmail.fm (achartier at fastmail.fm)
Date: Tue, 21 Jan 2014 14:09:18 -0800
Subject: [pytest-dev] Changing the behavior of --junitxml
Message-ID: <1390342158.22951.73654297.5B238BA9@webmail.messagingengine.com>

Hi,

I would like to propose a change to the behavior of the --junitxml option.

Currently, if the same XML file path is specified for multiple invocations
of pytest, the file is overwritten. For our use case, we would like the
ability to append new test results to an existing JUnit XML file so we
don't lose previous results. This is important for us, as we are automating
calls to pytest and would like to avoid the need to create a consolidated
JUnit XML results file ourselves (i.e., by specifying a different argument
to --junitxml for each invocation and then consolidating all the separate
XML files into a single one for consumption by our tool).

Is this behavior change something that could be considered for the next
release of pytest? If other pytest users are relying on the existing
overwriting behavior of --junitxml, perhaps another pytest option could be
added to toggle between overwrite and append behavior.

Best,

Alfonso

From jaraco at jaraco.com Tue Jan 21 23:46:01 2014
From: jaraco at jaraco.com (Jason R. Coombs)
Date: Tue, 21 Jan 2014 22:46:01 +0000
Subject: [pytest-dev] Changing the behavior of --junitxml
In-Reply-To: <1390342158.22951.73654297.5B238BA9@webmail.messagingengine.com>
References: <1390342158.22951.73654297.5B238BA9@webmail.messagingengine.com>
Message-ID: <4ee6b09aeb464753874dc3c763efafde@BLUPR06MB434.namprd06.prod.outlook.com>

Surely some depend on the expectation to overwrite. Most of our tests run
under Jenkins where the junitxml file is left around to be processed by
Jenkins later. I believe that's the expected behavior for most testing
frameworks that generate jUnit XML.

> -----Original Message-----
> From: Pytest-dev [mailto:pytest-dev-bounces+jaraco=jaraco.com at python.org]
> On Behalf Of achartier at fastmail.fm
> Sent: Tuesday, 21 January, 2014 17:09
> To: pytest-dev at python.org
> Subject: [pytest-dev] Changing the behavior of --junitxml
>
> Hi,
>
> I would like to propose a change to the behavior of the --junitxml option.
>
> Currently, if the same XML file path is specified for multiple invocations of
> pytest, the file is overwritten. For our use case, we would like the ability to
> append new test results to an existing JUnit XML file so we don't lose
> previous results. This is important for us, as we are automating calls to pytest
> and would like to avoid the need to create a consolidated JUnit XML results
> file ourselves (i.e., by specifying a different argument to --junitxml for each
> invocation and then consolidating all the separate XML files into a single one
> for consumption by our tool).
>
> Is this behavior change something that could be considered for the next
> release of pytest? If other pytest users are relying on the existing overwriting
> behavior of --junitxml, perhaps another pytest option could be added to
> toggle between overwrite and append behavior.
>
> Best,
>
> Alfonso
> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev

From nicoddemus at gmail.com Wed Jan 22 00:12:53 2014
From: nicoddemus at gmail.com (Bruno Oliveira)
Date: Tue, 21 Jan 2014 21:12:53 -0200
Subject: [pytest-dev] Changing the behavior of --junitxml
In-Reply-To: <4ee6b09aeb464753874dc3c763efafde@BLUPR06MB434.namprd06.prod.outlook.com>
References: <1390342158.22951.73654297.5B238BA9@webmail.messagingengine.com>
 <4ee6b09aeb464753874dc3c763efafde@BLUPR06MB434.namprd06.prod.outlook.com>
Message-ID:

Same here. I think a better approach would be a new flag. Perhaps
"--append-to-junitxml"?

On Tue, Jan 21, 2014 at 8:46 PM, Jason R. Coombs wrote:

> Surely some depend on the expectation to overwrite. Most of our tests run
> under Jenkins where the junitxml file is left around to be processed by
> Jenkins later. I believe that's the expected behavior for most testing
> frameworks that generate jUnit XML.
>
> > -----Original Message-----
> > From: Pytest-dev [mailto:pytest-dev-bounces+jaraco=jaraco.com at python.org]
> > On Behalf Of achartier at fastmail.fm
> > Sent: Tuesday, 21 January, 2014 17:09
> > To: pytest-dev at python.org
> > Subject: [pytest-dev] Changing the behavior of --junitxml
> >
> > Hi,
> >
> > I would like to propose a change to the behavior of the --junitxml
> > option.
> >
> > Currently, if the same XML file path is specified for multiple
> > invocations of pytest, the file is overwritten. For our use case, we
> > would like the ability to append new test results to an existing JUnit
> > XML file so we don't lose previous results. This is important for us, as
> > we are automating calls to pytest and would like to avoid the need to
> > create a consolidated JUnit XML results file ourselves (i.e., by
> > specifying a different argument to --junitxml for each invocation and
> > then consolidating all the separate XML files into a single one for
> > consumption by our tool).
> >
> > Is this behavior change something that could be considered for the next
> > release of pytest? If other pytest users are relying on the existing
> > overwriting behavior of --junitxml, perhaps another pytest option could
> > be added to toggle between overwrite and append behavior.
>
> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev
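Until such a flag exists, the consolidation step Alfonso wants to avoid
can be scripted; a rough, untested sketch using only the standard library
(the merge_junit.py helper is hypothetical, and note that pytest of this
era writes a single <testsuite> root element whose counter attribute names
-- e.g. "skips" -- may differ between pytest versions):

# merge_junit.py -- merge several pytest junitxml files into one
import sys
import xml.etree.ElementTree as ET

def merge(paths, out="consolidated.xml"):
    merged = ET.Element("testsuite", name="consolidated")
    totals = {"tests": 0, "errors": 0, "failures": 0, "skips": 0}
    for path in paths:
        suite = ET.parse(path).getroot()
        for key in totals:
            totals[key] += int(suite.get(key, 0))
        merged.extend(list(suite))  # move the <testcase> elements over
    for key, value in totals.items():
        merged.set(key, str(value))
    ET.ElementTree(merged).write(out)

if __name__ == "__main__":
    merge(sys.argv[1:])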
From achartier at fastmail.fm Wed Jan 22 01:22:26 2014
From: achartier at fastmail.fm (achartier at fastmail.fm)
Date: Tue, 21 Jan 2014 16:22:26 -0800
Subject: [pytest-dev] Changing the behavior of --junitxml
In-Reply-To:
References: <1390342158.22951.73654297.5B238BA9@webmail.messagingengine.com>
 <4ee6b09aeb464753874dc3c763efafde@BLUPR06MB434.namprd06.prod.outlook.com>
Message-ID: <1390350146.20278.73710973.7603092D@webmail.messagingengine.com>

A new flag would be fine for our use case (and wouldn't disrupt anyone
else's dependence on the existing behavior). Is this something that would
be pretty easy to implement? How soon could it be available?

On Tue, Jan 21, 2014, at 03:12 PM, Bruno Oliveira wrote:

Same here. I think a better approach would be a new flag. Perhaps
"--append-to-junitxml"?

On Tue, Jan 21, 2014 at 8:46 PM, Jason R. Coombs <jaraco at jaraco.com> wrote:

Surely some depend on the expectation to overwrite. Most of our tests run
under Jenkins where the junitxml file is left around to be processed by
Jenkins later. I believe that's the expected behavior for most testing
frameworks that generate jUnit XML.

> -----Original Message-----
> From: Pytest-dev [mailto:pytest-dev-bounces+jaraco=jaraco.com at python.org]
> On Behalf Of achartier at fastmail.fm
> Sent: Tuesday, 21 January, 2014 17:09
> To: pytest-dev at python.org
> Subject: [pytest-dev] Changing the behavior of --junitxml
>
> Hi,
>
> I would like to propose a change to the behavior of the --junitxml option.
>
> Currently, if the same XML file path is specified for multiple invocations of
> pytest, the file is overwritten. For our use case, we would like the ability to
> append new test results to an existing JUnit XML file so we don't lose
> previous results. This is important for us, as we are automating calls to pytest
> and would like to avoid the need to create a consolidated JUnit XML results
> file ourselves (i.e., by specifying a different argument to --junitxml for each
> invocation and then consolidating all the separate XML files into a single one
> for consumption by our tool).
>
> Is this behavior change something that could be considered for the next
> release of pytest? If other pytest users are relying on the existing overwriting
> behavior of --junitxml, perhaps another pytest option could be added to
> toggle between overwrite and append behavior.
>
> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev

_______________________________________________
Pytest-dev mailing list
Pytest-dev at python.org
https://mail.python.org/mailman/listinfo/pytest-dev

From holger at merlinux.eu Wed Jan 22 08:59:45 2014
From: holger at merlinux.eu (holger krekel)
Date: Wed, 22 Jan 2014 07:59:45 +0000
Subject: [pytest-dev] pytest bug day TODAY / etherpad
Message-ID: <20140122075945.GB22354@merlinux.eu>

Hi folks,

here is the etherpad for today's BUG day:

http://etherpad.osuosl.org/as4wyG3FLn

I'll need to bring my son to child care and will then join around 10AM
UTC+1.

cheers,
holger

From mount.sarah at gmail.com Wed Jan 22 22:21:19 2014
From: mount.sarah at gmail.com (Sarah Mount)
Date: Wed, 22 Jan 2014 21:21:19 +0000
Subject: [pytest-dev] py.test and decorators
Message-ID:

Hi there,

I am currently converting some very idiosyncratic hand-rolled tests into
more sensible unit tests with pytest. I had a problem running tests which
dealt with decorators, and following the advice here:

http://stackoverflow.com/questions/19614658/how-do-i-make-pytest-fixtures-work-with-decorated-functions

I refactored all my decorators to use the decorator library rather than
functools.wraps.
The win there was that now the Python 2.7 version of my code behaves in
the same way as the Python 3.3 code (whereas before the 2.7 code passed
and the 3.3 code errored). The fail is that now none of the tests run at
all!

I tried running tests without py.test (as in: python -m
mylib.test.test_one) and they ran as expected. If I try to use the library
just from a REPL session it seems OK. I suspect that the issue has occurred
because I haven't yet understood where to use pytest fixtures.

The (simplified) code looks roughly like this:

### FILE base.py

@process
def foo(channel):
    channel.write(100)

@process
def bar(channel):
    channel.read()

### FILE test_one.py

def test_one_one():
    channel = Channel()
    par(foo(channel), bar(channel)).start()

And the results look like this:

$ py.test mylib/
================================================================ test session starts =================================================================
platform linux2 -- Python 2.7.5 -- pytest-2.5.1
collected 44 items

mylib/test/test_one.py (venv)$

I have tried putting the @pytest.fixture decorator on both the functions in
base.py and those in test_one.py but neither works.

Any ideas?

Thanks,

Sarah

--
Sarah Mount, Senior Lecturer, University of Wolverhampton
website: http://www.snim2.org/
twitter: @snim2

From holger at merlinux.eu Thu Jan 23 19:08:14 2014
From: holger at merlinux.eu (holger krekel)
Date: Thu, 23 Jan 2014 18:08:14 +0000
Subject: [pytest-dev] py.test and decorators
In-Reply-To:
References:
Message-ID: <20140123180814.GU22354@merlinux.eu>

Hi Sarah,

On Wed, Jan 22, 2014 at 21:21 +0000, Sarah Mount wrote:
> Hi there,
>
> I am currently converting some very idiosyncratic hand-rolled tests into
> more sensible unit tests with pytest. I had a problem running tests which
> dealt with decorators, and following the advice here:
>
> http://stackoverflow.com/questions/19614658/how-do-i-make-pytest-fixtures-work-with-decorated-functions
>
> I refactored all my decorators to use the decorator library rather than
> functools.wraps.
>
> The win there was that now the Python 2.7 version of my code behaves in
> the same way as the Python 3.3 code (whereas before the 2.7 code passed
> and the 3.3 code errored). The fail is that now none of the tests run at
> all!
>
> I tried running tests without py.test (as in: python -m
> mylib.test.test_one) and they ran as expected. If I try to use the
> library just from a REPL session it seems OK. I suspect that the issue
> has occurred because I haven't yet understood where to use pytest
> fixtures.
>
> The (simplified) code looks roughly like this:
>
> ### FILE base.py
>
> @process
> def foo(channel):
>     channel.write(100)
>
> @process
> def bar(channel):
>     channel.read()
>
> ### FILE test_one.py
>
> def test_one_one():
>     channel = Channel()
>     par(foo(channel), bar(channel)).start()

Could you post the code that runs under the unittest framework
as well?

The test looks OK, maybe "py.test -s mylib" (don't capture output)
would give a clue why the test run bails out the way it does.
best,
holger

> And the results look like this:
>
> $ py.test mylib/
> ================================================================ test
> session starts
> =================================================================
> platform linux2 -- Python 2.7.5 -- pytest-2.5.1
> collected 44 items
>
> mylib/test/test_one.py (venv)$
>
>
> I have tried putting the @pytest.fixture decorator on both the functions
> in base.py and those in test_one.py but neither works.
>
> Any ideas?
>
> Thanks,
>
> Sarah
>
> --
> Sarah Mount, Senior Lecturer, University of Wolverhampton
> website: http://www.snim2.org/
> twitter: @snim2

> _______________________________________________
> Pytest-dev mailing list
> Pytest-dev at python.org
> https://mail.python.org/mailman/listinfo/pytest-dev

From mount.sarah at gmail.com Thu Jan 23 19:27:01 2014
From: mount.sarah at gmail.com (Sarah Mount)
Date: Thu, 23 Jan 2014 18:27:01 +0000
Subject: [pytest-dev] py.test and decorators
In-Reply-To: <20140123180814.GU22354@merlinux.eu>
References: <20140123180814.GU22354@merlinux.eu>
Message-ID:

The tests were completely ad-hoc and did not use unittest or any other
library. They had a hand-rolled runner which ran all the functions like
test_one_one from a script.

The odd thing is that Python 2.7 + py.test + the functools.wraps version
of the decorators worked OK.

Could it be that I have inadvertently turned the logging module on and
output from that is confusing the py.test runner?

Thanks,

Sarah

On Thu, Jan 23, 2014 at 6:08 PM, holger krekel wrote:

> Hi Sarah,
>
> On Wed, Jan 22, 2014 at 21:21 +0000, Sarah Mount wrote:
> > Hi there,
> >
> > I am currently converting some very idiosyncratic hand-rolled tests into
> > more sensible unit tests with pytest. I had a problem running tests
> > which dealt with decorators, and following the advice here:
> >
> > http://stackoverflow.com/questions/19614658/how-do-i-make-pytest-fixtures-work-with-decorated-functions
> >
> > I refactored all my decorators to use the decorator library rather than
> > functools.wraps.
> >
> > The win there was that now the Python 2.7 version of my code behaves in
> > the same way as the Python 3.3 code (whereas before the 2.7 code passed
> > and the 3.3 code errored). The fail is that now none of the tests run at
> > all!
> >
> > I tried running tests without py.test (as in: python -m
> > mylib.test.test_one) and they ran as expected. If I try to use the
> > library just from a REPL session it seems OK. I suspect that the issue
> > has occurred because I haven't yet understood where to use pytest
> > fixtures.
> >
> > The (simplified) code looks roughly like this:
> >
> > ### FILE base.py
> >
> > @process
> > def foo(channel):
> >     channel.write(100)
> >
> > @process
> > def bar(channel):
> >     channel.read()
> >
> > ### FILE test_one.py
> >
> > def test_one_one():
> >     channel = Channel()
> >     par(foo(channel), bar(channel)).start()
>
> Could you post the code that runs under the unittest framework
> as well?
>
> The test looks OK, maybe "py.test -s mylib" (don't capture output)
> would give a clue why the test run bails out the way it does.
> best,
> holger
>
> > And the results look like this:
> >
> > $ py.test mylib/
> > ================================================================ test
> > session starts
> > =================================================================
> > platform linux2 -- Python 2.7.5 -- pytest-2.5.1
> > collected 44 items
> >
> > mylib/test/test_one.py (venv)$
> >
> >
> > I have tried putting the @pytest.fixture decorator on both the functions
> > in base.py and those in test_one.py but neither works.
> >
> > Any ideas?
> >
> > Thanks,
> >
> > Sarah
> >
> > --
> > Sarah Mount, Senior Lecturer, University of Wolverhampton
> > website: http://www.snim2.org/
> > twitter: @snim2
> >
> > _______________________________________________
> > Pytest-dev mailing list
> > Pytest-dev at python.org
> > https://mail.python.org/mailman/listinfo/pytest-dev

From holger at merlinux.eu Thu Jan 23 19:30:08 2014
From: holger at merlinux.eu (holger krekel)
Date: Thu, 23 Jan 2014 18:30:08 +0000
Subject: [pytest-dev] py.test and decorators
In-Reply-To:
References: <20140123180814.GU22354@merlinux.eu>
Message-ID: <20140123183008.GW22354@merlinux.eu>

On Thu, Jan 23, 2014 at 18:27 +0000, Sarah Mount wrote:
> The tests were completely ad-hoc and did not use unittest or any other
> library. They had a hand-rolled runner which ran all the functions like
> test_one_one from a script.
>
> The odd thing is that Python 2.7 + py.test + the functools.wraps version
> of the decorators worked OK.
>
> Could it be that I have inadvertently turned the logging module on and
> output from that is confusing the py.test runner?

wouldn't think so. Can you attach a zip file and state the dependencies
so we can try to reproduce? Or a repo-url?

holger

> Thanks,
>
> Sarah
>
>
> On Thu, Jan 23, 2014 at 6:08 PM, holger krekel wrote:
>
> > Hi Sarah,
> >
> > On Wed, Jan 22, 2014 at 21:21 +0000, Sarah Mount wrote:
> > > Hi there,
> > >
> > > I am currently converting some very idiosyncratic hand-rolled tests
> > > into more sensible unit tests with pytest. I had a problem running
> > > tests which dealt with decorators, and following the advice here:
> > >
> > > http://stackoverflow.com/questions/19614658/how-do-i-make-pytest-fixtures-work-with-decorated-functions
> > >
> > > I refactored all my decorators to use the decorator library rather
> > > than functools.wraps.
> > >
> > > The win there was that now the Python 2.7 version of my code behaves
> > > in the same way as the Python 3.3 code (whereas before the 2.7 code
> > > passed and the 3.3 code errored). The fail is that now none of the
> > > tests run at all!
> > >
> > > I tried running tests without py.test (as in: python -m
> > > mylib.test.test_one) and they ran as expected. If I try to use the
> > > library just from a REPL session it seems OK. I suspect that the issue
> > > has occurred because I haven't yet understood where to use pytest
> > > fixtures.
> > > The (simplified) code looks roughly like this:
> > >
> > > ### FILE base.py
> > >
> > > @process
> > > def foo(channel):
> > >     channel.write(100)
> > >
> > > @process
> > > def bar(channel):
> > >     channel.read()
> > >
> > > ### FILE test_one.py
> > >
> > > def test_one_one():
> > >     channel = Channel()
> > >     par(foo(channel), bar(channel)).start()
> >
> > Could you post the code that runs under the unittest framework
> > as well?
> >
> > The test looks OK, maybe "py.test -s mylib" (don't capture output)
> > would give a clue why the test run bails out the way it does.
> >
> > best,
> > holger
> >
> > > And the results look like this:
> > >
> > > $ py.test mylib/
> > > ================================================================ test
> > > session starts
> > > =================================================================
> > > platform linux2 -- Python 2.7.5 -- pytest-2.5.1
> > > collected 44 items
> > >
> > > mylib/test/test_one.py (venv)$
> > >
> > >
> > > I have tried putting the @pytest.fixture decorator on both the
> > > functions in base.py and those in test_one.py but neither works.
> > >
> > > Any ideas?
> > >
> > > Thanks,
> > >
> > > Sarah
> > >
> > > --
> > > Sarah Mount, Senior Lecturer, University of Wolverhampton
> > > website: http://www.snim2.org/
> > > twitter: @snim2
> > >
> > > _______________________________________________
> > > Pytest-dev mailing list
> > > Pytest-dev at python.org
> > > https://mail.python.org/mailman/listinfo/pytest-dev
>
> --
> Sarah Mount, Senior Lecturer, University of Wolverhampton
> website: http://www.snim2.org/
> twitter: @snim2

From mount.sarah at gmail.com Fri Jan 24 10:35:37 2014
From: mount.sarah at gmail.com (Sarah Mount)
Date: Fri, 24 Jan 2014 09:35:37 +0000
Subject: [pytest-dev] py.test and decorators
In-Reply-To: <20140123183008.GW22354@merlinux.eu>
References: <20140123180814.GU22354@merlinux.eu>
 <20140123183008.GW22354@merlinux.eu>
Message-ID:

On 1/23/14, holger krekel wrote:
> On Thu, Jan 23, 2014 at 18:27 +0000, Sarah Mount wrote:
>> The tests were completely ad-hoc and did not use unittest or any other
>> library. They had a hand-rolled runner which ran all the functions like
>> test_one_one from a script.
>>
>> The odd thing is that Python 2.7 + py.test + the functools.wraps version
>> of the decorators worked OK.
>>
>> Could it be that I have inadvertently turned the logging module on and
>> output from that is confusing the py.test runner?
>
> wouldn't think so. Can you attach a zip file and state the dependencies
> so we can try to reproduce? Or a repo-url?
>

Many thanks,

I am reluctant to give you a repo-url since there is so much code in there
that is irrelevant to this issue that it would be hard to figure out what
is going on. The attached zip contains the minimum code to reproduce.

If you run "show_bug.sh" from the pytest-bug directory this will:
1) install the requirements via pip,
2) run a single test from vanilla python (should pass),
3) run the same test with py.test (should be green at this stage) and
4) display some text explaining what to uncomment to reproduce the bug.

I have narrowed it down quite a bit, and it turns out that if you comment
out most of the code in base.py everything works just fine... which is
odd.

Thanks for your help,

Sarah

--
Sarah Mount, Senior Lecturer, University of Wolverhampton
website: http://www.snim2.org/
twitter: @snim2
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pytest-bug.zip
Type: application/zip
Size: 12515 bytes
Desc: not available
URL:
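For anyone following the thread: the StackOverflow advice Sarah mentions
boils down to signature introspection -- pytest decides which fixtures to
inject from the argument names visible on the (possibly wrapped) test
function. A plain functools.wraps wrapper hides them, while the decorator
library preserves them. A minimal illustration of the difference, not
Sarah's actual code:

import functools
import inspect

def process(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@process
def test_mytest(myobj):
    pass

# the visible signature is now (*args, **kwargs), so pytest cannot tell
# that "myobj" is requested and will not inject the fixture:
print(inspect.getargspec(test_mytest))
# -> ArgSpec(args=[], varargs='args', keywords='kwargs', defaults=None)

# the decorator library instead generates a wrapper with the original
# signature, so introspection (and therefore fixture injection) keeps
# working -- which is why the refactoring fixed the first problem.
# Sarah's remaining hang is a separate issue.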
From holger at merlinux.eu Wed Jan 29 14:11:34 2014
From: holger at merlinux.eu (holger krekel)
Date: Wed, 29 Jan 2014 13:11:34 +0000
Subject: [pytest-dev] pytest-2.5.2: fixes, plugin index, contribution guide
Message-ID: <20140129131134.GD13547@merlinux.eu>

pytest-2.5.2: fixes, plugin page, contribution guide
===========================================================================

pytest is a mature Python testing tool with more than 1000 tests
against itself, passing on many different interpreters and platforms.

The 2.5.2 release fixes a few bugs, with two maybe-bugs remaining that are
actively being worked on (and waiting for the bug reporters' input).

We also have a new contribution guide thanks to Piotr Banaszkiewicz
and an improved 3rd party plugin overview (including py2/py3 tests)
page thanks to Bruno Oliveira.

See docs at:

    http://pytest.org

As usual, you can upgrade from pypi via:

    pip install -U pytest

Thanks to the following people who contributed to this release:

Anatoly Bubenkov
Ronny Pfannschmidt
Floris Bruynooghe
Bruno Oliveira
Andreas Pelme
Jurko Gospodnetić
Piotr Banaszkiewicz
Simon Liedtke
lakka
Lukasz Balcerzak
Philippe Muller
Daniel Hahler

have fun,
holger krekel

2.5.2
-----------------------------------

- fix issue409 -- better interoperate with cx_freeze by not trying to
  import from collections.abc, which causes problems for py27/cx_freeze.
  Thanks Wolfgang L. for reporting and tracking it down.

- fixed docs and code to use "pytest" instead of "py.test" almost
  everywhere. Thanks Jurko Gospodnetić for the complete PR.

- fix issue425: mention at the end of "py.test -h" that --markers
  and --fixtures work according to the specified test path (or current dir)

- fix issue413: exceptions with unicode attributes are now printed
  correctly also on python2 and with pytest-xdist runs. (the fix
  requires py-1.4.20)

- copy, cleanup and integrate py.io capture from pylib 1.4.20.dev2
  (rev 13d9af95547e)

- address issue416: clarify docs as to conftest.py loading semantics

- fix issue429: comparing byte strings with non-ascii chars in assert
  expressions now works better. Thanks Floris Bruynooghe.

- make capfd/capsys.capture private, it's unused and shouldn't be exposed

From holger at merlinux.eu Wed Jan 29 14:15:42 2014
From: holger at merlinux.eu (holger krekel)
Date: Wed, 29 Jan 2014 13:15:42 +0000
Subject: [pytest-dev] pytest-xdist-1.10 released (distributed testing plugin)
Message-ID: <20140129131542.GE13547@merlinux.eu>

Hi all,

also released pytest-xdist-1.10 with some little improvements and speedups
regarding the overhead of distributing tests (maybe 10%, not too much).

Install it with:

    pip install -U pytest-xdist

and check out the pypi page:

    https://pypi.python.org/pypi/pytest-xdist

best,
holger

1.10
-------------------------

- add glob support for rsyncignores, add command line option to pass
  additional rsyncignores. Thanks Anatoly Bubenkov.

- fix pytest issue382 - produce "pytest_runtest_logstart" event again
  in master. Thanks Aron Curzon.

- fix pytest issue419 by sending/receiving indices into the test
  collection instead of node ids (which are not necessarily unique
  for functions parametrized with duplicate values)

- send multiple "to test" indices in one network message to a slave
  and improve heuristics for sending chunks where the chunksize
  depends on the number of remaining tests rather than fixed numbers.
  This reduces the number of master -> node messages (but not the
  reverse direction)