From holger at merlinux.eu Wed Oct 22 09:21:07 2014 From: holger at merlinux.eu (holger krekel) Date: Wed, 22 Oct 2014 07:21:07 +0000 Subject: [pytest-dev] improve -k/-m exercise / want to help pytest? Message-ID: <20141022072107.GF7954@merlinux.eu> Hi all, is anyone interested in a little exercise that would improve pytest usage? It doesn't require deep pytest knowledge. This is about the "-k" and "-m" selection mini-language, namely expressions like: marker1 and marker2 not marker1 and (marker2 or marker3) and so forth. The thing is that pytest currently uses Python's "eval" with a custom dictionary, which means that names such as "marker1" cannot contain dots or other punctuation, and other things are not possible either, such as running all tests that don't have "4.3" (coming from a parametrized test) in them: -k 'not 4.3' This currently selects all tests because "4.3" evaluates as true. Various people have run into traps like this. So what is needed here are tests and code that, given a set of names and an expression string, determine whether the expression is true or not. Everything except "and" and "or" is a valid word. To use those two words as names we would probably need some escaping like "\and" or so, but that's a bonus task :) Anyone up for this little exercise? Or other comments? best and thanks, holger
From gherman at darwin.in-berlin.de Wed Oct 22 10:20:08 2014 From: gherman at darwin.in-berlin.de (Dinu Gherman) Date: Wed, 22 Oct 2014 10:20:08 +0200 Subject: [pytest-dev] improve -k/-m exercise / want to help pytest? In-Reply-To: <20141022072107.GF7954@merlinux.eu> References: <20141022072107.GF7954@merlinux.eu> Message-ID: holger krekel: > Hi all, > > is anyone interested in a little exercise that would improve pytest usage? > It doesn't require deep pytest knowledge. > > This is about the "-k" and "-m" selection mini-language, > namely expressions like: > > marker1 and marker2 > not marker1 and (marker2 or marker3) > > and so forth. The thing is that pytest currently uses Python's "eval" with > a custom dictionary, which means that names such as "marker1" > cannot contain dots or other punctuation, and other things are not > possible either, such as running all tests that don't have "4.3" (coming from a > parametrized test) in them: > > -k 'not 4.3' > > This currently selects all tests because "4.3" evaluates as true. Various > people have run into traps like this. > > So what is needed here are tests and code that, given a set of names > and an expression string, determine whether the expression is true or not. > Everything except "and" and "or" is a valid word. To use those two words as names > we would probably need some escaping like "\and" or so, but > that's a bonus task :) > > Anyone up for this little exercise? Or other comments? Not sure I have the time, but still a few comments: - I like the planned feature. - It would be nice to have a dotted syntax allowing one to specify a specific test method in a specific test class, something like -k TestClassX.test_method_y (as is possible with unittest). - Is pyparsing an option for doing this (maybe a somewhat heavy solution, not sure)? - Or, could the expression be formulated somewhere else, like in conftest.py (in pure Python), as a test selection function, and referenced by the -k option by name? That way these expressions would also be easier to reuse, since on the command line they simply get lost. Cheers, Dinu
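For concreteness, here is a minimal sketch of the kind of eval-free matcher the exercise above asks for. It tokenizes the expression, treats "and", "or", "not" and parentheses as operators, and treats every other token (including things like "4.3" or dotted names) as a plain word that counts as true when it occurs as a substring of one of the given names. This is only an illustration of the idea, written against Python 3, not the code pytest ships, and the precedence it assumes (not binds tighter than and, which binds tighter than or) is a guess at the intended grammar:

    import re

    _TOKEN = re.compile(r"\(|\)|[^\s()]+")

    def matches(expression, names):
        """Return True if `expression` holds for the given collection of names."""
        tokens = _TOKEN.findall(expression)
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def eat(expected=None):
            nonlocal pos
            token = peek()
            if token is None or (expected is not None and token != expected):
                raise ValueError("malformed expression %r near %r" % (expression, token))
            pos += 1
            return token

        def parse_or():
            value = parse_and()
            while peek() == "or":
                eat("or")
                value = parse_and() or value
            return value

        def parse_and():
            value = parse_not()
            while peek() == "and":
                eat("and")
                value = parse_not() and value
            return value

        def parse_not():
            if peek() == "not":
                eat("not")
                return not parse_not()
            if peek() == "(":
                eat("(")
                value = parse_or()
                eat(")")
                return value
            # Any other token is a plain word, substring-matched the way "-k" does it.
            word = eat()
            return any(word in name for name in names)

        result = parse_or()
        if peek() is not None:
            raise ValueError("trailing tokens in %r" % expression)
        return result

Evaluated per collected test item, matches("not 4.3", {"test_encode[4.3]"}) is False while matches("not 4.3", {"test_encode[5.0]"}) is True, which is exactly the case the eval-based version gets wrong. Escaping a literal "and"/"or" (the bonus task) is deliberately not handled here.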
From holger at merlinux.eu Wed Oct 22 10:49:01 2014 From: holger at merlinux.eu (holger krekel) Date: Wed, 22 Oct 2014 08:49:01 +0000 Subject: [pytest-dev] improve -k/-m exercise / want to help pytest? In-Reply-To: References: <20141022072107.GF7954@merlinux.eu> Message-ID: <20141022084901.GG7954@merlinux.eu> On Wed, Oct 22, 2014 at 10:20 +0200, Dinu Gherman wrote: > holger krekel: > > > Hi all, > > > > is anyone interested in a little exercise that would improve pytest usage? > > It doesn't require deep pytest knowledge. > > > > This is about the "-k" and "-m" selection mini-language, > > namely expressions like: > > > > marker1 and marker2 > > not marker1 and (marker2 or marker3) > > > > and so forth. The thing is that pytest currently uses Python's "eval" with > > a custom dictionary, which means that names such as "marker1" > > cannot contain dots or other punctuation, and other things are not > > possible either, such as running all tests that don't have "4.3" (coming from a > > parametrized test) in them: > > > > -k 'not 4.3' > > > > This currently selects all tests because "4.3" evaluates as true. Various > > people have run into traps like this. > > > > So what is needed here are tests and code that, given a set of names > > and an expression string, determine whether the expression is true or not. > > Everything except "and" and "or" is a valid word. To use those two words as names > > we would probably need some escaping like "\and" or so, but > > that's a bonus task :) > > > > Anyone up for this little exercise? Or other comments? > > Not sure I have the time, but still a few comments: > > - I like the planned feature. > good :) > - It would be nice to have a dotted syntax allowing one to specify a specific test method in a specific test class, something like -k TestClassX.test_method_y (as is possible with unittest). Hum, that wouldn't directly work, I'm afraid, unless we put the full dotted path of test items into the keywords (which we could, I guess). > - Is pyparsing an option for doing this (maybe a somewhat heavy solution, not sure)? Nope, external deps are not allowed for this. > - Or, could the expression be formulated somewhere else, like in conftest.py (in pure Python), as a test selection function, and referenced by the -k option by name? That way these expressions would also be easier to reuse, since on the command line they simply get lost. That would require a new option, something like "-g" (selected group) or so, and is somewhat unrelated to the task at hand, I think. cheers, holger
From holger at merlinux.eu Thu Oct 23 15:48:23 2014 From: holger at merlinux.eu (holger krekel) Date: Thu, 23 Oct 2014 13:48:23 +0000 Subject: [pytest-dev] xdist grouping: request for user stories Message-ID: <20141023134823.GM7954@merlinux.eu> Hi all, Currently, pytest-xdist in load-balancing mode ("-n4") does not support grouping of tests or influencing how tests are distributed to nodes. Bruno Oliveira did a PR that satisfies his company's particular requirements, in that GUI tests must run after all others and be pinned to a particular node. (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/) My question is: what needs do you have wrt test distribution with xdist? I'd like to collect and write down some user stories before we design options/mechanisms and then possibly get to implement them. Your input is much appreciated. best and thanks, holger
From holger at merlinux.eu Thu Oct 23 16:23:01 2014 From: holger at merlinux.eu (holger krekel) Date: Thu, 23 Oct 2014 14:23:01 +0000 Subject: [pytest-dev] xdist grouping: request for user stories In-Reply-To: References: <20141023134823.GM7954@merlinux.eu> Message-ID: <20141023142301.GB22258@merlinux.eu> On Thu, Oct 23, 2014 at 16:20 +0200, Anatoly Bubenkov wrote: > we try to keep tests isolated, so for us it's not useful, but I agree it's > a valid use-case You might still get faster-running tests with grouping because, say, you might have 3 (independent) session-scoped fixtures that are used throughout your tests; with "-n4" they will be created four times, while with "node-binding" it might only happen 3 times. best, holger P.S.: IMHO it's a pity that Gmail seduces people into always top-posting. > On 23 October 2014 15:48, holger krekel wrote: > > > Hi all, > > > > Currently, pytest-xdist in load-balancing mode ("-n4") does not support > > grouping of tests or influencing how tests are distributed to nodes. > > Bruno Oliveira did a PR that satisfies his company's particular > > requirements, > > in that GUI tests must run after all others and be pinned to a particular > > node. > > (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/) > > > > My question is: what needs do you have wrt test distribution with xdist? > > I'd like to collect and write down some user stories before we design > > options/mechanisms and then possibly get to implement them. Your input > > is much appreciated. > > > > best and thanks, > > holger > > _______________________________________________ > > pytest-dev mailing list > > pytest-dev at python.org > > https://mail.python.org/mailman/listinfo/pytest-dev > > > > > > -- > Anatoly Bubenkov
From bubenkoff at gmail.com Thu Oct 23 16:20:42 2014 From: bubenkoff at gmail.com (Anatoly Bubenkov) Date: Thu, 23 Oct 2014 16:20:42 +0200 Subject: [pytest-dev] xdist grouping: request for user stories In-Reply-To: <20141023134823.GM7954@merlinux.eu> References: <20141023134823.GM7954@merlinux.eu> Message-ID: we try to keep tests isolated, so for us it's not useful, but I agree it's a valid use-case On 23 October 2014 15:48, holger krekel wrote: > Hi all, > > Currently, pytest-xdist in load-balancing mode ("-n4") does not support > grouping of tests or influencing how tests are distributed to nodes. > Bruno Oliveira did a PR that satisfies his company's particular > requirements, > in that GUI tests must run after all others and be pinned to a particular > node. > (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/) > > My question is: what needs do you have wrt test distribution with xdist? > I'd like to collect and write down some user stories before we design > options/mechanisms and then possibly get to implement them. Your input > is much appreciated. > > best and thanks, > holger > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -- Anatoly Bubenkov -------------- next part -------------- An HTML attachment was scrubbed... URL:
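To make the fixture point above concrete: a session-scoped fixture is created at most once per pytest process, so under "-n4" each of the four worker processes builds its own copy. A tiny illustrative example (the fixture name and its setup are made up):

    import pytest

    @pytest.fixture(scope="session")
    def expensive_resource():
        # With a plain "py.test" run this setup happens once; with "py.test -n4"
        # each of the four workers repeats it, so binding the tests that share
        # this fixture to fewer nodes can save real time.
        return make_expensive_resource()  # hypothetical stand-in for slow setup work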
From nicoddemus at gmail.com Thu Oct 23 16:46:05 2014 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 23 Oct 2014 12:46:05 -0200 Subject: [pytest-dev] xdist grouping: request for user stories In-Reply-To: <20141023134823.GM7954@merlinux.eu> References: <20141023134823.GM7954@merlinux.eu> Message-ID: Hi all, On Thu, Oct 23, 2014 at 11:48 AM, holger krekel wrote: > Bruno Oliveira did a PR that satisfies his company's particular requirements, in that GUI tests must run after all others and be pinned to a particular node. To describe my requirements in a little more detail: We have a lot of GUI tests and most (say, 95%) work in parallel without any problems. A few of them, though, deal with window input focus, and those tests specifically can't run in parallel with other tests. For example, a test might: 1. create a widget and display it 2. edit a GUI control, obtaining input focus 3. lose edit focus on purpose during the test, asserting that some other behavior occurs in response to that When this type of test runs in parallel with others, sometimes the other tests will pop up their own widgets and the first test will lose input focus at an unexpected place, causing it to fail. Other tests take screenshots of 3D rendering windows for regression testing against known "good" screenshots, and those are also susceptible to the same problem, where a window popping up in front of another before a screenshot might change the image and fail the test. In summary, some of our tests can't be run in parallel with any other tests. The current workaround is that we use a special mark on those tests that can't run in parallel and run pytest twice: a session in parallel excluding the marked tests, and another regular, non-parallel session with only the marked tests. This works, but it is error-prone when developers run the tests from the command line and it makes error reports cumbersome to read (the first problem is alleviated a bit with a custom script). Others have discussed other use cases in this thread: https://bitbucket.org/hpk42/pytest/issue/175. My PR takes care to distribute marked tests to a single node after all other tests have executed, but after some discussion it is clear that there might be more use cases out there, so we would like to hear them before deciding what the best interface options would be. Cheers, On Thu, Oct 23, 2014 at 11:48 AM, holger krekel wrote: > Hi all, > > Currently, pytest-xdist in load-balancing mode ("-n4") does not support > grouping of tests or influencing how tests are distributed to nodes. > Bruno Oliveira did a PR that satisfies his company's particular > requirements, > in that GUI tests must run after all others and be pinned to a particular > node. > (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/) > > My question is: what needs do you have wrt test distribution with xdist? > I'd like to collect and write down some user stories before we design > options/mechanisms and then possibly get to implement them. Your input > is much appreciated. > > best and thanks, > holger > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed...
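For reference, the two-pass workaround Bruno describes boils down to two invocations along these lines, assuming the tests that must not run in parallel carry a hypothetical "serial" marker (the marker name and the worker count are made up):

    # first pass: everything that tolerates parallel execution
    py.test -n 4 -m "not serial"
    # second pass: the focus/screenshot tests, in a single regular session
    py.test -m serial

The error-prone part he mentions is that nothing stops a developer from forgetting the second pass, and the results arrive as two separate reports.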
URL: From digitalxero at gmail.com Thu Oct 23 20:30:37 2014 From: digitalxero at gmail.com (Dj Gilcrease) Date: Thu, 23 Oct 2014 11:30:37 -0700 Subject: [pytest-dev] xdist grouping: request for user stories In-Reply-To: References: <20141023134823.GM7954@merlinux.eu> Message-ID: On Thu, Oct 23, 2014 at 7:46 AM, Bruno Oliveira wrote: > Hi all, > > On Thu, Oct 23, 2014 at 11:48 AM, holger krekel > wrote: > > Bruno Oliveira did a PR that satisfies his company's particular > requirements, in that GUI tests must run after all others and pinned to a > particular note. > > To describe my requirements in a little more detail: > > We have a lot of GUI tests and most (say, 95%) work in parallel without > any problems. > > A few of them though deal with window input focus, and those tests > specifically can't run in parallel with other tests. For example, a test > might: > > 1. create a widget and display it > 2. edit a gui control, obtaining input focus > 3. lose edit focus on purpose during the test, asserting that some other > behavior occurs in response to that > > When this type of test runs in parallel with others, sometimes the other > tests will pop up their own widgets and the first test will lose input > focus at an unexpected place, causing it to fail. > > Other tests take screenshots of 3d rendering windows for regression > testing against known "good" screenshots, and those are also susceptible to > the same problem, where a window poping up in front of another before a > screenshot might change the image and fail the test. > > In summary, some of our tests can't be run in parallel with any other > tests. > > The current workaround is that we use a special mark in those tests that > can't run in parallel and run pytest twice: a session in parallel excluding > marked tests, and another regular, non-parallel session with only the > marked tests. This works, but it is error prone when developers run the > tests from the command line and makes error report cumbersome to read (the > first problem is alleviated a bit with a custom script). > > Others have discussed other use cases in this thread: > https://bitbucket.org/hpk42/pytest/issue/175. > > My PR takes care to distribute marked tests to a single node after all > other tests have executed, but after some discussion it is clear that there > might be more use cases out there so we would like to hear them before > deciding what would be the best interface options. > > Cheers, > > On Thu, Oct 23, 2014 at 11:48 AM, holger krekel > wrote: > >> Hi all, >> >> Currently, pytest-xdist in load-balancing ("-n4") does not support >> grouping of tests or influencing how tests are distributed to nodes. >> Bruno Oliveira did a PR that satisfies his company's particular >> requirements, >> in that GUI tests must run after all others and pinned to a particular >> note. >> (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/) >> >> My question is what needs do you have wrt to test distribution with xdist? >> I'd like to collect and write down some user stories before we design >> options/mechanisms and then possibly get to implement them. Your input >> is much appreciated. 
>> >> best and thanks, >> holger >> _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev >> > > > _______________________________________________ > pytest-dev mailing list > pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > > I think it would be useful to pin or exclude marks from individual nodes. We have marks like requires_abc, and right now we have logic that tests if the requirement exists before running tests and marks them as skip if it is not met, but I know beforehand which requirement can run on which nodes and would like to not even send the tests that cannot run to a node. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicoddemus at gmail.com Thu Oct 23 22:32:31 2014 From: nicoddemus at gmail.com (Bruno Oliveira) Date: Thu, 23 Oct 2014 18:32:31 -0200 Subject: [pytest-dev] xdist grouping: request for user stories In-Reply-To: References: <20141023134823.GM7954@merlinux.eu> Message-ID: Hi Dj, How do you know beforehand if a slave satisfies a requirement or not? Is that based on fixtures? Can you provide more details, perhaps with some sample code to illustrate it better? One of the ideas mentioned by Holger would be the ability of controlling test distribution based on fixtures as well, so tests that use a certain fixture would automatically inherit those same properties. From what I could gather, perhaps this would be a good fit for your case. On Thu, Oct 23, 2014 at 4:30 PM, Dj Gilcrease wrote: > > On Thu, Oct 23, 2014 at 7:46 AM, Bruno Oliveira > wrote: > >> Hi all, >> >> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel >> wrote: >> > Bruno Oliveira did a PR that satisfies his company's particular >> requirements, in that GUI tests must run after all others and pinned to a >> particular note. >> >> To describe my requirements in a little more detail: >> >> We have a lot of GUI tests and most (say, 95%) work in parallel without >> any problems. >> >> A few of them though deal with window input focus, and those tests >> specifically can't run in parallel with other tests. For example, a test >> might: >> >> 1. create a widget and display it >> 2. edit a gui control, obtaining input focus >> 3. lose edit focus on purpose during the test, asserting that some other >> behavior occurs in response to that >> >> When this type of test runs in parallel with others, sometimes the other >> tests will pop up their own widgets and the first test will lose input >> focus at an unexpected place, causing it to fail. >> >> Other tests take screenshots of 3d rendering windows for regression >> testing against known "good" screenshots, and those are also susceptible to >> the same problem, where a window poping up in front of another before a >> screenshot might change the image and fail the test. >> >> In summary, some of our tests can't be run in parallel with any other >> tests. >> >> The current workaround is that we use a special mark in those tests that >> can't run in parallel and run pytest twice: a session in parallel excluding >> marked tests, and another regular, non-parallel session with only the >> marked tests. This works, but it is error prone when developers run the >> tests from the command line and makes error report cumbersome to read (the >> first problem is alleviated a bit with a custom script). >> >> Others have discussed other use cases in this thread: >> https://bitbucket.org/hpk42/pytest/issue/175. 
>> >> My PR takes care to distribute marked tests to a single node after all >> other tests have executed, but after some discussion it is clear that there >> might be more use cases out there so we would like to hear them before >> deciding what would be the best interface options. >> >> Cheers, >> >> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel >> wrote: >> >>> Hi all, >>> >>> Currently, pytest-xdist in load-balancing ("-n4") does not support >>> grouping of tests or influencing how tests are distributed to nodes. >>> Bruno Oliveira did a PR that satisfies his company's particular >>> requirements, >>> in that GUI tests must run after all others and pinned to a particular >>> note. >>> (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/) >>> >>> My question is what needs do you have wrt to test distribution with >>> xdist? >>> I'd like to collect and write down some user stories before we design >>> options/mechanisms and then possibly get to implement them. Your input >>> is much appreciated. >>> >>> best and thanks, >>> holger >>> _______________________________________________ >>> pytest-dev mailing list >>> pytest-dev at python.org >>> https://mail.python.org/mailman/listinfo/pytest-dev >>> >> >> >> _______________________________________________ >> pytest-dev mailing list >> pytest-dev at python.org >> https://mail.python.org/mailman/listinfo/pytest-dev >> >> > > I think it would be useful to pin or exclude marks from individual nodes. > We have marks like requires_abc, and right now we have logic that tests if > the requirement exists before running tests and marks them as skip if it is > not met, but I know beforehand which requirement can run on which nodes and > would like to not even send the tests that cannot run to a node. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From digitalxero at gmail.com Thu Oct 23 22:57:43 2014 From: digitalxero at gmail.com (Dj Gilcrease) Date: Thu, 23 Oct 2014 13:57:43 -0700 Subject: [pytest-dev] xdist grouping: request for user stories In-Reply-To: References: <20141023134823.GM7954@merlinux.eu> Message-ID: It isnt based on fixtures, though it could be. Right now it is just a hard coded list of feature to node type mapping. We have ~15 different node types and ~100 features. Right now we have a fixture that is a requirements checker, and each test is tagged something like @pytest.marker.requires_abc (it may have 1 to 20 of these). The requirements checker grabs each test before it runs, checks all the requires_X marks and does a check for each requirement by reading a file we store on each node that has a list of all the features it supports if if the requirement isnt in that list we skip the test. This has added a significant amount of run time to our tests as we are running ~60k each run on all 15 node types and about 10% of the tests will not run on one node type or another. It would be nice if we could do the requirements verification before we send the tests out to the node and only send tests that the node can handle. On Thu, Oct 23, 2014 at 1:32 PM, Bruno Oliveira wrote: > Hi Dj, > > How do you know beforehand if a slave satisfies a requirement or not? Is > that based on fixtures? Can you provide more details, perhaps with some > sample code to illustrate it better? > > One of the ideas mentioned by Holger would be the ability of controlling > test distribution based on fixtures as well, so tests that use a certain > fixture would automatically inherit those same properties. 
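As an aside, the "requirements checker" Dj describes above (requires_* marks checked against a per-node list of features, with a skip when something is missing) could be written as a conftest.py hook roughly like the following; the file name, mark prefix and helper are illustrative guesses, not his actual code:

    import pytest

    FEATURES_FILE = "node_features.txt"  # hypothetical per-node file listing supported features

    def _node_features():
        try:
            with open(FEATURES_FILE) as f:
                return set(line.strip() for line in f if line.strip())
        except IOError:
            return set()

    def pytest_runtest_setup(item):
        # Marks such as @pytest.mark.requires_abc show up in item.keywords.
        required = set(name[len("requires_"):]
                       for name in item.keywords if name.startswith("requires_"))
        missing = required - _node_features()
        if missing:
            pytest.skip("node lacks required features: %s" % ", ".join(sorted(missing)))

Doing the same check at distribution time, so that such tests are never even sent to a node that cannot run them, is the part that would need support from xdist itself.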
From what I > could gather, perhaps this would be a good fit for your case. > > On Thu, Oct 23, 2014 at 4:30 PM, Dj Gilcrease > wrote: > >> >> On Thu, Oct 23, 2014 at 7:46 AM, Bruno Oliveira >> wrote: >> >>> Hi all, >>> >>> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel >>> wrote: >>> > Bruno Oliveira did a PR that satisfies his company's particular >>> requirements, in that GUI tests must run after all others and pinned to a >>> particular note. >>> >>> To describe my requirements in a little more detail: >>> >>> We have a lot of GUI tests and most (say, 95%) work in parallel without >>> any problems. >>> >>> A few of them though deal with window input focus, and those tests >>> specifically can't run in parallel with other tests. For example, a test >>> might: >>> >>> 1. create a widget and display it >>> 2. edit a gui control, obtaining input focus >>> 3. lose edit focus on purpose during the test, asserting that some other >>> behavior occurs in response to that >>> >>> When this type of test runs in parallel with others, sometimes the other >>> tests will pop up their own widgets and the first test will lose input >>> focus at an unexpected place, causing it to fail. >>> >>> Other tests take screenshots of 3d rendering windows for regression >>> testing against known "good" screenshots, and those are also susceptible to >>> the same problem, where a window poping up in front of another before a >>> screenshot might change the image and fail the test. >>> >>> In summary, some of our tests can't be run in parallel with any other >>> tests. >>> >>> The current workaround is that we use a special mark in those tests that >>> can't run in parallel and run pytest twice: a session in parallel excluding >>> marked tests, and another regular, non-parallel session with only the >>> marked tests. This works, but it is error prone when developers run the >>> tests from the command line and makes error report cumbersome to read (the >>> first problem is alleviated a bit with a custom script). >>> >>> Others have discussed other use cases in this thread: >>> https://bitbucket.org/hpk42/pytest/issue/175. >>> >>> My PR takes care to distribute marked tests to a single node after all >>> other tests have executed, but after some discussion it is clear that there >>> might be more use cases out there so we would like to hear them before >>> deciding what would be the best interface options. >>> >>> Cheers, >>> >>> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel >>> wrote: >>> >>>> Hi all, >>>> >>>> Currently, pytest-xdist in load-balancing ("-n4") does not support >>>> grouping of tests or influencing how tests are distributed to nodes. >>>> Bruno Oliveira did a PR that satisfies his company's particular >>>> requirements, >>>> in that GUI tests must run after all others and pinned to a particular >>>> note. >>>> (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/) >>>> >>>> My question is what needs do you have wrt to test distribution with >>>> xdist? >>>> I'd like to collect and write down some user stories before we design >>>> options/mechanisms and then possibly get to implement them. Your input >>>> is much appreciated. 
>>>> >>>> best and thanks, >>>> holger >>>> _______________________________________________ >>>> pytest-dev mailing list >>>> pytest-dev at python.org >>>> https://mail.python.org/mailman/listinfo/pytest-dev >>>> >>> >>> >>> _______________________________________________ >>> pytest-dev mailing list >>> pytest-dev at python.org >>> https://mail.python.org/mailman/listinfo/pytest-dev >>> >>> >> >> I think it would be useful to pin or exclude marks from individual nodes. >> We have marks like requires_abc, and right now we have logic that tests if >> the requirement exists before running tests and marks them as skip if it is >> not met, but I know beforehand which requirement can run on which nodes and >> would like to not even send the tests that cannot run to a node. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Fri Oct 24 09:29:51 2014 From: holger at merlinux.eu (holger krekel) Date: Fri, 24 Oct 2014 07:29:51 +0000 Subject: [pytest-dev] xdist grouping: request for user stories In-Reply-To: References: <20141023134823.GM7954@merlinux.eu> Message-ID: <20141024072951.GD22258@merlinux.eu> On Thu, Oct 23, 2014 at 13:57 -0700, Dj Gilcrease wrote: > It isnt based on fixtures, though it could be. Right now it is just a hard > coded list of feature to node type mapping. We have ~15 different node > types and ~100 features. Right now we have a fixture that is a requirements > checker, and each test is tagged something like @pytest.marker.requires_abc > (it may have 1 to 20 of these). So you are running in "--dist=each" mode, right? And using ssh or socket connection to run things on remote nodes? > The requirements checker grabs each test before it runs, checks all the > requires_X marks and does a check for each requirement by reading a file we > store on each node that has a list of all the features it supports if if > the requirement isnt in that list we skip the test. This has added a > significant amount of run time to our tests as we are running ~60k each run > on all 15 node types and about 10% of the tests will not run on one node > type or another. It would be nice if we could do the requirements > verification before we send the tests out to the node and only send tests > that the node can handle. I am not sure yet if it would make a big difference, though. All nodes need to collect all tests anyway (in all modes) and skipping them is (or should not be) expensive. There is some overhead for sending them to the master, depending on how fast the connection is. In "each" mode we may even avoid sending the full collection of tests to the master which only needs to know the number, probably. best, holger > > On Thu, Oct 23, 2014 at 1:32 PM, Bruno Oliveira > wrote: > > > Hi Dj, > > > > How do you know beforehand if a slave satisfies a requirement or not? Is > > that based on fixtures? Can you provide more details, perhaps with some > > sample code to illustrate it better? > > > > One of the ideas mentioned by Holger would be the ability of controlling > > test distribution based on fixtures as well, so tests that use a certain > > fixture would automatically inherit those same properties. From what I > > could gather, perhaps this would be a good fit for your case. 
> > > > On Thu, Oct 23, 2014 at 4:30 PM, Dj Gilcrease > > wrote: > > > >> > >> On Thu, Oct 23, 2014 at 7:46 AM, Bruno Oliveira > >> wrote: > >> > >>> Hi all, > >>> > >>> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel > >>> wrote: > >>> > Bruno Oliveira did a PR that satisfies his company's particular > >>> requirements, in that GUI tests must run after all others and pinned to a > >>> particular note. > >>> > >>> To describe my requirements in a little more detail: > >>> > >>> We have a lot of GUI tests and most (say, 95%) work in parallel without > >>> any problems. > >>> > >>> A few of them though deal with window input focus, and those tests > >>> specifically can't run in parallel with other tests. For example, a test > >>> might: > >>> > >>> 1. create a widget and display it > >>> 2. edit a gui control, obtaining input focus > >>> 3. lose edit focus on purpose during the test, asserting that some other > >>> behavior occurs in response to that > >>> > >>> When this type of test runs in parallel with others, sometimes the other > >>> tests will pop up their own widgets and the first test will lose input > >>> focus at an unexpected place, causing it to fail. > >>> > >>> Other tests take screenshots of 3d rendering windows for regression > >>> testing against known "good" screenshots, and those are also susceptible to > >>> the same problem, where a window poping up in front of another before a > >>> screenshot might change the image and fail the test. > >>> > >>> In summary, some of our tests can't be run in parallel with any other > >>> tests. > >>> > >>> The current workaround is that we use a special mark in those tests that > >>> can't run in parallel and run pytest twice: a session in parallel excluding > >>> marked tests, and another regular, non-parallel session with only the > >>> marked tests. This works, but it is error prone when developers run the > >>> tests from the command line and makes error report cumbersome to read (the > >>> first problem is alleviated a bit with a custom script). > >>> > >>> Others have discussed other use cases in this thread: > >>> https://bitbucket.org/hpk42/pytest/issue/175. > >>> > >>> My PR takes care to distribute marked tests to a single node after all > >>> other tests have executed, but after some discussion it is clear that there > >>> might be more use cases out there so we would like to hear them before > >>> deciding what would be the best interface options. > >>> > >>> Cheers, > >>> > >>> On Thu, Oct 23, 2014 at 11:48 AM, holger krekel > >>> wrote: > >>> > >>>> Hi all, > >>>> > >>>> Currently, pytest-xdist in load-balancing ("-n4") does not support > >>>> grouping of tests or influencing how tests are distributed to nodes. > >>>> Bruno Oliveira did a PR that satisfies his company's particular > >>>> requirements, > >>>> in that GUI tests must run after all others and pinned to a particular > >>>> note. > >>>> (https://bitbucket.org/hpk42/pytest-xdist/pull-request/11/) > >>>> > >>>> My question is what needs do you have wrt to test distribution with > >>>> xdist? > >>>> I'd like to collect and write down some user stories before we design > >>>> options/mechanisms and then possibly get to implement them. Your input > >>>> is much appreciated. 
> >>>> > >>>> best and thanks, > >>>> holger > >>>> _______________________________________________ > >>>> pytest-dev mailing list > >>>> pytest-dev at python.org > >>>> https://mail.python.org/mailman/listinfo/pytest-dev > >>>> > >>> > >>> > >>> _______________________________________________ > >>> pytest-dev mailing list > >>> pytest-dev at python.org > >>> https://mail.python.org/mailman/listinfo/pytest-dev > >>> > >>> > >> > >> I think it would be useful to pin or exclude marks from individual nodes. > >> We have marks like requires_abc, and right now we have logic that tests if > >> the requirement exists before running tests and marks them as skip if it is > >> not met, but I know beforehand which requirement can run on which nodes and > >> would like to not even send the tests that cannot run to a node. > >> > > > > From holger at merlinux.eu Fri Oct 24 17:20:03 2014 From: holger at merlinux.eu (holger krekel) Date: Fri, 24 Oct 2014 15:20:03 +0000 Subject: [pytest-dev] pytest-2.6.4: bugfix release Message-ID: <20141024152003.GM22258@merlinux.eu> Hi all, just pushed pytest-2.6.4 to pypi, a small bug fix release. pytest is a popular and mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. This release is drop-in compatible to 2.5.2 and 2.6.X. See below for the changes and see docs at: http://pytest.org Thanks to all who contributed, among them: Bruno Oliveira Floris Bruynooghe Dinu Gherman Anatoly Bubenkoff best, holger krekel, merlinux GmbH 2.6.4 ---------- - Improve assertion failure reporting on iterables, by using ndiff and pprint. - removed outdated japanese docs from source tree. - docs for "pytest_addhooks" hook. Thanks Bruno Oliveira. - updated plugin index docs. Thanks Bruno Oliveira. - fix issue557: with "-k" we only allow the old style "-" for negation at the beginning of strings and even that is deprecated. Use "not" instead. This should allow to pick parametrized tests where "-" appeared in the parameter. - fix issue604: Escape % character in the assertion message. - fix issue620: add explanation in the --genscript target about what the binary blob means. Thanks Dinu Gherman. - fix issue614: fixed pastebin support. From rajalokan at gmail.com Sun Oct 26 02:56:20 2014 From: rajalokan at gmail.com (Okan bhan) Date: Sun, 26 Oct 2014 07:26:20 +0530 Subject: [pytest-dev] improve -k/-m exercise / want to help pytest? In-Reply-To: <20141022072107.GF7954@merlinux.eu> References: <20141022072107.GF7954@merlinux.eu> Message-ID: Hi Holger I would like to work on this but I've some very silly doubts which I would like to clarify. Please bear with me if I'm unclear/repeating in my views as this is my first time contributing to any open source project experience. I tried understanding this problem and looking inside source code. With help from nice people on IRC, I could understand how it is implemented (namely matchmark & matchkeyword method in mark.py module). My issue is: 1. I don't understand how we can put dotted path as marker. Currently if I try a marker (like @pytest.mark.a.b), I get error "AttributeError: MarkDecorator instance has no attribute 'b'". Does this feature require a fix of this so that we can allow custom markers with dots. 2. Similarly for 'keywords', in what scenario dotted path (like 4.3) will be a keyword? We can't put it as method_name, class_name etc. Only option left is parametrized testcases but I grasp this. Please guide me to docs/links regarding this. 
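To illustrate the scenario behind question 2: the parameters of a parametrized test become part of its node id (and thus of the keywords "-k" matches against), so a string like "4.3" can end up there even though it is not a marker. A minimal made-up example:

    import pytest

    @pytest.mark.parametrize("version", ["4.2", "4.3"])
    def test_parse(version):
        # Collected as test_parse[4.2] and test_parse[4.3]; "-k 'not 4.3'" is
        # meant to deselect the second one, which is where the eval-based
        # matching currently goes wrong.
        assert version.count(".") == 1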
Thanks & Regards Alok On Wed, Oct 22, 2014 at 12:51 PM, holger krekel wrote: > Hi all, > > is anyone interested in a little exercise that would improve pytest usage? > It doesn't require deep pytest knowledge. > > This is about the "-k" and "-m" selection mini-language, > namely expressions like: > > marker1 and marker2 > not marker1 and (marker2 or marker3) > > and so forth. The thing is that pytest currently uses Python's "eval" with > a custom dictionary which means that names such as "marker1" > cannot have dots or other punctuation and also other things are not > possible like running all tests that don't have "4.3" (coming from a > parametrized test) in it: > > -k 'not 4.3' > > This would currently select all tests because "4.3" is true. Various > people have run into traps like this. > > So what is needed here are tests and code that, given a set of names > and an expression string, determines if the expression is true or not. > And everything except "and" and "or" are valid words. To use the latter > two words probably need to have some escaping like "\and" or so but > that's a bonus task :) > > Anyone up for this little exercise? Or other comments? > > best and thanks, > holger > > > _______________________________________________ > Pytest-dev mailing list > Pytest-dev at python.org > https://mail.python.org/mailman/listinfo/pytest-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Sun Oct 26 10:49:50 2014 From: holger at merlinux.eu (holger krekel) Date: Sun, 26 Oct 2014 09:49:50 +0000 Subject: [pytest-dev] improve -k/-m exercise / want to help pytest? In-Reply-To: References: <20141022072107.GF7954@merlinux.eu> Message-ID: <20141026094950.GW22258@merlinux.eu> Hi Okan, On Sun, Oct 26, 2014 at 07:26 +0530, Okan bhan wrote: > Hi Holger > > I would like to work on this but I've some very silly doubts which I would > like to clarify. Please bear with me if I'm unclear/repeating in my views > as this is my first time contributing to any open source project > experience. > > I tried understanding this problem and looking inside source code. With > help from nice people on IRC, I could understand how it is implemented > (namely matchmark & matchkeyword method in mark.py module). My issue is: > > 1. I don't understand how we can put dotted path as marker. Currently if I > try a marker (like @pytest.mark.a.b), I get error "AttributeError: > MarkDecorator instance has no attribute 'b'". Does this feature require a > fix of this so that we can allow custom markers with dots. No, sorry if that was unclear. There is no easy possibility to have a dotted marker. > 2. Similarly for 'keywords', in what scenario dotted path (like 4.3) will > be a keyword? We can't put it as method_name, class_name etc. Only option > left is parametrized testcases but I grasp this. Parametrizing tests can add almost arbitrary string IDs and they become part of the node id against which "-k" is matching. So "-k 1.3" might very well be used from a user wanted to run tests functions using a particular param. Does this now make sense to you? holger > Please guide me to docs/links regarding this. > > Thanks & Regards > Alok > > On Wed, Oct 22, 2014 at 12:51 PM, holger krekel wrote: > > > Hi all, > > > > is anyone interested in a little exercise that would improve pytest usage? > > It doesn't require deep pytest knowledge. 
> > > > This is about the "-k" and "-m" selection mini-language, > > namely expressions like: > > > > marker1 and marker2 > > not marker1 and (marker2 or marker3) > > > > and so forth. The thing is that pytest currently uses Python's "eval" with > > a custom dictionary which means that names such as "marker1" > > cannot have dots or other punctuation and also other things are not > > possible like running all tests that don't have "4.3" (coming from a > > parametrized test) in it: > > > > -k 'not 4.3' > > > > This would currently select all tests because "4.3" is true. Various > > people have run into traps like this. > > > > So what is needed here are tests and code that, given a set of names > > and an expression string, determines if the expression is true or not. > > And everything except "and" and "or" are valid words. To use the latter > > two words probably need to have some escaping like "\and" or so but > > that's a bonus task :) > > > > Anyone up for this little exercise? Or other comments? > > > > best and thanks, > > holger > > > > > > _______________________________________________ > > Pytest-dev mailing list > > Pytest-dev at python.org > > https://mail.python.org/mailman/listinfo/pytest-dev > > From gherman at darwin.in-berlin.de Sun Oct 26 20:48:42 2014 From: gherman at darwin.in-berlin.de (Dinu Gherman) Date: Sun, 26 Oct 2014 20:48:42 +0100 Subject: [pytest-dev] Python syntax highlighting when displaying failing code? Message-ID: <1C93B1ED-2B04-4452-9048-CDCCF14D11D2@darwin.in-berlin.de> Hi, I just wondered how much people here would appreciate a new feature for py.test which would add Python syntax highlighting to failing code shown before asserts etc.? In the following example below that would mean the three lines I marked on the right side with '<'. If we had this the lines already displayed in red and with a leading 'E' might perhaps need to be displaying in reversed mode and still red to stand out more than before. I've done something similar when using fabric for a while wrapped inside of my own package, so I could write "myfab --source mytask" and see the respective highlighted code that would be executed. To do that I used pygments which I think is not too heavy weight to include in py.test, but Holger's mileage may vary. Just let me know what you think. If this is considered worth the effort, I might ask for some little hints pointing me to the code to touch, because in fact I'm not very familiar with the py.test code and lack the time now to make an in-depth code analysis. Cheers, Dinu PS: $ py.test fib.py ============================= test session starts ============================== platform darwin -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4 plugins: timeout, pep8, xdist, cache, cov collected 1 items fib.py F =================================== FAILURES =================================== _____________________________________ test _____________________________________ def test(): < "Test fib(10), should not be 10." < > assert fib(10) == 10 < E assert 55 == 10 E + where 55 = fib(10) fib.py:24: AssertionError =========================== 1 failed in 0.04 seconds =========================== From rajalokan at gmail.com Mon Oct 27 06:53:31 2014 From: rajalokan at gmail.com (Okan bhan) Date: Mon, 27 Oct 2014 11:23:31 +0530 Subject: [pytest-dev] improve -k/-m exercise / want to help pytest? In-Reply-To: <20141026094950.GW22258@merlinux.eu> References: <20141022072107.GF7954@merlinux.eu> <20141026094950.GW22258@merlinux.eu> Message-ID: Yes. 
These seems to clear my doubts. Thanks for answering these. I kind of understood usability of '-k 1.3' and how currently it selects all tests. Looks like it is because of eval(matchexpr) and eval('1.3') always returns 1.3 which makes testcase true. I will work on PR and raise a fix by adding new 'if' condition. Meanwhile I'm looking into any other way (something without eval). Any suggestions/pointers? Thanks & Regards Alok On Sun, Oct 26, 2014 at 3:19 PM, holger krekel wrote: > Hi Okan, > > On Sun, Oct 26, 2014 at 07:26 +0530, Okan bhan wrote: > > Hi Holger > > > > I would like to work on this but I've some very silly doubts which I > would > > like to clarify. Please bear with me if I'm unclear/repeating in my views > > as this is my first time contributing to any open source project > > experience. > > > > I tried understanding this problem and looking inside source code. With > > help from nice people on IRC, I could understand how it is implemented > > (namely matchmark & matchkeyword method in mark.py module). My issue is: > > > > 1. I don't understand how we can put dotted path as marker. Currently if > I > > try a marker (like @pytest.mark.a.b), I get error "AttributeError: > > MarkDecorator instance has no attribute 'b'". Does this feature require a > > fix of this so that we can allow custom markers with dots. > > No, sorry if that was unclear. There is no easy possibility to have > a dotted marker. > > > 2. Similarly for 'keywords', in what scenario dotted path (like 4.3) will > > be a keyword? We can't put it as method_name, class_name etc. Only option > > left is parametrized testcases but I grasp this. > > Parametrizing tests can add almost arbitrary string IDs and they become > part of the node id against which "-k" is matching. So "-k 1.3" might very > well be used from a user wanted to run tests functions using a particular > param. > > Does this now make sense to you? > holger > > > > Please guide me to docs/links regarding this. > > > > Thanks & Regards > > Alok > > > > On Wed, Oct 22, 2014 at 12:51 PM, holger krekel > wrote: > > > > > Hi all, > > > > > > is anyone interested in a little exercise that would improve pytest > usage? > > > It doesn't require deep pytest knowledge. > > > > > > This is about the "-k" and "-m" selection mini-language, > > > namely expressions like: > > > > > > marker1 and marker2 > > > not marker1 and (marker2 or marker3) > > > > > > and so forth. The thing is that pytest currently uses Python's "eval" > with > > > a custom dictionary which means that names such as "marker1" > > > cannot have dots or other punctuation and also other things are not > > > possible like running all tests that don't have "4.3" (coming from a > > > parametrized test) in it: > > > > > > -k 'not 4.3' > > > > > > This would currently select all tests because "4.3" is true. Various > > > people have run into traps like this. > > > > > > So what is needed here are tests and code that, given a set of names > > > and an expression string, determines if the expression is true or not. > > > And everything except "and" and "or" are valid words. To use the latter > > > two words probably need to have some escaping like "\and" or so but > > > that's a bonus task :) > > > > > > Anyone up for this little exercise? Or other comments? 
> > > > > > best and thanks, > > > holger > > > > > > > > > _______________________________________________ > > > Pytest-dev mailing list > > > Pytest-dev at python.org > > > https://mail.python.org/mailman/listinfo/pytest-dev > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Mon Oct 27 08:10:59 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 27 Oct 2014 07:10:59 +0000 Subject: [pytest-dev] improve -k/-m exercise / want to help pytest? In-Reply-To: References: <20141022072107.GF7954@merlinux.eu> <20141026094950.GW22258@merlinux.eu> Message-ID: <20141027071059.GY22258@merlinux.eu> On Mon, Oct 27, 2014 at 11:23 +0530, Okan bhan wrote: > Yes. These seems to clear my doubts. Thanks for answering these. > > I kind of understood usability of '-k 1.3' and how currently it selects all > tests. Looks like it is because of eval(matchexpr) and eval('1.3') always > returns 1.3 which makes testcase true. > > I will work on PR and raise a fix by adding new 'if' condition. Meanwhile > I'm looking into any other way (something without eval). Any > suggestions/pointers? There is no need for an intermediate PR here. To solve the problem properly you can probably use regular expressions. If you haven't worked with them before you should work through some regex related tutorials first to understand them better. To avoid misunderstanding allow me to clarify that i am fine to give hints/ideas but i didn't plan to make this a teaching exercise, just one for a programmer to contribute something useful to pytest without requiring deep pytest knowledge. best, holger > Thanks & Regards > Alok > > On Sun, Oct 26, 2014 at 3:19 PM, holger krekel wrote: > > > Hi Okan, > > > > On Sun, Oct 26, 2014 at 07:26 +0530, Okan bhan wrote: > > > Hi Holger > > > > > > I would like to work on this but I've some very silly doubts which I > > would > > > like to clarify. Please bear with me if I'm unclear/repeating in my views > > > as this is my first time contributing to any open source project > > > experience. > > > > > > I tried understanding this problem and looking inside source code. With > > > help from nice people on IRC, I could understand how it is implemented > > > (namely matchmark & matchkeyword method in mark.py module). My issue is: > > > > > > 1. I don't understand how we can put dotted path as marker. Currently if > > I > > > try a marker (like @pytest.mark.a.b), I get error "AttributeError: > > > MarkDecorator instance has no attribute 'b'". Does this feature require a > > > fix of this so that we can allow custom markers with dots. > > > > No, sorry if that was unclear. There is no easy possibility to have > > a dotted marker. > > > > > 2. Similarly for 'keywords', in what scenario dotted path (like 4.3) will > > > be a keyword? We can't put it as method_name, class_name etc. Only option > > > left is parametrized testcases but I grasp this. > > > > Parametrizing tests can add almost arbitrary string IDs and they become > > part of the node id against which "-k" is matching. So "-k 1.3" might very > > well be used from a user wanted to run tests functions using a particular > > param. > > > > Does this now make sense to you? > > holger > > > > > > > Please guide me to docs/links regarding this. > > > > > > Thanks & Regards > > > Alok > > > > > > On Wed, Oct 22, 2014 at 12:51 PM, holger krekel > > wrote: > > > > > > > Hi all, > > > > > > > > is anyone interested in a little exercise that would improve pytest > > usage? 
> > > > It doesn't require deep pytest knowledge. > > > > > > > > This is about the "-k" and "-m" selection mini-language, > > > > namely expressions like: > > > > > > > > marker1 and marker2 > > > > not marker1 and (marker2 or marker3) > > > > > > > > and so forth. The thing is that pytest currently uses Python's "eval" > > with > > > > a custom dictionary which means that names such as "marker1" > > > > cannot have dots or other punctuation and also other things are not > > > > possible like running all tests that don't have "4.3" (coming from a > > > > parametrized test) in it: > > > > > > > > -k 'not 4.3' > > > > > > > > This would currently select all tests because "4.3" is true. Various > > > > people have run into traps like this. > > > > > > > > So what is needed here are tests and code that, given a set of names > > > > and an expression string, determines if the expression is true or not. > > > > And everything except "and" and "or" are valid words. To use the latter > > > > two words probably need to have some escaping like "\and" or so but > > > > that's a bonus task :) > > > > > > > > Anyone up for this little exercise? Or other comments? > > > > > > > > best and thanks, > > > > holger > > > > > > > > > > > > _______________________________________________ > > > > Pytest-dev mailing list > > > > Pytest-dev at python.org > > > > https://mail.python.org/mailman/listinfo/pytest-dev > > > > > > From rajalokan at gmail.com Mon Oct 27 08:59:40 2014 From: rajalokan at gmail.com (Okan bhan) Date: Mon, 27 Oct 2014 13:29:40 +0530 Subject: [pytest-dev] improve -k/-m exercise / want to help pytest? In-Reply-To: <20141027071059.GY22258@merlinux.eu> References: <20141022072107.GF7954@merlinux.eu> <20141026094950.GW22258@merlinux.eu> <20141027071059.GY22258@merlinux.eu> Message-ID: Thanks for the suggestion. I will work on it. On 27/10/2014 12:40 pm, "holger krekel" wrote: > On Mon, Oct 27, 2014 at 11:23 +0530, Okan bhan wrote: > > Yes. These seems to clear my doubts. Thanks for answering these. > > > > I kind of understood usability of '-k 1.3' and how currently it selects > all > > tests. Looks like it is because of eval(matchexpr) and eval('1.3') always > > returns 1.3 which makes testcase true. > > > > I will work on PR and raise a fix by adding new 'if' condition. Meanwhile > > I'm looking into any other way (something without eval). Any > > suggestions/pointers? > > There is no need for an intermediate PR here. > > To solve the problem properly you can probably use regular expressions. > If you haven't worked with them before you should work through some > regex related tutorials first to understand them better. > > To avoid misunderstanding allow me to clarify that i am fine to give > hints/ideas but i didn't plan to make this a teaching exercise, just > one for a programmer to contribute something useful to pytest without > requiring deep pytest knowledge. > > best, > holger > > > > Thanks & Regards > > Alok > > > > On Sun, Oct 26, 2014 at 3:19 PM, holger krekel > wrote: > > > > > Hi Okan, > > > > > > On Sun, Oct 26, 2014 at 07:26 +0530, Okan bhan wrote: > > > > Hi Holger > > > > > > > > I would like to work on this but I've some very silly doubts which I > > > would > > > > like to clarify. Please bear with me if I'm unclear/repeating in my > views > > > > as this is my first time contributing to any open source project > > > > experience. > > > > > > > > I tried understanding this problem and looking inside source code. 
> With > > > > help from nice people on IRC, I could understand how it is > implemented > > > > (namely matchmark & matchkeyword method in mark.py module). My issue > is: > > > > > > > > 1. I don't understand how we can put dotted path as marker. > Currently if > > > I > > > > try a marker (like @pytest.mark.a.b), I get error "AttributeError: > > > > MarkDecorator instance has no attribute 'b'". Does this feature > require a > > > > fix of this so that we can allow custom markers with dots. > > > > > > No, sorry if that was unclear. There is no easy possibility to have > > > a dotted marker. > > > > > > > 2. Similarly for 'keywords', in what scenario dotted path (like 4.3) > will > > > > be a keyword? We can't put it as method_name, class_name etc. Only > option > > > > left is parametrized testcases but I grasp this. > > > > > > Parametrizing tests can add almost arbitrary string IDs and they become > > > part of the node id against which "-k" is matching. So "-k 1.3" might > very > > > well be used from a user wanted to run tests functions using a > particular > > > param. > > > > > > Does this now make sense to you? > > > holger > > > > > > > > > > Please guide me to docs/links regarding this. > > > > > > > > Thanks & Regards > > > > Alok > > > > > > > > On Wed, Oct 22, 2014 at 12:51 PM, holger krekel > > > wrote: > > > > > > > > > Hi all, > > > > > > > > > > is anyone interested in a little exercise that would improve pytest > > > usage? > > > > > It doesn't require deep pytest knowledge. > > > > > > > > > > This is about the "-k" and "-m" selection mini-language, > > > > > namely expressions like: > > > > > > > > > > marker1 and marker2 > > > > > not marker1 and (marker2 or marker3) > > > > > > > > > > and so forth. The thing is that pytest currently uses Python's > "eval" > > > with > > > > > a custom dictionary which means that names such as "marker1" > > > > > cannot have dots or other punctuation and also other things are not > > > > > possible like running all tests that don't have "4.3" (coming from > a > > > > > parametrized test) in it: > > > > > > > > > > -k 'not 4.3' > > > > > > > > > > This would currently select all tests because "4.3" is true. > Various > > > > > people have run into traps like this. > > > > > > > > > > So what is needed here are tests and code that, given a set of > names > > > > > and an expression string, determines if the expression is true or > not. > > > > > And everything except "and" and "or" are valid words. To use the > latter > > > > > two words probably need to have some escaping like "\and" or so but > > > > > that's a bonus task :) > > > > > > > > > > Anyone up for this little exercise? Or other comments? > > > > > > > > > > best and thanks, > > > > > holger > > > > > > > > > > > > > > > _______________________________________________ > > > > > Pytest-dev mailing list > > > > > Pytest-dev at python.org > > > > > https://mail.python.org/mailman/listinfo/pytest-dev > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Mon Oct 27 10:00:17 2014 From: flub at devork.be (Floris Bruynooghe) Date: Mon, 27 Oct 2014 09:00:17 +0000 Subject: [pytest-dev] Python syntax highlighting when displaying failing code? 
In-Reply-To: <1C93B1ED-2B04-4452-9048-CDCCF14D11D2@darwin.in-berlin.de> References: <1C93B1ED-2B04-4452-9048-CDCCF14D11D2@darwin.in-berlin.de> Message-ID: Hi Dinu, On 26 October 2014 19:48, Dinu Gherman wrote: > Hi, > > I just wondered how much people here would appreciate a new feature for py.test which would add Python syntax highlighting to failing code shown before asserts etc.? I think this could be a very nice feature. It might be worth trying to make this as a plugin, at least initially, because it will introduce a new dependency (pygments I assume). If some coordination is required like a new hook or a way to detect which part is source code, I'm sure patches like that would be possible to merge into the core right away. > In the following example below that would mean the three lines I marked on the right side with '<'. If we had this the lines already displayed in red and with a leading 'E' might perhaps need to be displaying in reversed mode and still red to stand out more than before. There's probably a few options here to try out, one as you suggest or try just having the 'E' in red or maybe even something else. Only trying it will show which works better I guess. > Just let me know what you think. If this is considered worth the effort, I might ask for some little hints pointing me to the code to touch, because in fact I'm not very familiar with the py.test code and lack the time now to make an in-depth code analysis. No worries, feel free to ask questions here or on IRC. Regards, Floris -- Debian GNU/Linux -- The Power of Freedom www.debian.org | www.gnu.org | www.kernel.org
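For anyone who wants to prototype the highlighting idea as a plugin, the pygments side is small. A rough sketch (the helper name and how a plugin would obtain the failing test's source lines are assumptions, not existing pytest code):

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import TerminalFormatter

    def colorize_source(lines):
        """Return Python source lines with ANSI colour codes applied."""
        source = "\n".join(lines)
        return highlight(source, PythonLexer(), TerminalFormatter()).splitlines()

    # e.g. colorize_source(["def test():", "    assert fib(10) == 10"])

Whether such highlighted lines should replace what the terminal writer prints, or be layered on top of the existing red "E" markup, is the open design question raised above.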