Hello all,

I think it would be lovely if trial caught up to the last decade of advances in coverage measurement technology. I *think* this means integrating with coverage.py <https://pypi.org/project/coverage> - probably the hands-down leader in Python coverage technology for at least the last 10 years, if not more - instead of the stdlib "trace" module, which is ... something else. Or maybe there's an even better option out there somewhere - it would be amazing if all of the trial-based test suites out there got *whatever* the best current option is - why should every project have to figure this out for itself?

When was the last time anyone ran trial --coverage on purpose? Did they realize they were choosing the bad option?

I know that you can hack around this situation roughly like this:

python -m coverage run -m twisted.trial ...

but this has some shortcomings.

1. If trial --coverage exists shouldn't it be the *good* option?
2. python -m coverage run -m twisted.trial -jN ... is a bad time. How about some coverage measurement that's multi-core friendly? It's a *real* drag going from a 30 second no-coverage test run using 16 cores to a 15 minute coverage-measuring run on a single core.

Does anyone agree that this is something short of an ideal situation? Is anyone interested in helping address it?

Jean-Paul
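[Spelled out, the single-core workaround above looks roughly like this - a sketch only, with `yourpackage` as a placeholder test target; the coverage.py commands themselves are real:

# Run the whole suite under coverage.py - on one core only.
python -m coverage run -m twisted.trial yourpackage

# Summarize in the terminal, or write an HTML report.
python -m coverage report
python -m coverage html
]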
On Fri, Feb 11, 2022 at 5:36 PM Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:
Hello all,
I think it would be lovely if trial caught up to the last decade of advances in coverage measurement technology. I *think* this means integrating with coverage.py <https://pypi.org/project/coverage> - probably the hands-down leader in Python coverage technology for at least the last 10 years, if not more - instead of the stdlib "trace" module, which is ... something else. Or maybe there's an even better option out there somewhere - it would be amazing if all of the trial-based test suites out there got *whatever* the best current option is - why should every project have to figure this out for itself?
When was the last time anyone ran trial --coverage on purpose? Did they realize they were choosing the bad option?
I know that you can hack around this situation roughly like this:
python -m coverage run -m twisted.trial ...
but this has some shortcomings.
1. If trial --coverage exists shouldn't it be the *good* option?
2. python -m coverage run -m twisted.trial -jN ... is a bad time. How about some coverage measurement that's multi-core friendly? It's a *real* drag going from a 30 second no-coverage test run using 16 cores to a 15 minute coverage-measuring run on a single core.
Does anyone agree that this is something short of an ideal situation? Is anyone interested in helping address it?
Jean-Paul
Anyone?
On Feb 16, 2022, at 11:41 AM, Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:
On Fri, Feb 11, 2022 at 5:36 PM Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:

Hello all,
I think it would be lovely if trial caught up to the last decade of advances in coverage measurement technology. I think this means integrating with coverage.py - probably the hands-down leader in Python coverage technology for at least the last 10 years, if not more - instead of the stdlib "trace" module, which is ... something else. Or maybe there's an even better option out there somewhere - it would be amazing if all of the trial-based test suites out there got whatever the best current option is - why should every project have to figure this out for itself?
When was the last time anyone ran trial --coverage on purpose? Did they realize they were choosing the bad option?
I know that you can hack around this situation roughly like this:
python -m coverage run -m twisted.trial ...
but this has some shortcomings.

1. If trial --coverage exists shouldn't it be the good option?
2. python -m coverage run -m twisted.trial -jN ... is a bad time. How about some coverage measurement that's multi-core friendly? It's a real drag going from a 30 second no-coverage test run using 16 cores to a 15 minute coverage-measuring run on a single core.

Presumably trial -jN uses subprocesses, so https://coverage.readthedocs.io/en/6.3.1/subprocess.html is worth a read.
Does anyone agree that this is something short of an ideal situation? Is anyone interested in helping address it?
Jean-Paul
Anyone?
On Feb 16, 2022, at 11:28 AM, Colin Dunklau <colin.dunklau@gmail.com> wrote:
On Feb 16, 2022, at 11:41 AM, Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:
On Fri, Feb 11, 2022 at 5:36 PM Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:

Hello all,
I think it would be lovely if trial caught up to the last decade of advances in coverage measurement technology. I think this means integrating with coverage.py <https://pypi.org/project/coverage> - probably the hands-down leader in Python coverage technology for at least the last 10 years, if not more - instead of the stdlib "trace" module, which is ... something else. Or maybe there's an even better option out there somewhere - it would be amazing if all of the trial-based test suites out there got whatever the best current option is - why should every project have to figure this out for itself?
When was the last time anyone ran trial --coverage on purpose? Did they realize they were choosing the bad option?
I know that you can hack around this situation roughly like this:
python -m coverage run -m twisted.trial ...
but this has some shortcomings.

1. If trial --coverage exists shouldn't it be the good option?
2. python -m coverage run -m twisted.trial -jN ... is a bad time. How about some coverage measurement that's multi-core friendly? It's a real drag going from a 30 second no-coverage test run using 16 cores to a 15 minute coverage-measuring run on a single core.

Presumably trial -jN uses subprocesses, so https://coverage.readthedocs.io/en/6.3.1/subprocess.html is worth a read.
Best practice today is, at least in your CI:

1. pip install coverage_enable_subprocess <https://pypi.org/project/coverage_enable_subprocess/>
2. set COVERAGE_PROCESS_START env var to your coveragerc
3. trial -j
4. `coverage combine` after running

This works for trial (and also works for any other tool that you might want to support coverage). One of the reasons I think there's been relatively little movement on this obvious flaw in Twisted is that this is fairly easy to set up. That also means that it's fairly easy to implement within Trial though, were someone interested.
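[A sketch of those steps as a shell snippet - the package, env var, and commands are real coverage.py machinery, but the coveragerc contents and `yourpackage` target are illustrative placeholders:

# Install coverage.py plus the .pth hook that starts coverage
# in every Python process that is launched.
pip install coverage coverage_enable_subprocess

# The config pointed at here should enable parallel data files:
#   [run]
#   parallel = True
export COVERAGE_PROCESS_START="$PWD/.coveragerc"

# Each trial worker process now records its own .coverage.* data file.
trial -j8 yourpackage

# Merge the per-process data files and report.
coverage combine
coverage report
]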
Does anyone agree that this is something short of an ideal situation? Is anyone interested in helping address it?
This should definitely be addressed. Ideally, trial would both supply its own `--coverage` option (largely in order to have somewhere to hang the documentation in `--help`) and also Just Work under `python -m coverage run` or `coverage run`.
Jean-Paul
Anyone?
I don’t think I can commit to actually doing this, but if I find some time it’s definitely something I’d love to have resolved. I can definitely do some quick code reviews, if someone else wants to make the implementation happen.
On 16/02/2022 17:41, Jean-Paul Calderone wrote:
On Fri, Feb 11, 2022 at 5:36 PM Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:
I know that you can hack around this situation roughly like this:
python -m coverage run -m twisted.trial ...
but this has some shortcomings.
1. If trial --coverage exists shouldn't it be the *good* option?
2. python -m coverage run -m twisted.trial -jN ... is a bad time. How about some coverage measurement that's multi-core friendly? It's a /real/ drag going from a 30 second no-coverage test run using 16 cores to a 15 minute coverage-measuring run on a single core.
Does anyone agree that this is something short of an ideal situation? Is anyone interested in helping address it?
Anyone?
At this point, it feels like any available energy could be more usefully employed in getting a pytest plugin that really supported the Twisted reactor in place. Re-inventing wheels like coverage just doesn't seem sensible at this point.

I've noticed trial itself routinely leaks failures across tests, resulting in some random test down the line spuriously failing. I've hacked up tooling to run `trial -u` for 10s for each test case to try and find these happening, but it feels like something the test runner should really cater for.

I guess a lot of this, and indeed a non-testing use case I have, would be served by asyncio-style event loops rather than one single monolithic and unrestartable reactor.

This isn't meant to come across as negatively as it may well seem; there's a reason I haven't ripped Twisted out of the major project I'm involved in where it's used, but Twisted as a whole and trial in particular really are feeling their 20yrs of age ;-)

cheers,

Chris
On Sun, Feb 20, 2022, at 13:44, Chris Withers wrote:
At this point, it feels like any available energy could be more usefully employed in getting a pytest plugin that really supported the Twisted reactor in place. Re-inventing wheels like coverage just doesn't seem sensible at this point.
I don't expect to invest a lot of time in pytest-twisted, but I am curious what you mean by supporting the Twisted reactor in place. A new reactor for each test?

Cheers,
-kyle
Hi Kyle,

On 20/02/2022 22:43, Kyle Altendorf wrote:
On Sun, Feb 20, 2022, at 13:44, Chris Withers wrote:
At this point, it feels like any available energy could be more usefully employed in getting a pytest plugin that really supported the Twisted reactor in place. Re-inventing wheels like coverage just doesn't seem sensible at this point.
I don't expect to invest a lot of time in pytest-twisted, but I am curious what you mean by supporting the Twisted reactor in place. A new reactor for each test?
I last looked in depth at pytest-twisted in 2018, but after a quick scan, it doesn't appear that much has changed.

The concerns I had were mainly that trial does a *lot* to manage test isolation, reactor cleanup, etc (and it still isn't enough!) and I don't see any of that in pytest-twisted. What I *do* see are references to greenlets, a thing that looks like inlineCallbacks but isn't, and a general worry that pytest-twisted adds more complexity for less robustness. Given how incredibly complicated Twisted already is (oh for a more simplified inlineCallbacks!), these are not things I'm looking for when it comes to testing.

Now, I freely admit I may be way off base with these comments, so take them with a bucket of salt...

cheers,

Chris
On Mon, Feb 21, 2022, at 03:10, Chris Withers wrote:
Hi Kyle,
On 20/02/2022 22:43, Kyle Altendorf wrote:
On Sun, Feb 20, 2022, at 13:44, Chris Withers wrote:
At this point, it feels like any available energy could be more usefully employed in getting a pytest plugin that really supported the Twisted reactor in place. Re-inventing wheels like coverage just doesn't seem sensible at this point.
I don't expect to invest a lot of time in pytest-twisted, but I am curious what you mean by supporting the Twisted reactor in place. A new reactor for each test?
I last looked in depth at pytest-twisted in 2018, but after a quick scan, it doesn't appear that much has changed.
Mostly what has changed since 2018, if I remember correctly, would be the addition of async/await support for tests and fixtures. Oh, and support for non-default reactors, in particular the Qt-related ones.
The concerns I had were mainly that trial does a *lot* to manage test isolation, reactor cleanup, etc (and it still isn't enough!) and I don't see any of that in pytest-twisted.
There has been chatter about this, but no action.
What I *do* see are references to greenlets, a thing that looks like inlineCallbacks but isn't and a general worry that pytest twisted adds more complexity for less robustness. Given how incredibly complicated Twisted already is (oh for a more simplified inlineCallbacks!), these are not things I'm looking for when it comes to testing.
greenlets are the basic tool used to cooperate with pytest while having a long-lived reactor. This allows long-lived fixtures, when you want them. I don't recall particularly having a lot of issues with the greenlets from a user perspective. But sure, it is indeed 'fun' handling another layer of concurrency in the implementation.
Now, I freely admit I may be way off base with these comments, so take them with a bucket of salt...
Pretty sure you aren't. :]

Cheers,
-kyle
On Sun, 20 Feb 2022 at 22:48, Kyle Altendorf <sda@fstab.net> wrote:
On Sun, Feb 20, 2022, at 13:44, Chris Withers wrote:
At this point, it feels like any available energy could be more usefully employed in getting a pytest plugin that really supported the Twisted reactor in place. Re-inventing wheels like coverage just doesn't seem sensible at this point.
I don't expect to invest a lot of time in pytest-twisted, but I am curious what you mean by supporting the Twisted reactor in place. A new reactor for each test?
FWIW I'm also +1 to try to get Twisted support for pytest, stdlib unittest or nose. I guess that many of the existing Twisted based projects are using `trial` so things are not that easy.

For now, for my project I am running Twisted tests with trial and some custom reactor start and stop, but the long term plan is to migrate to pytest.

Regards
--
Adi Roiban
On Sun, Feb 20, 2022 at 1:44 PM Chris Withers <chris@withers.org> wrote:
On 16/02/2022 17:41, Jean-Paul Calderone wrote:
On Fri, Feb 11, 2022 at 5:36 PM Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:
I know that you can hack around this situation roughly like this:
python -m coverage run -m twisted.trial ...
but this has some shortcomings.
1. If trial --coverage exists shouldn't it be the *good* option?
2. python -m coverage run -m twisted.trial -jN ... is a bad time. How about some coverage measurement that's multi-core friendly? It's a /real/ drag going from a 30 second no-coverage test run using 16 cores to a 15 minute coverage-measuring run on a single core.
Does anyone agree that this is something short of an ideal situation? Is anyone interested in helping address it?
Anyone?
At this point, it feels like any available energy could be more usefully employed in getting a pytest plugin that really supported the Twisted reactor in place. Re-inventing wheels like coverage just doesn't seem sensible at this point.
Thanks for this input Chris. Personally, I have no interest in pursuing work on integrating Twisted and pytest at this time - so I'll leave that to others (given the number of other posts in this thread that are about pytest and not trial, I suppose there is interest in that direction).

Considering the lack of posts from people interested in doing the work on trial (but thanks, glyph!) for now I will conclude that indeed trial does not have much of a developer community behind it - so I will probably not immediately make plans to undertake any serious efforts to maintain or improve it myself (since I doubt that I have the resources to meaningfully accomplish the necessary work on my own).
I've noticed trial itself routinely leaks failures across tests, resulting in some random test down the line spuriously failing. I've hacked up tooling to run `trial -u` for 10s for each test case to try and find these happening, but feels like something the test runner should really cater for.
This is neither here nor there but this sounds like what `--force-gc` is for and I've almost always had success attributing a failure to the correct test using this flag. This is exactly the kind of poor developer experience that I would love to work with a team of folks on improving, though.
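[For anyone who hasn't tried the flag, it goes on an ordinary run - `yourpackage` is a placeholder target; `--force-gc` is a real trial option:

# Have trial run gc.collect() before and after each test case, so that
# errors from delayed cleanup get reported against the test that caused
# them. The run gets slower, but failures stop jumping between tests.
trial --force-gc yourpackage
]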
I guess a lot of this, and indeed a non-testing use case I have, would be served by asyncio-style event loops rather than one single monolithic and unrestartable reactor.
The fact that trial shares its reactor with test code is another area where trial is lacking, certainly.
This isn't meant to come across as negatively as it may well seem; there's a reason I haven't ripped Twisted out of the major project I'm involved in where it's used, but Twisted as a whole and trial in particular really are feeling their 20yrs of age ;-)
I'm not sure this is so much about age as about lack of interest and lack of maintenance. I am certain that Twisted as a whole and trial could quite readily have many of their rough edges smoothed out - if only anyone cared to try.

Jean-Paul
On Feb 21, 2022, at 6:40 AM, Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:
On Sun, Feb 20, 2022 at 1:44 PM Chris Withers <chris@withers.org> wrote:
On 16/02/2022 17:41, Jean-Paul Calderone wrote:
On Fri, Feb 11, 2022 at 5:36 PM Jean-Paul Calderone <exarkun@twistedmatrix.com> wrote:
I know that you can hack around this situation roughly like this:
python -m coverage run -m twisted.trial ...
but this has some shortcomings.
1. If trial --coverage exists shouldn't it be the *good* option?
2. python -m coverage run -m twisted.trial -jN ... is a bad time. How about some coverage measurement that's multi-core friendly? It's a /real/ drag going from a 30 second no-coverage test run using 16 cores to a 15 minute coverage-measuring run on a single core.
Does anyone agree that this is something short of an ideal situation? Is anyone interested in helping address it?
Anyone?
At this point, it feels like any available energy could be more usefully employed in getting a pytest plugin that really supported the Twisted reactor in place. Re-inventing wheels like coverage just doesn't seem sensible at this point.
Thanks for this input Chris. Personally, I have no interest in pursuing work on integrating Twisted and pytest at this time - so I'll leave that to others (given the number of other posts in this thread that are about pytest and not trial, I suppose there is interest in that direction).
Considering the lack of posts from people interested in doing the work on trial (but thanks, glyph!) for now I will conclude that indeed trial does not have much of a developer community behind it - so I will probably not immediately make plans to undertake any serious efforts to maintain or improve it myself (since I doubt that I have the resources to meaningfully accomplish the necessary work on my own).
I’m in a similar place myself. I could probably put more effort forth if we could get a bit more of a small commitment from more other developers. I do not want to spend my time as a full-time unpaid twisted maintainer, or simply slogging through reviewing old tickets while never developing any interesting new features myself. But, it seems like unless I do, then my own feature development will simply languish forever at the end of a year-long queue.

It makes me wish we could have a sort of open source mutually-assured maintenance system, where we all put in some number of hours and get some small reward (like bragging rights, a little badge?) out of meeting that commitment. But that also requires some volunteers[1] to go build it.

This trial maintenance is also something I’m definitely interested in, but I don’t think just a small commitment from me and JP would be quite enough to get it somewhere meaningful.

[1]: not me
I've noticed trial itself routinely leaks failures across tests, resulting in some random test down the line spuriously failing. I've hacked up tooling to run `trial -u` for 10s for each test case to try and find these happening, but feels like something the test runner should really cater for.
This is neither here nor there but this sounds like what `--force-gc` is for and I've almost always had success attributing a failure to the correct test using this flag. This is exactly the kind of poor developer experience that I would love to work with a team of folks on improving, though.
Yeah, Trial could absolutely be more discoverable. And probably have some more facilities for managing leaks of global reactor state, to make it clearer what’s going on when you get this sort of inscrutable mess of a failure.
I guess a lot of this, and indeed a non-testing use case I have, would be served by asyncio-style event loops rather than one single monolithic and unrestartable reactor.
The fact that trial shares its reactor with test code is another area where trial is lacking, certainly.
This isn't meant to come across as negatively as it may well seem; there's a reason I haven't ripped Twisted out of the major project I'm involved in where it's used, but Twisted as a whole and trial in particular really are feeling their 20yrs of age ;-)
I'm not sure this is so much about age as about lack of interest and lack of maintenance. I am certain that Twisted as a whole and trial could quite readily have many of their rough edges smoothed out - if only anyone cared to try.
I think it’s more about the trying than the caring. I get the sense that folks do still care about Twisted but most of the core people are quite busy.

So is anyone waiting out there in the wings interested in doing some more of the day-to-day of just reviewing tickets, responding to contributors, and keeping discussions like this one going? i.e. recording mailing list & chat consensus on tickets, checking to make sure PRs are getting updated, etc? I know the project has a bit of an intimidating reputation due to some of its more complex areas, but quite a lot of what needs to be done here is quite simple and could be a great learning opportunity.

-g
On 21/02/2022 12:45, Adi Roiban wrote:
On Sun, 20 Feb 2022 at 22:48, Kyle Altendorf <sda@fstab.net> wrote:
On Sun, Feb 20, 2022, at 13:44, Chris Withers wrote:
At this point, it feels like any available energy could be more usefully employed in getting a pytest plugin that really supported the Twisted reactor in place. Re-inventing wheels like coverage just doesn't seem sensible at this point.
I don't expect to invest a lot of time in pytest-twisted, but I am curious what you mean by supporting the Twisted reactor in place. A new reactor for each test?
FYIW I'm also +1 to try to get Twisted support for pytest , stdlib unit test or nose.
I think nose is pretty much dead, and I'm not sure I know anyone using the stdlib discovery stuff...
I guess that many of the existing Twisted based projects are using `trial` so things are not that easy.
If you're doing a Twisted project, I think you kinda have to be using trial. I wrote carly to cover the "real networking in tests" pattern; no docs, but its test suite shows you what it does, eg: https://github.com/cjw296/carly/blob/master/tests/test_web_site.py
For now, for my project I am running Twisted tests with trial and some custom reactor start and stop, but the long term plan is to migrate to pytest.
Interested to hear more about custom reactor start and stop...

Chris
On 21/02/2022 14:40, Jean-Paul Calderone wrote:
Considering the lack of posts from people interested in doing the work on trial (but thanks, glyph!) for now I will conclude that indeed trial does not have much of a developer community behind it - so I will probably not immediately make plans to undertake any serious efforts to maintain or improve it myself (since I doubt that I have the resources to meaningfully accomplish the necessary work on my own).
I'm sorry to be brutally honest here, but this covers Twisted, not just trial. All the energy is around asyncio stuff like FastAPI and the stack it's built on.

Twisted is great, but feels seriously dated (eg: humpyCase, yield instead of await, trial instead of pytest, "only one event loop that you can't restart") and debugging is hellish. I have a massive amount of respect for the framework, to be clear, but I also understand why there's not much of an active developer community around it: it's not a thing that's fun to work on; 20yrs of history rarely is...
I've noticed trial itself routinely leaks failures across tests, resulting in some random test down the line spuriously failing. I've hacked up tooling to run `trial -u` for 10s for each test case to try and find these happening, but feels like something the test runner should really cater for.
This is neither here nor there but this sounds like what `--force-gc` is for
Yep, we use --force-gc on all our CI runs, but plenty still leak through. The most robust way I've found is running `trial -u` in a subprocess for 10s per test and "passing" if it's still going. It's certainly niggly that there's no way to exit trial -u without an ugly stack trace and a nonzero return code.

cheers,

Chris
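[A rough sketch of the harness Chris describes, assuming GNU coreutils `timeout`; the test names are placeholders:

# Run each suspect test under `trial -u` for 10 seconds.
# A clean test loops until `timeout` kills it (exit status 124);
# any other exit means `trial -u` stopped on a failure.
for test in yourpackage.test.test_foo yourpackage.test.test_bar; do
    timeout 10 trial -u "$test" >/dev/null 2>&1
    if [ $? -eq 124 ]; then
        echo "PASS  $test"
    else
        echo "LEAK? $test"
    fi
done
]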
On 21/02/2022 20:44, Glyph wrote:
I’m in a similar place myself. I could probably put more effort forth if we could get a bit more of a small commitment from more other developers. I do not want to spend my time as a full-time unpaid twisted maintainer, or simply slogging through reviewing old tickets while never developing any interesting new features myself.
Heh, have the tickets auto-close after 30 days... While I find this infuriating myself, it's certainly effective triage and a lot of python projects with a high ratio of users to maintainers are doing it. If an issue is hitting a lot of people, it'll get re-raised soon enough ;-)
It makes me wish we could have a sort of open source mutually-assured maintenance system, where we all put in some number of hours and get some small reward (like bragging rights, a little badge?) out of meeting that commitment. But that also requires some volunteers[1] to go build it.
This trial maintenance is also something I’m definitely interested in, but I don’t think /just/ a small commitment from me and JP would be quite enough to get it somewhere meaningful.
Speaking just for myself, my problem is that while I respect Twisted, hammer it really heavily, and am massively impressed by how it behaves, I don't enjoy working on the code base :-/ I've yet to hit a "bug that I need to fix" in the 4 years since I came back to Twisted, and that's a massive testament to the quality of the software...

Chris
On 22/02/2022 7:44 am, Glyph wrote:
I’m in a similar place myself. I could probably put more effort forth if we could get a bit more of a small commitment from more other developers. I do not want to spend my time as a full-time unpaid twisted maintainer, or simply slogging through reviewing old tickets while never developing any interesting new features myself. But, it seems like unless I do, then my own feature development will simply languish forever at the end of a year-long queue.
It makes me wish we could have a sort of open source mutually-assured maintenance system, where we all put in some number of hours and get some small reward (like bragging rights, a little badge?) out of meeting that commitment. But that also requires some volunteers[1] to go build it.
This trial maintenance is also something I’m definitely interested in, but I don’t think /just/ a small commitment from me and JP would be quite enough to get it somewhere meaningful.
I would like to contribute more to Twisted, but I've found the review process discouraging. I suppose I would call myself a "frustrated Twisted developer". Maintaining trial is beyond my knowledge, but are there any other small desired tasks?

[I've not felt competent to do code reviews until I'd had *one* PR accepted]

Ian
On 22. Feb 2022, at 14:03, meejah <meejah@meejah.ca> wrote:
On Tue, 22 Feb 2022, at 02:44, Chris Withers wrote: [..] yield instead of await, trial instead of pytest, [..]
I don't have any problem using "await" + "async-def" and pytest in Twisted-based projects. Are you having specific problems with these parts?
FWIW I haven’t touched trial in years either. I use pytest-twisted together with Twisted’s TestCase without any problems in all of my Twisted projects both at work and in FOSS.
On 2/22/22 21:10, Ian Haywood wrote:
On 22/02/2022 7:44 am, Glyph wrote:
I’m in a similar place myself. I could probably put more effort forth if we could get a bit more of a small commitment from more other developers. I do not want to spend my time as a full-time unpaid twisted maintainer, or simply slogging through reviewing old tickets while never developing any interesting new features myself. But, it seems like unless I do, then my own feature development will simply languish forever at the end of a year-long queue.
It makes me wish we could have a sort of open source mutually-assured maintenance system, where we all put in some number of hours and get some small reward (like bragging rights, a little badge?) out of meeting that commitment. But that also requires some volunteers[1] to go build it.
This trial maintenance is also something I’m definitely interested in, but I don’t think /just/ a small commitment from me and JP would be quite enough to get it somewhere meaningful.
I would like to contribute more to Twisted, but I've found the review process discouraging. I suppose I would call myself a "frustrated Twisted developer".
Maintaining trial is beyond my knowledge, but are there any other small desired tasks?
[I've not felt competent to do code reviews until I'd had *one* PR accepted]
Ian
+1 for Ian's comment, I see myself in the same position.

After having dragged nevow from py2 to py3 to a point where most (or all?) of the trial tests ran green, I just gave up on breaking the myriad of very simple changes into single PRs. I'm really in for proper adherence to the process, whatever it might entail, but trial itself was more of a stumbling block than a honed tool providing the right amount of insight into tests and why they failed.

trial was extremely difficult to handle for all nevow/athena related tests where the tests spawn subprocesses dealing with JavaScript. I don't know how other testing frameworks would fare in such multilanguage/cross-network situations, but being able to have clean third party real network clients tied into the framework for protocol testing would have made me tackle breaking athena out of nevow and refreshing a framework that is, in my opinion, still unrivaled in its ease of use for bidirectional RPC.

Mahalo,
Werner
On Wed, Feb 23, 2022 at 2:28 AM Ian Haywood <ian@haywood.id.au> wrote:
On 22/02/2022 7:44 am, Glyph wrote:
I’m in a similar place myself. I could probably put more effort forth if we could get a bit more of a small commitment from more other developers. I do not want to spend my time as a full-time unpaid twisted maintainer, or simply slogging through reviewing old tickets while never developing any interesting new features myself. But, it seems like unless I do, then my own feature development will simply languish forever at the end of a year-long queue.
It makes me wish we could have a sort of open source mutually-assured maintenance system, where we all put in some number of hours and get some small reward (like bragging rights, a little badge?) out of meeting that commitment. But that also requires some volunteers[1] to go build it.
This trial maintenance is also something I’m definitely interested in, but I don’t think *just* a small commitment from me and JP would be quite enough to get it somewhere meaningful.
I would like to contribute more to Twisted, but I've found the review process discouraging. I suppose I would call myself a "frustrated Twisted developer".
Maintaining trial is beyond my knowledge, but are there any other small desired tasks?
[I've not felt competent to do code reviews until I'd had *one* PR accepted]
Hi Ian!

I think "maintaining trial" is a big job to ask anyone to *volunteer* to do. I hope that "help maintain trial" is a little bit more reasonable - if we can get 3 or 4 people involved. Also, it has now been so long since anyone active on Twisted worked on trial that - at least to some extent - maintaining it is beyond *anyone's* knowledge. However, it's also not the most complex piece of software ever, so for someone interested who has worked on it in the past, it will probably come back fairly quickly, and for someone interested who has not worked on it before, the learning curve should not be particularly painful.

I deeply sympathize with the way the moribund review workflow serves as a strongly discouraging force in Twisted development. This is part of the reason I want to try to raise some interest in a specific area of Twisted before diving in. A few people working in one area can get a lot more done than a few people each working in their own separate areas. I believe that current project policy is that a non-committer may approve a committer's PR - so technically, you and I could go off and make progress on trial (because at least one of us is a committer). I know glyph expressed a willingness to contribute too - and I won't turn that down - but it would be great to have one or two other people get involved as well. I think there's a lot of relatively straightforward work to do on trial and I'd rather glyph spend any Twisted hacking time he might have on more complex or subtle areas.

As far as small, desirable work - I still think switching trial to coverage.py would be a good thing (and I don't think anyone in the thread thought that specifically was a bad idea, rather people weren't convinced working on trial *at all* was a good idea). I don't know exactly how easy that switch will be - it wouldn't surprise me if there is some non-obvious plumbing involved. All the pieces *look* straightforward though.

Apart from that, a couple other trial issues that bothered me recently are:

* https://twistedmatrix.com/trac/ticket/10311
* https://twistedmatrix.com/trac/ticket/10312

I think these are probably smaller so might be better starting places.

So ... anyone else up for focusing some attention on trial?

Jean-Paul
On 23/02/2022 22:04, Jean-Paul Calderone wrote:
Apart from that, a couple other trial issues that bothered me recently are:
* https://twistedmatrix.com/trac/ticket/10311
* https://twistedmatrix.com/trac/ticket/10312
If these bother you, I feel like I must be missing something obvious: What's the correct way to end a trial -u run such that you don't get an ugly traceback and a non-zero return code when it's been running for a while and no tests have failed?

cheers,

Chris
On 24/02/2022 7:18 pm, Chris Withers wrote:
On 23/02/2022 22:04, Jean-Paul Calderone wrote:
Apart from that, a couple other trial issues that bothered me recently are:
* https://twistedmatrix.com/trac/ticket/10311
* https://twistedmatrix.com/trac/ticket/10312
If these bother you, I feel like I must be missing something obvious: What's the correct way to end a trial -u run such that you don't get an ugly traceback and a non-zero return code when it's been running for a while and no tests have failed?
I have to confess I've never used -u and I would assume it's for catching some subtle nondeterministic bugs (hardware, threads or other strangeness). If it has a more general use, I'd be keen to know.

Anyway, the problem is its behaviour changes with -jN. Internally, -jN causes a separate test runner to be used, see

https://github.com/twisted/twisted/blob/trunk/src/twisted/trial/_dist/disttr...

this is where the difference in behaviour lies. The fix seems straightforward enough (I think the two tickets can be fixed in one PR). Not sure how to write a test case though.

Ian
cheers,

Chris
If you know how to write the implementation but can’t figure out a test, perhaps make a draft PR so someone else might chime in? :)
On Feb 24, 2022, at 2:19 PM, Ian Haywood <ian@haywood.id.au> wrote:
this is where the difference in behaviour lies. The fix seems straightforward enough (I think the two tickets can be fixed in one PR)
Not sure how to write a test case though
On Thu, Feb 24, 2022 at 5:22 PM Ian Haywood <ian@haywood.id.au> wrote:
On 24/02/2022 7:18 pm, Chris Withers wrote:
On 23/02/2022 22:04, Jean-Paul Calderone wrote:
Apart from that, a couple other trial issues that bothered me recently are:
* https://twistedmatrix.com/trac/ticket/10311 <https://twistedmatrix.com/trac/ticket/10311> * https://twistedmatrix.com/trac/ticket/10312 <https://twistedmatrix.com/trac/ticket/10312>
If these bother you, I feel like I must be missing something obvious: What's the correct way to end a trial -u run such that you don't get an ugly traceback and a non-zero return code when it's been running for a while and no tests have failed?
I have to confess I've never used -u and I would assume it's for catching some subtle nondeterministic bugs (hardware, threads or other strangeness)
if it has a more general use, i'd be keen to know
Anyway, the problem is its behaviour changes with -jN
Internally -jN causes a separate test runner to be used, see
https://github.com/twisted/twisted/blob/trunk/src/twisted/trial/_dist/disttr...
this is where the difference in behaviour lies. The fix seems straightforward enough (I think the two tickets can be fixed in one PR)
Not sure how to write a test case though
From just a quick skim of the part of the implementation dealing with `--until-failure` behavior, I guess that I would try to refactor so that `trial -u -jN` and `trial -u` share their implementation of this functionality instead of each implementing it separately. If `trial -u` already has tests and you can get rid of the dedicated `trial -u -jN` code that's a big step towards the testing goal - and always better to delete unnecessary code than to keep it, fix it, and have to write and maintain tests for it.

However, like I said, I only gave the code a brief skim. For all I know, there is some major hurdle in the way of such a refactoring.

Jean-Paul
Ian
cheers,
Chris
On 26/02/2022 1:00 am, Jean-Paul Calderone wrote:
On Thu, Feb 24, 2022 at 5:22 PM Ian Haywood <ian@haywood.id.au> wrote:
From just a quick skim of the part of the implementation dealing with `--until-failure` behavior, I guess that I would try to refactor so that `trial -u -jN` and `trial -u` share their implementation of this functionality instead of each implementing it separately. If `trial -u` already has tests and you can get rid of the dedicated `trial -u -jN` code that's a big step towards the testing goal - and always better to delete unnecessary code than to keep it, fix it, and have to write and maintain tests for it.
However, like I said, I only gave the code a brief skim. For all I know, there is some major hurdle in the way of such a refactoring.
It appears trial is almost two programs in one: a distributed and a non-distributed version. Obviously -jN selects which version you get, and the decision is made very early: in scripts/trial. So IMHO a single implementation of -u isn't possible.

I have a minimalist solution to #10312: https://github.com/twisted/twisted/pull/1702

Ian
On Thu, Mar 10, 2022 at 10:41 PM Ian Haywood <ian@haywood.id.au> wrote:
On 26/02/2022 1:00 am, Jean-Paul Calderone wrote:
On Thu, Feb 24, 2022 at 5:22 PM Ian Haywood <ian@haywood.id.au>
From just a quick skim of the part of the implementation dealing with `--until-failure` behavior, I guess that I would try to refactor so that `trial -u -jN` and `trial -u` share their implementation of this functionality instead of each implementing it separately. If `trial -u` already has tests and you can get rid of the dedicated `trial -u -jN` code that's a big step towards the testing goal - and always better to delete unnecessary code than to keep it, fix it, and have to write and maintain tests for it.
However, like I said, I only gave the code a brief skim. For all I know, there is some major hurdle in the way of such a refactoring.
it appears trial is almost two programs in one: a distributed and a non-distributed version. Obviously -jN selects which version you get and the decision is made very early: in scripts/trial
So IMHO single implementation of -u isn't possible.
It seems like some substantial refactoring will be required before a single implementation is possible, anyway. One idea that might be worth exploring is to have trial without `--jobs` be equivalent to `trial --jobs=1` (with no degradation in functionality). This would be one way to remove one of the two programs in trial.
I have a minimalist solution to #10312
Thanks. That is so succinct that it seems worth landing quickly and dealing with any further factoring improvements to trial separately. I took the liberty of pushing a test for the fix and a news fragment. Sadly this pushes the total line count of the diff above 10 ... but only to 11. I hope someone is available for a prompt review.

If any non-committer wants to start learning how to get involved in Twisted development, this is a great ticket / PR to jump in with - the code change itself is very simple, leaving plenty of attention for learning the process.

Jean-Paul
participants (10)

- Adi Roiban
- Chris Withers
- Colin Dunklau
- Glyph
- Hynek Schlawack
- Ian Haywood
- Jean-Paul Calderone
- Kyle Altendorf
- meejah
- Werner Thie