hi there, folks:
I'd really like to release 0.7.0 but I would like it to be at least a
little bit tested before I do so. Could those of you with CVS trees check
everything out and see if it performs as advertised? Deeper bugs than
that will have to wait for the next release, but I'd at least like to know
if it works for someone other than me.
glyph @ twistedmatrix.com
I submitted this patch to drop support for Python 3.3:
1. Python 3.3 was declared EOL on Sep. 29, 2017
2. In terms of major Linux distributions, Python 3.4 ships in Debian 8,
Ubuntu 14.04, and Fedora 21
So after this patch, Twisted would run on:
Python 2.7 and Python 3.4+
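For reference, the usual way to enforce such a floor at install time is a
python_requires specifier in setup.py. This is an illustrative sketch only,
not necessarily how the patch itself is written:

# setup.py -- illustrative sketch only
from setuptools import setup

setup(
    name="Twisted",
    # Allow 2.7, refuse the EOL 3.0-3.3 series, accept 3.4 and up:
    python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*",
)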
I'm not sure how to write this email, but let me try. I'd like to address
something that I see as a limitation in Twisted. It might be that my use
case is odd or that I'm outside the scope of Twisted, but nonetheless, I
hope this can be a relevant topic.
Unhandled exceptions can leave the application in a half-working state,
and in-app observability of them is hard to obtain. Instead of the whole
application terminating, the rest of the app keeps running, completely
unaware of the failure.
This applies to unhandled errbacks in Deferreds and, in principle, to any
other reactor callback. For example, it can occur in Deferreds used
internally in Twisted, where the caller has no direct access to the
failing object.
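To make the failure mode concrete, here is a minimal sketch (mine, not from
the Twisted documentation) showing that the error is only reported when the
Deferred is garbage-collected:

import gc
import sys
from twisted.internet import defer
from twisted.python import log

log.startLogging(sys.stdout)

d = defer.succeed(None)
d.addCallback(lambda _: 1 / 0)  # ZeroDivisionError, and no errback attached

del d
gc.collect()  # only now does "Unhandled error in Deferred" show up in the log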
As a user of Twisted, I would like the option to catch these unhandled
exceptions, or to fail my application completely when they occur, as
would be expected in a sequential program.
I have a larger application using many simultaneous TCP, UDP and UNIX
connections. As is typical with Twisted, the app is organized into
functions, with most of the heavy lifting done in black-box-ish modules.
There is, of course, no guarantee that everything works smoothly, and if
something fails, the entire application stops as a clear indication of
the failure. However, on some occasions this application has been found
half-dead, due to a failure in a reactor-based callback that could only
be seen by reading the logs. The main application is unfortunately
unaware of its own failure.
AFAIK Twisted has no direct mechanism for handling errors that occur when
user code is called from the reactor. Or even worse, the caller does not
know about the failure unless it has direct access to the failing object.
I believe this is more dangerous to reliability than a plainly failing
application, because the failure is so much less visible.
Let's say the following code is used in a running application:

from twisted.internet.task import LoopingCall

class Foo(object):
    def __init__(self):
        self.loop = LoopingCall(self.cb)
        self.loop.start(1.0).addCallback(self.done)
    def cb(self):
        self.count += 1  # bug: self.count was never initialized
    def done(self, result):
        print "Won't happen"  # cb raises on the first run, so this never fires

# Main app does this:
foo = Foo()
The code will fail due to the programming error in cb, but the calling
application won't fail and thinks everything is fine. The only way to
debug errors like this is to dig through the logs.
Everywhere a function is called from the reactor, the user is responsible
for handling all exceptions; that is the current contract. However, this
is not completely straightforward. try/except is great for catching
expected errors, but it's easy to forget or ignore the unexpected ones,
like in the example above. The safeguard for this would be something like:

try:
    self.count += 1
except Exception:
    print "Whoops. Unexpected"
    self.signal_main_app()
And in a large application, there are many entry points (e.g. methods in
a protocol handler), so the code becomes very cluttered. Plus, it puts the
responsibility on the user to implement the signal_main_app() framework.
The ideal solution would be a way to configure Twisted to report
unhandled exceptions. It could be an addSystemEventTrigger() hook, a
software signal, a process signal, or perhaps a global
execute-last-errback function, possibly only in a debug context.
With this, one could inform the application that a Deferred has failed
without its errback being handled. The main application is then given the
choice to respond appropriately, for instance by shutting down.
Is my concern about the non-observability of unhandled exceptions at all
warranted? Is my thinking wrong? Are there other kinds of solutions to
this problem? (I would like to avoid having to patch Twisted to do it.)
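For what it's worth, the closest approximation I have found with today's
APIs (my own sketch, not an established pattern from the docs) is a global
log observer that turns any logged error into a hard shutdown:

from twisted.internet import reactor
from twisted.python import log

def die_on_error(event):
    # Logged Failures, including "Unhandled error in Deferred",
    # arrive here with isError set; crash the whole app on any of them.
    if event.get("isError"):
        log.removeObserver(die_on_error)  # avoid re-entrancy during shutdown
        reactor.stop()

log.addObserver(die_on_error)

The caveat is that an unhandled Deferred error is only logged when the
Deferred is garbage-collected, so the shutdown can come late.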
I'm pleased to announce the release of txAWS 0.5.0. txAWS is a library for
interacting with Amazon Web Services (AWS) using Twisted.
You can download the release from PyPI <https://pypi.python.org/pypi/txAWS>.
Since the last release, the following enhancements have been made:
- txaws.s3.client.S3Client.get_bucket now accepts a ``prefix`` parameter
  for selecting a subset of S3 objects. (#78)
- txaws.ec2.client.EC2Client now has a ``get_console_output`` method
  wrapping the ``GetConsoleOutput`` API. (#82)
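Here is a quick, untested sketch of the new ``prefix`` parameter; the
bucket name is made up, and the attribute names on the listing object are
from my reading of txAWS, so double-check before copying:

from twisted.internet import reactor
from txaws.service import AWSServiceRegion

# Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY.
s3 = AWSServiceRegion().get_s3_client()

def show(listing):
    for item in listing.contents:
        print(item.key)

d = s3.get_bucket("example-bucket", prefix="logs/")
d.addCallback(show)
d.addBoth(lambda _: reactor.stop())
reactor.run()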
Thanks to everyone who contributed and to Least Authority TFA GmbH
<https://leastauthority.com/> for sponsoring my work on this release.
With more and more ecosystem projects on GitHub, more or less maintained by the same team that maintains Twisted proper, our "review keyword in Trac" workflow is increasingly awkward.
GitHub itself has an individualized "review queue" of sorts - https://github.com/pulls/review-requested - where you can see open pull requests where your review has been requested. What makes this particularly interesting is that you can request review from a team, rather than from an individual.
Of course, like everything interesting (ahem, labels), you need repo:write permission to do this, so we'd still need a bot.
It appears that the CODEOWNERS file - https://help.github.com/articles/about-codeowners/ - can automatically request code reviews when pull requests touch relevant code. So we could add "* @twisted/$PROJECT-reviewers" to that file in each project and, I believe, get a fairly orderly review queue that each org member can see in a nice unified view; and by using -reviewers teams, folks could join or leave review responsibility for any individual repo as they wish.
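For instance, the file for the main repository might look like this
("twisted-reviewers" is a hypothetical team name; the team has to actually
exist in the org):

# .github/CODEOWNERS (GitHub also accepts the file at the repo root)
# "*" matches every path, so every pull request requests a team review:
* @twisted/twisted-reviewers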
Actually setting up all the stuff needed to test this requires more time than I have on my hands at the moment. Does anyone else know for sure if this works as I imagine it would - specifically, that third party contributors would get an automatic review request opened, at least initially?
We might still need a bot to re-request reviews, but this seems like a much less ad-hoc way of doing this process migration than labels.
I often refer to Ampoule as a way to get simple process pooling with Twisted. However, it's been somewhat moribund (no maintenance, no py3 support) for the better part of a decade, despite more or less doing what it needs to do.
As such, I've created a friendly fork (hopefully only until Valentino checks his email and officially retires as the maintainer :)), at https://github.com/glyph/ampoule, and released it as https://pypi.python.org/pypi/ampoul3.
If you've wanted to use Ampoule, or something like it, with Python 3, check it out.
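If you haven't used it before, the heart of the API looks roughly like
this; I'm reconstructing from memory of the README, so names and signatures
may have drifted, and this should be treated as a sketch:

import os
from twisted.internet import defer, task
from twisted.protocols import amp
from ampoule import child, pool

class Pid(amp.Command):
    response = [(b"pid", amp.Integer())]

class PidChild(child.AMPChild):
    @Pid.responder
    def pid(self):
        # This runs in the pooled subprocess, not the parent.
        return {"pid": os.getpid()}

@defer.inlineCallbacks
def main(reactor):
    pp = pool.ProcessPool(PidChild)
    yield pp.start()
    result = yield pp.doWork(Pid)
    print(result["pid"])
    yield pp.stop()

task.react(main)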