hi there, folks:
I'd really like to release 0.7.0 but I would like it to be at least a
little bit tested before I do so. Could those of you with CVS trees check
everything out and see if it performs as advertised? Deeper bugs than
that will have to wait for the next release, but I'd at least like to know
if it works for someone other than me.
Hello all, my first post here - only been using Twisted for about a
month and am also a relative newcomer to Python but have been coding
professionally for 20+ years. I was attracted to Twisted and Python
for a particular project purely because after research it seemed to be
the best tool for the job, and have actually been enjoying both Python
and Twisted much more than I ever thought I would.
The project I am coding towards is creating a sensor data collection
gateway. First iteration needs are simply pulling data from ModBus TCP
slave PLCs and writing it to a MySQL database, but goals beyond that
are making the source of the data and its destination(s) very
flexible(pluggable). Therefore I am trying to create a good clean
architecture from the outset so as I iterate forwards I don't finish
up having to take too many steps backwards before heading forwards.
I am using pymodbus to pull the data which works well for my devices,
has a twisted async API, and have created more than a few prototypes
that demonstrate all works as I expect. Where I am a bit stalled is
getting to grips with a good architecture that fulfills my needs - my
intention is that the application that meets my first goal will be a
solid foundation for the more flexible iterations to follow.
The new ClientService class seems like it will fit my needs very
closely but I am struggling with how to handle the reconnections... I
have been using the whenConnected() method to grab the Protocol for
the initial connection and then use a method of this to poll the
connected slave. When the connection is lost I get an errback from
this method's deferred, which I use as a signal to abandon the Protocol
and call whenConnected() again... at this point I have an issue,
though, as the returned deferred immediately fires with the same
Protocol that has just lost its connection, and thus I loop...
Before I got on this mailing list I posted this question to Stack
Overflow with some example code, but it has not drawn a solution or
much attention there yet.
As I say there, I realize I have probably just made a bad pattern
choice for how to use this API, but I have not been able to work out a
better choice which seems clean and fits my needs/understanding well.
I have tried deriving my own Protocol/Factory and handling the polling
there, but this seems to get really messy once I start adding code at
that level to get the collected data to a destination, since it
involves giving the Protocol too much knowledge of how the data is to
be consumed.
Any advice, good patterns, or pointers to other projects which do
something similar would be appreciated,
Daniel Sutcliffe <dansut(a)gmail.com>
Are you in Portland, Oregon this week? Perhaps for PyCon? Want to have
dinner with other people who want to have dinner with people who know
Twisted?
McMenamins Broadway Pub
1504 N.E. Broadway
Portland, OR 97232
7:00pm, Tuesday, May 31
There may be some people gathering at the convention venue shortly
beforehand to all migrate over in a gaggle, but it's not far.
Send me a note if you're planning on coming so I can get an idea of the
headcount, and watch this space for updates.
The project I am working on uses pymodbus, which I am sure shares a
fairly common attribute with many other modules: it uses Python's
standard logging mechanism. That is a very reasonable choice even for a
module that supports Twisted, since the library can also be used
entirely synchronously and thus would not want a required dependency on
Twisted.
It struck me that it would be great to be able to redirect the
standard logging library to use twisted.logger by some sort of 'Clever
Monkey Patching' and that this may be a relatively common
requirement... however after extensive searching, and asking on the
pymodbus list, I can't find any evidence that such a thing has ever
been attempted or discussed.
The reverse mechanism of sending twisted.logger's output to the
standard library is of course handled by the
twisted.logger.STDLibLogObserver (and similar in twisted legacy
logging) but the documentation for this even suggests why this is a
bad idea: 'Warning: specific logging configurations (example: network)
can lead to this observer blocking.' That seems to me all the more
reason to attempt it the other way around...
Am I crazy to even think this? Is it just the rambling of a
Python/Twisted newb? Or is there something I'm missing that would make
this impossible to do generically, or even awkward to capture in a
rough recipe?
I do appreciate that twisted.logger offers a more feature-rich
(structured) API, and the standard logging API would only be able to
provide a level and text, but to my mind that would still be better
than losing possibly useful log messages from the modules I use.
If anyone can enlighten me I would be most appreciative,
Daniel Sutcliffe <dansut(a)gmail.com>
Hooray! We're on GitHub now. Next up: the question of how to deal with pull requests.
It occurs to me that what we really need from our code review system is mainly one thing: the review queue - a single place for reviewers to look to find things that need to be reviewed. This is important because proposed changes need to be responded to in a timely manner, so that the code in them doesn't go stale and contributors don't get frustrated. Our resources for doing this are limited, of course, and sometimes we fall short of the objective, but the point is that we need to apply those limited resources to it.
The operations on the queue are:
Proposing a change should put it into the queue.
Accepting a change should remove it from the queue.
Reviewing a change should remove it from the queue.
Responding to review feedback should re-add it to the queue.
A reviewer should be able to examine just the things in the queue so they can quickly grab the next one, without seeing noise.
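Those four operations, plus the reviewer's noise-free view, can be modeled as nothing more than a set; a toy sketch, with an invented ticket number:

```python
class ReviewQueue:
    """Toy model of the review-queue semantics described above."""

    def __init__(self):
        self._queue = set()

    def propose(self, change):
        # Proposing a change puts it into the queue.
        self._queue.add(change)

    def accept(self, change):
        # Accepting a change removes it from the queue.
        self._queue.discard(change)

    def review(self, change):
        # A review that doesn't accept also removes it from the queue.
        self._queue.discard(change)

    def respond(self, change):
        # Responding to review feedback re-adds it.
        self._queue.add(change)

    def pending(self):
        # What reviewers examine: just the queue, no noise.
        return set(self._queue)

q = ReviewQueue()
q.propose("#8300")
q.review("#8300")            # reviewer requests changes
q.respond("#8300")           # submitter responds to feedback
# q.pending() is now {"#8300"}
```

Everything below is just a question of which Trac or GitHub actions trigger which of these transitions.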
Our current workflow maps this into Trac via the following:
Proposing: add the "review" keyword.
Accepting: remove the "review" keyword, merge.
Reviewing: remove the "review" keyword, reassign.
Responding: add the "review" keyword again.
It is therefore tempting to map it into GitHub via labels and webhooks and bot workflows. However, I think a better mapping would be this:
Proposing: Just open a pull request. Any open pull request should be treated as part of the queue.
Accepting: A committer pushes the big green button; this merges the change and removes it from the queue.
Reviewing: This is the potentially slightly odd part. I believe a review that doesn't result in acceptance should close the PR. We need to be careful to always include some text that explains that closing a PR does not mean that the change is rejected, just that the submitter must re-submit. Initially this would just mean opening a new PR, but Mark Williams is working on a bot to re-open pull requests when a submitter posts a "please review" comment: https://github.com/markrwilliams/txghbot
Responding: A submitter can open a new PR, or, once we start running txghbot, reopen their closed PR.
The one thing that this workflow is missing from Trac is a convenient way for committers, having eyeballed a patch for any obvious malware, to send it to the buildbots.
We could also potentially just replace our buildbot build farm with a combination of appveyor and travis-ci; this would remove FreeBSD from our list of supported platforms, as well as eliminating a diversity of kernel integrations. However, for the stuff that doesn't work in containers (mostly inotify) we could run one builder on non-container-based infrastructure, and for everything else (integrating with different system toolchains) we can test using Docker: https://docs.travis-ci.com/user/docker/. I am very much on the fence about this, since I don't want to move backwards in terms of our test coverage, but this would accelerate the contribution process so much that it's probably worth discussing.
10 years ago or so, we would routinely discover kernel bugs in our integration test harness and they would be different on different platforms. But today's platform realities are considerably less harsh, since there are a lot more entities in the ecosystem that have taken responsibility for testing that layer of the stack; I couldn't find anything since 2008 or so where we really saw a difference between Fedora and Ubuntu at the kernel level, for example.
On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 16.2!
Just in time for PyCon US, this release brings a few headlining features (like the haproxy endpoint) and the continuation of the modernisation of the codebase. More Python 3, less deprecated code, what's not to like?
- twisted.protocols.haproxy.proxyEndpoint, a wrapper endpoint that gives some extra information to the wrapped protocols passed by haproxy;
- Migration of twistd and other twisted.application.app users to the new logging system (twisted.logger);
- Porting of parts of Twisted Names' server to Python 3;
- The removal of the very old MSN client code and the deprecation of the unmaintained ICQ/OSCAR client code;
- More cleanups in Conch in preparation for a Python 3 port and cleanups in HTTP code in preparation for HTTP/2 support;
- Over thirty tickets overall closed since 16.1.
For more information, check the NEWS file (link provided below).
You can find the downloads at <https://pypi.python.org/pypi/Twisted> (or alternatively <http://twistedmatrix.com/trac/wiki/Downloads>). The NEWS file is also available at <https://github.com/twisted/twisted/blob/twisted-16.2.0/NEWS>.
Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!
Amber Brown (HawkOwl)
Dear fellow friends of asynchronous software,
maybe some of you have already bumped into the Prometheus monitoring system <https://prometheus.io> and liked it like I do (in any case, I’d like to invite you to my PyCon US talk on that topic: <https://us.pycon.org/2016/schedule/presentation/1601/>!)
And while it’s great that Python is a first class citizen due to the official Python client library <https://github.com/prometheus/client_python>, asyncio and Twisted sadly aren’t!
That’s why I’ve just released prometheus_async: https://prometheus-async.readthedocs.io/
First and foremost it wraps the metrics from the official client (you don’t want *me* to do math!) and makes them work properly on coroutines and Deferreds (and makes them well-behaved decorators too but that’s a topic for another day…).
Additionally, it adds a few goodies:
- Metric exposure via aiohttp that is much more flexible than what comes with the stdlib-based official solution.
- …that can also be started in a separate thread. That means you can use them in regular, *synchronous* Python 3 applications as well (I instrument all my Pyramid apps like that).
- Integration with service discovery. Listen on port 0 and leave registration to Consul Agent (integration is pluggable, just implement two methods)!
Sadly the goodies are asyncio-only so far. Partly because the official client has some Twisted Web support merged but not released yet. Contributions are very welcome!
Keeping the infrastructure updates coming! I'm going to be moving the Buildbot to new hardware, so it may be down for today, plus however long DNS takes to propagate. This should only affect Twisted committers running tests on the builder infrastructure; Trac, docs, and everything else will remain online.