A short while ago, Twisted's Trac installation was changed to reject new tickets
from anyone except a special whitelisted group.
As of yesterday I have reverted this change (and turned the spam filter back
on). If you experience any issues trying to post, or notice any spam, please
feel free to reach out to me.
Hoping someone can answer this question:
I can't seem to get my logging right while using supervisord.
I'm not using Twisted logging, just the regular Python logging infrastructure.
The two end results I've had:
- everything double-logs into twistd.log and /var/log/supervisor.log
- anything that is `print`ed appears in my twistd.log, but none of the `log.debug()` lines appear
The end result I want is for debug info from the Twisted process to be recorded in a single file.
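For concreteness, the kind of setup I'm after looks roughly like this (the path and logger name are just examples):

```python
import logging

# Example path; the point is one single destination for everything.
LOGFILE = "/tmp/myapp.log"

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler(LOGFILE)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)
# Don't also hand records to the root logger; propagation to a
# root handler writing to stdout is a classic cause of double-logging
# into both twistd.log and supervisor's log.
logger.propagate = False

logger.debug("recorded exactly once, in %s" % LOGFILE)
```

(`propagate = False` is what stops the same record reaching two handlers.)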
Dear Twisted Developers,
As a quick introduction, my name is Rob Oakes. I'm the lead developer
for a company called Guru Labs. I'm writing for two reasons:
# Reason 1: Thank You
First and foremost, I'm writing to express thanks for creating an
excellent framework. We use Twisted extensively in our infrastructure,
and it is typically our go-to tool for any sort of custom server.
The second reason has to do with some questions about the
development of Twisted. Before I dive into those, however, let me
provide some background.
For the better part of a year, we've been using some code out of the
websockets (twisted.web.websockets.WebsocketsResource) branch to wrap a
trio of custom protocols we use in one of our web-based products. I
know that the code is still pre-release, but we've generally found it
to be stable and to work very well.
# Reason 2: Websockets Development
This brings me to the second reason I'm writing. Over the past year, it
seems as though the development on the Twisted websockets branch has
stalled. We would like to unstall it. For this reason, Guru Labs would
be interested in:
1.) Contracting with one of the core Twisted developers (or another
interested party) who might be willing to finish the implementation of
the websockets wrappers (resolving the issues described in ticket
4173). We are happy to pay hourly rates, set a bounty, make a donation
to the Twisted project ... whatever.
A stable implementation of websockets in Twisted Web is a priority
for us, and if it's a matter of money, we are happy to throw money
where it might be needed. (If this is of interest, please contact me
off-list at roakes(a)gurulabs.com.)
2.) Assigning a Guru Labs developer (probably me) who might complete
the work.
I've been studying the issues which are still open (specifically 4173)
and the various branches associated with websocket development
(websocket-4173-3 and websocket-4173-4), and it seems that most of the
major concerns are related to the manner in which websocket connections
are closed.
Assuming that someone else doesn't step forward ... I've already merged
the most recent version of trunk with these branches, but I've found
myself with several questions on how to best continue with the work.
a. Which of the two websocket branches should be used as the basis for
further work?
On GitHub, websocket-4173-4 is marked as closed with a note telling the
contributor to see the contribution guidelines. There are also several
additions to the code which do not follow Twisted conventions (as I
understand them). The last sets of commits also seem to come from
approximately the same time.
websocket-4173-4 includes code which attempts to resolve issues noted
in 4173 that is not present in 4173-3, but there aren't really any
comments to indicate whether this should be incorporated or not. At
this point, I'm really not familiar enough with the code to draw my own
conclusions.
b. In general, the protocol wrapper seems to work quite well. However,
when merged with the most recent version of trunk (post Twisted 14),
I've been seeing frequent unhandled deferred errors. The most common is:
"twisted.internet.error.ConnectionLost: Connection to the other side
was lost in a non-clean fashion: Connection lost"
It happens when:
* Connections are closed from the server, using transport.loseConnection
* Connections do not transmit a "close" frame prior to disconnecting
* To reiterate, this issue only started appearing after merging the
websocket code with Twisted 14. The previous version of Twisted we were
using (Twisted 12.3) worked flawlessly.
No error is raised if the client correctly closes the connection, or
when using non-browser-based clients (like the Python ws4py websocket
library). We mostly see the exception when the objects are garbage
collected (based upon the deferred documentation); put another way, we
see a whole string of errors upon stopping the reactor.
Despite the exception, we don't see any errors in the browser client.
Also somewhat frustratingly, the traceback isn't terribly helpful. This
is a fairly routine example:
Unhandled error in Deferred:
Traceback (most recent call last):
Failure: twisted.internet.error.ConnectionLost: Connection to the other
side was lost in a non-clean fashion: Connection lost.
Technically, the error is probably appropriate, as it appears when the
connection is lost in a non-clean fashion. I am a little concerned,
though, that I haven't found a good way to catch or suppress the
error. Moreover, it doesn't seem like the deferreds are getting
garbage collected, which seems like a memory leak waiting to happen.
Can someone clarify:
* whether this is intended behavior, and if so, what strategy I might
use for managing the error in my wrapped protocols
* which part of the websocket code I should be looking at in order to
try and fix the issue
Thoughts would be greatly appreciated.
Hello, I've been playing with the new tubes that are being implemented.
Here are a few things that I did with them. I won't publish the full code now,
as in its current shape it could implode the eyeballs of Twisted devs and
possibly make them summon some of the elder gods, but I'll see if I can
produce something less vile as I merge the ongoing changes to the tubes
branch.
So far I've written a relatively simple app that reads logfiles, parses
them, and inserts what it gets out of them into a database. The first
issue I dealt with is stopping the tubes. When I've read the whole of the
input, I want to wait until all of it has been parsed (at the moment the
code is synchronous, but I can imagine e.g. some expensive processing being
done in a thread / external process) and then wait until it's committed to
the database before shutting the reactor down.
As of #42908, which I pulled for experimenting, the support for passing
flowStopped(reason) through the pipeline (or series, if you want) was not
working; an issue with None being returned from stopped() ended the
processing prematurely, which I fixed with:
=== modified file 'tubes7/tube.py'
--- tubes7/tube.py	2014-08-01 18:32:48 +0000
+++ tubes7/tube.py	2014-08-01 21:20:44 +0000
@@ -441,6 +446,8 @@
         if iterableOrNot is None:
+            if self._flowStoppingReason is not None:
         self._pendingIterator = iter(iterableOrNot)
         if self._tfount.drain is None:
Also, the ProtocolFount didn't really do what it should, so I made it
implement IHalfCloseableProtocol and call flowStopped() accordingly.
One more thing I did: I made it invoke flowStopped() on any drain that is
newly attached to it; apparently, when I used the stdio endpoint, it
managed to close the connection when reading from /dev/null even before I
had managed to set up the series/pipeline.
That still didn't make it possible for me to wait on the DB being written
to properly. What I had to do is implement a CloseableDrain that has a
waitDone() method returning a Deferred that fires once the drain's
flowStopped() has been called and everything it should do has been done.
This makes it quite handy to use from a react()-style setup, since I can
just return this Deferred, or a DeferredList covering all ongoing pipelines.
For the next pipeline I had one more issue: this pipeline can be run either
as a log reader, or as an essential part of a running program that emits
such logs. In the latter case I need to generate confirmation messages for
specific entries that are being inserted and send them back to the
originator, after they have been safely written to the DB. This I resolved
by adding another field into the values I pass into the PostgreSQLDrain: a
Deferred that will be fired as txpostgres's runOperation finishes. This
resolution works pretty well, but it took me quite a while to come up with
it, so I'm not sure if it's an intuitive design pattern or if we could come
up with something better.
Then I had to run both pipelines in parallel. After implementing the fan-in
pattern (fan-out was already done by glyph), I wrote this helper function:
out = Out()
in_ = In()
out._drain.nextFount = in_
for tube in tubes:
    ...
in_.stop_on_empty = True
The nextFount attribute on _OutDrain is what is returned from flowingFrom(),
so this function can be used as a part of a series. What I'm unsure about is
how to handle stopping of the fan-in. Currently I don't make it stop until
stop_on_empty is set (so I can add/remove things during its
initialization), and then I make it stop when the last fount that's flowing
in has stopped (and been removed from the input founts set), and I use the
reason it passes into flowStopped() to propagate along to the rest of the
series, effectively discarding the reason objects passed by all the founts
except the last one.
What I'll have to deal with is a lack of sensible flow control in some
parts of the code. For example, the part that generates the log files
should not be stopped just because there's some delay in writing the logs.
This made me wonder if flow control, and perhaps processing confirmation,
should be run not as a part of the main interface but instead as something
that runs alongside, where applicable, in the opposite direction. But I
don't have any specific API in mind at the moment. On the other hand, both
are perfectly solvable with the current design: implementing FIFO buffers
or message droppers for flow control, and the above-mentioned Deferred
passing for confirmations.
As for the data representation that I chose to pass between each tube: I
started with simple namedtuples, and following that I built a simple
"datatype" class somewhat reminiscent of one I learned of a few moments
after I finished polishing my own implementation. What I have there is an
added layer above namedtuples that autogenerates zope Interfaces (so I can
have adaptation), does field type and value validation/adaptation, and
could (as a future extension) provide an easy way to turn them into AMP
commands, so the series can be split into communicating processes as
needed. (What would be interesting, imo, is something like ampoule for
tubes, or perhaps a ThreadTube and a SubprocessTube for performing blocking
operations.)
Also maybe of note is the implementation of Pipes in the Async library for
OCaml, which I've been examining lately. What they seem to do there is push
values downstream, and the function called in each processing step may
return a deferred signifying that a pause is requested until that deferred
is fired. For those interested in the details, refer to the relevant
section of the Real World OCaml book (available online).
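A toy model of that pause-on-request idea, with plain Python objects standing in for deferreds (all names are mine):

```python
# Toy model of Async.Pipe-style backpressure: a processing step may
# return a Pause object; the pusher stops until resume() is called.
class Pause(object):
    def __init__(self):
        self.resumed = False

    def resume(self):
        self.resumed = True

class Pusher(object):
    def __init__(self, step):
        self.step = step    # callable: item -> None, or a Pause request
        self.queue = []
        self.pause = None

    def push(self, item):
        self.queue.append(item)
        self.drain()

    def drain(self):
        # Deliver queued items until empty or a step asks us to wait.
        while self.queue and (self.pause is None or self.pause.resumed):
            self.pause = None
            result = self.step(self.queue.pop(0))
            if isinstance(result, Pause):
                self.pause = result  # downstream requested a pause
```

In the real thing the Pause would be a Deferred, and resume() its callback.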
Looking forward to further tubes development :-)
CcxCZ (freenode) | Jan Pobříslo (IRL)
I've been trying to address ticket 7274.
To do this, I am trying to understand the PB protocol. While I found a spec
for banana in twisted-daniel/docs/core/specifications/banana.rst, I have
not found anything similar for pb. I've been piecing it together by writing
little test scripts, but it is slow going. In particular, it is very
difficult to understand the meaning of verbs like "cook" and "preserve" and
nouns like "persistent store" without some global picture of what's going
on.
1. Is there some kind of narrative documentation on how pb works under the
hood?
2. Is there a specification for the pb dialect of banana?
3. Is there anyone else out there interested enough in pb to want to work
with me to figure things out and produce documentation if there isn't any?
Moving the thread here to avoid polluting trac (thanks to glyph).
1. The real issue: t.p.syslog doesn't support logLevel.
- If you think it's worth a patch before switching to t.p.logger, I'll
provide one (it's quite simple).
- If you think we should just move to t.p.logger and get rid of t.p.log and
t.p.syslog, see 2.
2. Moving twistd to t.p.logger:
- I've started looking around.
- t.p.logger still doesn't support syslog.
- I'll try to add some tests.
- Feedback welcome :D
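To give an idea of how simple the patch is: the core of it is just a mapping from the event's logLevel (conventionally a stdlib logging level) to a syslog priority. A sketch with names of my own, not the actual patch:

```python
import logging
import syslog

# Map stdlib logging levels (what log events carry in 'logLevel')
# to syslog priorities.
PRIORITIES = {
    logging.DEBUG: syslog.LOG_DEBUG,
    logging.INFO: syslog.LOG_INFO,
    logging.WARNING: syslog.LOG_WARNING,
    logging.ERROR: syslog.LOG_ERR,
    logging.CRITICAL: syslog.LOG_CRIT,
}

def priorityForEvent(eventDict):
    """Pick a syslog priority for a log event, defaulting to INFO."""
    return PRIORITIES.get(eventDict.get("logLevel"), syslog.LOG_INFO)
```

The real patch would apply this lookup where the observer emits to syslog.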
Babel - a business unit of Par-Tec S.p.A. - http://www.babel.it
I have a twisted plugin that I created to replace the Django dev server
for our devs. It sets up a separate twisted service for serving media, so
we don't need any urls.py tomfoolery in dev. It also sets up an
in the near future.
The core of the code looks like this:
from django.core.handlers.wsgi import WSGIHandler
from twisted.application import strports
from twisted.internet import reactor
from twisted.web import server
from twisted.web.wsgi import WSGIResource

resource = WSGIResource(reactor, reactor.getThreadPool(), WSGIHandler())
endpoint = 'tcp:port=8000'
# 'service', not 'server', so the twisted.web.server module isn't shadowed
service = strports.service(endpoint, server.Site(resource))
This has worked great for a while. However, we have some views that
require https, so this dev server doesn't allow us to get to those views
at all. I generated a .key file and a .crt file with openssl, cat'd them
together to make a .pem, and then changed the endpoint to:
endpoint = 'ssl:port=8000:privateKey=/path/to/key.pem'
Now when I open my browser and go to https://localhost:8000, Chrome just
hangs. I don't really know how to diagnose this, because I don't know
anything about SSL (it's all just magic security goodness to me). I don't
necessarily need a direct answer (though it would certainly make me look
good to all the other devs), but some pointers in the right direction
would help.
Flocker is an open source data volume manager and multi-host Docker
cluster management tool. With it you can control your data using the
same tools you use for your stateless applications. This means that you
can run your databases, queues and key-value stores in Docker and move
them around as easily as the rest of your app.
It's written with Twisted (of course) and features the work of Twisted
developers Jean-Paul Calderone, Tom Prince, Richard Wall and myself.
Very much a preliminary release though, so there's a bunch of code that
needs to be Twisted-ified :)