Maciej from the PyPy team has graciously offered to upgrade all of
Twisted's existing old-style classes to new-style. The reason this is
also a mailing list thread is that it may, potentially, break some
things, and we want people to have a heads up. Of course, Twisted will
never break your stuff without warning (at least not intentionally) and the
normal compatibility policy is in effect as always :)
The catch is that Maciej wants some kind of guarantee that at some point,
this will be on by default. At least two committers (myself and Glyph) want
this, so I'm confident this is the case, unless someone highlights a huge
flaw in my reasoning that shows we can't actually do this :)
- We get to use everything that requires new-style classes, e.g. the
descriptor protocol, and by extension classmethods, staticmethods, and
properties.
- Performance benefit on PyPy
- Consistency of behavior between 2.x and 3.x
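To make the descriptor point concrete: on Python 2, property setters are only honored on new-style classes; on a classic class, assignment silently creates an instance attribute that bypasses the setter. A minimal illustration (the Temperature class here is purely hypothetical):

```python
class Temperature(object):  # explicit 'object' base: new-style on Python 2
    def __init__(self):
        self._celsius = 0.0

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        # The setter can validate; on a Python 2 classic class this method
        # would never run, and assignment would just shadow the property
        # with a plain instance attribute.
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = float(value)

t = Temperature()
t.celsius = 20  # routed through the setter because the class is new-style
```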
- It's a lot of work. That's true, but shouldn't concern you: we have
someone who says they're willing to actually do that work.
- It will break things. That is probably true, but Twisted never promises
not to break anything ever. It has a compatibility policy, the guiding line
for which is "the first one is always free". As long as we have a full,
real release where we *warn* people that something is going to happen and
they should test it now, we're satisfying that policy. The problem is that
AFAICT there's no obvious way to identify where problems will occur (since
it's a consequence of how people use current old-style things). The
suggested approach to this is that we have a release where all classes that
are going to be new-style are old-style by default, but, given e.g. an
environment variable, all of them *can* be new-style. The warning could
then be that you should turn on that environment variable (requiring
conscious action and being easy enough to undo if your code breaks).
I would suggest doing the transition in small steps, and adding the
environment variable as a priority. That way, people who know about this
can at least already run their tests before the entire process is complete.
Also, just because it has to be complete (but off by default) for at least
one release doesn't mean it has to be exactly one release :)
For new development, IIUC, new-style classes are already a requirement, so
this doesn't affect it.
I'm working on what is just my second project using Twisted Web, so I'm
still a relative newbie on the subject.
I'm working on a project that uses Twisted Web as a simple authorization
proxy. All requests to my proxy contain an authorization-token and are
either handled by the proxy, or are relayed to another server. For all
GET stuff and small POST stuff this is not a problem. When I want to
process large POST requests however, I run into my limits of understanding
how Twisted Web actually works.
1) I figured out that in addition to 'process' in my request handler, I
also need to override handleContentChunk, parse the form body parts in
the first chunk myself, and open a proxy connection (self.agent.request)
if the authorization token checks out.
2) When it comes to appending the data received in handleContentChunk,
and, if needed, throttling the client when the server can't keep up, I
can't figure out how to connect handleContentChunk to my
self.agent.request.
3) When the token does not check out, or the connection to the server
fails, it remains a mystery to me how I should throw an error in such a
way that it allows me to send a proper error message to the client, while
not having to first accept the whole large file. That is, it seems rather
silly that I would know things failed after the first POST body chunk, but
would have to wait for and accept hundreds of megabytes or maybe even a
few gigabytes of POST data before I can notify the client that something
went wrong.
It seems I am either missing something blindingly obvious or Twisted Web
simply isn't meant to be used this way. I hope someone can give me some
directions on how to make this giant-file-POST forwarding and
early-bail-out scenario work with Twisted Web.
*I posted this to python-list and tutor-list and received no replies.
Any advice would be much appreciated. Thank you.*
What is the best approach to writing a concurrent daemon that can
execute callbacks for different types of events (AMQP messages, parsed
output of a subprocess, HTTP requests)?
I am considering [twisted], the built-in [threading] module, and
[greenlet]. I must admit that I am very unfamiliar with concurrent
programming and Python programming in general (formerly a data analysis
driven procedural programmer). Any resources on threaded/concurrent
programming (specifically daemons...not just multi-threading a single
task) would be much appreciated.
1) Listens to AMQP messaging queues and executes callbacks when
messages arrive.
Example: Immediately after startup, the daemon continuously listens to
the [Openstack Notifications messaging queue]. When a virtual machine
is launched, a notification is generated by Openstack with the hostname,
IP address, etc. The daemon should read this message and write some info
to a log (or POST the info to a server, or notify the user... something
along those lines).
2) Parses the output of a subprocess and executes callbacks based on the
output.
Example: Every 30 seconds, a system command "[qstat]" is run to query
a job resource manager (e.g. TORQUE). Similar callbacks to 1).
3) Receives requests from a user and processes them. I think this will be
via WSGI HTTP.
Example: User submits an XML template with virtual machine templates.
The daemon does some simple XML parsing and writes a job script for the
job resource manager. The job is submitted to the resource manager and
the daemon continually checks for the status of the job with "qstat" and
for messages from AMQP. It should return "live" feedback to the user and
write to a log.
Justin Chiu TRIUMF
> Date: Mon, 08 Jul 2013 13:31:49 -0700
> From: Justin Chiu <c.justin88(a)gmail.com>
> Hi all,
> *I posted this to python-list and tutor-list and received no replies.
> Any advice would be much appreciated. Thank you.*
> What is the best approach to writing a concurrent daemon that can
> execute callbacks for different types of events (AMQP messages, parsed
> output of a subprocess, HTTP requests)?
There's the txamqp package, available on launchpad.
It's not very actively developed, but seems
usable in its current state. There's also the txamqp-helpers package
that integrates with twistd.
> Justin Chiu TRIUMF
David Serafini | ConsultingMTS | Oracle | david.serafini(a)oracle.com |
Unfortunately, baelnorn seems to be having network problems again,
leaving a large number of builders currently unable to complete builds.
Also, the easy_install builders seem to be having problems completing
the correct builds. For example, a recent trunk build failed with a
websocket error - but the websocket branch has not been merged. Maybe
this is related to the recent changes to make it use git instead of bzr?
Additionally, the "Built packages" link on
http://buildbot.twistedmatrix.com/builds goes to the wrong place (still?
again?), so build artifacts seem to be mostly unavailable right now.
Can someone give me a hand with daemonizing an application?
My application has two udp servers that subclass from DatagramProtocol and
are launched via reactor.listenUDP.
I think that the way to go is twistd; however, some doubts come to mind
since Flask is involved, and I don't even know if this is the proper place
to ask (maybe the Flask support list?).
My current implementation goes like this:
def datagramReceived(self, data, (host, port)):
    server = Server()
To deploy the app, the Application framework doc says that you can
create a .tac file for use with twistd. In this .tac file I think that I
need to wrap my "DatagramProtocol" into a Factory, then within a Service
and link that service to the application. No problem on that (I think)
However, I'm also using flask for exposing a basic restful api. And I have
no idea in how I can wrap it into a factory/service/application.
In my code flask is tied to the reactor in the following way:
flask_app = Flask(__name__, static_folder="www", static_url_path="")
flask_resource = WSGIResource(reactor, reactor.getThreadPool(), flask_app)
flask_site = Site(flask_resource)
The Flask website says that you can run a Flask application using a
standalone WSGI container like the default Twisted one with:
twistd web --wsgi myproject.app
However I think that it has nothing to do with my problem.
I'm not a Twisted expert; I've been using it for the past six months,
but running it under Eclipse and directly from the console. Now it's
time to deploy, and I'm really confused by all this Application,
Service, MultiService, Factory, Protocol and such stuff.
listenTCP and listenUDP seemed so easy!!! ;)
At this point I'm at a dead end trying to run the app at startup. Any
suggestions? Any alternative to twistd?
Thanks in advance, best regards, Beth.
I have a requirement to send some 300,000 or more REST API requests
asynchronously.
When I run the code for 150,000 requests it works fine; it takes around 8
minutes. Is it possible to improve the performance?
When the count increases to some 200,000, I get this error:
Traceback (most recent call last):
Failure: twisted.internet.error.TimeoutError: User timeout caused
Unhandled error in Deferred:
Traceback (most recent call last):
My server is an 8-CPU quad-core box, 2.40 GHz, with 96 GB of RAM.
*Can you please suggest how I can overcome this error and make the
performance better?*
This is the piece of code where I have implemented Twisted:

from twisted.internet import defer, reactor, task
from twisted.web.client import getPage
import sys

maxRun = 100  # number of requests kept in flight at once

urls = ""  # comma-separated, built from the input file

def doWork():
    for url in urls.split(','):
        yield getPage(url)

def allDone(results):
    reactor.stop()

coop = task.Cooperator()
work = doWork()
deferreds = [coop.coiterate(work) for i in xrange(maxRun)]
dl = defer.DeferredList(deferreds)
dl.addCallback(allDone)

if __name__ == '__main__':
    filename = sys.argv[1]
    # I open the file, read the lines and create the urls
    reactor.run()