hi there, folks:
I'd really like to release 0.7.0, but I would like it to be at least a
little bit tested before I do so. Could those of you with CVS trees check
everything out and see if it performs as advertised? Deeper bugs will
have to wait for the next release, but I'd at least like to know
whether it works for someone other than me.
I couldn't find a Twisted-specific group, so I'm posting here.
Twisted 16.4.0 was recently released. Yesterday I tried to upgrade my
apps to it and got an error.
I've created a simple example which demonstrates it:
from twisted.application import service, internet

# mymodule.py sits next to service.tac and contains only a stub function
import mymodule

application = service.Application("Demo application")
If you try to run it with twistd -y service.tac you'll get an error:
== output ==============================
Traceback (most recent call last):
line 648, in run
25, in runApp
line 379, in run
self.application = self.createOrGetApplication()
line 444, in createOrGetApplication
application = getApplication(self.config, passphrase)
--- <exception caught here> ---
line 455, in getApplication
application = service.loadApplication(filename, style, passphrase)
line 411, in loadApplication
223, in loadValueFromFile
eval(codeObj, d, d)
File "service.tac", line 7, in <module>
exceptions.ImportError: No module named mymodule
Failed to load application: No module named mymodule
The error comes down to this: the twistd script does not add the current
working directory to the Python path (or it removes it; I don't know what
exactly happens), so it fails to import any packages/modules from it.
The issue does not appear in the previous version (Twisted 16.3.2).
Any ideas what caused it?
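If the missing sys.path entry really is the cause, one possible workaround (an assumption on my part, not an official fix) is to put the working directory back on sys.path at the top of the .tac file, before any imports of local modules:

```python
import os
import sys

# Hypothetical workaround: put the current working directory back on
# sys.path so local modules (like mymodule) become importable again.
if os.getcwd() not in sys.path:
    sys.path.insert(0, os.getcwd())
```

With that in place, an `import mymodule` further down the .tac file should resolve against the directory twistd was started from.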
I looked at the latest Travis build on Linux:
August 11, 2016 https://travis-ci.org/twisted/twisted/jobs/151440581
Tests on Python 2.7: 11640 (skips=2083, successes=9557)
Tests on Python 3.5: 9647 (skips=2008, successes=7639)
On Thu, Jun 30, 2016 at 3:43 PM, Craig Rodrigues <rodrigc(a)crodrigues.org> wrote:
> I looked at these two Travis builds on Linux
> DATE BUILD
> ==== =====
> June 3, 2016 https://travis-ci.org/twisted/twisted/builds/135219940
> June 30, 2016 https://travis-ci.org/twisted/twisted/builds/141426367
> I noticed:
> DATE Tests on Python 2.7
> ==== =====================================
> June 3, 2016 11438 (skips=3013, successes=8425)
> June 30, 2016 11496 (skips=2063, successes=9433)
> DATE Tests on Python 3.5
> ==== =====================================
> June 3, 2016 6367 (skips=1533, successes=4834)
> June 30, 2016 7902 (skips=1693, successes=6209)
I'm kicking off this discussion on the mailing list as I don't have
anything well-formed enough to take to the bug tracker, and I am hoping to
get some more engagement on the matter.
Currently there is no way to explicitly compose Twisted endpoints, but
several endpoint implementations have arisen that wrap another endpoint,
and so have needed a way to do this. So far, this has been implemented by
passing in an endpoint description, and then calling
serverFromString/clientFromString internally in the endpoint to construct
the wrapped endpoint. I've seen two different ways of encoding the "inner"
endpoint description:
1. Backslash escaping, where the colons in the inner endpoint description
are escaped so the outer parser treats it as a single argument.
This has the advantage that it is endlessly nestable; it has the
disadvantage that it is a bit tricky to read and write.
2. Splitting keyword and positional arguments.
This has the advantage that it is easier to read and write, but the
disadvantage that it isn't nestable. It also starts to break down when you
have a lot of parameters, as the positional syntax becomes much harder to
follow.
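As an illustration of approach 1, here is a minimal sketch of backslash escaping in plain Python. The quoting rules here are my assumption for illustration; they mirror, rather than reproduce, Twisted's actual endpoint-string quoting:

```python
def quote_argument(argument):
    """Escape backslashes and colons so the inner endpoint description
    survives as a single positional argument of the outer one."""
    return argument.replace("\\", "\\\\").replace(":", "\\:")

inner = "tcp:example.com:443"
outer = "wrapper:" + quote_argument(inner)
print(outer)  # wrapper:tcp\:example.com\:443
```

Applying quote_argument again to the result is what makes the scheme endlessly nestable, at the cost of an ever-growing thicket of backslashes.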
Neither of these solutions is entirely satisfactory; I initially followed
approach 2 for txacme, but now that I need to add more parameters to the
le:/lets: endpoints, it is starting to break down.
Cory suggested a third possibility: an explicit syntax for composing
endpoints. In this model, the endpoint string parsing machinery would
construct the different endpoints and compose them together (presumably
the API of the parsers would need to be extended a bit for this), with
something like "->" joining an outer endpoint to the one it wraps.
A less whimsical syntax than "->" might be better; semicolons, for example,
or something like that.
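To make the third idea concrete, here is a tiny sketch of what parsing a "->"-composed description could look like. Both the separator and the function name are hypothetical; nothing like this exists in Twisted today:

```python
def split_composed(description):
    """Split a composed endpoint description on the literal '->'
    separator, outermost endpoint first."""
    return [part.strip() for part in description.split("->")]

layers = split_composed("tls:example.com:443 -> tcp:example.com:443")
print(layers)  # ['tls:example.com:443', 'tcp:example.com:443']
```

The parsing machinery would then look up a parser for each layer and fold them together, each endpoint wrapping the one to its right.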
Is there a plan (or an implementation) to support gRPC within Twisted Python? My understanding is that gRPC is built using Futures and creates its own threads for all its event handling. There is also a gRPC Python package (grpcio 1.0.0) available for Python 2.7. In order to use gRPC with Twisted on Python 2.7, is the only way to have gRPC run in its own thread?
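For what it's worth, the "own thread" approach the question describes can be sketched with the stdlib alone; `blocking_rpc` below is a stand-in for a hypothetical blocking grpcio stub call, and in real Twisted code you would more likely reach for twisted.internet.threads.deferToThread to do the same bridging:

```python
import threading
import queue

def blocking_rpc(request):
    # stand-in for a hypothetical blocking grpcio stub method
    return "response to " + request

def call_in_thread(func, *args):
    """Run a blocking call in a worker thread and hand the result back
    through a queue, so the calling loop is never blocked."""
    results = queue.Queue()
    threading.Thread(target=lambda: results.put(func(*args))).start()
    return results

result_queue = call_in_thread(blocking_rpc, "ping")
print(result_queue.get())  # prints "response to ping"
```

deferToThread wraps the same pattern in a Deferred, which is why it is the usual answer for integrating thread-based libraries with the reactor.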
We are planning to get us some challenge coins, which is exciting! You
should also be excited with us because this means:
- we can now give them out to contributors at sprints and meetups.
- challenge coins are awesome.
Each coin costs about $6, and the minimum order is 100 coins. Glyph thinks
everyone should have some to give out as tokens of appreciation to other
contributors, and glyph's thoughts are usually worth following.
So, if you are a Twisted contributor and would like to own a challenge
coin, please respond to this email. Also, do mention how many coins you
would like, and where you are located so that we can send you your coin(s).
Problem: How do you determine the buffer size of a transport, to know how
much data is still waiting to be transmitted after calling transport.write?
Wait! You're going to say: use the Producer/Consumer API.
To do what: So that instead of relying on back pressure I can check the
buffer, and when it's "too big/full" I can decide to do something about the
transport I am writing to:
Slow transport handling options:
- Buffer to disk instead of memory
- Kill the transport
- Decide to skip sending some data
- Send an error or message to the transport I am writing to
- Reduce the resolution or increase the compression (for things like video
or audio)
Why not use back pressure? For some use cases and protocols this doesn't
work:
- You're sending video, and if the receiver can't keep up you want to
downgrade or upgrade the quality of the video; but you can't decide to do
that if you can't tell how much is buffered.
- You're receiving from one connection and broadcasting what you received
to multiple clients, and you want to handle a client that doesn't keep up
by sending it an error.
- You're sending a real-time protocol and want to skip sending data
that's no longer relevant if the buffer is too backed up.
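The video case can still lean on the Producer/Consumer callbacks if you treat pause/resume as a quality signal rather than a hard stop. The class below is plain Python merely shaped like Twisted's IPushProducer, not the real interface:

```python
class AdaptiveVideoProducer:
    """Downgrade quality when the consumer's buffer fills, instead of
    simply stopping; upgrade again when it drains."""

    def __init__(self):
        self.quality = "high"

    def pauseProducing(self):
        # the consumer's write buffer crossed its high-water mark
        self.quality = "low"

    def resumeProducing(self):
        # the buffer drained below the low-water mark
        self.quality = "high"

producer = AdaptiveVideoProducer()
producer.pauseProducing()
print(producer.quality)  # low
producer.resumeProducing()
print(producer.quality)  # high
```

This gives a coarse two-level signal without any access to the buffer size; it can't express "the buffer is 80% full", which is exactly the gap this thread is about.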
On a server, what are the consequences?
Too much buffering across many transports' write buffers can cause a
server to fail. I don't know how to keep track of this proactively without
access to the buffer sizes. Remedies could then be to not accept new
connections when memory pressure is too high, to kill the connections of
the weakest/slowest clients, or to have a protocol that can instruct
clients to switch connections to new servers to spread out the load.
1) I would like to hear how people would solve this sort of problem in
Twisted on a server.
2) I would like to hear people's opinions on this in general.
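For question 1, one hedged idea: since the transport's internal buffer size isn't exposed, you can approximate it yourself by wrapping write() and counting bytes in versus bytes you know were flushed (for example, on resumeProducing callbacks). Everything below is a sketch of that bookkeeping, not a Twisted API:

```python
class BufferWatcher:
    """Approximate the outstanding write-buffer size for one transport."""

    def __init__(self, write, high_water=64 * 1024):
        self._write = write
        self.outstanding = 0       # bytes written but not yet flushed
        self.high_water = high_water

    def write(self, data):
        self.outstanding += len(data)
        self._write(data)

    def flushed(self, nbytes):
        # called when we learn nbytes actually left the process
        self.outstanding = max(0, self.outstanding - nbytes)

    @property
    def over_limit(self):
        # when True, the server can kill the connection, skip frames,
        # spill to disk, or tell the client to move to another server
        return self.outstanding > self.high_water

sent = []
watcher = BufferWatcher(sent.append, high_water=10)
watcher.write(b"hello world!")      # 12 bytes outstanding
print(watcher.over_limit)           # True
watcher.flushed(12)
print(watcher.over_limit)           # False
```

Summing `outstanding` across all connections would give the server-wide figure needed to stop accepting connections or shed the slowest clients; the hard part, which this sketch glosses over, is getting an accurate flush signal out of the reactor.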
Tobias Oberstein - BCC'ed you on this email because you seem to have
tackled similar problems (based on the mailing list) and I would really
love to get your take on this too.
Glyph and Jean-Paul, you're also big on those threads so any opinions you
have would be appreciated as well.
Some of my background research:
* Later but good in the chain:
* Twisted receiving buffers swamped?
* Summary: Great thread, but it runs off on a tangent for a while and
picks up again at the end, discussing producers and the need for them
* Scenario: "Twisted is reading off the TCP stack from the kernel and
buffering in userspace faster than the echo server is pushing out stuff to
the TCP stack into the kernel. Hence, no TCP backpressure results, netperf
happily sends more and more, and the memory of the Twisted process runs
away."
* Confirmed: received data isn't buffered "in userspace inside Twisted,
before data is handled by the app in dataReceived."
* How to cap the buffering size of data to be sent in Protocol class
* Summary: Same issue, very short, no good info
* Limit on transport.write
* Summary: Similar issue, very short, no good info; glyph confirms that
transport.write buffers everything sent to it
* pushing out same message on 100k TCPs
* Summary: Interesting discussion but different issue - interesting aside
about irc spanning trees for a large broadcasts
* Question on push/pull producers' inner workings, was: "Is there a
simple Producer/Consumer example or tutorial?"
* Summary: Related and goes into what a producer is - an explanation of it
* When do calls to transport.write() block ?
* Summary: Discusses the right issue; talks about a buffer callback when
full, which would be great (if configurable)
* Summary: a C buffer implementation that's incomplete, though it might be
useful
* Summary: Was indicating a similar use case, but the source doesn't seem
to exist on the internet
* Summary: Documentation on Producers and Consumers, but it only helps
with part of the problem
* Summary: Discussion of how the producer-consumer APIs work and future
enhancements; no help
* Many more that are 100% not relevant
Steve Morin | Hacker, Entrepreneur, Startup Advisor
twitter.com/SteveMorin | stevemorin.com
*Live the dream start a startup. Make the world ... a better place.*