hi there, folks:
I'd really like to release 0.7.0 but I would like it to be at least a
little bit tested before I do so. Could those of you with CVS trees check
everything out and see if it performs as advertised? Deeper bugs than
that will have to wait for the next release, but I'd at least like to know
if it works for someone other than me.
______ __ __ _____ _ _
| ____ | \_/ |_____] |_____|
|_____| |_____ | | | |
@ t w i s t e d m a t r i x . c o m
The @deprecated decorator (at least on Python 2.7) does not work when paired
with @property.

For deprecated instance variables, our deprecation policy recommends
converting them into properties and emitting the warning from there.
It would be nice if we could use the standard @deprecated decorator here.
The problem is that when properties are used, the fget/fset are received by
the @deprecated wrapper as plain functions and not as methods.

Is there a way to get the class in which a property is defined... or is
there no way to use the @deprecated decorator with @property, and should we
create a dedicated deprecatedMember method instead?
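One workaround that sidesteps the fget-is-a-plain-function problem is to
subclass property itself and emit the warning from __get__. A minimal
sketch; the deprecatedProperty name here is made up, not an existing
Twisted API:

```python
import warnings


class deprecatedProperty(property):
    """A property that emits DeprecationWarning on instance access.

    Because the warning is raised inside __get__, we do not need
    @deprecated to know anything about the owning class.
    """

    def __get__(self, obj, objtype=None):
        if obj is not None:
            warnings.warn(
                "%s is deprecated" % (self.fget.__name__,),
                DeprecationWarning,
                stacklevel=2,
            )
        return super(deprecatedProperty, self).__get__(obj, objtype)


class Foo(object):
    @deprecatedProperty
    def bar(self):
        return 42
```

Accessing Foo().bar then returns 42 while emitting a DeprecationWarning
pointing at the caller.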
On Tue, Nov 17, 2015 at 8:57 AM, Adi Roiban <adi(a)roiban.ro> wrote:
> For now, the funds were raised to migrate to GitHub, so we can not use
> them to do other things.
> We will stay on Trac for issues... at least for now.
> I have no idea how we can migrate to any issue tracker without losing
> data if we don't have full access to the database.
It is possible to migrate to another issue tracker and not lose
data. I've done Trac -> Redmine, and it works, but there was an existing
script I could use.
For migrating to a cloud-based bug tracker, you would need to take
everything in the existing Trac database and see if there is a way to map
the existing users to the cloud service, such as GitHub. It's a lot of
work, but possible.
However, for the scope of this project, if staying with Trac for issues is
what is required, that is fine.
> We don't plan to migrate to GitHub Issues / GitHub Wiki / GitHub Pages
So based on what you have listed, I would say that most of the work will be
working with Git post-commit hooks. I would say the plan should be
something like this.
A.1 https://github.com/twisted/twisted will be the "repository of truth"
    -> Twisted releases will be done from GitHub
    -> the Twisted developers who are now "core committers" for SVN will be
       given access as "core committers" to the GitHub repository
A.2 On the Trac server, a local git mirror of the GitHub repository must be
    set up. Every bug tracker I've seen that integrates with git needs a
    local mirror of the repo in order to parse the git history and update
    the bug tracker. This mirror should be read-only, and the only thing
    updating this repo should be the Trac GitHub plugin.
A.3 On the Trac server, the Trac GitHub plugin must be installed.
A.4 On the GitHub side, a post-commit web hook must be configured. The
    workflow will be this:
    [core committer pushes to https://github.com/twisted/twisted]
    -> [the post-commit GitHub hook is called to poke the Trac server]
    -> [the Trac GitHub plugin updates the local git mirror on the Trac
       server]
    -> [the Trac GitHub plugin parses the git history for new commits and
       updates the affected tickets]
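The A.2/A.4 mechanics can be sketched with plain git commands. The paths
below are stand-ins, and the local "origin" repo merely simulates GitHub;
the real setup would clone https://github.com/twisted/twisted instead:

```shell
rm -rf /tmp/demo-origin /tmp/demo-mirror.git

# Stand-in for the GitHub repository.
git init -q /tmp/demo-origin
git -C /tmp/demo-origin -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

# One-time setup on the Trac server: a bare, read-only mirror (A.2).
git clone -q --mirror /tmp/demo-origin /tmp/demo-mirror.git

# A push lands on "GitHub"...
git -C /tmp/demo-origin -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "new commit"

# ...and the webhook-triggered update (A.4) refreshes the mirror, which
# the Trac plugin can then parse for new commits:
git --git-dir=/tmp/demo-mirror.git fetch -q --all --prune
git --git-dir=/tmp/demo-mirror.git log --format=%s -1
```

The `--mirror` clone keeps every ref in sync with upstream, which is
exactly the read-only behaviour A.2 asks for.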
I would recommend that steps A.1 - A.4 be made to work in a staging
environment, with a separate GitHub repo and a separate copy of the Trac
database. That way, you can test things out without derailing Twisted
developers. When you are confident that this workflow works, the
transition plan would be something like the following.
B.1 Send an e-mail to the mailing list and pick one day for the
    maintenance window. This will warn folks when they should take a
    holiday from Twisted development.
B.2 When maintenance is about to begin, send a [HEADSUP] mail saying that
    the repo will be unavailable.
B.3 Create a Subversion pre-commit hook that disables all commits to the
    Subversion repository.
B.4 Set up steps A.1 - A.4.
B.5 Verify that B.4 works. Have someone (Glyph?) do a commit and make sure
    that Trac picks it up.
B.6 Once the Twisted core team is satisfied that everything works, send an
    e-mail to the mailing list announcing that the maintenance window is
    over and GitHub is now where the code lives.
B.7 Update all wiki documentation so that references to getting the code
    point to GitHub.
B.8 Update all systems which used Subversion to use GitHub instead.
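For B.3, a minimal sketch of what the hook could look like. The wording
and file placement are assumptions; Subversion only cares that the hook
exits non-zero, which rejects the commit and relays stderr to the
committer:

```shell
# Write a pre-commit hook that rejects everything with a pointer to GitHub.
cat > pre-commit <<'EOF'
#!/bin/sh
echo "Commits are disabled: Twisted has moved to https://github.com/twisted/twisted" >&2
exit 1
EOF
chmod +x pre-commit

# Subversion invokes the hook with repository and transaction arguments;
# simulate that invocation and show the rejection:
./pre-commit repo txn 2>&1; echo "exit code: $?"
```

On the real server this file would go into the repository's hooks/
directory.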
# Problem Statement
Thanks for your feedback on my HTTP/2 questions. I’ve started work implementing a spike of an HTTP/2 protocol for twisted.web. I’m aiming to have something that works in at least some cases by the end of the day.
As part of my dive into twisted.web, I noticed something that surprised me: it seems to have no support for ‘streaming’ request bodies. By this I mean that the Request.requestReceived() method is not actually called until the complete request body has been received. This is a somewhat unexpected limitation for Twisted: why should I have to wait until the entire body has been uploaded before I can start doing things with it?
This problem is thrown into sharp relief by HTTP/2, which essentially always chunks the body, even if a content-length is provided. This means that it is now very easy to receive data in delimited chunks, chunks to which an application may want to attach semantic meaning. However, the request is unable to access the data in this way. It also makes it impossible to use an HTTP/2 request/response pair as a long-running communication channel, as we cannot safely call requestReceived until the response is terminated (which also terminates the HTTP/2 stream).
Adi pointed me at a related issue, #6928, which itself points at what appears to be an issue tracking exactly this request. That issue is issue #288, which is 12 years old(!). This has clearly been a pain point for quite some time.
Issue #6928 has glyph suggesting that we come to the mailing list to discuss this, but the last time it was raised no responses were received. I believe that with HTTP/2 on the horizon, this issue is more acute than it was before, and needs solving if Twisted is going to continue to remain relevant for the web. It should also allow people to build more performant web applications, as they should be able to handle how the data queues up in their apps.
This does not immediately block my HTTP/2 work, so we can take some time and get this right.
# Proposed Solution
To help us move forward, I’m providing a proposal for how I’d solve this problem. This is not necessarily going to be the final approach, but is instead a straw-man we can use to form the basis of a discussion about what the correct fix should be.
My proposal is to deprecate the current Request/Resource model. It currently functions and should continue to function, but as of this point we should consider it a bad way to do things, and we should push people to move to a fully asynchronous model.
We should then move to an API that is much more like the one used by Go: specifically, by default all requests/responses are streamed. Request objects (and, logically, any other object that handles requests/responses, such as Resource) should be extended to have a chunkReceived method that can be overridden by users. If a user chooses not to override that method, the default implementation would continue to do what is done now (save to a buffer). Once the request/response is complete (marked by receipt of a zero-length chunk, or a frame with END_STREAM set, or when the remaining content-length is 0), request/responseComplete would be called. Users that did not override chunkReceived can then safely access the content buffer; other users can do whatever they see fit. We’d also update requestReceived to ensure that it’s called when all the *headers* are received, rather than waiting for the body.
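A rough sketch of that default-buffering behaviour, as a straw-man only:
every name here (StreamingRequest, chunkReceived, requestComplete) is
hypothetical, not an existing Twisted interface.

```python
class StreamingRequest(object):
    """Default behaviour: buffer chunks, exactly as today."""

    def __init__(self):
        self._buffer = []
        self.content = None

    def chunkReceived(self, chunk):
        # Users override this to consume body data incrementally.
        self._buffer.append(chunk)

    def requestComplete(self):
        # Called on a zero-length chunk, an END_STREAM frame, or when the
        # remaining content-length reaches 0.
        self.content = b"".join(self._buffer)


class CountingRequest(StreamingRequest):
    """A user override: count bytes instead of buffering them."""

    def __init__(self):
        StreamingRequest.__init__(self)
        self.received = 0

    def chunkReceived(self, chunk):
        self.received += len(chunk)
```

Users who never override chunkReceived keep today's behaviour of a fully
buffered `content`; users who do override it never pay for the buffer.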
A similar approach should be taken with sending data: we should assume that users want to chunk it if they do not provide a content-length. An extreme position (and one I take) is that this should be sufficiently easy that most users actually *accidentally* end up chunking their data: that is, we do not provide special helpers to set content-length, instead just checking whether that’s a header users actually send, and if they don’t, we chunk the data.
This logic would make it much easier to work with HTTP/2 *and* with WebSockets, requiring substantially less special-case code to handle the WebSocket upgrade (when the headers are complete, we can spot the upgrade easily).
What do people think of this approach?
I'm trying to troubleshoot network connectivity issues we have in one of
our offices, and I would like to monitor some metrics which seem to be
relevant for us, especially when trying to open TCP connections towards
external hosts.
In particular, I'm looking for a way to get the following information
(let's say I want to monitor the connectivity towards swisscom.com,
port 80, using TCP):
* how long does it take to resolve the domain name to (at least) one of
its IP addresses
- against a specified name server, or using the system-configured
resolver
- how many tries did it require
- how many tries did it require
* if there were several tries, the timing of each ones
* how long does it take to get the first bytes of the endpoint
- how long does it take to complete the TCP connection handshake
- the status of the packets exchanged (how many retries, how many
packets lost, etc.)
It's not exactly the same, but curl's --write-out option allows one to get
this kind of values (especially time_namelookup, time_connect,
time_pretransfer, time_starttransfer and time_total); however, I would
like more flexibility and more in-depth information (like the state of
the packets exchanged, etc.)
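For reference, the first two of those curl timings can be reproduced at
the application level with nothing but the stdlib. A sketch using blocking
sockets (not Twisted); the same two instants can be captured around a
resolver call and an endpoint connect in any framework:

```python
import socket
import time


def connect_timings(host, port):
    """Return (time_namelookup, time_connect) in seconds, curl-style."""
    t0 = time.time()
    infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
                               socket.SOCK_STREAM)
    t_namelookup = time.time() - t0

    # Connect to the first resolved address and time the TCP handshake.
    family, socktype, proto, _, addr = infos[0]
    s = socket.socket(family, socktype, proto)
    t1 = time.time()
    s.connect(addr)
    t_connect = time.time() - t1
    s.close()
    return t_namelookup, t_connect
```

Note that this only observes the handshake from the application side; the
per-packet details (retransmissions, losses) are not visible through the
socket API at all, so no user-space framework can report them without
pcap-level tooling.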
How far can I get with this kind of thing in Twisted? I know I can
somewhat easily get the timings of the name resolution, the TCP connection
handshake and the time to first byte(s), but what about the packets? I
haven't looked at the code of Twisted Names yet, but if it does the DNS
requests by itself, I may be able to plug in somewhere and add my request
counter and the associated timers; I'm just not sure whether the
underlying details of the TCP protocol are exposed to upper layers such
as Twisted.
Thanks for the help!
I'm not really sure why one would use the line:

    self.remaining = 1024 * 10

This suggests to me that one knew what kind of page size was
expected... but what if you don't? Wouldn't it make more sense to use
something like this:

    def dataReceived(self, bytes):
        self.page_content = self.page_content + bytes

This would sum up all the data until connectionLost is called.
And then in connectionLost():

    def connectionLost(self, reason):
        print 'Finished receiving body:', reason.getErrorMessage()

...and print the accumulated page_content there?
Also, I don't get why one would use the `finished` deferred in cbRequest.
Where is this `finished` returned to? Isn't the result from cbRequest
thrown away, given that it's called via:

    d.addCallback(cbRequest)

I would expect the line to read:

    new_deferred = d.addCallback(cbRequest)
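My current understanding of the chaining semantics, as a toy model; this
is NOT Twisted's real Deferred, just an illustration of why nothing gets
thrown away:

```python
class ToyDeferred(object):
    """A drastically simplified stand-in for a Deferred."""

    def __init__(self):
        self.callbacks = []

    def addCallback(self, fn):
        # The deferred is mutated in place and the SAME object is
        # returned, so rebinding to new_deferred is unnecessary.
        self.callbacks.append(fn)
        return self

    def callback(self, result):
        # Each callback's return value becomes the next one's argument,
        # so cbRequest's return value is consumed by the chain.
        for fn in self.callbacks:
            result = fn(result)
        return result
```

Under this model, `d.addCallback(cbRequest)` and
`new_deferred = d.addCallback(cbRequest)` refer to the same object.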
thx for your answers
We have a patch up for review which works towards allowing the current
SMTP/ESMTP server implementation to be subclassed in order to implement
extensions.
Here is the ticket:
If you care about SMTP/ESMTP, please send your feedback here or on the
ticket. I am not an SMTP/ESMTP expert and I need help reviewing this
ticket.
My main questions regarding ESMTP extensions are:
1. Do we really want to implement them using subclassing, or using
composition/interfaces/components... or something else?
2. From https://en.wikipedia.org/wiki/Extended_SMTP#Extensions:
"Each service extension is defined in an approved format in subsequent
RFCs and registered with the IANA."
Are there ESMTP servers in the wild which provide extensions that are not
defined by RFCs?
Do we want to encourage multiple implementations of the same extension, or
encourage people to collaborate towards a single implementation which is
hosted by Twisted?
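To make question 1 concrete, here is a hypothetical sketch of the two
styles; neither reflects the actual twisted.mail.smtp API, and the class
and attribute names are invented for illustration:

```python
# (a) Subclassing: each extension overrides a hook on the server class.
class ESMTPServer(object):
    def extensions(self):
        # Advertised in the EHLO response.
        return ["SIZE 1000000"]


class PipeliningServer(ESMTPServer):
    def extensions(self):
        return ESMTPServer.extensions(self) + ["PIPELINING"]


# (b) Composition: extensions are pluggable objects the server queries,
# so they can be mixed and matched without a subclass per combination.
class SizeExtension(object):
    keyword = "SIZE 1000000"


class PipeliningExtension(object):
    keyword = "PIPELINING"


class ComposedServer(object):
    def __init__(self, extensions):
        self._extensions = list(extensions)

    def extensions(self):
        return [e.keyword for e in self._extensions]
```

The composition style is what interfaces/components would formalise:
third parties could ship an extension object without touching, or
subclassing, the server class at all.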
On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 15.5!
The sixth (!!) release in 2015 has quite a few goodies in it -- incrementalism is the name of the game here, and everything is just a little better than it was before. Some of the highlights of this release are:
- Python 3.5 support on POSIX was added, and Python 2.6 support was dropped. We also only support x64 Python on Windows 7 now.
- More than nine additional modules have been ported to Python 3, ranging from Twisted Web's Agent and downloadPage, twisted.python.logfile, and many others, as well as...
- twistd is ported to Python 3, and its first plugin, web, is ported.
- twisted.python.url, a new URL/IRI abstraction, has been introduced to answer the question "just what IS a URL" in Twisted, once and for all.
- NPN and ALPN support has been added to Twisted's TLS implementation, paving the way for HTTP/2.
- Conch now supports the DH group14-sha1 and group-exchange-sha256 key exchange algorithms, as well as the hmac-sha2-256 and hmac-sha2-512 MAC algorithms. Conch also works better with newer OpenSSH implementations.
- Twisted's IRC support now has a sendCommand() method, which enables sending messages with tags.
- 55+ closed tickets overall.
For more information, check the NEWS file (link provided below).
You can find the downloads at <https://pypi.python.org/pypi/Twisted> (or alternatively <http://twistedmatrix.com/trac/wiki/Downloads>) .
The NEWS file is also available at <https://github.com/twisted/twisted/blob/twisted-15.5.0/NEWS>.
Also worth noting are the two Twisted Software Foundation fellows -- Adi Roiban and myself -- who have been able to dedicate time to reviewing tickets and generally pushing things along in the process. We're funded by the Twisted Software Foundation, which is, in turn, funded by donors and sponsors -- potentially like you! If you would like to know how you can assist in the continued funding of the Fellowship program, see our website: https://twistedmatrix.com/trac/wiki/TwistedSoftwareFoundation#BenefitsofSpo…
Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!
Amber "Hawkie" Brown
Twisted Release Manager, Twisted Fellow