[Twisted-Python] Scaling Twisted Web on multicore
Hi,
I have done some testing of scaling Twisted Web on multicore and wanted to share:
https://github.com/oberstet/scratchbox/tree/master/python/twisted/sharedsock...
For those who are running short on time or want a teaser first, here are the results, including a comparison with Nginx:
https://github.com/oberstet/scratchbox/raw/master/python/twisted/sharedsocke...
Personally, I think the results are quite encouraging. I'd love to hear any feedback and comments!
Cheers,
Tobias
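For concreteness, the general pattern being benchmarked is one worker process per core, all accepting from a single shared listening socket. Below is only a minimal sketch of that pattern, assuming a fork()-based setup with reactor.adoptStreamPort; the repo's actual code may differ, and port 8080 plus the static.File resource are arbitrary illustration choices.

import multiprocessing
import os
import socket

def main():
    # Bind the listening socket once in the parent, in non-blocking mode.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setblocking(False)
    sock.bind(("0.0.0.0", 8080))
    sock.listen(128)

    # Fork one worker per remaining core; children inherit the bound socket.
    for _ in range(multiprocessing.cpu_count() - 1):
        if os.fork() == 0:
            break

    # Import (and thus install) the reactor only after forking, so every
    # process ends up with its own independent reactor.
    from twisted.internet import reactor
    from twisted.web import server, static

    site = server.Site(static.File("."))
    reactor.adoptStreamPort(sock.fileno(), socket.AF_INET, site)
    reactor.run()

if __name__ == "__main__":
    main()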
On 10/30/2013 07:28 PM, Tobias Oberstein wrote:
Hi,
I have done some testing of scaling Twisted Web on multicore and wanted to share:
https://github.com/oberstet/scratchbox/tree/master/python/twisted/sharedsock...
This was PyPy, yes?
I have done some testing of scaling Twisted Web on multicore and wanted to share:
https://github.com/oberstet/scratchbox/tree/master/python/twisted/sharedsocket
This was PyPy, yes?
Yes. All testing details are in the README.md ..
On Thu, Oct 31, 2013 at 12:28 AM, Tobias Oberstein <tobias.oberstein@tavendo.de> wrote:
Hi,
I have done some testing of scaling Twisted Web on multicore and wanted to share:
https://github.com/oberstet/scratchbox/tree/master/python/twisted/sharedsock...
For those who are running short on time or want a teaser first, here are the results, including a comparison with Nginx:
https://github.com/oberstet/scratchbox/raw/master/python/twisted/sharedsocke...
Personally, I think the results are quite encouraging. I'd love to hear any feedback and comments!
Looks nice. It's something that has been around in poor form for a long time in several places (I'm thinking about http://twistedmatrix.com/trac/browser/sandbox/exarkun/copyover/ which inspired http://twistedmatrix.com/trac/browser/sandbox/therve/prefork/).
It would be good to have some documented examples. It would be even better to have a proper Twisted API for that.
Note that for testing static files, sendfile may be an interesting boost: https://tm.tl/585
Also, on BSDs (not sure about OS X) and recent Linux, you can use SO_REUSEPORT, which would make for even simpler code.
-- Thomas
Looks nice. It's something that has been around in poor form for a long time in several places (I'm thinking about http://twistedmatrix.com/trac/browser/sandbox/exarkun/copyover/ which inspired http://twistedmatrix.com/trac/browser/sandbox/therve/prefork/).
Wow. 10 years ago. ;) I had already given credit to Jean-Paul in the README.md, but referring to a relatively recent answer by him on Stack Overflow. I didn't know it had been around for so long, and I was also not aware of your tests.
It would be good to have some documented examples. It would be even better to have a proper Twisted API for that.
Note that for testing static files, sendfile may be an interesting boost: https://tm.tl/585
The thing is: sendfile() doesn't support TLS, as far as I can see.
Further: the tests that get "close" to Nginx performance (up to 50%-70% of it) use a "Fixed Resource", which is merely a Resource returning a string. When I take static.File, the performance gap widens considerably (a factor of 4-5 vs. Nginx).
Which brought me to the following idea (not sure if it would work): why not completely cache a Twisted Web HTTP _response_ (including headers and all) in RAM upon the first request to that resource? If the underlying source is a file, maybe use FS notify to invalidate the cache entry. That would still allow doing TLS and pushing octets from RAM. A static.CachingFile resource. Or a general CachingWrapper factory, wrapping any resource hierarchy.
Of course that breaks for real dynamic sites, but it is still useful: e.g. we use Frozen-Flask to freeze Flask and deploy to S3, and normal Flask for easy and standard development. I can still use all the routing goodies of Flask and end up with a set of static files.
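A rough sketch of what such a static.CachingFile could look like. This is not an existing Twisted API; the class name, the mtime-based invalidation (standing in for FS notify), and the hard-coded content type are all assumptions for illustration.

import os

from twisted.internet import reactor
from twisted.web import resource, server

class CachedFile(resource.Resource):
    """Serve a single file from RAM, reloading it when its mtime changes."""
    isLeaf = True

    def __init__(self, path, contentType=b"text/html"):
        resource.Resource.__init__(self)
        self._path = path
        self._contentType = contentType
        self._body = None
        self._mtime = None

    def render_GET(self, request):
        mtime = os.path.getmtime(self._path)
        if self._body is None or mtime != self._mtime:
            # First request (or the file changed on disk): load it into RAM.
            with open(self._path, "rb") as f:
                self._body = f.read()
            self._mtime = mtime
        request.setHeader(b"content-type", self._contentType)
        return self._body

reactor.listenTCP(8080, server.Site(CachedFile("index.html")))
reactor.run()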
Also, on BSDs (not sure about OS X) and recent Linux, you can use SO_REUSEPORT, which would make for even simpler code.
This is very interesting. Thanks for pointing me there; I didn't know about that. It is useful and I will try it, since as https://lwn.net/Articles/542629/ notes: "when multiple threads are waiting in the accept() call, wake-ups are not fair, so that, under high load, incoming connections may be distributed across threads in a very unbalanced fashion." I have seen this behavior also: accept from multiple processes is skewed. I am fine with supporting FreeBSD and Linux only (for now).
So I will further explore:
1) SO_REUSEPORT
2) CachingWrapper
Thanks for the feedback and hints!
/Tobias
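A sketch of the SO_REUSEPORT variant, again purely illustrative: each worker process runs the same code, binds its own socket with SO_REUSEPORT, and the kernel spreads accepted connections across the workers. This requires Linux 3.9+ or a BSD; socket.SO_REUSEPORT is not defined on all platforms, and port 8080 plus static.File are arbitrary choices here.

import socket

from twisted.internet import reactor
from twisted.web import server, static

def listen_reuseport(port, factory):
    # Each worker creates and binds its own socket with SO_REUSEPORT set,
    # so the kernel load-balances incoming connections across the workers.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)  # Linux 3.9+ / BSD
    s.setblocking(False)
    s.bind(("0.0.0.0", port))
    s.listen(128)
    # Hand the already-bound, listening socket over to the reactor.
    return reactor.adoptStreamPort(s.fileno(), socket.AF_INET, factory)

listen_reuseport(8080, server.Site(static.File(".")))
reactor.run()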
participants (3)
- Itamar Turner-Trauring
- Thomas Hervé
- Tobias Oberstein