
I'm doing some performance testing of one of our Twisted applications, and what I came across was a surprising amount of time being spent in twisted.python.failure.Failure.__getstate__. We're doing a fair amount with exceptions, and under cProfile I found this:

        29      0  0.4081  0.2940  twisted.python.failure:416(__getstate__)
    +34492      0  0.1141  0.0235  +twisted.python.reflect:557(safe_repr)

So under profiling, we spent 408ms in __getstate__. I then changed Failure.cleanFailure to just 'return', and saw a real-world improvement from ~480ms down to about 240ms. I then restored cleanFailure but changed Failure.__init__ to always set 'tb=None' before it does anything, and the time went down to 180ms. (When I dug into it, 150ms of that was time spent waiting for an XMLRPC response.)

I'm wondering if there is a tasteful way to disable traceback processing on a production machine. I realize you never know when you are going to need the state in order to figure out what went wrong, but it is costing us 2-5x in runtime speed. (The other answer is to write all of our code to avoid Deferred.addErrback...)

In our case, many exceptions are used for 'signaling' rather than indicating a real failure. One option would be a whitelist/blacklist indicating whether a given exception class is worthy of a traceback.

Thoughts?

John =:->
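The gap between capturing and dropping the traceback can be seen in isolation with a stdlib-only micro-benchmark. This is a rough sketch, not Twisted's actual Failure code path; it just mimics "catch and stringify the traceback" vs. "catch and discard", and the numbers will differ from the cProfile output above:

```python
import timeit
import traceback

def raise_and_capture():
    # Mimics Failure keeping state: catch, then walk and format the traceback.
    try:
        raise ValueError("signal")
    except ValueError:
        traceback.format_exc()

def raise_and_drop():
    # Mimics the tb=None experiment: catch and discard immediately.
    try:
        raise ValueError("signal")
    except ValueError:
        pass

n = 10_000
t_capture = timeit.timeit(raise_and_capture, number=n)
t_drop = timeit.timeit(raise_and_drop, number=n)
print(f"capture: {t_capture:.3f}s  drop: {t_drop:.3f}s")
```

On my understanding, the format_exc path should dominate, which is consistent with the 2-5x overhead reported above.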
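To make the whitelist/blacklist idea concrete, here is a minimal plain-Python sketch. The `capture_failure` helper and the `SIGNALING_EXCEPTIONS` tuple are hypothetical names invented for illustration, not Twisted API; the point is just that exception classes used for control flow skip the expensive traceback work:

```python
import traceback

# Assumed example whitelist: classes whose instances are pure
# control-flow "signals" and don't deserve a traceback.
SIGNALING_EXCEPTIONS = (StopIteration, GeneratorExit)

def capture_failure(exc):
    """Return (exc, formatted_traceback_or_None) per the whitelist policy.

    Must be called from inside the `except` block so that
    traceback.format_exc() can see the active exception.
    """
    if isinstance(exc, SIGNALING_EXCEPTIONS):
        return exc, None  # cheap path: no traceback walk at all
    return exc, traceback.format_exc()

try:
    raise ValueError("real failure")
except ValueError as e:
    exc, tb = capture_failure(e)  # tb holds the formatted traceback
```

Something equivalent inside Failure.__init__ (consulting a class-level policy before touching the traceback) would let signaling exceptions stay fast while real failures keep their state.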