On Oct 14, 2009, at 2:07 AM, Glyph Lefkowitz wrote:

On Tue, Oct 13, 2009 at 8:02 PM, Steve Steiner (listsin) <listsin@integrateddevcorp.com> wrote:
I've been hunting down a problem that I've finally found the cause of
and I'd like to know what's the Twisted way to catch this "error
within the code handling the error"  type of error.

The right way to catch this is to write tests for your code and run them before deploying it to production :). 

Yes, we're working on it but it's a large code base and we started with exactly zero tests.  While that leaves infinite room for improvement, it's a little overwhelming.  Oh well, at least we know where to concentrate first ;-).

Trial will helpfully fail tests which cause exceptions to be logged, so you don't need to write any special extra test to make sure that nothing is blowing up; just test your error-handling case, and if it blows up you will see it.

We've just been using nose; is that something Trial handles specially for Twisted?

Basically, in one branch of the errBack, there was a typo.  A simple
typo that caused an unhandled NameError exception, but only once in a
few thousand runs.

If it's a NameError, you also could have used Pyflakes to catch it :).

That's in our list of 'things to put in the commit pre-hook' as well.  I'm not sure pyflakes would have caught this one, though, because it's a legitimate instance variable; it's just not set to something usable before this particular error condition comes up.

The exception got caught and "displayed" by Twisted, but it wasn't
going anyplace anyone was looking (buried under zillions of lines of
logging) and the app continues on as if nothing went wrong.

The real lesson here is that you should be paying attention to logged tracebacks.

There are many ways to do this.  Many operations teams running Twisted servers will trawl the logs with regular expressions.  Not my preferred way of doing it, but I'm not really an ops person :).
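To make that log-trawling idea concrete, here's a trivial sketch.  The log format below is made up for illustration; real Twisted log output varies, so you'd adjust the pattern to whatever your logs actually look like:

```python
import re

# Hypothetical log excerpt; real Twisted logs will differ in format.
log_text = """\
2009-10-14 02:07:01 [app] INFO starting up
2009-10-14 02:07:02 [app] Unhandled Error
Traceback (most recent call last):
  ...
NameError: name 'frobnicate' is not defined
2009-10-14 02:07:03 [app] INFO carrying on regardless
"""

# Count tracebacks by matching the standard Python traceback header
# at the start of a line.
tracebacks = re.findall(r"^Traceback \(most recent call last\):", log_text, re.M)
print(len(tracebacks))  # → 1
```

An ops setup would run something like this over the rotated log files and alert when the count is nonzero.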

I'm not much on the ops end either but I guess I'm learning...

If you want to handle logged exceptions specially, for example to put them in a separate file, or to e-mail them to somebody, consider writing a log observer that checks for the isError key and does something special there.  You can find out more about writing log observers here: <http://twistedmatrix.com/projects/core/documentation/howto/logging.html>.

This is an area of Twisted I haven't explored at all since the code's all using the standard Python logging.   

That's the thing about Twisted; sometimes it's hard to know whether the stuff that has been built into standard Python since Twisted 'rolled their own' is a superset, a subset, or a completely different beast.  Logging is a good case in point.  Since we're using Python's logging everywhere, I wasn't even sure whether there would be an advantage to learning Twisted's similar system.  Twisted's trial is another example; we've just been using nose.  Seems like there's always some little extra that makes the Twisted stuff worth knowing.
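For a code base that's on standard Python logging, the rough stdlib analog of a Twisted observer checking the isError key is a Handler restricted to ERROR and above.  A minimal sketch (the ErrorCollector class and names here are hypothetical, not part of any library):

```python
import logging

class ErrorCollector(logging.Handler):
    """Hypothetical helper: collects ERROR-and-above records so
    tracebacks don't get buried under zillions of lines of logging."""
    def __init__(self):
        super().__init__(level=logging.ERROR)
        self.records = []

    def emit(self, record):
        # format() appends the traceback when exc_info is set.
        self.records.append(self.format(record))

collector = ErrorCollector()
collector.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
logging.getLogger().addHandler(collector)

log = logging.getLogger("app")
log.info("routine chatter")           # below ERROR, so the collector ignores it
try:
    frobnicate  # the kind of typo discussed above: an unbound name
except NameError:
    log.exception("errback blew up")  # captured, traceback and all

print(len(collector.records))  # → 1
```

Instead of keeping records in a list, emit() could just as well write to a separate file or send mail, which is the same idea as the Twisted log-observer approach described above.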

What is the best way to handle programming errors like this in
deferreds so they don't slip by, unnoticed?

I'm answering a question you didn't ask, about logged errors, because I think it's the one you meant to ask.  The answer to the question you are actually asking here, i.e. "how do I handle errors in an errback", is quite simple: add another errback.  This is sort of like asking how to handle exceptions in an 'except:' block in Python.  For example, if you want to catch errors from this code:

try:
  foo()
except:
  oops()

you could modify it to look like this:

try:
  foo()
except:
  try:
    oops()
  except:
    handleOopsOops()

which is what adding another errback is like.  But, as I said: I don't think this is what you want, since it will only let you handle unhandled errors in Deferreds (not unhandled errors in, for example, protocols) and you will have to attach your error-handling callbacks everywhere (not to mention trying to guess a sane return value for the error-handler-error-handler).
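To make the analogy concrete, here's a toy stand-in for Deferred (hypothetical and heavily simplified; Twisted's real Deferred does much more) showing how a second errback catches a failure raised inside the first:

```python
class MiniDeferred:
    """Toy stand-in for a Deferred, for illustration only.
    Failures are modeled as plain Exception instances."""
    def __init__(self):
        self.stages = []  # (callback, errback) pairs

    def addCallbacks(self, callback, errback):
        self.stages.append((callback, errback))
        return self

    def addErrback(self, errback):
        # Pass successes through untouched.
        return self.addCallbacks(lambda result: result, errback)

    def fire(self, result):
        for callback, errback in self.stages:
            try:
                if isinstance(result, Exception):
                    result = errback(result)
                else:
                    result = callback(result)
            except Exception as failure:
                # An error inside a handler flows on to the next errback.
                result = failure
        return result

def oops(failure):
    # The buggy errback from the story: a typo raises NameError.
    return self.nonexistent_attribute  # NameError: 'self' is not defined

def handleOopsOops(failure):
    return "second errback caught: %s" % type(failure).__name__

d = MiniDeferred()
d.addErrback(oops)
d.addErrback(handleOopsOops)
print(d.fire(ValueError("the original error")))
# → second errback caught: NameError
```

The point is exactly the nested try/except above: the NameError raised inside oops() doesn't vanish, it becomes the failure seen by the next errback in the chain.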

Right, I started thinking down that infinitely nested slippery slope and figured there must be a better way.  I think the logging question you answered that I didn't ask was the one I meant ;-).

Thanks again for another enlightening answer.

S