Re: [Twisted-Python] Can a deferred catch a segmentation fault?
Setting up such a signal handler wouldn't be trivial (as in: it would be unsafe). The "normal" thing in such a situation is to longjmp (in C) or to let the process die (which is the safe thing to do).

Now, presuming that you cannot fix the segfaults (which, as sad as it sounds, is quite common when using commercial third-party DLLs; practice shows that even a huge customer often doesn't get a quick turnaround on bugs, especially if it's a hard-to-reproduce random crash. Driver DLLs for hardware costing $10K upwards come to mind, *shudder*), your only safe way is to isolate the driver usage in a separate process. (If you are unlucky, not even that is enough, because some hardware gets stuck in a state that needs manual cold resetting, but let's be optimistic here.) With a separate process, you can restart it should it die. A sketch of this pattern follows the quoted message below.

Andreas

-- Original message --
Subject: Re: [Twisted-Python] Can a deferred catch a segmentation fault?
From: Jean-Paul Calderone <exarkun@divmod.com>
Date: 20.09.2007 16:34

On Thu, 20 Sep 2007 09:14:11 -0700, "Gerald John M. Manipon" <geraldjohn.m.manipon@jpl.nasa.gov> wrote:
>> Hi,
>>
>> I'm using deferToThread to run some Python code which may or may not be safe. In particular, the code may load some modules that could segfault. In that case deferToThread won't work and the Python process will bomb, so I was wondering what Deferred strategy I could use to handle this situation:
> Deferreds don't do anything which will directly help in this situation. If your process receives a signal for which the action is to exit, then it will exit. If you want, you can change the action associated with certain signals (even SIGSEGV). What action you *do* cause to happen in such a case is independent of Deferreds.
>
> You could, conceivably, turn a segfault into an exception and then errback a Deferred. However, I wouldn't actually recommend this. I would recommend not using software which segfaults, fixing segfault bugs you find in software you are using, and having comprehensive unit tests so that if something is going to segfault, it does so during the course of development, not on whatever server or client machines your software is deployed to.
>
> Jean-Paul
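For the record, the "conceivable, but not recommended" option Jean-Paul mentions would look roughly like this in plain Python. This is a sketch of the idea only, not something to ship:

    import signal

    def segv_to_exception(signum, frame):
        # Turn the fault into a Python exception. Even if this handler
        # runs, the C-level state that caused the segfault is still
        # corrupt, so a "recovered" process cannot be trusted (hence
        # the advice above).
        raise MemoryError("SIGSEGV caught; process state is suspect")

    # CPython does let you install a handler for SIGSEGV, but a fault
    # inside a C extension may never return control to the interpreter
    # at all.
    signal.signal(signal.SIGSEGV, segv_to_exception)

And a minimal sketch of the process-isolation pattern Andreas suggests above: supervise the crash-prone code in a child process and respawn it whenever it dies. The wrapper script name here is made up for illustration; any stdin/stdout protocol between parent and child would do:

    from twisted.internet import reactor, protocol

    class DriverSupervisor(protocol.ProcessProtocol):
        """Supervises a child process wrapping the crash-prone driver."""

        def outReceived(self, data):
            # Results from the driver arrive on the child's stdout.
            print "driver said: %r" % (data,)

        def processEnded(self, reason):
            # Segfault or clean exit alike: start a fresh child. (A real
            # supervisor would add a backoff to avoid a respawn loop.)
            print "driver died (%s); restarting" % reason.getErrorMessage()
            spawn_driver()

    def spawn_driver():
        # driver_wrapper.py is hypothetical: a script that loads the
        # flaky DLL and speaks a simple line protocol on stdin/stdout.
        reactor.spawnProcess(DriverSupervisor(), "python",
                             ["python", "driver_wrapper.py"])

    spawn_driver()
    reactor.run()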
Here I go again touting my AsynQueue package. Sorry, but it just seems to be a very appropriate solution to many of the problems being raised recently.

I've recently added a "processworker" module that does just what it sounds like. You can now queue up jobs to be run on a separate Python interpreter. If the interpreter crashes due to a segfault or anything else, you just construct a new worker instance and attach it to the queue, and the jobs continue merrily along.

In addition to deferred-based priority queuing, the queue object has powerful capabilities for hiring and firing workers, letting workers resign when they can't perform their duties any more, assigning tasks to appropriate workers, and re-assigning tasks from terminated workers.

See http://tinyurl.com/349k2o (http://foss.eepatents.com/AsynQueue/browser/projects/AsynQueue/trunk/asynque...)

By the way, AsynQueue (without the new processworker stuff) is now available in Debian testing, thanks to the efforts of Eric Evans. Just apt-get install python-asynqueue.

Best regards, Ed
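To make the workflow Ed describes concrete, here is a guess at what the hire-a-replacement pattern might look like. The class and method names below are illustrative only, not AsynQueue's documented API; see the linked source for the real thing:

    # Hypothetical sketch: these names are guesses for illustration,
    # not AsynQueue's documented API.
    from asynqueue import TaskQueue, ProcessWorker

    queue = TaskQueue()
    queue.attachWorker(ProcessWorker())  # jobs run in a separate interpreter

    # As Phil notes below, jobs are submitted as Python source strings.
    d = queue.call("import flakydriver; flakydriver.run()")

    def rehire(failure):
        # The worker's interpreter died (segfault or otherwise); hire a
        # replacement and the remaining queued jobs continue merrily along.
        queue.attachWorker(ProcessWorker())
        return failure

    d.addErrback(rehire)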
On Thu, 2007-09-20 at 11:26 -0700, Ed Suominen wrote:
> Here I go again touting my AsynQueue package. Sorry, but it just seems to be a very appropriate solution to many of the problems being raised recently.
Well, to be fair, it's an excellent bit of code.
> I've recently added a "processworker" module that does just what it sounds like. You can now queue up jobs to be run on a separate Python interpreter. If the interpreter crashes due to a segfault or anything else, you just construct a new worker instance and attach it to the queue, and the jobs continue merrily along.
Interesting. I see that for the process worker jobs you pass in a Python string. One of the conceptual difficulties I've always had with creating a farm of subprocesses is ensuring the module import status would be valid, so that you could pass functions and class instances across the pickle boundary (or whatever). Did you consider this approach? (A sketch of the pickle issue appears after the quoted text below.)
> In addition to deferred-based priority queuing, the queue object has powerful capabilities for hiring and firing workers, letting workers resign when they can't perform their duties any more, assigning tasks to appropriate workers, and re-assigning tasks from terminated workers.
>
> See http://tinyurl.com/349k2o (http://foss.eepatents.com/AsynQueue/browser/projects/AsynQueue/trunk/asynque...)
>
> By the way, AsynQueue (without the new processworker stuff) is now available in Debian testing, thanks to the efforts of Eric Evans. Just apt-get install python-asynqueue.
>
> Best regards, Ed
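The import-state problem Phil raises is generic pickle behaviour, nothing AsynQueue-specific: pickling a function serializes only a reference to its module and name, so the receiving interpreter must be able to import the very same module, whereas a source string carries everything with it. A minimal illustration (mymodule is hypothetical):

    import pickle

    import mymodule  # hypothetical; the child must be able to import it too

    # The pickle holds only a reference ("mymodule", "work"), not the code.
    payload = pickle.dumps(mymodule.work)

    # Unpickling in the subprocess re-runs "import mymodule" there; if the
    # child's import state differs (module missing, wrong version), this
    # fails on the far side. Passing source strings sidesteps the problem.
    func = pickle.loads(payload)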
On Fri, 2007-09-21 at 10:24 +0100, Phil Mayers wrote:

> [snip]

--
George Pauly
Ring Development
www.ringdevelopment.com
participants (4)
- Andreas Kostyrka
- Ed Suominen
- George Pauly
- Phil Mayers