Re: [Twisted-Python] multiple workers

Why is it not supported? I want behaviour like nginx (http://nginx.org/), and I don't understand why I can't implement it with Twisted. It seems easy: every process has its own set of sockets, and the processes don't share those sockets between each other. The "on connect" event happens only once, and which process handles it depends on the operating system's event mechanism (select, epoll, kqueue); in my case it behaves like round robin (FreeBSD 7.2-RELEASE-p8). Where is the unsupported behaviour here?
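For reference, the pre-fork model being described can be sketched in a few lines of plain stdlib Python (this is an illustration of the general technique, not Twisted code; the function names are made up for this sketch). The parent binds a listening socket, forks workers, and each worker calls accept() on the inherited descriptor; the kernel wakes one process per incoming connection.

```python
# Minimal pre-fork sketch (plain stdlib, not a Twisted API).
# Parent binds and listens, workers inherit the socket via fork()
# and compete in accept(); the kernel hands each connection to one worker.
import os
import socket


def serve(listener: socket.socket) -> None:
    # Each worker blocks in accept() on the shared listening socket.
    while True:
        conn, _addr = listener.accept()
        conn.sendall(b"handled by pid %d\n" % os.getpid())
        conn.close()


def prefork(n_workers: int = 4, port: int = 8000) -> None:
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen(128)
    for _ in range(n_workers):
        if os.fork() == 0:  # child inherits the listening descriptor
            serve(listener)
            os._exit(0)
    os.wait()  # parent lingers while workers run
```

Which worker wins accept() is up to the kernel's scheduling, which is why it can look round-robin-ish on one platform and not another.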

On 05:26 pm, ruslan.usifov@gmail.com wrote:
> Why it is not supported?
"Why" it is not supported is that no one has decided to implement and support it. If it's interesting behavior for you, then we would completely welcome you implementing it, and we'll even maintain the support for it once you've done that. :) If you were asking about what specific implementation details cause it not to work now (more of a "how" question, sort of), then the answer to that probably varies from reactor to reactor, but it is all about how things end up being shared across the multiple processes created by fork. I want behaviour like nginx http://nginx.org/, and
So, for example, epoll descriptors do survive fork(). However, kqueue descriptors don't. So one necessary change for the kqueue reactor to support this kind of behavior is to have the reactor somehow re-initialize itself after the fork.

Another problem is that certain resources are not simply duplicated by a fork(). A specific example is the one you brought up in your earlier post: a unix socket only has one entity corresponding to it in the filesystem. Twisted takes responsibility for cleaning these up, but after you fork(), there are two unix sockets and still only one filesystem entity. This confuses one of the processes, since it believes it needs to delete the file. Hardly rocket science to fix, but it's a specific case which needs to be handled. And I'm sure you'll come across quite a few more specific cases which need to be handled.

This might get us back to the "why" a little: actually ensuring that everything will work properly when arbitrary forks are added is a major challenge. I don't see any way to do it comprehensively, really. That would leave you with a long, long adventure of fixing one little issue at a time for months or even years to come. And each problem would only become evident after it bit you somehow. That's probably why we have a ticket for an explicit file descriptor passing API, rather than a ticket for supporting arbitrary fork() calls. The former is easier to test and be confident in than the latter.

Jean-Paul
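For context on the "explicit file descriptor passing" alternative mentioned above: rather than relying on fork() duplicating every resource correctly, a descriptor can be handed to an already-running process over a unix socket using the SCM_RIGHTS ancillary-data mechanism. A minimal stdlib sketch (using socket.send_fds/recv_fds, available since Python 3.9; the helper names here are made up, and this is not Twisted's API):

```python
# Sketch of explicit file descriptor passing between processes
# over a unix socket, via SCM_RIGHTS (socket.send_fds / recv_fds,
# Python 3.9+). Plain stdlib, not a Twisted API.
import socket


def pass_fd(sender: socket.socket, fd: int) -> None:
    # Ship one descriptor to the peer; the kernel duplicates it
    # into the receiving process. A one-byte payload is required.
    socket.send_fds(sender, [b"F"], [fd])


def receive_fd(receiver: socket.socket) -> int:
    # Receive the message and the duplicated descriptor.
    _msg, fds, _flags, _addr = socket.recv_fds(receiver, 16, 1)
    return fds[0]
```

The appeal over arbitrary fork() is exactly the one stated above: each transferred descriptor is an explicit, testable handoff, rather than an implicit copy of the whole process's state.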

participants (2)
- exarkun@twistedmatrix.com
- ruslan usifov