Shutting down a cross-platform multithreaded app
rosuav at gmail.com
Sat Sep 19 12:59:31 CEST 2015
On Sat, Sep 19, 2015 at 8:48 PM, James Harris <james.harris.1 at gmail.com> wrote:
> "Chris Angelico" <rosuav at gmail.com> wrote in message
> news:mailman.13.1442657702.21674.python-list at python.org...
>> On Sat, Sep 19, 2015 at 7:49 PM, James Harris <james.harris.1 at gmail.com>
>>> "Chris Angelico" <rosuav at gmail.com> wrote in message
>>> news:mailman.8.1442612439.21674.python-list at python.org...
>>>> If you're using select() to monitor the sockets, you don't actually
>>>> then have to _do_ anything with the shutdown socket. You could have a
>>>> single socket that sends the shutdown signal to all your workers.
>>> I don't understand how a single socket could send the signal to all the
>>> workers. I did consider some form of multicast but thought it too
>>> complicated (and possibly infeasible).
>> The way I'm describing it, the workers never actually read from the
>> socket. Once that socket becomes readable, they immediately shut down,
>> without making the socket no-longer-readable.
> Understood. Good idea. Initial thoughts on it: Would work for threads. Would
> save on the number of sockets required. Would not work for processes if the
> model was ever changed. Would make it easier for rogue packets to shut a
> worker down. Would not allow any way to distinguish between shutdown
> priorities (not something I have mentioned). Definitely feasible. I'll keep
> it in mind.
If you go to multiple processes, yeah, it probably wouldn't work *on
Windows*. On Unix, you can fork and have multiple processes with the
same socket. (It's very common to have, for instance, a subprocess's
stdin/stdout/stderr linked to pipes of some sort; the calling process
still retains control.)
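As a minimal Unix-only sketch of that point (the names here are illustrative, not from the thread): a child created with os.fork() inherits the parent's socket descriptors, so a single "shutdown" socket created before forking is visible to every worker process.

```python
import os
import select
import socket

# Unix-only sketch: a forked child inherits the parent's sockets,
# so one shutdown socket can be shared across processes.
parent_end, child_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child (worker): wait, without reading, for the shared socket
    # to become readable, then exit cleanly.
    parent_end.close()
    readable, _, _ = select.select([child_end], [], [], 5.0)
    os._exit(0 if readable else 1)
else:
    # Parent: closing its end delivers EOF, which makes the child's
    # end readable and so acts as the shutdown signal.
    child_end.close()
    parent_end.close()
    _, status = os.waitpid(pid, 0)
    print("child exit code:", os.WEXITSTATUS(status))  # → child exit code: 0
```

This won't run on Windows (no os.fork), which is exactly the portability caveat above.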
It would be _harder_ for rogue packets to shut a worker down this way.
>> TCP sockets work on the basis of a master socket and any number of
>> spawned sockets. The master is what gives you an open port; each
>> spawned socket represents one connection with one client. Once you
>> have an established connection, the master should be able to be closed
>> without disrupting that. No other process will be able to connect to
>> you, but you'll still be able to use one end of the socket to make the
>> other end readable.
> Agreed but I need the listening socket to remain open and listening for new
> connections (at least until the whole program is told to shut down).
Not sure why. The sole purpose of this socket is to establish a
(single) socket pair used for the termination signals - nothing more.
You shouldn't need to listen for any new connections, even if you
spawn new workers.
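A sketch of the scheme being described, using threads (worker names and the 4-worker count are made up for illustration): one socket pair is created once, workers select() on the receiving end alongside their own work, and none of them ever reads from it, so a single close wakes all of them.

```python
import select
import socket
import threading

# One socket pair whose sole job is to broadcast "shut down".
signal_send, signal_recv = socket.socketpair()

results = []

def worker(name):
    while True:
        # A real worker would also include its client socket(s) here.
        readable, _, _ = select.select([signal_recv], [], [], 5.0)
        if signal_recv in readable:
            # Deliberately don't read: leaving the socket readable
            # means every other worker sees the same signal.
            results.append(name)
            return

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()

# One close (or one sent byte) wakes every worker at once.
signal_send.close()
for t in threads:
    t.join()
print(sorted(results))  # → [0, 1, 2, 3]
```

Closing the sending end leaves the receiving end permanently readable (EOF), so even workers that were busy when the signal arrived will see it on their next select() call.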
>> With UDP, any process that can send a UDP packet can flood the system
>> with them until your workers shut down. You wouldn't even notice until
>> it succeeds.
> Is that true? You seem to be describing a non-forged attack but to get the
> source UDP port right wouldn't the attacker have to be running on the same
> machine *and* to bind to the same port that the machine had allocated to my
> program? I might be wrong but I don't think the UDP stack would allow the
> same port to be bound again before the original had been closed.
UDP basically doesn't have protections like that. An unconnected UDP
socket will accept datagrams from any source, so an attacker doesn't
need to bind your port, only to know it. TCP does have such
protections, and though it _is_ possible to forge TCP packets, doing
so requires raw socket access. I don't know what protections Windows
has around that, but a software firewall should certainly be able to
notice that some program is spewing raw packets.
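A small demonstration of the weakness (the "victim"/"attacker" names are illustrative): an unconnected UDP socket bound to a port will happily deliver a datagram from a socket it has never heard of.

```python
import socket

# The worker's UDP socket, bound to a kernel-chosen free port.
victim = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
victim.bind(("127.0.0.1", 0))
port = victim.getsockname()[1]

# A completely unrelated socket sends to that port.
attacker = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
attacker.sendto(b"shutdown", ("127.0.0.1", port))

# Delivered despite the unknown sender.
data, addr = victim.recvfrom(1024)
print(data)  # → b'shutdown'
victim.close()
attacker.close()
```

Calling connect() on the victim socket with the expected peer's address would make the kernel discard datagrams from other sources, though an attacker with raw socket access could still forge that source address.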
> Let's see. If I stick with my original plan then each worker would have a
> TCP socket and a UDP socket. The "listener" thread would have its single
> listening TCP socket plus it would have a UDP socket for each worker thread.
> Total of three sockets per worker, two of which would be UDP sockets with
> port numbers assigned and thus consumed.
> If I go with a single "shutdown socket" then I would have just one socket
> per worker. That would use fewer sockets, for sure.
Yeah, it's pretty cheap either way. Your most scarce resource here
would be UDP port numbers, and there's 64K of those. That'd let you go
as far as 32K workers, and I don't think you can have that many
threads without saturating something somewhere else.
More information about the Python-list mailing list