Frozen socket.
David Bolen
db3l at fitlinxx.com
Wed Jun 21 18:00:59 EDT 2000
"C. Porter Bassett" <porter at et.byu.edu> writes:
> I made two small programs, a server and a client, and I got the client to
> "freeze" in the exact same way. My test server accepts three messages,
> but after that, it freezes. For some reason that I don't understand, the
> client keeps sending messages (about 1000 of them, just like before), even
> though the server is not receiving any. After about 1000 messages, it
> freezes, just like the program I am debugging.
Um, right - this pretty much matches the behavior I mentioned in my
last response. If your server is not actively receiving data, then
that data begins backing up in queues within the system. Some amount
of data may be held on the server's receiving end (in the networking
kernel's buffers), and more may sit in the networking queues on the
sending end. There's also some data in transit on the network,
although that doesn't normally add much to the overall queue size as
perceived by the application.
> Why is the client able to keep sending messages if the server is not
> receiving them?
Because the messages are being queued up (on either the client or
server end) until you max out the internal network queues provided by
the systems involved. Queue sizes can also be configured for a given
socket, or their defaults changed via the network kernel configuration.
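(As a side note, here's a rough sketch - purely for illustration, not
taken from your code - of how a program can inspect or request a
particular send buffer size with getsockopt()/setsockopt(); the size
the kernel actually grants is up to the OS and its configuration.)

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Query the current send-buffer size - the client-side queue discussed
# above.
print("default SO_SNDBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

# Request a larger send buffer; the kernel may round or cap this value.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
print("granted SO_SNDBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))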
When you use send() you are handing your data off to the network
kernel, and in particular to the TCP protocol. That protocol queues
up the data to be transmitted as soon as possible, but there may be
earlier data that TCP is still working on, so it knows how to hold
onto your data in the meantime. It's these internal buffers that
provide the "elasticity" you are seeing. You aren't actually
transmitting data with send() - you are just passing it off to be
transmitted by the underlying network layers.
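To make that concrete, here's a minimal sketch (the address is made
up, and it assumes a test server like yours that accepts the
connection but never calls recv()) showing send() returning happily
until the kernel buffers on both ends fill up, at which point it
blocks:

import socket

HOST, PORT = "127.0.0.1", 50007   # hypothetical test server address

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))

chunk = b"x" * 1024
queued = 0
while True:
    client.send(chunk)    # returns while kernel buffers have room, then blocks
    queued += len(chunk)
    print("handed to the kernel so far:", queued)  # many of these, then a freeze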
> In short, what is really happening, and how do I safeguard against it?
Well, it's working as designed, so I'm not entirely sure what you are
trying to safeguard against. TCP is a stream protocol - you hand it
any amount of data and it treats it as a stream of bytes that need to
get to the other side eventually. Since TCP is in complete control of
the transmission, "eventually" might take a long time - a blocked
send() could wait indefinitely unless the server shuts down the
connection - but if the server starts reading again, everything
immediately picks up where it left off. So in one respect, this is
the simplest behavior for the application: you just keep using send()
and you know your data will get to the other end, or eventually
you'll get an error from send(). You don't really care if it blocks,
because it will start up again when the server is ready for more
data.
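If you ever want to observe (or bound) that blocking rather than just
wait it out, one possibility - again just a sketch, not necessarily
what your app should do - is to put a timeout on the socket, so a
send() that can't make progress raises an exception instead of
waiting forever:

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 50007))   # hypothetical stalled server
client.settimeout(5.0)                 # bound how long a send() may block

try:
    while True:
        client.send(b"x" * 1024)
except socket.timeout:
    print("send() blocked for 5s - the peer has stopped draining data")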
If you need different behavior, then you have to design it into your
application protocol, and TCP may or may not be the appropriate
underlying network protocol to use.
For example, if you want to be sure that your information is getting
through, you could do a simple ACK/NAK sort of setup where the server
sends a response to the client for each transmission. This would let
the client know for sure the information was received before trying
to send again.
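A minimal sketch of that idea (the one-byte b"A" acknowledgement and
the helper name are just made up for illustration) might look like:

import socket

def send_with_ack(sock, payload):
    """Send one message, then block until the server acknowledges it."""
    sock.sendall(payload)
    ack = sock.recv(1)      # server is expected to reply b"A" per message
    if ack != b"A":
        raise RuntimeError("no acknowledgement from server")

# Client side:
#   client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#   client.connect((HOST, PORT))
#   for msg in messages:
#       send_with_ack(client, msg)
#
# Matching server loop: read one message, then acknowledge it with
# conn.sendall(b"A") before reading the next.

Since the client now waits for the reply after every message, it can
never run hundreds of messages ahead of the server the way your test
client does.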
Without knowing the semantics you are looking for at the application
level, it's hard to say what else you might want to do. But in the
particular scenario you are testing here, things are pretty much
working as designed, so it may be a mismatch between your
expectations and the design, or just that you need to consider the
ramifications of a real, variable network sitting between your two
apps :-)
--
-- David
--
/-----------------------------------------------------------------------\
\ David Bolen \ E-mail: db3l at fitlinxx.com /
| FitLinxx, Inc. \ Phone: (203) 708-5192 |
/ 860 Canal Street, Stamford, CT 06902 \ Fax: (203) 316-5150 \
\-----------------------------------------------------------------------/