Question on Socket Timeouts
abhi.forall at gmail.com
Mon Nov 19 06:12:43 CET 2012
I also tried looking at the SO_RCVTIMEO option.
Turns out that timer also resets whenever data is received.
And yeah, I implemented that as separate logic in my code.
I was wondering if sockets natively provided this functionality.
Thanks again for clarifying.
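For reference, here is a minimal sketch of setting SO_RCVTIMEO directly via setsockopt(). The option takes a struct timeval, and the "ll" packing shown here assumes a platform where timeval is two native longs (true on common Linux builds, but not guaranteed everywhere):

```python
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_RCVTIMEO expects a struct timeval: (seconds, microseconds).
# "ll" assumes timeval is two native longs -- platform-dependent.
timeval = struct.pack("ll", 5, 0)  # 5-second receive timeout
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVTIMEO, timeval)
```

As noted above, this behaves like settimeout(): the timer applies per recv() call, so it resets whenever data arrives and cannot by itself enforce a hard per-connection limit.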
On Mon, Nov 19, 2012 at 12:40 AM, Cameron Simpson <cs at zip.com.au> wrote:
> On 18Nov2012 03:27, Abhijeet Mahagaonkar <abhi.forall at gmail.com> wrote:
> | I'm new to network programming.
> | I have a question.
> | Can we set a timeout to limit how long a particular socket can read or
> | write?
> On the socket itself? Probably not. But...
> | I have used a settimeout() function.
> | The settimeout() works fine as long as the client doesn't send any data
> | for x seconds.
> | After accept()ing a connect() from the client, I check whether the data
> | I receive in the server is invalid.
> | I'm trying to ensure that a client sending invalid data constantly cannot
> | hold the server. So is there a way of saying I want the client to use
> | socket for x seconds before I close it, no matter what data I receive?
> Note the time you set up the socket, or when you accept the client's
> connection. Thereafter, every time you get some data, look at the clock.
> If enough time has elapsed, close the socket yourself.
> So, not via an interface to the socket but as logic in your own code.
> Cameron Simpson <cs at zip.com.au>
> Their are thre mistakes in this sentence.
> - Rob Ray DoD#33333 <rray at linden.msvu.ca>
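The approach Cameron describes could be sketched like this: record a deadline when the connection is accepted, use a short per-recv() timeout so the loop wakes up regularly, and close the socket once the deadline passes no matter how much data the client sends. The function name and the limit value are illustrative, not from the thread:

```python
import socket
import time

def serve_connection(conn, max_lifetime=10.0):
    """Handle conn, but close it after max_lifetime seconds
    regardless of what data the client sends."""
    deadline = time.monotonic() + max_lifetime
    # Short per-recv timeout so the loop can re-check the clock
    # even while the client keeps the socket busy.
    conn.settimeout(1.0)
    try:
        while True:
            if time.monotonic() >= deadline:
                break  # hard limit reached; drop the client
            try:
                data = conn.recv(4096)
            except socket.timeout:
                continue  # no data this interval; re-check the deadline
            if not data:
                break  # client closed the connection
            # ... validate / handle data here ...
    finally:
        conn.close()
```

The key point is that the deadline is checked against the wall clock (time.monotonic()), not against a per-read timer, so a client constantly sending data cannot keep the socket alive past the limit.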