threading with time limit

Marc Tardif intmktg at Gloria.CAM.ORG
Mon Jun 5 18:33:31 EDT 2000


> >For example, consider a situation where all the links from multiple URL's
> >must be retrieved at the same time. By setting a time limit, slow sites
> >don't slow down the whole process and appropriate error messages are
> >returned stating which sites weren't completed in y seconds.
> 
> Then you need to use a method of retrieving that recognizes time limits. In 
> your case, you have 3 choices:
> 
> - a version of urllib where urlopen uses select with a timeout and aborts 
> if the timeout expires (Aahz may have done this?).
> - a version of urllib that uses non-blocking sockets (then no threads 
> needed).

What do you mean by "no threads needed"? If I use non-blocking sockets, I
can kill whichever connection takes too long to respond, but there's still
the matter of retrieving each URL simultaneously. Considering I'm quite
new to network programming, please explain how else I can reach my
objective other than by using threads or forking processes, as you suggest
below.
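For reference, the "non-blocking sockets, no threads" idea boils down to
multiplexing all the connections through select() in a single loop. A
minimal sketch (the helper name poll_connections is made up for this
example, and socketpair() stands in for remote web servers so the snippet
is self-contained; a real fetcher would connect with non-blocking sockets
and speak HTTP):

```python
import select
import socket
import time

def poll_connections(socks, deadline):
    """Read from several non-blocking sockets at once; return the data
    received from each, plus the set of sockets that missed the deadline."""
    data = {s: b"" for s in socks}
    pending = set(socks)
    while pending:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        readable, _, _ = select.select(list(pending), [], [], remaining)
        for s in readable:
            chunk = s.recv(4096)
            if chunk:
                data[s] += chunk
            else:                   # peer closed: this one is finished
                pending.discard(s)
    # anything still pending at this point ran out of time
    return data, pending

# demo: one "fast" and one "slow" peer, simulated with socket pairs
fast_a, fast_b = socket.socketpair()
slow_a, slow_b = socket.socketpair()
for s in (fast_a, slow_a):
    s.setblocking(False)

fast_b.sendall(b"fast reply")
fast_b.close()      # the fast peer answers and closes immediately
# the slow peer never answers; it should be reported as timed out

data, timed_out = poll_connections([fast_a, slow_a],
                                   deadline=time.monotonic() + 0.5)
print(data[fast_a])         # b'fast reply'
print(slow_a in timed_out)  # True
```

The point is that one process services every connection: select() tells
you which sockets have data ready, and the deadline caps the total wait,
so no thread or child process has to be killed at all.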

> - use separate processes instead of separate threads (because you *can* 
> kill a process and properly release resources).
> 
> If none of those options are available to you, then you're doing roughly 
> the right thing - just ignore the thread if it's too slow.

In the code I provided, how can I ignore a thread if it's too slow? In
other words, how can I determine if a thread hasn't finished executing?
Would this be appropriate:

for thread in threadlist:
    if not thread.isAlive():
        # continue processing here
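That check is essentially right; the missing piece is waiting with a
timeout before testing it, via Thread.join(timeout) and then isAlive()
(spelled is_alive in current Python). A minimal sketch along those lines
(fetch, the example URLs, and the sleep standing in for a slow site are
all invented for illustration; a real version would retrieve the page
with urllib):

```python
import threading
import time

results = {}

def fetch(url):
    """Stand-in for retrieving one page's links."""
    if "slow" in url:
        time.sleep(60)      # simulate a site that never answers in time
    results[url] = "page contents"

urls = ["http://fast.example/", "http://slow.example/"]
threads = {}
for url in urls:
    # daemon threads are abandoned at exit instead of blocking it
    t = threading.Thread(target=fetch, args=(url,), daemon=True)
    threads[url] = t
    t.start()

deadline = time.monotonic() + 1.0           # overall time limit
for url, t in threads.items():
    t.join(max(0.0, deadline - time.monotonic()))
    if t.is_alive():                        # still running: ignore it
        print("%s was not completed in time" % url)
    else:
        print("%s -> %s" % (url, results[url]))
```

The slow thread is never actually stopped; it is simply left behind as a
daemon thread once the deadline passes, which is the "just ignore the
thread" approach described above.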



