Re: [stdlib-sig] futures - a new package for asynchronous execution

On 7 Nov 2009, at 18:37, inhahe wrote:
i don't understand this at all i hope you will provide a basic explanation of what we're doing for us simpletons :P
Sure, but next time could you ask a more precise question? ;-)

# Create a pool of threads to execute calls.
with futures.ThreadPoolExecutor(max_threads=5) as executor:
    # Schedule the given calls to run using the thread pool created above and
    # return a FutureList (a list of Futures + some convenience methods).
    # Called without the "return_when" argument, waits until all calls are
    # complete.
    future_list = executor.run_to_futures(...)

# Iterate through every Future. A Future represents one of the asynchronous
# calls.
for url, future in zip(URLS, future_list):
    # Check if the call raised an exception.
    if future.exception() is not None:
        # Print the exception.
        print('%r generated an exception: %s' % (url, future.exception()))
    else:
        # The call returned successfully, so future.result() contains the
        # return value.
        print('%r page is %d bytes' % (url, len(future.result())))

Cheers,
Brian
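The same pattern survives, nearly unchanged, in the concurrent.futures module that this package became in Python 3.2. A minimal sketch follows, with the network fetch replaced by a dummy `load` function (an assumption for illustration, so the example is self-contained); `executor.submit()` plays the role of `run_to_futures()`, returning one Future per call:

```python
from concurrent.futures import ThreadPoolExecutor

def load(key):
    # Stand-in for urllib.request.urlopen(...).read(); one key raises to
    # show how Future.exception() surfaces errors from the worker thread.
    if key == 'bad':
        raise ValueError('no such page')
    return b'x' * len(key)

KEYS = ['alpha', 'beta', 'bad']

with ThreadPoolExecutor(max_workers=5) as executor:
    # submit() schedules a call and returns a Future for it immediately.
    future_list = [executor.submit(load, key) for key in KEYS]

results = {}
for key, future in zip(KEYS, future_list):
    if future.exception() is not None:
        results[key] = 'error: %s' % future.exception()
    else:
        results[key] = len(future.result())

print(results)
```

Leaving the `with` block waits for all outstanding calls, so the checking loop below it never blocks.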
import futures
import functools
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

# Use a thread pool with 5 threads to download the URLs. Using a pool
# of processes would involve changing the initialization to:
#     with futures.ProcessPoolExecutor(max_processes=5) as executor
with futures.ThreadPoolExecutor(max_threads=5) as executor:
    future_list = executor.run_to_futures(
        [functools.partial(load_url, url, 30) for url in URLS])

# Check the results of each future.
for url, future in zip(URLS, future_list):
    if future.exception() is not None:
        print('%r generated an exception: %s' % (url, future.exception()))
    else:
        print('%r page is %d bytes' % (url, len(future.result())))
In this example, executor.run_to_futures() returns only when every URL has been retrieved, but it is possible to return immediately, on the first completion, or on the first failure, depending on the desired work pattern.
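The proposed "return_when" argument corresponds to what later shipped as concurrent.futures.wait() and its FIRST_COMPLETED / FIRST_EXCEPTION / ALL_COMPLETED constants. A sketch of the first-completion pattern, using sleep-based dummy tasks rather than real downloads:

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def task(delay):
    # Dummy work: sleep, then return the delay that was requested.
    time.sleep(delay)
    return delay

with ThreadPoolExecutor(max_workers=3) as executor:
    futures_set = {executor.submit(task, d) for d in (0.5, 0.01, 0.5)}
    # Return as soon as the first call finishes instead of waiting for all.
    done, not_done = wait(futures_set, return_when=FIRST_COMPLETED)
    first = next(iter(done)).result()

print(first)
```

With all three tasks running concurrently, the shortest one lands in `done` first while the others are still in `not_done`.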
The complete docs are here: http://sweetapp.com/futures/