[Python-Dev] [PEP 3148] futures - execute computations asynchronously

Calvin Spealman ironfroggy at gmail.com
Fri Mar 5 13:45:46 CET 2010


A young library that solves an old problem in a way that conflicts with
many of the implementations available for years, and that has zero
apparent users in the wild, is not an appropriate candidate for a PEP.

On Fri, Mar 5, 2010 at 1:03 AM, Brian Quinlan <brian at sweetapp.com> wrote:
> Hi all,
>
> I recently submitted a draft PEP for a package designed to make it easier to
> execute Python functions asynchronously using threads and processes. It lets
> the user focus on their computational problem without having to build
> explicit thread/process pools and work queues.
>
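> For example, the core pattern is Executor.submit(), which schedules a call
> and returns a Future, and Future.result(), which blocks until the call
> finishes. A minimal sketch (the pow() call is just a stand-in workload):
>
> import futures
>
> with futures.ThreadPoolExecutor(max_workers=1) as executor:
>    future = executor.submit(pow, 323, 1235)
>    # result() waits for the call to complete and returns its value,
>    # or re-raises any exception the call raised.
>    print(future.result())
>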
> The package has been discussed on stdlib-sig but now I'd like this group's
> feedback.
>
> The PEP lives here:
> http://python.org/dev/peps/pep-3148/
>
> Here are two examples to whet your appetites:
>
> """Determine if several numbers are prime."""
> import futures
> import math
>
> PRIMES = [
>    112272535095293,
>    112582705942171,
>    112272535095293,
>    115280095190773,
>    115797848077099,
>    1099726899285419]
>
> def is_prime(n):
>    if n < 2:
>        return False  # 0 and 1 are not prime.
>    if n == 2:
>        return True   # 2 is the only even prime.
>    if n % 2 == 0:
>        return False
>
>    # Check odd divisors up to the integer square root.
>    sqrt_n = int(math.floor(math.sqrt(n)))
>    for i in range(3, sqrt_n + 1, 2):
>        if n % i == 0:
>            return False
>    return True
>
> # Uses as many CPUs as your machine has.
> with futures.ProcessPoolExecutor() as executor:
>    for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
>        print('%d is prime: %s' % (number, prime))
>
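> Note that executor.map() yields results in the same order as the inputs
> (the zip() above relies on this). To handle each result as soon as it is
> ready instead, the same computation can be sketched with submit() and
> as_completed(), the APIs the next example demonstrates:
>
> with futures.ProcessPoolExecutor() as executor:
>    future_to_n = {executor.submit(is_prime, n): n for n in PRIMES}
>    for future in futures.as_completed(future_to_n):
>        print('%d is prime: %s' % (future_to_n[future], future.result()))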
>
> """Print out the size of the home pages of various news sites (and Fox
> News)."""
> import futures
> import urllib.request
>
> URLS = ['http://www.foxnews.com/',
>        'http://www.cnn.com/',
>        'http://europe.wsj.com/',
>        'http://www.bbc.co.uk/',
>        'http://some-made-up-domain.com/']
>
> def load_url(url, timeout):
>    return urllib.request.urlopen(url, timeout=timeout).read()
>
> with futures.ThreadPoolExecutor(max_workers=5) as executor:
>    # Create a future for each URL load.
>    future_to_url = {executor.submit(load_url, url, 60): url
>                     for url in URLS}
>
>    # Iterate over the futures in the order that they complete.
>    for future in futures.as_completed(future_to_url):
>        url = future_to_url[future]
>        if future.exception() is not None:
>            print('%r generated an exception: %s' % (url,
>                                                     future.exception()))
>        else:
>            print('%r page is %d bytes' % (url, len(future.result())))
>
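> Beyond as_completed(), the module also provides wait() for blocking until
> some subset of a group of futures finishes. A hedged sketch, assuming
> wait() returns the (done, not_done) pair of future sets described in the
> PEP:
>
> with futures.ThreadPoolExecutor(max_workers=5) as executor:
>    fs = [executor.submit(load_url, url, 60) for url in URLS]
>    # Block until at least one future finishes; the rest keep running.
>    done, not_done = futures.wait(fs, return_when=futures.FIRST_COMPLETED)
>    print('%d finished, %d still pending' % (len(done), len(not_done)))
>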
> Cheers,
> Brian



-- 
Read my blog! I depend on your acceptance of my opinion! I am interesting!
http://techblog.ironfroggy.com/
Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy

