[Python-Dev] [PEP 3148] futures - execute computations asynchronously
Brian Quinlan
brian at sweetapp.com
Sat Mar 6 10:32:24 CET 2010
On 6 Mar 2010, at 07:38, Brett Cannon wrote:
> The PEP says that futures.wait() should only use keyword arguments
> past its first positional argument, but the PEP has the function
> signature as ``wait(fs, timeout=None, return_when=ALL_COMPLETED)``.
> Should it be ``wait(fs, *, timeout=None, return_when=ALL_COMPLETED)``?
Hi Brett,
That recommendation was designed to make it easy to change the API
without breaking code.
I don't think that recommendation makes sense anymore and I'll
update the PEP.
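For anyone following along, the bare `*` in Brett's suggested signature is what makes the later parameters keyword-only. A minimal sketch of that behaviour (with `ALL_COMPLETED` as a stand-in constant for illustration, not the real module attribute):

```python
ALL_COMPLETED = 'ALL_COMPLETED'  # stand-in constant for illustration

def wait(fs, *, timeout=None, return_when=ALL_COMPLETED):
    # Everything after the bare * must be passed by keyword.
    return (fs, timeout, return_when)

wait([], timeout=5)  # OK
# wait([], 5)        # TypeError: takes 1 positional argument but 2 were given
```
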
Cheers,
Brian
> On Thu, Mar 4, 2010 at 22:03, Brian Quinlan <brian at sweetapp.com>
> wrote:
> Hi all,
>
> I recently submitted a draft PEP for a package designed to make it
> easier to execute Python functions asynchronously using threads and
> processes. It lets the user focus on their computational problem
> without having to build explicit thread/process pools and work queues.
>
> The package has been discussed on stdlib-sig but now I'd like this
> group's feedback.
>
> The PEP lives here:
> http://python.org/dev/peps/pep-3148/
>
> Here are two examples to whet your appetites:
>
> """Determine if several numbers are prime."""
> import futures
> import math
>
> PRIMES = [
>     112272535095293,
>     112582705942171,
>     112272535095293,
>     115280095190773,
>     115797848077099,
>     1099726899285419]
>
> def is_prime(n):
>     if n % 2 == 0:
>         return False
>
>     sqrt_n = int(math.floor(math.sqrt(n)))
>     for i in range(3, sqrt_n + 1, 2):
>         if n % i == 0:
>             return False
>     return True
>
> # Uses as many CPUs as your machine has.
> with futures.ProcessPoolExecutor() as executor:
>     for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
>         print('%d is prime: %s' % (number, prime))
>
>
> """Print out the size of the home pages of various new sites (and
> Fox News)."""
> import futures
> import urllib.request
>
> URLS = ['http://www.foxnews.com/',
>         'http://www.cnn.com/',
>         'http://europe.wsj.com/',
>         'http://www.bbc.co.uk/',
>         'http://some-made-up-domain.com/']
>
> def load_url(url, timeout):
>     return urllib.request.urlopen(url, timeout=timeout).read()
>
> with futures.ThreadPoolExecutor(max_workers=5) as executor:
>     # Create a future for each URL load.
>     future_to_url = dict((executor.submit(load_url, url, 60), url)
>                          for url in URLS)
>
>     # Iterate over the futures in the order that they complete.
>     for future in futures.as_completed(future_to_url):
>         url = future_to_url[future]
>         if future.exception() is not None:
>             print('%r generated an exception: %s' % (url,
>                                                      future.exception()))
>         else:
>             print('%r page is %d bytes' % (url, len(future.result())))
>
> Cheers,
> Brian