A complement to the proposed idea:
Some time ago I thought about how cool it would be if one could implement a custom scheduler for Python's asyncio that used user-defined runtime metrics to dynamically adjust task priorities (and hence the execution order). For example, if the developer prioritizes database-related operations over maintenance and the current application load is dominated by the former, the scheduler would proactively delay maintenance tasks. In code it might look like decorator annotations assigning tasks to groups.
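As a rough illustration, here is a minimal sketch of that shape. The `task_group` decorator, the group names, and the `GroupScheduler` API are all hypothetical; nothing like this exists in asyncio today, and a real scheduler would sit inside the event loop rather than drain a queue like this.

```python
import asyncio

# Hypothetical priority table; a real system would update these values
# from runtime metrics. Lower value means the group runs earlier.
_PRIORITIES = {"db": 0, "maintenance": 10}

def task_group(name):
    """Hypothetical decorator tagging a coroutine function with a group."""
    def decorator(func):
        func._group = name
        return func
    return decorator

class GroupScheduler:
    """Illustrative sketch: drain tagged coroutines in priority order."""
    def __init__(self):
        self._queue = asyncio.PriorityQueue()
        self._counter = 0  # tie-breaker so queue entries stay comparable

    def submit(self, func, *args):
        prio = _PRIORITIES.get(getattr(func, "_group", None), 5)
        self._queue.put_nowait((prio, self._counter, func, args))
        self._counter += 1

    def adjust(self, group, delta):
        """Metric hook: shift a group's priority for future submissions."""
        _PRIORITIES[group] += delta

    async def run(self):
        results = []
        while not self._queue.empty():
            _, _, func, args = await self._queue.get()
            results.append(await func(*args))
        return results

@task_group("db")
async def query(n):
    return f"db:{n}"

@task_group("maintenance")
async def cleanup(n):
    return f"maint:{n}"

async def main():
    sched = GroupScheduler()
    sched.submit(cleanup, 1)  # submitted first, but lower priority
    sched.submit(query, 1)
    return await sched.run()

print(asyncio.run(main()))  # ['db:1', 'maint:1']
```

Even though the maintenance task was submitted first, the database task runs first because its group outranks it.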
On Sep 25, 2019, at 2:02 PM, Viktor Roytman firstname.lastname@example.org wrote:
The asyncio and threading modules include a number of synchronization primitives. In particular, a Semaphore allows you to limit the number of concurrent tasks if you need to stay under some capacity constraint.
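To make the distinction concrete, here is the usual Semaphore pattern (the worker function and its parameters are illustrative): it caps how many tasks run at the same time, not how many start per second.

```python
import asyncio

async def worker(i, sem, active):
    async with sem:
        active.append(i)
        assert len(active) <= 2   # never more than 2 in flight at once
        await asyncio.sleep(0.01)
        active.remove(i)
        return i

async def main():
    sem = asyncio.Semaphore(2)    # at most 2 concurrent workers
    active = []                   # tracks which workers hold the semaphore
    return await asyncio.gather(*(worker(i, sem, active) for i in range(6)))

print(asyncio.run(main()))  # [0, 1, 2, 3, 4, 5]
```

All six tasks complete; the Semaphore only controls how many overlap, with no notion of elapsed time.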
However, none of the existing primitives provides rate limiting, as in making sure that no more than n tasks are started per second.
I believe this Stack Overflow question shows that adding such a primitive would be useful: https://stackoverflow.com/questions/35196974/aiohttp-set-maximum-number-of-r... The asker clearly wants rate limiting, but the answers provided limit concurrency instead.
I found an excellent answer by Martijn Pieters, which includes an implementation of the leaky bucket algorithm: https://stackoverflow.com/a/45502319/1475412 His AsyncLeakyBucket is used in exactly the same way as a Semaphore.

_______________________________________________
Python-ideas mailing list -- email@example.com