Re: [Async-sig] [python-tulip] Python 3.6b2 will have C implemented Future

[+async-sig@python.org, which is the new home for these kinds of discussions]

Tornado's tests are now failing on nightly with "TypeError: can't send non-None value to a FutureIter": https://travis-ci.org/tornadoweb/tornado/jobs/167252979

-Ben

On Tue, Oct 11, 2016 at 3:55 PM INADA Naoki <songofacandy@gmail.com> wrote:
If you have an asyncio-based project and it uses Travis-CI,
please add "nightly" to your .travis.yml [2].
[2]
https://docs.travis-ci.com/user/languages/python/#Choosing-Python-versions-t...
Travis changed the "nightly" version to 3.7.
Now "3.6-dev" is for the Python 3.6 beta (still 3.6b1; it may be upgraded soon).
--
INADA Naoki <songofacandy@gmail.com>
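A minimal `.travis.yml` along the lines of that advice might look like the following sketch (section names per Travis's Python docs; the exact version list is just an example):

```yaml
language: python
python:
  - "3.5"
  - "3.6-dev"   # Python 3.6 beta
  - "nightly"   # now tracks Python 3.7 alpha
matrix:
  allow_failures:
    - python: nightly   # don't block builds on the unreleased interpreter
```

Listing `nightly` under `allow_failures` is optional; dropping it makes nightly breakage (like the Tornado failure above) fail the build loudly instead.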

Thanks, Ben. Both pieces of information are very helpful!

On Thu, Oct 13, 2016 at 2:25 PM, Ben Darnell <ben@bendarnell.com> wrote:
[+async-sig@python.org, which is the new home for these kinds of discussions]
Tornado's tests are now failing on nightly with "TypeError: can't send non-None value to a FutureIter": https://travis-ci.org/tornadoweb/tornado/jobs/167252979
-Ben
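The failure mode quoted above can be sketched in miniature: asyncio's Future iterator only accepts None via send(), so a coroutine runner that sends a value back (as Tornado's did) hits a TypeError. This is a toy stand-in, not Tornado's or asyncio's actual code:

```python
class StrictFutureIter:
    """Toy iterator mimicking asyncio's FutureIter: send() only accepts None."""

    def __init__(self, result):
        self._result = result
        self._done = False

    def __iter__(self):
        return self

    def __next__(self):
        if self._done:
            raise StopIteration(self._result)
        self._done = True
        return self  # yield itself once, like a pending Future

    def send(self, value):
        if value is not None:
            raise TypeError("can't send non-None value to a StrictFutureIter")
        return self.__next__()


it = StrictFutureIter(result=42)
next(it)               # first step: the "pending" yield
try:
    it.send("wakeup")  # a runner sending a real value back, as Tornado's did
except TypeError as e:
    print(e)
```

The fix on Tornado's side was to only send None into such iterators; the sketch raises in exactly the same place.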

Pulsar's tests now run against 3.6-dev and all pass. Nice! Getting used to the C Future ;-)

On 13 October 2016 at 06:37, INADA Naoki <songofacandy@gmail.com> wrote:
Thanks, Ben.
Both are very helpful information!

But tests are taking 1.48× longer to run on average! Anything I should know about 3.6 and performance?

On 18 November 2016 at 22:42, Luca Sbardella <luca.sbardella@gmail.com> wrote:
Pulsar's tests are now run against 3.6-dev and all passing. Nice! Getting used to the C Future ;-)

On Nov 18, 2016, at 5:53 PM, Luca Sbardella <luca.sbardella@gmail.com> wrote:
But tests are taking 1.48× longer to run on average! Anything I should know about 3.6 and performance?

That shouldn’t happen. Are you sure you aren’t running them in debug mode? Try commenting out the imports of ‘_asyncio’ in futures.py and tasks.py and run benchmarks on 3.6 to compare Py Futures to C Futures.

Also, which Python 3.6 version are you using? Please try to build one from the repo; I’ve fixed a couple of bugs since 3.6b2.

Yury

Also, are you using uvloop or vanilla asyncio? Try to benchmark vanilla first. And if you have time, please try to test different combinations on vanilla asyncio:

- Python 3.5 + vanilla asyncio
- Python 3.6 + vanilla asyncio
- Python 3.6 + Py Future + Py Task
- Python 3.6 + Py Future + C Task
- Python 3.6 + C Future + C Task
- Python 3.6 + C Future + Py Task

Yury
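When testing such combinations, one quick probe for which implementation an interpreter actually picked up is the `__module__` of the classes, since the C accelerator replaces the pure-Python classes at import time:

```python
import asyncio

# '_asyncio' means the C implementation is active;
# 'asyncio.futures' / 'asyncio.tasks' mean the pure-Python fallback is in use.
print("Future:", asyncio.Future.__module__)
print("Task:  ", asyncio.Task.__module__)
```

Running this before and after commenting out the `_asyncio` imports confirms the benchmark is measuring the implementation you think it is.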

On 18 November 2016 at 23:09, Yury Selivanov <yselivanov@gmail.com> wrote:
Also, are you using uvloop or vanilla asyncio? Try to benchmark vanilla first. And if you have time, please try to test different combinations on vanilla asyncio:
These are results of an asyncio-based benchmark - no sockets involved - highly nested coroutines. The --io uv flag indicates a run with uvloop.

py35 setup.py bench -a coroutine
TestCoroutine.test_coroutine: repeated 10(x1000) times, average 0.50100 secs, stdev 1.13 %

py35 setup.py bench -a coroutine --io uv
TestCoroutine.test_coroutine: repeated 10(x1000) times, average 0.08246 secs, stdev 3.87 %

py36 setup.py bench -a coroutine
TestCoroutine.test_coroutine: repeated 10(x1000) times, average 0.26563 secs, stdev 2.35 %

py36 setup.py bench -a coroutine --io uv
TestCoroutine.test_coroutine: repeated 10(x1000) times, average 0.06762 secs, stdev 5.43 %

I'll benchmark with sockets next.
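A "highly nested coroutines, no sockets" microbenchmark in that spirit can be sketched like this (not Pulsar's actual bench harness; the depth and repeat counts are made up):

```python
import asyncio
import time


async def nested(depth):
    # A chain of awaits with no I/O, stressing only the coroutine/Future machinery.
    if depth == 0:
        return 0
    return 1 + await nested(depth - 1)


def bench(depth=100, repeat=1000):
    """Return wall-clock seconds for `repeat` runs of a depth-`depth` await chain."""
    loop = asyncio.new_event_loop()
    try:
        start = time.perf_counter()
        for _ in range(repeat):
            loop.run_until_complete(nested(depth))
        return time.perf_counter() - start
    finally:
        loop.close()


print(f"{bench():.5f} secs")
```

Swapping the event loop (e.g. installing uvloop's policy) before `new_event_loop()` would give the `--io uv` variant of the numbers above.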

That shouldn’t happen. Are you sure you aren’t running them in debug mode? Try to comment out imports of ‘_asyncio’ in futures.py and tasks.py and run benchmarks in 3.6 to compare Py Futures to C Futures.
Also, which Python 3.6 version are you using? Please try to build one from the repo, I’ve fixed a couple of bugs since 3.6b2.
Running 3.6-dev from travis, currently b4+.
No debug mode - that is an order of magnitude slower ;-)

Having said that, running another, more asyncio-centric test suite, I get 3.6-dev running at 68% speed vs. 3.5.2. So it looks very good indeed! The Pulsar test suite is large and does not represent asyncio 1:1. Sorry for the confusion; I will investigate further.

--
http://lucasbardella.com

On Nov 18, 2016, at 6:16 PM, Luca Sbardella <luca.sbardella@gmail.com> wrote:
Running 3.6-dev from travis, currently b4+. No debug mode, that is an order of magnitude slower ;-)
FWIW I found that I can’t really trust the build times on travis for measuring any kind of performance regressions. You don’t know how it’s compiled there, and, more importantly, whether Python 3.6 is compiled with the same settings as 3.5.

Yury
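One way to check how a given interpreter was built - e.g. whether it was configured with optimizations such as PGO/LTO - is `sysconfig` (a sketch; `CONFIG_ARGS` can be empty or None on some platforms, notably Windows):

```python
import sysconfig

# Build-time configure arguments and compiler flags for this interpreter.
# Comparing these between two installs shows whether a timing difference
# could be down to the build rather than the code.
for var in ("CONFIG_ARGS", "OPT", "PY_CFLAGS"):
    print(f"{var} = {sysconfig.get_config_var(var)!r}")
```

Running this under both the 3.5 and 3.6 Travis interpreters would show whether they are even comparable builds.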
participants (4)
- Ben Darnell
- INADA Naoki
- Luca Sbardella
- Yury Selivanov