From report at bugs.python.org Sat Jun 1 00:07:01 2019 From: report at bugs.python.org (林自均) Date: Sat, 01 Jun 2019 04:07:01 +0000 Subject: [New-bugs-announce] [issue37119] Equality on dict.values() are inconsistent between 2 and 3 Message-ID: <1559362021.42.0.783299654915.issue37119@roundup.psfhosted.org> New submission from 林自均 : When I create 2 different dicts with the same literal, their dict.values() are equal in Python 2 but not equal in Python 3. Here is an example in Python 2:

$ python2
Python 2.7.16 (default, Mar 4 2019, 09:02:22) [GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a = {'hello': 'world'}
>>> b = {'hello': 'world'}
>>> a.values() == b.values()
True
>>> a.keys() == b.keys()
True

However, the dict.values() are not equal in Python 3:

$ python3
Python 3.7.2 (default, Feb 12 2019, 08:16:38) [Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a = {'hello': 'world'}
>>> b = {'hello': 'world'}
>>> a.values() == b.values()
False
>>> a.keys() == b.keys()
True

Is this a bug? Or is this behavior specified somewhere in the documentation? Thanks. Note: it's inspired by this StackOverflow question: https://stackoverflow.com/questions/56403613/questions-about-python-dictionary-equality ---------- components: Library (Lib) messages: 344145 nosy: johnlinp priority: normal severity: normal status: open title: Equality on dict.values() are inconsistent between 2 and 3 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 01:05:29 2019 From: report at bugs.python.org (Nathaniel Smith) Date: Sat, 01 Jun 2019 05:05:29 +0000 Subject: [New-bugs-announce] [issue37120] Provide knobs to disable session ticket generation on TLS 1.3 Message-ID: <1559365529.23.0.311490427939.issue37120@roundup.psfhosted.org> New submission from Nathaniel Smith : Maybe we should expose the SSL_CTX_set_num_tickets function: https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_num_tickets.html This is a new function added in OpenSSL 1.1.1 that lets you control the number of tickets issued by a TLS 1.3 connection. It turns out that by default, openssl 1.1.1 issues 2 tickets at the end of the server handshake. In principle this can cause deadlocks and data corruption: https://github.com/openssl/openssl/issues/7967 https://github.com/openssl/openssl/issues/7948 And my problem specifically is that this pretty much kills all of Trio's fancy protocol testing helpers, because any protocol that's built on top of TLS is automatically broken as far as the helpers are concerned. And they're right. Right now I have to disable TLS 1.3 entirely to get Trio's test suite to avoid deadlocking. Hopefully the openssl devs will fix this, but so far their latest comment is that they consider this a feature, and so they think it has to stay broken for at least RHEL 8's lifetime. Currently the only workaround is to either disable TLS 1.3 or disable tickets. But the 'ssl' module doesn't expose any way to control tickets. This issue proposes to add that. Fortunately, exposing SSL_CTX_set_num_tickets should be pretty trivial, at least in theory. Questions: Do we have a preferred convention for how to expose these kinds of settings at the Python level? Attribute on SSLContext? There's both an SSL* and a SSL_CTX* ?
I guess the CTX version is sufficient, or is there another convention? As a bonus complication, openssl sometimes ignores the configured number of tickets, and uses a completely different mechanism:

> The default number of tickets is 2; the default number of tickets sent following a resumption handshake is 1 but this cannot be changed using these functions. The number of tickets following a resumption handshake can be reduced to 0 using custom session ticket callbacks (see https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_session_ticket_cb.html)

Do we want to open the can-of-worms involved in wrapping this too? I think if we only wrapped SSL_CTX_set_num_tickets then that would be enough to let me kluge my tests into passing and pretend that things are just fine, so long as we don't test session resumption... ---------- assignee: christian.heimes components: SSL messages: 344148 nosy: alex, christian.heimes, dstufft, janssen, njs priority: normal severity: normal status: open title: Provide knobs to disable session ticket generation on TLS 1.3 type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 10:13:52 2019 From: report at bugs.python.org (Lasha Gogua) Date: Sat, 01 Jun 2019 14:13:52 +0000 Subject: [New-bugs-announce] [issue37121] 'ა'.upper() should return 'ა' Message-ID: <1559398432.1.0.631402061489.issue37121@roundup.psfhosted.org> New submission from Lasha Gogua : Python's .upper() string method now translates the Georgian character 'ა' to 'Ა', which is not right.

Python 3.7.3 (default, May 11 2019, 00:45:16)
[GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 'ა'
'ა'
>>> 'ა'.upper()
'Ა'
>>> print('ა'.upper())
Ა
>>>

and it works in Python 3.6.x and below

Python 3.6.3 (default, Jan 4 2018, 16:40:53)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 'ა'
'ა'
>>> 'ა'.upper()
'ა'
>>> print('ა'.upper())
ა
>>>

Now I have found a workaround, but is it correct? And what has changed in the new version (Python 3.7.x)?

Python 3.7.3 (default, May 11 2019, 00:45:16)
[GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> print('ა'.upper())
Ა
>>> print('ა'.encode().upper().decode())
ა
>>> print('ა'.encode('utf-8').upper().decode('utf-8'))
ა
>>>

---------- components: Unicode messages: 344174 nosy: Lasha Gogua, ezio.melotti, vstinner priority: normal severity: normal status: open title: 'ა'.upper() should return 'ა'
type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 11:46:08 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 01 Jun 2019 15:46:08 +0000 Subject: [New-bugs-announce] [issue37122] Make co->co_argcount represent the total number of positional arguments in the code object Message-ID: <1559403968.0.0.258896403168.issue37122@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Originally proposed in: https://mail.python.org/pipermail/python-dev/2019-June/157812.html ---------- components: Interpreter Core messages: 344178 nosy: pablogsal, scoder, serhiy.storchaka priority: normal severity: normal status: open title: Make co->co_argcount represent the total number of positional arguments in the code object versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 12:05:31 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 01 Jun 2019 16:05:31 +0000 Subject: [New-bugs-announce] [issue37123] test_multiprocessing fails randomly on Windows Message-ID: <1559405131.96.0.0818992217462.issue37123@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : https://buildbot.python.org/all/#/builders/58/builds/2539 https://buildbot.python.org/all/#/builders/58/builds/2539 Traceback (most recent call last): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_venv.py", line 327, in test_multiprocessing out, err = check_output([envpy, '-c', File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_venv.py", line 42, in check_output raise subprocess.CalledProcessError( subprocess.CalledProcessError: Command '['d:\\temp\\tmp_qt0ywa6\\Scripts\\python_d.exe', '-c', 'from multiprocessing import Pool; print(Pool(1).apply_async("Python".lower).get(3))']' returned non-zero exit status 3221225477. 
====================================================================== FAIL: test_mymanager (test.test_multiprocessing_spawn.WithManagerTestMyManager) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\_test_multiprocessing.py", line 2806, in test_mymanager self.assertEqual(manager._process.exitcode, 0) AssertionError: -15 != 0 ---------------------------------------------------------------------- ---------- messages: 344180 nosy: pablogsal priority: normal severity: normal status: open title: test_multiprocessing fails randomly on Windows _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 12:15:26 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 01 Jun 2019 16:15:26 +0000 Subject: [New-bugs-announce] [issue37124] test_msilib is potentially leaking references and memory blocks Message-ID: <1559405726.45.0.528544648803.issue37124@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : BUILDBOT FAILURE REPORT ======================= Builder name: AMD64 Windows8.1 Refleaks 2.7 Builder url: https://buildbot.python.org/all/#/builders/33/ Build url: https://buildbot.python.org/all/#/builders/33/builds/604 Failed tests ------------ Test leaking resources ---------------------- - test_msilib is leaking references Build summary ------------- == Tests result: FAILURE then FAILURE == 360 tests OK. 10 slowest tests: - test_bsddb3: 4042.9s - test_mmap: 434.0s - test_multiprocessing: 313.8s - test_shelve: 246.5s - test_mailbox: 239.3s - test_ssl: 227.2s - test_lib2to3: 135.6s - test_largefile: 131.5s - test_decimal: 126.0s - test_urllib2_localnet: 122.6s 1 test failed: test_msilib 42 tests skipped: test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl test_commands test_crypt test_curses test_dbm test_dl test_epoll test_fcntl test_fork1 test_gdb test_gdbm test_gl test_grp test_imgfile test_ioctl test_kqueue test_linuxaudiodev test_macos test_macostools test_mhlib test_nis test_openpty test_ossaudiodev test_pipes test_poll test_posix test_pty test_pwd test_readline test_resource test_scriptpackages test_spwd test_sunaudiodev test_threadsignals test_wait3 test_wait4 test_zipfile64 2 skips unexpected on win32: test_gdb test_readline 1 re-run test: test_msilib Total duration: 1 hour 20 min Tracebacks ---------- Current builder status ---------------------- The builder is failing currently Commits ------- Other builds with similar failures ---------------------------------- - https://buildbot.python.org/all/#/builders/80/builds/608 - https://buildbot.python.org/all/#/builders/132/builds/505 ---------- components: Tests messages: 344182 nosy: pablogsal priority: normal severity: normal status: open title: test_msilib is potentially leaking references and memory blocks type: behavior versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 12:31:17 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 01 Jun 2019 16:31:17 +0000 Subject: [New-bugs-announce] [issue37125] math.comb is leaking references Message-ID: <1559406677.46.0.632223828362.issue37125@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : ./python Lib/test/bisect_cmd.py test_math -R 2:2 ... 
WARNING: Running tests with --huntrleaks/-R and less than 3 warmup repetitions can give false positives! Run tests sequentially 0:00:00 load avg: 1.32 [1/1] test_math beginning 4 repetitions 1234 .... test_math leaked [12352, 12352] references, sum=24704 test_math leaked [1, 1] memory blocks, sum=2 test_math failed == Tests result: FAILURE == 1 test failed: test_math Total duration: 663 ms Tests result: FAILURE ran 1 tests/2 exit 2 Tests failed: continuing with this subtest Tests (1): * test.test_math.IsCloseTests.testComb Bisection completed in 13 iterations and 0:00:11 ---------- components: Interpreter Core, Tests messages: 344185 nosy: pablogsal, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: math.comb is leaking references versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 13:00:48 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 01 Jun 2019 17:00:48 +0000 Subject: [New-bugs-announce] [issue37126] test_threading is leaking references Message-ID: <1559408448.65.0.831974411649.issue37126@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : https://buildbot.python.org/all/#/builders/1/builds/601 OK (skipped=1) . test_threading leaked [9770, 9772, 9768] references, sum=29310 test_threading leaked [3960, 3961, 3959] memory blocks, sum=11880 2 tests failed again: test_asyncio test_threading ---------- components: Tests messages: 344191 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_threading is leaking references versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 15:11:46 2019 From: report at bugs.python.org (Eric Snow) Date: Sat, 01 Jun 2019 19:11:46 +0000 Subject: [New-bugs-announce] [issue37127] Handling pending calls during runtime finalization may cause problems. Message-ID: <1559416306.37.0.716771519489.issue37127@roundup.psfhosted.org> New submission from Eric Snow : In Python/lifecycle.c (Py_FinalizeEx) we call _Py_FinishPendingCalls(), right after we stop all non-daemon Python threads but before we've actually started finalizing the runtime state. That call looks for any remaining pending calls (for the main interpreter) and runs them. There's some evidence of a bug there. In bpo-33608 I moved the pending calls to per-interpreter state. We saw failures (sometimes sporadic) on a few buildbots (e.g. FreeBSD) during runtime finalization. However, nearly all of the buildbots were fine, so it may be a question of architecture or slow hardware. See bpo-33608 for details on the failures. There are a number of possibilities, but it's been tricky reproducing the problem in order to investigate. Here are some theories: * daemon threads (a known weak point in runtime finalization) block pending calls from happening until some time after portions of the runtime have already been cleaned up * there's a race that causes the pending calls machinery to get caught in some sort infinite loop (e.g. a pending call fails and gets re-queued) * a corner case in the pending calls logic that triggers only during finalization Here are some other points to consider: * do we have the same problem during subinterpreter finalization (Py_EndInterpreter() rather than runtime finalization)? 
* perhaps the problem extends beyond finalization, but the conditions are more likely there * the change for bpo-33608 could have introduced the bug rather than exposing an existing one ---------- assignee: eric.snow components: Interpreter Core messages: 344202 nosy: eric.snow priority: normal severity: normal stage: test needed status: open title: Handling pending calls during runtime finalization may cause problems. type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 16:47:29 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 01 Jun 2019 20:47:29 +0000 Subject: [New-bugs-announce] [issue37128] Add math.perm() Message-ID: <1559422049.94.0.857716315829.issue37128@roundup.psfhosted.org> New submission from Serhiy Storchaka : The function which returns the number of ways to choose k items from n items without repetition and without order was added in issue35431. This function always goes in a pair with another function, which returns the number of ways to choose k items from n items without repetition and with order. These functions are always learned together in courses on combinatorics. Often C(n,k) is determined via P(n,k) (and both are determined via factorial).

P(n, k) = n! / (n-k)!
C(n, k) = P(n, k) / k!

The proposed PR adds math.perm(). It shares most of the code with math.comb(). ---------- components: Library (Lib) messages: 344226 nosy: lemburg, mark.dickinson, serhiy.storchaka, stutzbach priority: normal severity: normal status: open title: Add math.perm() type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 17:51:52 2019 From: report at bugs.python.org (Michal) Date: Sat, 01 Jun 2019 21:51:52 +0000 Subject: [New-bugs-announce] [issue37129] Add RWF_APPEND flag Message-ID: <1559425912.96.0.374088169798.issue37129@roundup.psfhosted.org> Change by Michal : ---------- components: Library (Lib) nosy: bezoka, pablogsal, vstinner priority: normal severity: normal status: open title: Add RWF_APPEND flag type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 1 21:18:12 2019 From: report at bugs.python.org (N.P. Khelili) Date: Sun, 02 Jun 2019 01:18:12 +0000 Subject: [New-bugs-announce] [issue37130] pathlib.with_name() doesn't like unnamed files. Message-ID: <1559438292.33.0.601764456145.issue37130@roundup.psfhosted.org> New submission from N.P. Khelili : Hi guys! I'm new to Python and working on my first real project with it.... I'm sorry if this is not the right place for posting this. I noticed that the pathlib.with_name() method does not accept giving a name to a path that does not already have one. It seems a bit inconsistent, knowing that the Path constructor does not require one...

>>> Path()
PosixPath('.')
>>> Path().resolve()
PosixPath('/home/nono')

but:

>>> Path().with_name('dodo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.7/pathlib.py", line 819, in with_name
    raise ValueError("%r has an empty name" % (self,))
ValueError: PosixPath('.') has an empty name

whereas if you do:

>>> Path().resolve().with_name('dodo')
PosixPath('/home/dodo')

I first thought "explicit is better than implicit" and then why not always use resolve first!
That was not a big deal but then I tried:

>>> Path('..').with_name('dudu').resolve()
PosixPath('/home/nono/dudu')  ( ! )
>>> Path('..').resolve().with_name('dudu')
PosixPath('/dudu')

It seems that the dots and slashes are in fact not really interpreted, leading to:

>>> Path('../..').with_name('dudu')
PosixPath('../dudu')
>>> Path('../..').with_name('dudu').resolve()
PosixPath('/home/dudu')  ( ! )
>>> Path('../..').resolve().with_name('dudu')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.7/pathlib.py", line 819, in with_name
    raise ValueError("%r has an empty name" % (self,))
ValueError: PosixPath('/') has an empty name

Even if the doc briefly mentions this, I found this behavior quite disturbing.... I don't know what the correct answer to this could be, maybe making Path('..') as invalid as Path('.'), or adding a few more lines in the doc... Sincerely yours, ---------- components: Library (Lib) messages: 344250 nosy: Nophke priority: normal severity: normal status: open title: pathlib.with_name() doesn't like unnamed files. type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 2 05:39:49 2019 From: report at bugs.python.org (Terji) Date: Sun, 02 Jun 2019 09:39:49 +0000 Subject: [New-bugs-announce] [issue37131] all(range()...)) is needlessley slow Message-ID: <1559468389.01.0.935675553387.issue37131@roundup.psfhosted.org> New submission from Terji : Checking if a range's items are all truthy can be done by checking whether the range contains 0; however, currently Python iterates over the range, making the operation slower than needed.

>>> rng = range(1, 1_000_000)
>>> timeit all(rng)
19.9 ms ± 599 µs per loop

If the all function could special-case range, this could be made fast like this:

if isinstance(obj, range):
    if 0 in obj:
        return False
    return True

---------- components: Interpreter Core messages: 344263 nosy: Petersen priority: normal severity: normal status: open title: all(range()...)) is needlessley slow type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 2 06:54:57 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 02 Jun 2019 10:54:57 +0000 Subject: [New-bugs-announce] [issue37132] Add a module for integer related math functions Message-ID: <1559472897.43.0.110105466613.issue37132@roundup.psfhosted.org> New submission from Serhiy Storchaka : The math module contains functions for floating-point numbers, as well as functions for integer-only numbers: factorial(), gcd(). A few more integer-specific functions were added in 3.8: isqrt(), perm(), comb(). The proposed PR adds the new imath module, adds the old functions factorial() and gcd() into it, and moves the new functions there. It also adds two additional functions: as_integer_ratio() and ilog2(). There are plans for adding more integer functions: divide_and_round(), is_prime(), primes(), but that work is in progress.
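For illustration, here is a rough pure-Python sketch of what the two extra functions mentioned above compute. The names come from this message; the exact semantics in the proposed imath module are an assumption here, not the actual implementation.

def ilog2(n):
    """Exact floor(log2(n)) for a positive integer."""
    if n <= 0:
        raise ValueError("ilog2() argument must be positive")
    return n.bit_length() - 1

def as_integer_ratio(n):
    """An integer is already an exact ratio n/1."""
    return (int(n), 1)

print(ilog2(1024))           # 10
print(as_integer_ratio(7))   # (7, 1)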
---------- components: Library (Lib) messages: 344269 nosy: mark.dickinson, rhettinger, serhiy.storchaka, stutzbach, tim.peters priority: normal severity: normal status: open title: Add a module for integer related math functions type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 2 10:21:12 2019 From: report at bugs.python.org (Андрей Казанцев) Date: Sun, 02 Jun 2019 14:21:12 +0000 Subject: [New-bugs-announce] [issue37133] Erro "ffi.h: No such file" when build python 3.8 (branch master) on windows 10 Message-ID: <1559485272.43.0.308074768888.issue37133@roundup.psfhosted.org> New submission from Андрей Казанцев : Where can I get the "ffi.h" file? ---------- components: Build messages: 344287 nosy: heckad priority: normal severity: normal status: open title: Erro "ffi.h: No such file" when build python 3.8 (branch master) on windows 10 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 2 12:18:06 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sun, 02 Jun 2019 16:18:06 +0000 Subject: [New-bugs-announce] [issue37134] Use PEP570 syntax in the documentation Message-ID: <1559492286.96.0.468855325599.issue37134@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : In the documentation, there are a lot of mismatches regarding function and method signatures that use positional-only arguments with respect to the docstrings (that properly use PEP570 syntax for documenting positional-only parameters). Now that the official syntax for positional-only parameters is accepted (the "/"), the documentation needs to be updated to reflect positional-only parameters in the functions and methods that use them, as covered in the PEP document. ---------- assignee: docs at python components: Documentation messages: 344295 nosy: docs at python, pablogsal priority: normal severity: normal status: open title: Use PEP570 syntax in the documentation versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 2 18:54:17 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sun, 02 Jun 2019 22:54:17 +0000 Subject: [New-bugs-announce] [issue37135] test_multiprocessing_spawn segfaults on AMD64 FreeBSD CURRENT Shared 3.x Message-ID: <1559516057.97.0.679920443323.issue37135@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : test_import (test.test_multiprocessing_spawn._TestImportStar) ...
ok ---------------------------------------------------------------------- Ran 352 tests in 322.667s OK (skipped=34) Warning -- files was modified by test_multiprocessing_spawn Before: [] After: ['python.core'] 0:11:39 load avg: 4.96 [202/423/1] test_winconsoleio skipped test_winconsoleio skipped -- test only relevant on win32 https://buildbot.python.org/all/#/builders/168/builds/1124/steps/5/logs/stdio ---------- components: Interpreter Core, Tests keywords: buildbot messages: 344332 nosy: pablogsal priority: high severity: normal status: open title: test_multiprocessing_spawn segfaults on AMD64 FreeBSD CURRENT Shared 3.x type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 2 21:09:22 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Jun 2019 01:09:22 +0000 Subject: [New-bugs-announce] [issue37136] Travis CI: Documentation tests fails with Sphinx 2.1 Message-ID: <1559524162.91.0.6239630561.issue37136@roundup.psfhosted.org> New submission from STINNER Victor : Travis CI: "Documentation tests" failed with: Successfully installed Jinja2-2.10.1 MarkupSafe-1.1.1 Pygments-2.4.2 Sphinx-2.1.0 alabaster-0.7.12 babel-2.7.0 blurb-1.0.7 certifi-2019.3.9 chardet-3.0.4 docutils-0.14 idna-2.8 imagesize-1.1.0 packaging-19.0 pyparsing-2.4.0 python-docs-theme-2018.7 pytz-2019.1 requests-2.22.0 six-1.12.0 snowballstemmer-1.2.1 sphinxcontrib-applehelp-1.0.1 sphinxcontrib-devhelp-1.0.1 sphinxcontrib-htmlhelp-1.0.2 sphinxcontrib-jsmath-1.0.1 sphinxcontrib-qthelp-1.0.2 sphinxcontrib-serializinghtml-1.1.3 urllib3-1.25.3 (...) Warning, treated as error: duplicate object description of email.message, other instance in library/email.message, use :noindex: for one of them https://travis-ci.org/python/cpython/jobs/540532864 ref: https://github.com/python/cpython/pull/13762 -- Sphinx 2.1 was released yesterday: https://www.sphinx-doc.org/en/master/changes.html#release-2-1-0-released-jun-02-2019 ---------- components: Build messages: 344352 nosy: lukasz.langa, mdk, vstinner priority: release blocker severity: normal status: open title: Travis CI: Documentation tests fails with Sphinx 2.1 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 2 21:18:08 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Jun 2019 01:18:08 +0000 Subject: [New-bugs-announce] [issue37137] test_asyncio: test_cancel_gather_2() dangling thread Message-ID: <1559524688.39.0.485790490316.issue37137@roundup.psfhosted.org> New submission from STINNER Victor : s390x RHEL 3.x buildbot: https://buildbot.python.org/all/#/builders/21/builds/3150 test_cancel_gather_1 (test.test_asyncio.test_tasks.CTaskSubclass_PyFuture_Tests) Ensure that a gathering future refuses to be cancelled once all ... ok test_cancel_gather_2 (test.test_asyncio.test_tasks.CTaskSubclass_PyFuture_Tests) ... Warning -- threading_cleanup() failed to cleanup -1 threads (count: 0, dangling: 1) Dangling thread: <_MainThread(MainThread, started 4396153272064)> ok test_cancel_inner_future (test.test_asyncio.test_tasks.CTaskSubclass_PyFuture_Tests) ... ok test_cancel_task_catching (test.test_asyncio.test_tasks.CTaskSubclass_PyFuture_Tests) ... ok -- AMD64 FreeBSD 10-STABLE Non-Debug 3.x: https://buildbot.python.org/all/#/builders/167/builds/1181 test_cancel_gather_2 (test.test_asyncio.test_tasks.CTaskSubclass_PyFuture_Tests) ... 
Warning -- threading_cleanup() failed to cleanup -1 threads (count: 0, dangling: 1) ---------- components: Tests messages: 344353 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_asyncio: test_cancel_gather_2() dangling thread versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 00:16:10 2019 From: report at bugs.python.org (Gregory P. Smith) Date: Mon, 03 Jun 2019 04:16:10 +0000 Subject: [New-bugs-announce] [issue37138] PEP 590 method_vectorcall calls memcpy with NULL src Message-ID: <1559535370.56.0.947989437279.issue37138@roundup.psfhosted.org> New submission from Gregory P. Smith : The undefined behavior sanitizer buildbot is flagging a bunch of issues in master (3.8) of late: AssertionError: 'Objects/classobject.c:74:29: runtime erro[139 chars]re\n' != '' - Objects/classobject.c:74:29: runtime error: null pointer passed as argument 2, which is declared to never be null - /usr/include/string.h:43:28: note: nonnull attribute specified here (see https://buildbot.python.org/all/#/builders/135/builds/1937/steps/5/logs/stdio) This appears to be coming from a relatively new classobject.c:method_vectorcall() function method_vectorcall(PyObject *method, PyObject *const *args, size_t nargsf, PyObject *kwnames) Which looks like it is probably being called with NULL args value and thus winds up calling memcpy() with src=NULL. This was introduced in https://github.com/python/cpython/commit/aacc77fbd77640a8f03638216fa09372cc21673d for the PEP 590 implementation. ---------- assignee: Mark.Shannon components: Interpreter Core messages: 344378 nosy: Mark.Shannon, gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: PEP 590 method_vectorcall calls memcpy with NULL src type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 04:52:55 2019 From: report at bugs.python.org (Louis Abraham) Date: Mon, 03 Jun 2019 08:52:55 +0000 Subject: [New-bugs-announce] [issue37139] Inconsistent behavior of email.header.decode_header Message-ID: <1559551975.38.0.600668913696.issue37139@roundup.psfhosted.org> New submission from Louis Abraham : Hi, (this is my first issue btw) I think has a broken behavior. It returns a list of `(decoded_string, charset)` pairs. However, `decoded_string` can be either a string or bytes. I have no preference but it should really be of one type. ---------- messages: 344392 nosy: louis.abraham at yahoo.fr priority: normal severity: normal status: open title: Inconsistent behavior of email.header.decode_header _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 05:21:37 2019 From: report at bugs.python.org (Petr Viktorin) Date: Mon, 03 Jun 2019 09:21:37 +0000 Subject: [New-bugs-announce] [issue37140] ctypes change made clang fail to build Message-ID: <1559553697.78.0.201685591504.issue37140@roundup.psfhosted.org> New submission from Petr Viktorin : Hello, I haven't investigated this fully yet, and won't be able to find time today. I'm forwarding the report here in case someone familiar with the code wants to take a look. 
On Fedora, "clang" fails to build with Python 3.8, probably due this change (which was supposed to be Windows-only): https://github.com/python/cpython/commit/32119e10b792ad7ee4e5f951a2d89ddbaf111cc5#diff-998bfefaefe2ab83d5f523e18f158fa4R413 According to serge_sans_paille: if ``self->b_ptr`` contains pointer, the ``memcpy`` creates sharing, and this is dangerous: if a ``__del__`` happens to free the original pointer, we end up with a dangling reference in ``new_ptr``. As far as I can tell, this is what happens in the clang bindings code. Fedora discussion with link to logs: https://bugzilla.redhat.com/show_bug.cgi?id=1715016 ---------- components: ctypes messages: 344394 nosy: petr.viktorin priority: normal severity: normal status: open title: ctypes change made clang fail to build versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 05:32:30 2019 From: report at bugs.python.org (Bruce Merry) Date: Mon, 03 Jun 2019 09:32:30 +0000 Subject: [New-bugs-announce] [issue37141] Allow multiple separators in StreamReader.readuntil Message-ID: <1559554350.7.0.981137522594.issue37141@roundup.psfhosted.org> New submission from Bruce Merry : Text-based protocols sometimes allow a choice of newline separator - I work with one that allows either \r or \n. Unfortunately that doesn't work with StreamReader.readuntil, which only accepts a single separator, so I've had to do some hacky things to obtain lines without having to >From discussion in issue 32052, it sounded like extending StreamReader.readuntil to support a tuple of separators would be feasible. ---------- components: asyncio messages: 344397 nosy: asvetlov, bmerry, yselivanov priority: normal severity: normal status: open title: Allow multiple separators in StreamReader.readuntil type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 08:23:44 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Jun 2019 12:23:44 +0000 Subject: [New-bugs-announce] [issue37142] test_asyncio timed out on AMD64 FreeBSD CURRENT Shared 3.x Message-ID: <1559564624.91.0.895932729881.issue37142@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/168/builds/1135 running: test_asyncio (23 min 23 sec) running: test_asyncio (23 min 53 sec) running: test_asyncio (24 min 23 sec) running: test_asyncio (24 min 53 sec) 0:38:48 load avg: 0.32 [423/423/1] test_asyncio crashed (Exit code 1) Timeout (0:25:00)! 
Thread 0x0000000800ac3000 (most recent call first): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/selectors.py", line 558 in select File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/asyncio/base_events.py", line 1808 in _run_once File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/asyncio/base_events.py", line 563 in run_forever File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/asyncio/base_events.py", line 595 in run_until_complete File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/test_asyncio/test_streams.py", line 1531 in test_stream_server_abort File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/case.py", line 628 in _callTestMethod File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/case.py", line 671 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/case.py", line 731 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 122 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 84 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 122 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 84 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 122 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 84 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 122 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 84 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 122 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/suite.py", line 84 in __call__ File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/unittest/runner.py", line 176 in run File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/support/__init__.py", line 1984 in _run_suite File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/support/__init__.py", line 2080 in run_unittest File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/runtest.py", line 203 in _test_module File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/runtest.py", line 228 in _runtest_inner2 File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/runtest.py", line 264 in _runtest_inner File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/runtest.py", line 135 in _runtest File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/runtest.py", line 187 in runtest File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/runtest_mp.py", line 66 in run_tests_worker File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/main.py", line 611 in _main File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/main.py", line 588 in main File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/libregrtest/main.py", line 663 in main File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/regrtest.py", line 
46 in _main File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/regrtest.py", line 50 in File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/runpy.py", line 85 in _run_code File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/runpy.py", line 192 in _run_module_as_main == Tests result: FAILURE == ... ====================================================================== ERROR: test_stdin_stdout (test.test_asyncio.test_subprocess.SubprocessMultiLoopWatcherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/test_asyncio/test_subprocess.py", line 130, in test_stdin_stdout exitcode, stdout = self.loop.run_until_complete(task) File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/asyncio/base_events.py", line 608, in run_until_complete return future.result() File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/asyncio/tasks.py", line 464, in wait_for raise exceptions.TimeoutError() asyncio.exceptions.TimeoutError ---------- components: Tests messages: 344415 nosy: vstinner priority: normal severity: normal status: open title: test_asyncio timed out on AMD64 FreeBSD CURRENT Shared 3.x versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 08:27:45 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Jun 2019 12:27:45 +0000 Subject: [New-bugs-announce] [issue37143] multiprocessing crashed with EXCEPTION_ACCESS_VIOLATION on Python on x86 Windows7 3.x Message-ID: <1559564865.07.0.954701975633.issue37143@roundup.psfhosted.org> New submission from STINNER Victor : Maybe it's related to bpo-33608: https://bugs.python.org/issue33608#msg340143 https://buildbot.python.org/all/#/builders/58/builds/2558 0:42:36 load avg: 5.63 [299/423/1] test_venv crashed (Exit code 1) Timeout (0:35:00)! 
Thread 0x00000e00 (most recent call first): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\subprocess.py", line 1332 in _readerthread File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 865 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 923 in _bootstrap_inner File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 885 in _bootstrap Thread 0x00000150 (most recent call first): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\subprocess.py", line 1332 in _readerthread File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 865 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 923 in _bootstrap_inner File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 885 in _bootstrap Thread 0x00000d40 (most recent call first): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 1015 in _wait_for_tstate_lock File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 999 in join File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\subprocess.py", line 1361 in _communicate File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\subprocess.py", line 999 in communicate File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_venv.py", line 40 in check_output File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_venv.py", line 327 in test_multiprocessing File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\case.py", line 628 in _callTestMethod File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\case.py", line 671 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\case.py", line 731 in __call__ File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 122 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 84 in __call__ File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 122 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 84 in __call__ File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 122 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 84 in __call__ File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\runner.py", line 176 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 1984 in _run_suite File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 2080 in run_unittest File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 203 in _test_module File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 228 in _runtest_inner2 File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 264 in _runtest_inner File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 135 in _runtest File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 187 in runtest File 
"D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest_mp.py", line 66 in run_tests_worker File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 611 in _main File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 588 in main File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 663 in main File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\regrtest.py", line 46 in _main File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\regrtest.py", line 50 in File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\runpy.py", line 85 in _run_code File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\runpy.py", line 192 in _run_module_as_main ... ERROR: test_multiprocessing (test.test_venv.BasicTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_venv.py", line 327, in test_multiprocessing out, err = check_output([envpy, '-c', File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_venv.py", line 42, in check_output raise subprocess.CalledProcessError( subprocess.CalledProcessError: Command '['d:\\temp\\tmpf38yk5w1\\Scripts\\python_d.exe', '-c', 'from multiprocessing import Pool; print(Pool(1).apply_async("Python".lower).get(3))']' returned non-zero exit status 3221225477. where 3221225477 = 0xc0000005 = EXCEPTION_ACCESS_VIOLATION ---------- components: Tests messages: 344416 nosy: vstinner priority: normal severity: normal status: open title: multiprocessing crashed with EXCEPTION_ACCESS_VIOLATION on Python on x86 Windows7 3.x versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 10:54:20 2019 From: report at bugs.python.org (DM) Date: Mon, 03 Jun 2019 14:54:20 +0000 Subject: [New-bugs-announce] [issue37144] tarfile.open: improper handling of path-like object Message-ID: <1559573660.5.0.124771167649.issue37144@roundup.psfhosted.org> New submission from DM : According to the documentation, tarfile.open accepts a path-like object since Python 3.6 However, it is not the case when writing a compressed gzip (w|gz). See the attached file for minimal POC Note 1: tested on 3.6 and 3.7 Note 2: https://docs.python.org/3.6/library/tarfile.html#tarfile.open ---------- components: Library (Lib) files: poc.py messages: 344423 nosy: dm priority: normal severity: normal status: open title: tarfile.open: improper handling of path-like object type: behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48386/poc.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 12:48:36 2019 From: report at bugs.python.org (Geoffrey Sneddon) Date: Mon, 03 Jun 2019 16:48:36 +0000 Subject: [New-bugs-announce] [issue37145] collections.abc.MappingView mixins rely on undocumented _mapping Message-ID: <1559580516.49.0.942559591286.issue37145@roundup.psfhosted.org> New submission from Geoffrey Sneddon : The mixin methods on MappingView and its subclasses all rely on the undocumented MappingView._mapping (set in the undocumented MappingView.__init__). This should probably be documented at least insofar as Set._from_iterable is. 
---------- assignee: docs at python components: Documentation messages: 344447 nosy: docs at python, gsnedders priority: normal severity: normal status: open title: collections.abc.MappingView mixins rely on undocumented _mapping type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 3 16:34:43 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 03 Jun 2019 20:34:43 +0000 Subject: [New-bugs-announce] [issue37146] opcode cache for LOAD_GLOBAL emits false alarm in memory leak hunting Message-ID: <1559594083.93.0.475089830437.issue37146@roundup.psfhosted.org> New submission from STINNER Victor : The opcode cache for LOAD_GLOBAL introduced a false alarm in memory leak hunting (python3 -m test -R 3:3 ...). => opcache: bpo-26219.

Before the change:

$ git checkout 91234a16367b56ca03ee289f7c03a34d4cfec4c8^
$ make && ./python -m test -R 3:3 test_pprint
...
Tests result: SUCCESS

After the change:

$ git checkout 91234a16367b56ca03ee289f7c03a34d4cfec4c8
$ make && ./python -m test -R 3:3 test_pprint
...
test_pprint leaked [4, 2, 4] memory blocks, sum=10
...

The problem is that at each iteration of regrtest -R 3:3 (6 iterations), a few more code objects get this opcache allocated. There are different solutions to fix regrtest -R 3:3.

(*) Always optimize:

diff --git a/Python/ceval.c b/Python/ceval.c
index 411ba3d73c..6cd148efba 100644
--- a/Python/ceval.c
+++ b/Python/ceval.c
@@ -103,7 +103,7 @@ static long dxp[256];
 #endif
 /* per opcode cache */
-#define OPCACHE_MIN_RUNS 1024 /* create opcache when code executed this time */
+#define OPCACHE_MIN_RUNS 1 /* create opcache when code executed this time */
 #define OPCACHE_STATS 0 /* Enable stats */
 #if OPCACHE_STATS

$ make && ./python -m test -R 3:3 test_pprint
...
Tests result: SUCCESS

(*) Never optimize: disable the opcache until a better fix can be found

diff --git a/Python/ceval.c b/Python/ceval.c
index 411ba3d73c..3c85df6fea 100644
--- a/Python/ceval.c
+++ b/Python/ceval.c
@@ -1230,6 +1230,7 @@ _PyEval_EvalFrameDefault(PyFrameObject *f, int throwflag)
     f->f_stacktop = NULL;  /* remains NULL unless yield suspends frame */
     f->f_executing = 1;
+#if 0
     if (co->co_opcache_flag < OPCACHE_MIN_RUNS) {
         co->co_opcache_flag++;
         if (co->co_opcache_flag == OPCACHE_MIN_RUNS) {
@@ -1244,6 +1245,7 @@
 #endif
         }
     }
+#endif
 #ifdef LLTRACE
     lltrace = _PyDict_GetItemId(f->f_globals, &PyId___ltrace__) != NULL;

$ make && ./python -m test -R 3:3 test_pprint
...
Tests result: SUCCESS

(*) Find a way to explicitly deoptimize all code objects. Modules/gcmodule.c has a clear_freelists() function called by collect() if generation == NUM_GENERATIONS-1, e.g. when gc.collect() is called explicitly. Lib/test/libregrtest/refleak.py also has a dash_R_cleanup() function which clears many caches. Problem: currently, code objects are not explicitly tracked (for example, they are not tracked in a doubly linked list).

(*) Add way more warmup iterations to regrtest in buildbots. I dislike this option. A build on a refleak buildbot worker already takes 2 to 3 hours. Adding more warmup iterations would make a build even slower.
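To see why a lazily created cache trips this kind of checking, here is a small pure-Python analogy (illustrative only; this is neither regrtest nor the opcache code): the checker compares allocation counts across repeated runs, and a cache that is only allocated after a warm-up threshold makes one of the later, measured runs allocate blocks that the earlier runs did not.

import sys

MIN_RUNS = 4          # stand-in for OPCACHE_MIN_RUNS
_counter = 0
_cache = None

def work():
    global _counter, _cache
    _counter += 1
    if _cache is None and _counter >= MIN_RUNS:
        _cache = [None] * 64          # allocated only after warm-up

for run in range(6):
    before = sys.getallocatedblocks()
    work()
    after = sys.getallocatedblocks()
    # most runs show ~0 delta; the run that creates the cache shows a jump
    print(f"run {run}: allocated block delta = {after - before}")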
---------- components: Interpreter Core messages: 344469 nosy: inada.naoki, vstinner, yselivanov priority: normal severity: normal status: open title: opcode cache for LOAD_GLOBAL emits false alarm in memory leak hunting versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 04:54:23 2019 From: report at bugs.python.org (Aldwin Pollefeyt) Date: Tue, 04 Jun 2019 08:54:23 +0000 Subject: [New-bugs-announce] [issue37147] f-string debugging f"{x=[}" adds [filename:lineno] as prefix Message-ID: <1559638463.74.0.509291608796.issue37147@roundup.psfhosted.org> New submission from Aldwin Pollefeyt : >From this idea [0] by Karthikeyan Singaravelan and added to his code in hack [1]. name = "karthikeyan" print(f"{name =[}") print(f"{name=[}") print(f"{age = [}") print(f"{age= [}") [prog.py:2] name ='karthikeyan' [prog.py:3] name='karthikeyan' [prog.py:4] name = 'karthikeyan' [prog.py:5] name= 'karthikeyan' [0] https://tirkarthi.github.io/programming/2019/05/08/f-string-debugging.html [1] https://github.com/tirkarthi/cpython/commit/d0fcbe67f6bb8ad60744b0a4973c4dc69fda65a9 ---------- messages: 344533 nosy: aldwinaldwin, xtreak priority: normal severity: normal status: open title: f-string debugging f"{x=[}" adds [filename:lineno] as prefix type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 06:08:45 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 04 Jun 2019 10:08:45 +0000 Subject: [New-bugs-announce] [issue37148] test_asyncio fails on refleaks buildbots Message-ID: <1559642925.13.0.289383724886.issue37148@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : https://buildbot.python.org/all/#/builders/1/builds/609/steps/5/logs/stdio ===================================================================== FAIL: test_stream_reader_create_warning (test.test_asyncio.test_streams.StreamTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/buildbot/buildarea/cpython/3.x.ware-gentoo-x86.refleak/build/Lib/test/test_asyncio/test_streams.py", line 1233, in test_stream_reader_create_warning asyncio.StreamReader AssertionError: DeprecationWarning not triggered ====================================================================== FAIL: test_stream_writer_create_warning (test.test_asyncio.test_streams.StreamTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/buildbot/buildarea/cpython/3.x.ware-gentoo-x86.refleak/build/Lib/test/test_asyncio/test_streams.py", line 1237, in test_stream_writer_create_warning asyncio.StreamWriter AssertionError: DeprecationWarning not triggered The problem is that when the test is repeated using the -R, asyncio has cached already StreamReader and StreamWriter into globals, not triggering the warning again. 
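A small sketch of the caching effect described above (an assumption consistent with the description, not the actual asyncio code): a module-level __getattr__ (PEP 562) that warns on first access and then stores the name in globals() will never warn again in the same process, so a repeated test iteration cannot observe the DeprecationWarning a second time.

# contents of a hypothetical module, e.g. mymod.py
import warnings

class _StreamReader:            # stand-in for the real class
    pass

def __getattr__(name):
    if name == "StreamReader":
        warnings.warn(f"{name} is deprecated", DeprecationWarning,
                      stacklevel=2)
        globals()[name] = _StreamReader   # cached: __getattr__ is skipped next time
        return _StreamReader
    raise AttributeError(f"module has no attribute {name!r}")

# test side:
# import mymod
# with self.assertWarns(DeprecationWarning):
#     mymod.StreamReader    # passes only on the first repetition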
---------- components: Tests, asyncio keywords: buildbot messages: 344537 nosy: asvetlov, lukasz.langa, pablogsal, vstinner, yselivanov priority: normal severity: normal stage: needs patch status: open title: test_asyncio fails on refleaks buildbots type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 07:59:26 2019 From: report at bugs.python.org (Guilloux) Date: Tue, 04 Jun 2019 11:59:26 +0000 Subject: [New-bugs-announce] [issue37149] link to official documentation tkinter failed !!! Message-ID: <1559649566.96.0.759993950384.issue37149@roundup.psfhosted.org> New submission from Guilloux : On the page https://docs.python.org/3/library/tkinter.html there is a link to "Tkinter reference: a GUI for Python" which is "https://infohost.nmt.edu/tcc/help/pubs/tkinter/web/index.html". BIG PROBLEM : Since several days this link doesn't work any more !!! It's really important to fix it because most of official online documentation about tkinter was provided by this way !!! Fortunately i have a copy of the documentation (cf pdf file). I have downloaded this file thanks to the website above just before the link was failed ! (Yes, i know, i'm lucky...) Unfortunately I can't send you this pdf because I have a message "request entity too large" :-( ---------- assignee: docs at python components: Documentation, Tkinter messages: 344549 nosy: docs at python, xameridu priority: normal severity: normal status: open title: link to official documentation tkinter failed !!! versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 09:22:00 2019 From: report at bugs.python.org (Demid) Date: Tue, 04 Jun 2019 13:22:00 +0000 Subject: [New-bugs-announce] [issue37150] Do not allow to pass FileType class object instead of instance in add_argument Message-ID: <1559654520.68.0.135252915467.issue37150@roundup.psfhosted.org> New submission from Demid : There is a possibility that someone (like me) accidentally will omit parentheses with FileType arguments after FileType, and parser will contain wrong file until someone will try to use it. Example: parser = argparse.ArgumentParser() parser.add_argument('-x', type=argparse.FileType) ---------- components: Library (Lib) messages: 344568 nosy: zygocephalus priority: normal severity: normal status: open title: Do not allow to pass FileType class object instead of instance in add_argument type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 10:40:44 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Tue, 04 Jun 2019 14:40:44 +0000 Subject: [New-bugs-announce] [issue37151] Calling code cleanup after PEP 590 Message-ID: <1559659244.35.0.514840211553.issue37151@roundup.psfhosted.org> New submission from Jeroen Demeyer : Now that PEP 590 has been implemented, a lot of old code can be cleaned up. 
In particular: - get rid of _PyMethodDef_RawFastCallXXX() functions and replace them by vectorcall functions for each calling convention - get rid of FastCallDict() implementations for specific types, but keep the generic _PyObject_FastCallDict() - get rid of some specific tp_call implementations: try to use tp_call=PyVectorcall_Call in more places ---------- components: Interpreter Core messages: 344577 nosy: Mark.Shannon, jdemeyer, petr.viktorin priority: normal severity: normal status: open title: Calling code cleanup after PEP 590 type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 11:05:38 2019 From: report at bugs.python.org (carlo) Date: Tue, 04 Jun 2019 15:05:38 +0000 Subject: [New-bugs-announce] [issue37152] Add AF_LOCAL alias for AF_UNIX Message-ID: <1559660738.02.0.723341796578.issue37152@roundup.psfhosted.org> New submission from carlo : both mean the same thing and are used to reference local socket files if you do a `man socket` one of the lines says PF_LOCAL Host-internal protocols, formerly called PF_UNIX, (AF and PF mean the same thing here, but importantly it shows that PF_UNIX has been depreciated) this change gives users the option to type socket.AF_LOCAL as well as socket.AF_UNIX when specifying the address family for the socket. this is my first issue so lmk if there is anything I'm missing ) ---------- components: Library (Lib) messages: 344585 nosy: frosty00 priority: normal severity: normal status: open title: Add AF_LOCAL alias for AF_UNIX versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 12:30:09 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 04 Jun 2019 16:30:09 +0000 Subject: [New-bugs-announce] [issue37153] test_venv: test_multiprocessing() hangs randomly on x86 Windows7 3.x Message-ID: <1559665809.74.0.244417716328.issue37153@roundup.psfhosted.org> New submission from STINNER Victor : x86 Windows7 3.x: https://buildbot.python.org/all/#/builders/58/builds/2573 0:42:21 load avg: 4.40 [283/423/1] test_venv crashed (Exit code 1) Timeout (0:35:00)! 
Thread 0x000000c0 (most recent call first): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\subprocess.py", line 1332 in _readerthread File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 865 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 923 in _bootstrap_inner File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 885 in _bootstrap Thread 0x00000c6c (most recent call first): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\subprocess.py", line 1332 in _readerthread File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 865 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 923 in _bootstrap_inner File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 885 in _bootstrap Thread 0x00000350 (most recent call first): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 1015 in _wait_for_tstate_lock File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\threading.py", line 999 in join File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\subprocess.py", line 1361 in _communicate File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\subprocess.py", line 999 in communicate File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_venv.py", line 40 in check_output File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_venv.py", line 327 in test_multiprocessing File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\case.py", line 628 in _callTestMethod File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\case.py", line 671 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\case.py", line 731 in __call__ File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 122 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 84 in __call__ File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 122 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 84 in __call__ File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 122 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\suite.py", line 84 in __call__ File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\unittest\runner.py", line 176 in run File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 1984 in _run_suite File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 2080 in run_unittest File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 203 in _test_module File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 228 in _runtest_inner2 File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 264 in _runtest_inner File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 135 in _runtest File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest.py", line 187 in runtest File 
"D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\runtest_mp.py", line 66 in run_tests_worker File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 611 in _main File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 588 in main File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 663 in main File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\regrtest.py", line 46 in _main File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\regrtest.py", line 50 in File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\runpy.py", line 85 in _run_code File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\runpy.py", line 192 in _run_module_as_main (...) Re-running failed tests in verbose mode Re-running test_venv in verbose mode test_defaults (test.test_venv.BasicTest) ... ok test_executable (test.test_venv.BasicTest) ... ok test_executable_symlinks (test.test_venv.BasicTest) ... skipped 'Needs symlinks' test_isolation (test.test_venv.BasicTest) ... ok test_multiprocessing (test.test_venv.BasicTest) ... ok test_overwrite_existing (test.test_venv.BasicTest) ... ok test_prefixes (test.test_venv.BasicTest) ... ok test_prompt (test.test_venv.BasicTest) ... ok test_symlinking (test.test_venv.BasicTest) ... skipped 'Needs symlinks' test_unicode_in_batch_file (test.test_venv.BasicTest) ... ok test_unoverwritable_fails (test.test_venv.BasicTest) ... ok test_upgrade (test.test_venv.BasicTest) ... ok test_devnull (test.test_venv.EnsurePipTest) ... ok test_explicit_no_pip (test.test_venv.EnsurePipTest) ... ok test_no_pip_by_default (test.test_venv.EnsurePipTest) ... ok test_with_pip (test.test_venv.EnsurePipTest) ... ok ---------------------------------------------------------------------- Ran 16 tests in 59.907s OK (skipped=2) == Tests result: FAILURE then SUCCESS == (...) 
1 re-run test: test_venv Total duration: 1 hour 6 min Tests result: FAILURE then SUCCESS Traceback (most recent call last): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 1006, in temp_dir yield path File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 1058, in temp_cwd yield cwd_dir File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 588, in main self._main(tests, kwargs) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 658, in _main sys.exit(0) SystemExit: 0 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 329, in _force_run return func(*args) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'D:\\cygwin\\home\\db3l\\buildarea\\3.x.bolen-windows7\\build\\build\\test_python_3040\\worker_3236' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\runpy.py", line 192, in _run_module_as_main return _run_code(code, main_globals, None, File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\__main__.py", line 2, in main() File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 663, in main Regrtest().main(tests=tests, **kwargs) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\libregrtest\main.py", line 588, in main self._main(tests, kwargs) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 1058, in temp_cwd yield cwd_dir File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 1011, in temp_dir rmtree(path) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 451, in rmtree _rmtree(path) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 392, in _rmtree _waitfor(_rmtree_inner, path, waitall=True) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 340, in _waitfor func(pathname) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 389, in _rmtree_inner _force_run(fullname, os.rmdir, fullname) File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\support\__init__.py", line 335, in _force_run return func(*args) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'D:\\cygwin\\home\\db3l\\buildarea\\3.x.bolen-windows7\\build\\build\\test_python_3040\\worker_3236' ---------- components: Tests messages: 344600 nosy: vstinner priority: normal severity: normal status: open title: test_venv: test_multiprocessing() hangs randomly on x86 Windows7 3.x versions: Python 3.8 
_______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 12:55:02 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 04 Jun 2019 16:55:02 +0000 Subject: [New-bugs-announce] [issue37154] test_utf8_mode: test_env_var() fails on AMD64 Fedora Rawhide Clang Installed 3.7 Message-ID: <1559667302.55.0.345195224781.issue37154@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Fedora Rawhide Clang Installed 3.7: https://buildbot.python.org/all/#/builders/195/builds/104 FAIL: test_env_var (test.test_utf8_mode.UTF8ModeTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.7.cstratak-fedora.installed/build/target/lib/python3.7/test/test_utf8_mode.py", line 93, in test_env_var self.assertEqual(out, '0') AssertionError: '1' != '0' - 1 + 0 On the buildbot: vstinner at python-builder-rawhide$ env|grep -E 'LC_|LANG' LANG=en_US.UTF-8 vstinner at python-builder-rawhide$ locale locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" (...) LC_ALL= Extract of the test: # Cannot test with the POSIX locale, since the POSIX locale enables # the UTF-8 mode if not self.posix_locale(): # PYTHONUTF8 should be ignored if -E is used out = self.get_output('-E', '-c', code, PYTHONUTF8='1') self.assertEqual(out, '0') The problem seems to be posix_locale() which fails if the C locale has been coerced by PEP 538: POSIX_LOCALES = ('C', 'POSIX') def posix_locale(self): loc = locale.setlocale(locale.LC_CTYPE, None) return (loc in POSIX_LOCALES) This code doesn't work if LC_CTYPE is already coerced. ---------- components: Tests messages: 344603 nosy: vstinner priority: normal severity: normal status: open title: test_utf8_mode: test_env_var() fails on AMD64 Fedora Rawhide Clang Installed 3.7 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 13:38:15 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 04 Jun 2019 17:38:15 +0000 Subject: [New-bugs-announce] [issue37155] test_asyncio: test_stdin_broken_pipe() failed on AMD64 FreeBSD CURRENT Shared 3.x Message-ID: <1559669895.88.0.0230112144047.issue37155@roundup.psfhosted.org> New submission from STINNER Victor : See also bpo-33531 and bpo-30382. 
AMD64 FreeBSD CURRENT Shared 3.x: https://buildbot.python.org/all/#/builders/168/builds/1154 ====================================================================== FAIL: test_stdin_broken_pipe (test.test_asyncio.test_subprocess.SubprocessFastWatcherTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/test_asyncio/test_subprocess.py", line 243, in test_stdin_broken_pipe self.assertRaises((BrokenPipeError, ConnectionResetError), AssertionError: (<class 'BrokenPipeError'>, <class 'ConnectionResetError'>) not raised by run_until_complete ---------- components: Tests messages: 344612 nosy: vstinner priority: normal severity: normal status: open title: test_asyncio: test_stdin_broken_pipe() failed on AMD64 FreeBSD CURRENT Shared 3.x versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 16:20:58 2019 From: report at bugs.python.org (Steve Dower) Date: Tue, 04 Jun 2019 20:20:58 +0000 Subject: [New-bugs-announce] [issue37156] Fix libssl DLL tag in Tools/msi project Message-ID: <1559679658.46.0.279980896564.issue37156@roundup.psfhosted.org> New submission from Steve Dower : Mostly a note to self to fix the tag. Right now the x64 build gets an extra suffix, which is incorrect and causes installer builds to fail. ---------- assignee: steve.dower components: Windows messages: 344638 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Fix libssl DLL tag in Tools/msi project type: compile error versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 4 17:36:40 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 04 Jun 2019 21:36:40 +0000 Subject: [New-bugs-announce] [issue37157] shutil: add reflink=False to file copy functions to control clone/CoW copies (use copy_file_range) Message-ID: <1559684200.61.0.18008315195.issue37157@roundup.psfhosted.org> New submission from STINNER Victor : bpo-26826 added a new os.copy_file_range() function: https://docs.python.org/dev/library/os.html#os.copy_file_range Like os.sendfile(), this new Linux syscall avoids memory copies between kernel space and user space. It matters for performance, especially since the Meltdown vulnerability required Windows, Linux, FreeBSD, etc. to use a different address space for the kernel (like Linux Kernel page-table isolation, KPTI). shutil has been modified in Python 3.8 to use os.sendfile() on Linux: https://docs.python.org/dev/whatsnew/3.8.html#optimizations But according to Pablo Galindo Salgado, copy_file_range() goes further: "But copy_file_range can leverage more filesystem features like deduplication and copy offload stuff." https://bugs.python.org/issue26826#msg344582 Giampaolo Rodola' added: "I think data deduplication / CoW / reflink copy is better implemented via FICLONE. "cp --reflink" uses it, I presume because it's older than copy_file_range(). I have a working patch adding CoW copy support for Linux and OSX (but not Windows). I think that should be exposed as a separate shutil.reflink() though, and copyfile() should just do a standard copy." "Actually "man copy_file_range" claims it can do server-side copy, meaning no network traffic between client and server if *src* and *dst* live on the same network fs.
So I agree copy_file_range() should be preferred over sendfile() after all. =) I have a wrapper for copy_file_range() similar to what I did in shutil in issue33671 which I can easily integrate, but I wanted to land this one first: https://bugs.python.org/issue37096 Also, I suppose we cannot land this in time for 3.8?" https://bugs.python.org/issue26826#msg344586 -- There was already a discussion about switching shutil to copy-on-write: https://bugs.python.org/issue33671#msg317989 One problem is that modifying the "copied" file can suddenly become slower if it was copied using "cp --reflink". It seems like adding a new reflink=False parameter to file copy functions to control clone/CoW copies is required to prevent bad surprises. ---------- components: Library (Lib) messages: 344648 nosy: giampaolo.rodola, pablogsal, vstinner priority: normal severity: normal status: open title: shutil: add reflink=False to file copy functions to control clone/CoW copies (use copy_file_range) type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 01:24:04 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Wed, 05 Jun 2019 05:24:04 +0000 Subject: [New-bugs-announce] [issue37158] Speed-up statistics.fmean() Message-ID: <1559712244.32.0.839962107784.issue37158@roundup.psfhosted.org> New submission from Raymond Hettinger : fmean() can be sped up by converting count() from a function to a generator and by using enumerate() to do the counting. -- Baseline --- $ ./python.exe -m timeit -r11 -s 'from statistics import fmean' -s 'data=list(map(float, range(1000)))' 'fmean(iter(data))' 2000 loops, best of 11: 108 usec per loop -- Patched -- $ ./python.exe -m timeit -r11 -s 'from statistics import fmean' -s 'data=list(map(float, range(1000)))' 'fmean(iter(data))' 5000 loops, best of 11: 73.1 usec per loop ---------- assignee: steven.daprano components: Library (Lib) messages: 344670 nosy: rhettinger, steven.daprano priority: normal severity: normal status: open title: Speed-up statistics.fmean() type: performance versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 01:24:51 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Wed, 05 Jun 2019 05:24:51 +0000 Subject: [New-bugs-announce] [issue37159] Have shutil.copyfile() use copy_file_range() Message-ID: <1559712291.98.0.707685827942.issue37159@roundup.psfhosted.org> New submission from Giampaolo Rodola' : This is a follow-up of issue33639 (zero-copy via sendfile()) and issue26828 (os.copy_file_range()). On [Linux 4.5 / glibc 2.27] shutil.copyfile() will use os.copy_file_range() instead of os.sendfile(). According to my benchmarks performance is the same, but when dealing with NFS copy_file_range() is supposed to attempt doing a server-side copy, meaning there will be no exchange of data between client and server, making the copy operation an order of magnitude faster. Before proceeding, unit tests for big-file support should be added first (issue37096). We didn't hit the 3.8 deadline but I actually prefer to land this in 3.9 as I want to experiment with it a bit (copy_file_range() is quite new, issue26828 is still a WIP).
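For context on the copy_file_range() discussion above (issue37157/issue37159), here is a minimal sketch of the kind of user-space copy loop that os.copy_file_range() enables; the helper name and the fallback remark are assumptions for illustration, not the actual shutil patch:

    import os

    def copyfile_copy_file_range(src, dst, blocksize=2 ** 23):
        # Illustrative only: copy src to dst by repeatedly asking the kernel
        # to move bytes between the two descriptors, so the data never
        # round-trips through a user-space buffer.
        with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
            while True:
                copied = os.copy_file_range(fsrc.fileno(), fdst.fileno(), blocksize)
                if copied == 0:  # reached end of file
                    break

A real implementation would presumably still fall back to a plain read/write loop when the call raises OSError (older kernel, unsupported or cross-device filesystem), much like the existing sendfile() fast path does.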
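For the fmean() speed-up mentioned in issue37158 above, the idea amounts to counting the items as a side effect of the iteration that fsum() performs; a rough sketch of that idea (not the exact patch):

    from math import fsum

    def fmean_sketch(data):
        # Rough sketch: when len() is unavailable, let enumerate() count the
        # items inside a generator that fsum() drains, instead of calling a
        # separate counting function.
        try:
            n = len(data)
        except TypeError:
            n = 0
            def count(iterable):
                nonlocal n
                for n, x in enumerate(iterable, start=1):
                    yield x
            total = fsum(count(data))
        else:
            total = fsum(data)
        if n == 0:
            raise ValueError('fmean requires at least one data point')
        return total / n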
---------- components: Library (Lib) files: patch.diff keywords: patch messages: 344671 nosy: giampaolo.rodola priority: normal severity: normal status: open title: Have shutil.copyfile() use copy_file_range() type: performance versions: Python 3.9 Added file: https://bugs.python.org/file48392/patch.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 03:14:33 2019 From: report at bugs.python.org (David Carlier) Date: Wed, 05 Jun 2019 07:14:33 +0000 Subject: [New-bugs-announce] [issue37160] thread native id netbsd support Message-ID: <1559718873.81.0.214455069797.issue37160@roundup.psfhosted.org> Change by David Carlier : ---------- components: Interpreter Core nosy: David Carlier priority: normal pull_requests: 13714 severity: normal status: open title: thread native id netbsd support versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 06:40:49 2019 From: report at bugs.python.org (Steven D'Aprano) Date: Wed, 05 Jun 2019 10:40:49 +0000 Subject: [New-bugs-announce] [issue37161] Pre-populate user editable text in input() Message-ID: <1559731249.86.0.916028878199.issue37161@roundup.psfhosted.org> New submission from Steven D'Aprano : When asking for user input, it is often very helpful to be able to pre-populate the user's input string and allow them to edit it, rather than expecting them to re-type the input from scratch. I propose that the input() built-in take a second optional argument, defaulting to the empty string. The first argument remains the prompt, the second argument is the initial value of the editable text. Example use-case: pathname = "~/Documents/untitled.txt" pathname = input("Save the file as...? ", pathname) On POSIX systems, this can be done with readline: import readline def myinput(prompt, initial=''): readline.set_startup_hook(lambda: readline.insert_text(initial)) try: response = input(prompt) finally: readline.set_startup_hook(None) return response but it requires readline (doesn't exist on Windows), and it clobbers any pre-existing startup hook the caller may have already installed. ---------- messages: 344701 nosy: steven.daprano priority: normal severity: normal status: open title: Pre-populate user editable text in input() type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 06:55:09 2019 From: report at bugs.python.org (Matthias Klose) Date: Wed, 05 Jun 2019 10:55:09 +0000 Subject: [New-bugs-announce] [issue37162] new importlib dependencies csv, email and zipfile Message-ID: <1559732109.6.0.0655643851833.issue37162@roundup.psfhosted.org> New submission from Matthias Klose : compared to 3.8 alpha4, beta1's importlib has three new dependencies csv, email and zipfile. Is this intended, or should the toplevel imports in importlib.metadata be moved into the functions where these are once used? 
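To illustrate the "move the imports into the functions" option raised in issue37162 above, here is a purely illustrative deferred-import pattern (the function name is made up and this is not the actual importlib.metadata code):

    def _read_csv_rows(path):
        # Deferred import: the csv module is only loaded by callers that
        # actually need it, so merely importing this module stays cheap.
        import csv
        with open(path, newline='') as f:
            return list(csv.reader(f))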
---------- components: Library (Lib) messages: 344703 nosy: brett.cannon, doko, eric.snow, ncoghlan priority: normal severity: normal status: open title: new importlib dependencies csv, email and zipfile versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 07:13:04 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 05 Jun 2019 11:13:04 +0000 Subject: [New-bugs-announce] [issue37163] dataclasses.replace() fails with the field named "obj" Message-ID: <1559733184.92.0.103225044701.issue37163@roundup.psfhosted.org> New submission from Serhiy Storchaka : >>> from dataclasses import * >>> @dataclass ... class D: ... obj: object ... >>> replace(D(123), obj='abc') Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: replace() got multiple values for argument 'obj' ---------- components: Library (Lib) messages: 344704 nosy: eric.smith, serhiy.storchaka priority: normal severity: normal status: open title: dataclasses.replace() fails with the field named "obj" type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 10:51:24 2019 From: report at bugs.python.org (Dane Howard) Date: Wed, 05 Jun 2019 14:51:24 +0000 Subject: [New-bugs-announce] [issue37164] dict creation with converted zip objects produces inconsistent behavior Message-ID: <1559746284.31.0.288092384603.issue37164@roundup.psfhosted.org> New submission from Dane Howard : Confirmed on the following versions: 3.6.3 (Windows 10) 3.7.0 (Debian 9 & Windows 10) 3.7.1 (Debian 9) A dictionary created with the dict() function will not always return the appropriate dictionary object. The following code produces the bug on all stated versions except 3.6.3/Win10, which exhibits slightly different behavior. a = ['1','2','3'] b = [1,2,3] c = zip(a,b) print(dict(list(c))) #gives empty dict print(dict(list(zip(a,b)))) #gives {'1':1,'2':2,'3':3} d = zip(b,a) print(dict(list(d))) #gives {1:'1',2:'2',3:'3'} ---------- components: Windows files: pybug.png messages: 344729 nosy: Dane Howard, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: dict creation with converted zip objects produces inconsistent behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48394/pybug.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 14:45:07 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Wed, 05 Jun 2019 18:45:07 +0000 Subject: [New-bugs-announce] [issue37165] Convert _collections._count_elements() to the Argument Clinic Message-ID: <1559760307.2.0.195086878247.issue37165@roundup.psfhosted.org> New submission from Raymond Hettinger : This lets _count_elements use METH_FASTCALL.
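The failure reported in issue37163 above comes from replace() binding the instance to a parameter literally named "obj", so a field of the same name collides with it when re-passed as a keyword. A stripped-down stand-in (not the stdlib code) that reuses the D class from the report shows the mechanism:

    from dataclasses import dataclass

    @dataclass
    class D:
        obj: object

    def replace_sketch(obj, **changes):
        # Simplified stand-in for dataclasses.replace(): the instance is bound
        # to the parameter name 'obj', so passing obj='abc' as a field change
        # makes Python raise "got multiple values for argument 'obj'" before
        # the function body even runs.
        return type(obj)(**{**vars(obj), **changes})

    # replace_sketch(D(123), obj='abc')  # -> TypeError, as in the report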
---------- components: Library (Lib) messages: 344759 nosy: rhettinger priority: normal severity: normal status: open type: performance versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 15:46:26 2019 From: report at bugs.python.org (Tim Hatch) Date: Wed, 05 Jun 2019 19:46:26 +0000 Subject: [New-bugs-announce] [issue37166] inspect.findsource doesn't handle shortened files gracefully Message-ID: <1559763986.42.0.903204336475.issue37166@roundup.psfhosted.org> New submission from Tim Hatch : inspect.findsource() can trigger IndexError when co_firstlineno is larger than len(linecache.getlines()). If you have a situation where the file that linecache finds doesn't match the imported module, then you're not guaranteed that co_firstlineno on the code objects makes any sense. We hit this in a special kind of par file, but it could be triggered by shortening the file, like doing a git checkout of an older version while it's running. a.py (3 lines): # line 1 # line 2 def foo(): pass # co_firstlineno=3 a.py.v2 (1 line): def foo(): pass repro: import a # modification happens, cp a.py.v2 a.py inspect.getsource(a.foo) Although linecache.getline() does bounds checking, `inspect.findsource()` (which is used by various other functions, including `inspect.stack()`) grabs all the lines and loops through them. The bug is in this section of `inspect.py`: if iscode(object): if not hasattr(object, 'co_firstlineno'): raise OSError('could not find function definition') lnum = object.co_firstlineno - 1 pat = re.compile(r'^(\s*def\s)|(\s*async\s+def\s)|(.*(?<!\w)lambda(:|\s))|^(\s*@)') while lnum > 0: if pat.match(lines[lnum]): break lnum = lnum - 1 return lines, lnum raise OSError('could not find code object') Found through future.utils.raise_from which actually doesn't need to be using `inspect.stack()`, it can switch to `sys._getframe(2)` or so. We should have a PR ready shortly, but wondering if this can be backported to at least 3.6? ---------- components: Library (Lib) messages: 344761 nosy: lisroach, thatch, vstinner priority: normal severity: normal status: open title: inspect.findsource doesn't handle shortened files gracefully versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 16:30:03 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Wed, 05 Jun 2019 20:30:03 +0000 Subject: [New-bugs-announce] [issue37167] Cannot build Windows python_d.exe in master branch Message-ID: <1559766603.61.0.264760022754.issue37167@roundup.psfhosted.org> New submission from Terry J. Reedy : In master, pcbuild/build -e -d (the 2nd run) gave the errors in build_errors.txt. Many or most were like f:\dev\3x\externals\xz-5.2.2\src\liblzma\common\block_encoder.h(1): error C2018 : unknown character '0x2' [F:\dev\3x\PCbuild\liblzma.vcxproj] >chkdsk found no problems. A fix is high priority for me because my most recent 3.8.0a4 build in master, on 6/2, no longer imports tkinter. Hence IDLE fails also. >>> import tkinter Traceback (most recent call last): File "<stdin>", line 1, in <module> File "F:\dev\3x\lib\tkinter\__init__.py", line 36, in <module> import _tkinter # If this fails your Python may not be configured for Tk ImportError: Module use of python39_d.dll conflicts with this version of Python. Building python_d.exe for 3.8 and 3.7 had no problem.
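One possible shape for the missing bounds check in issue37166 above (illustrative only; the actual pull request may differ):

    import linecache

    def find_firstline(code_obj, filename):
        # Illustrative guard: validate co_firstlineno against what linecache
        # actually returned, so a shortened source file raises OSError instead
        # of IndexError deep inside inspect.findsource().
        lines = linecache.getlines(filename)
        lnum = code_obj.co_firstlineno - 1
        if lnum >= len(lines):
            raise OSError('could not find function definition')
        return lines, lnum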
---------- components: Build, Windows files: build_errors.txt messages: 344768 nosy: paul.moore, steve.dower, terry.reedy, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Cannot build Windows python_d.exe in master branch type: compile error versions: Python 3.9 Added file: https://bugs.python.org/file48395/build_errors.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 18:05:21 2019 From: report at bugs.python.org (Phil Frost) Date: Wed, 05 Jun 2019 22:05:21 +0000 Subject: [New-bugs-announce] [issue37168] Decimal divisions sometimes 10x or 100x too large Message-ID: <1559772321.45.0.42770583159.issue37168@roundup.psfhosted.org> New submission from Phil Frost : We've observed instances of Decimal divisions being incorrect a few times per day in a web application serving millions of requests daily. I've been unable to reproduce the issue but have been investigating core dumps which suggest either some issue in the C Python implementation or its execution environment. Python 2.7.15 on Alpine Linux Docker containers. A slightly modified decimal.py is attached. It's been modified to check that the answer is approximately correct and dumps core if not. It also has a few local variables added to provide additional insight. I've annotated the file with values from the core dump and my reasoning about what should happen. The crux of the problem is the loop at line 1389. There are 3 ways to determine how many times this loop executed: 1. the number of zeros removed from coeff 2. the number of times exp is incremented 3. the number of times division_counter is incremented Oddly, #1 and #3 indicate the loop happened 27 times, while #2 indicates the loop happened 28 times. One possible explanation (which makes about as much sense as any other) is that `exp += 1` sometimes adds 2 instead of 1. This means the loop happens 1 time fewer than it should, leaving `coeff` 10 times bigger than it should properly be. Or (very rarely) this happens twice and the result is 100 times bigger. I find it very odd that something as basic as `+= 1` should not work, but at the moment it is the best explanation I have. Unfortunately I can't share the core dump, as I've only been able to observe the issue in production so the core contains private information. But I'd welcome any ideas for further exploration. Or perhaps someone is aware of an existing bug that would explain this behavior. 
---------- components: Interpreter Core files: decimal.py messages: 344772 nosy: Phil Frost priority: normal severity: normal status: open title: Decimal divisions sometimes 10x or 100x too large versions: Python 2.7 Added file: https://bugs.python.org/file48396/decimal.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 5 19:51:51 2019 From: report at bugs.python.org (Matej Cepl) Date: Wed, 05 Jun 2019 23:51:51 +0000 Subject: [New-bugs-announce] [issue37169] test_pyobject_is_freed_free fails with 3.8.0beta1 Message-ID: <1559778711.54.0.15966452891.issue37169@roundup.psfhosted.org> New submission from Matej Cepl : When building openSUSE package for Python-3.8.0b1 (on x86_64 build system with the latest openSUSE/Tumbleweed) in the package which previously worked all the way up to 3.8.0.a4, I get this test failing: [ 5771s] ====================================================================== [ 5771s] FAIL: test_pyobject_is_freed_free (test.test_capi.PyMemMallocDebugTests) [ 5771s] ---------------------------------------------------------------------- [ 5771s] Traceback (most recent call last): [ 5771s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.0b1/Lib/test/test_capi.py", line 729, in test_pyobject_is_freed_free [ 5771s] self.check_pyobject_is_freed('pyobject_freed') [ 5771s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.0b1/Lib/test/test_capi.py", line 720, in check_pyobject_is_freed [ 5771s] assert_python_ok('-c', code, PYTHONMALLOC=self.PYTHONMALLOC) [ 5771s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.0b1/Lib/test/support/script_helper.py", line 157, in assert_python_ok [ 5771s] return _assert_python(True, *args, **env_vars) [ 5771s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.0b1/Lib/test/support/script_helper.py", line 143, in _assert_python [ 5771s] res.fail(cmd_line) [ 5771s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.0b1/Lib/test/support/script_helper.py", line 70, in fail [ 5771s] raise AssertionError("Process return code is %d\n" [ 5771s] AssertionError: Process return code is 1 [ 5771s] command line: ['/home/abuild/rpmbuild/BUILD/Python-3.8.0b1/python', '-X', 'faulthandler', '-c', '\nimport gc, os, sys, _testcapi\n# Disable the GC to avoid crash on GC collection\ngc.disable()\nobj = _testcapi.pyobject_freed()\nerror = (_testcapi.pyobject_is_freed(obj) == False)\n# Exit immediately to avoid a crash while deallocating\n# the invalid object\nos._exit(int(error))\n'] [ 5771s] [ 5771s] stdout: [ 5771s] --- [ 5771s] [ 5771s] --- [ 5771s] [ 5771s] stderr: [ 5771s] --- [ 5771s] [ 5771s] --- [ 5771s] [ 5771s] ---------------------------------------------------------------------- ---------- components: Interpreter Core messages: 344782 nosy: mcepl priority: normal severity: normal status: open title: test_pyobject_is_freed_free fails with 3.8.0beta1 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 02:52:41 2019 From: report at bugs.python.org (Antti Haapala) Date: Thu, 06 Jun 2019 06:52:41 +0000 Subject: [New-bugs-announce] [issue37170] Wrong return value from PyLong_AsUnsignedLongLongMask on PyErr_BadInternalCall Message-ID: <1559803961.74.0.856942246401.issue37170@roundup.psfhosted.org> New submission from Antti Haapala : Hi, while checking the longobject implementation for a Stack Overflow answer, I noticed that the functions `_PyLong_AsUnsignedLongLongMask` and 
`PyLong_AsUnsignedLongLongMask` erroneously return `(unsigned long)-1` on error when a bad internal call is thrown. First case: https://github.com/python/cpython/blob/cb65202520e7959196a2df8215692de155bf0cc8/Objects/longobject.c#L1379 static unsigned long long _PyLong_AsUnsignedLongLongMask(PyObject *vv) { PyLongObject *v; unsigned long long x; Py_ssize_t i; int sign; if (vv == NULL || !PyLong_Check(vv)) { PyErr_BadInternalCall(); return (unsigned long) -1; <<<< } Second case: https://github.com/python/cpython/blob/cb65202520e7959196a2df8215692de155bf0cc8/Objects/longobject.c#L1407 They seem to have been incorrect for quite some time; the latter blames back to the SVN era. The bug seems to exist in 2.7 as well: https://github.com/python/cpython/blob/20093b3adf6b06930fe994527670dfb3aee40cc7/Objects/longobject.c#L1025 The correct return value should of course be `(unsigned long long)-1` ---------- components: Interpreter Core messages: 344789 nosy: ztane priority: normal severity: normal status: open title: Wrong return value from PyLong_AsUnsignedLongLongMask on PyErr_BadInternalCall type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 05:52:06 2019 From: report at bugs.python.org (Dima Tisnek) Date: Thu, 06 Jun 2019 09:52:06 +0000 Subject: [New-bugs-announce] [issue37171] Documentation mismatch contextvars module vs PEP-567 Message-ID: <1559814726.03.0.917763473922.issue37171@roundup.psfhosted.org> New submission from Dima Tisnek : PEP-567 states that the user "must call Context.run()" while the `contextvars` docs don't mention `.run()`. `contextvars.Context().run(arg)` exists, but there's no documentation, for example of what the required argument is. ---------- components: asyncio messages: 344795 nosy: Dima.Tisnek, asvetlov, yselivanov priority: normal severity: normal status: open title: Documentation mismatch contextvars module vs PEP-567 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 06:29:59 2019 From: report at bugs.python.org (Dima Tisnek) Date: Thu, 06 Jun 2019 10:29:59 +0000 Subject: [New-bugs-announce] [issue37172] Odd error awaiting a Future Message-ID: <1559816999.36.0.898434753785.issue37172@roundup.psfhosted.org> New submission from Dima Tisnek : Let's start with correct code: import asyncio async def writer(): await asyncio.sleep(1) g1.set_result(41) async def reader(): await g1 async def test(): global g1 g1 = asyncio.Future() await asyncio.gather(reader(), writer()) asyncio.run(test()) No error, as expected. Now let's mess it up a bit: import asyncio g1 = asyncio.Future() async def writer(): await asyncio.sleep(1) g1.set_result(41) async def reader(): await g1 async def test(): await asyncio.gather(reader(), writer()) asyncio.run(test()) Fails with RuntimeError ... attached to a different loop The error makes sense, although it's sad that I can't create global futures / there was no event loop when the Future was created, it was not a *different* event loop / maybe I wish .run() didn't force a new event loop? A nit (IMO), but I can live with it.
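A minimal variant of the second snippet above (issue37172) that avoids the "attached to a different loop" error by creating the Future only once a loop is running; this is an illustration, not part of the original report:

    import asyncio

    async def test_fixed():
        # The Future is created inside the loop started by asyncio.run(),
        # so awaiting it from tasks running in that same loop works.
        g1 = asyncio.get_running_loop().create_future()

        async def writer():
            await asyncio.sleep(1)
            g1.set_result(41)

        async def reader():
            await g1

        await asyncio.gather(reader(), writer())

    asyncio.run(test_fixed())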
Let's mess the code up a bit more: import asyncio g1 = asyncio.Future() async def writer(): await asyncio.sleep(1) g1.set_result(41) async def reader(): await g1 async def test(): await asyncio.gather(reader(), reader(), writer()) asyncio.run(test()) RuntimeError: await wasn't used with future What? That's really confusing! The only difference is that there are now 2 readers running in parallel. The actual exception comes from asyncio.Future.__await__ after a yield. I'm not sure how to fix this... ---------- components: asyncio messages: 344798 nosy: Dima.Tisnek, asvetlov, yselivanov priority: normal severity: normal status: open title: Odd error awaiting a Future versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 07:08:14 2019 From: report at bugs.python.org (flying sheep) Date: Thu, 06 Jun 2019 11:08:14 +0000 Subject: [New-bugs-announce] [issue37173] inspect.getfile error names module instead of passed class Message-ID: <1559819294.25.0.844595961888.issue37173@roundup.psfhosted.org> New submission from flying sheep : Currently, inspect.getfile(str) will report nonsense: >>> inspect.getfile(str) TypeError: <module 'builtins' (built-in)> is a built-in class ---------- components: Library (Lib) messages: 344799 nosy: flying sheep priority: normal severity: normal status: open title: inspect.getfile error names module instead of passed class versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 08:06:26 2019 From: report at bugs.python.org (Maximilian Ernestus) Date: Thu, 06 Jun 2019 12:06:26 +0000 Subject: [New-bugs-announce] [issue37174] sched.py: run() is caught in delayfunc even if all events are cancelled. Message-ID: <1559822786.43.0.93146538102.issue37174@roundup.psfhosted.org> New submission from Maximilian Ernestus : When I remove all events from a scheduler while its run() is being executed (with blocking=True in another thread), run() continues to block for some time because it is caught in its delayfunc, which is time.sleep by default. This issue can easily be solved by using the wait() function of a threading.Event as the delayfunc and setting the event whenever the queue becomes empty. The referenced pull request adds this functionality by default. I also added a cancel_all() method which should be far more efficient than iterating all events and deleting them individually. ---------- components: Library (Lib) messages: 344804 nosy: ernestum priority: normal pull_requests: 13738 severity: normal status: open title: sched.py: run() is caught in delayfunc even if all events are cancelled. type: enhancement versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 09:03:12 2019 From: report at bugs.python.org (daniel hahler) Date: Thu, 06 Jun 2019 13:03:12 +0000 Subject: [New-bugs-announce] [issue37175] make install: make compileall optional Message-ID: <1559826192.88.0.243223248459.issue37175@roundup.psfhosted.org> New submission from daniel hahler : I'd like to make running compileall optional during installation, since it takes quite a while by itself (with lots of output), and gets a bit in the way when installing often (e.g. with git-bisect). AFAIK it should not be required, because the files would be compiled on demand as usual.
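A minimal sketch of the threading.Event-based delayfunc idea described in issue37174 above (illustrative; the referenced pull request may differ in detail):

    import sched
    import threading
    import time

    class WakeableScheduler(sched.scheduler):
        # Use Event.wait() as the delay function so that cancelling events
        # from another thread can interrupt a blocking run().
        def __init__(self):
            self._wakeup = threading.Event()
            super().__init__(timefunc=time.monotonic, delayfunc=self._sleep)

        def _sleep(self, timeout):
            self._wakeup.wait(timeout)
            self._wakeup.clear()

        def cancel(self, event):
            super().cancel(event)
            self._wakeup.set()   # wake run() so it re-checks the queue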
I could imagine having an explicit target for this, or a make variable to control this, i.e. "make install COMPILEALL=0". ---------- components: Installation messages: 344812 nosy: blueyed priority: normal severity: normal status: open title: make install: make compileall optional type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 12:06:58 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Thu, 06 Jun 2019 16:06:58 +0000 Subject: [New-bugs-announce] [issue37176] super() docs don't say what super() does Message-ID: <1559837218.79.0.128349910274.issue37176@roundup.psfhosted.org> New submission from Jeroen Demeyer : The documentation for super() at https://docs.python.org/3.8/library/functions.html#super does not actually say what super() does. It only says "Return a proxy object that delegates method calls to a parent or sibling class of type" and then gives a bunch of use cases and examples. If there is one place where we should define exactly what super() does (as opposed to give guidance on how to use it), the stdlib reference should be it. ---------- assignee: docs at python components: Documentation messages: 344827 nosy: docs at python, jdemeyer priority: normal severity: normal status: open title: super() docs don't say what super() does type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 12:21:59 2019 From: report at bugs.python.org (Tal Einat) Date: Thu, 06 Jun 2019 16:21:59 +0000 Subject: [New-bugs-announce] [issue37177] IDLE: Search dialogs can be hidden behind the main window Message-ID: <1559838119.79.0.14038633601.issue37177@roundup.psfhosted.org> New submission from Tal Einat : This issue was brought up by Irv Kalb on the idle-dev mailing list. I haven't been able to reproduce it exactly as he described, but I have managed to do so otherwise, and remember it happening on occasion on other computers. This appears to happen because the dialog windows are not being made transient relative to the main window. I'll have a PR up with a fix in a bit. Following is what Irv Kalb sent to the mailing list: I teach Python classes using IDLE. The search dialog is a real problem for new students and me as the teacher. My students and I often get into a scenario where the search dialog gets hidden behind an editing window. When that happens, you try to make edits in the code and nothing happens. This happens very often. I have learned to look for the hidden search dialog window, but my students get very flustered. Simple steps to reproduce (I'm using Python 3.6.1 with IDLE): Open a new document. Enter any code (e.g., a = 1) Bring up the Search Dialog. If the Search Dialog is not over the rectangle of the editing window, move it anywhere over the editing window. (This step is specifically to reproduce the problem, but this happens very often as students move windows around.) Search for: a Click: Find Next Click in the editing window (with the intention to make some change) Results: Search Dialog is now hidden behind the editing window. Keystrokes are now ignored in the editing window, even though the editing window appears to have focus.
User has no idea about how to get out of this situation - unless they have seen it before and know that the Search Dialog is still active behind the current window. ---------- assignee: terry.reedy components: IDLE messages: 344831 nosy: taleinat, terry.reedy priority: normal severity: normal status: open title: IDLE: Search dialogs can be hidden behind the main window _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 12:30:28 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 06 Jun 2019 16:30:28 +0000 Subject: [New-bugs-announce] [issue37178] One argument form of math.perm() Message-ID: <1559838628.34.0.228461842236.issue37178@roundup.psfhosted.org> New submission from Raymond Hettinger : The perm() function should have a one-argument form and change its signature to ``perm(n, k=None)``. This matches what itertools does: itertools.permutations(iterable, r=None) Return successive r length permutations of elements in the iterable. If r is not specified or is None, then r defaults to the length of the iterable and all possible full-length permutations are generated. ---------- components: Library (Lib) messages: 344833 nosy: mark.dickinson, rhettinger, serhiy.storchaka, tim.peters priority: normal severity: normal status: open title: One argument form of math.perm() versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 13:55:13 2019 From: report at bugs.python.org (Cooper Lees) Date: Thu, 06 Jun 2019 17:55:13 +0000 Subject: [New-bugs-announce] [issue37179] asyncio loop.start_tls() provide support for TLS in TLS Message-ID: <1559843713.76.0.0171515478407.issue37179@roundup.psfhosted.org> New submission from Cooper Lees : aiohttp would love to be able to support HTTPS Proxy servers. To do this, asyncio itself needs to be able to provide TLS within TLS connections. Can we add this support to asyncio, please? (I tried searching but could not find a related issue - please merge if there is one.) ---------- components: asyncio messages: 344846 nosy: asvetlov, cooperlees, yselivanov priority: normal severity: normal status: open title: asyncio loop.start_tls() provide support for TLS in TLS versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 14:35:53 2019 From: report at bugs.python.org (Ramin Najjarbashi) Date: Thu, 06 Jun 2019 18:35:53 +0000 Subject: [New-bugs-announce] [issue37180] Fix Persian KAF in mac_farsi.py Message-ID: <1559846153.37.0.617648704502.issue37180@roundup.psfhosted.org> New submission from Ramin Najjarbashi : Regarding the file named "Lib/encodings/mac_farsi.py": the numbers are inserted in the wrong way - Arabic numerals were inserted instead of Persian ones - and they are corrected in the comments. The KAF character (ك), which was the Arabic version, has been changed and edited to the Persian version (ک).
It's not a big deal, but it's my first step to improve my favorite language ;) ---------- messages: 344850 nosy: RaminNietzsche priority: normal severity: normal status: open title: Fix Persian KAF in mac_farsi.py type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 15:01:24 2019 From: report at bugs.python.org (Paul Monson) Date: Thu, 06 Jun 2019 19:01:24 +0000 Subject: [New-bugs-announce] [issue37181] fix test_regrtest failures on Windows arm64 Message-ID: <1559847684.99.0.249295662285.issue37181@roundup.psfhosted.org> Change by Paul Monson : ---------- components: Tests, Windows nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: fix test_regrtest failures on Windows arm64 type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 15:41:45 2019 From: report at bugs.python.org (Ilya Kamenshchikov) Date: Thu, 06 Jun 2019 19:41:45 +0000 Subject: [New-bugs-announce] [issue37182] ast - handling new line inside a string Message-ID: <1559850105.87.0.562031777923.issue37182@roundup.psfhosted.org> New submission from Ilya Kamenshchikov : parsing two different strings produces identical ast.Str nodes: import ast txt1 = '"""\\n"""' txt2 = '"""\n"""' tree1 = ast.parse(txt1) tree2 = ast.parse(txt2) print(tree1.body[0].value.s == tree2.body[0].value.s) print(bytes(tree1.body[0].value.s, encoding='utf-8')) print(bytes(tree2.body[0].value.s, encoding='utf-8')) >>> True >>> b'\n' >>> b'\n' Expected result: I should be able to distinguish between the nodes created from two different strings. ---------- components: Library (Lib) messages: 344861 nosy: Ilya Kamenshchikov priority: normal severity: normal status: open title: ast - handling new line inside a string type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 15:55:36 2019 From: report at bugs.python.org (Christian Heimes) Date: Thu, 06 Jun 2019 19:55:36 +0000 Subject: [New-bugs-announce] [issue37183] Linker failure when creating main binary with python-config --libs Message-ID: <1559850936.49.0.687139467253.issue37183@roundup.psfhosted.org> New submission from Christian Heimes : With 3.8 it is no longer possible to compile a custom Python interpreter with linker flags from python-config. 
The config helper omits the main Python library: $ python3.8-config --ldflags -L/usr/lib64 -lcrypt -lpthread -ldl -lutil -lm -lm $ gcc -L/usr/lib64 -lcrypt -lpthread -ldl -lutil -lm -lm -o custompython custompython.o /usr/bin/ld: custompython.o: in function `main': custompython.c:10: undefined reference to `Py_BytesMain' collect2: error: ld returned 1 exit status ---------- components: Interpreter Core messages: 344864 nosy: christian.heimes, vstinner priority: high severity: normal stage: needs patch status: open title: Linker failure when creating main binary with python-config --libs type: compile error versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 18:16:29 2019 From: report at bugs.python.org (Noah) Date: Thu, 06 Jun 2019 22:16:29 +0000 Subject: [New-bugs-announce] [issue37184] suggesting option to raise exception if process exits nonzero in `with subprocess.Popen(...):` Message-ID: <1559859389.6.0.41932196936.issue37184@roundup.psfhosted.org> New submission from Noah : Suggesting option to raise exception if process exits nonzero in `with subprocess.Popen(...):` with subprocess.Popen('/bin/false'): pass I made the mistake of assuming this construct would raise an exception (CalledProcessError). It would be nice if there were a way to do that. ---------- components: Library (Lib) messages: 344880 nosy: nlevitt priority: normal severity: normal status: open title: suggesting option to raise exception if process exits nonzero in `with subprocess.Popen(...):` type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 18:20:10 2019 From: report at bugs.python.org (Pierre Glaser) Date: Thu, 06 Jun 2019 22:20:10 +0000 Subject: [New-bugs-announce] [issue37185] use os.memfd_create in multiprocessing.shared_memory? Message-ID: <1559859610.5.0.492744373458.issue37185@roundup.psfhosted.org> New submission from Pierre Glaser : Hi, Following https://bugs.python.org/issue26836, I started thinking about using memfd_create instead of shm_open for creating shared-memory segments in multiprocessing.shared_memory. The main advantage of memfd_create over shm_open is that managing the generated resources is easier: a segment created using memfd_create is released once all references to the segment are dropped. This is not the case for segments created using shm_open, for which additional resource tracking is needed (using the new multiprocessing.resource_tracker). The main difference between those two calls is that segments created using memfd_create are anonymous and can only be accessed using file descriptors. The name argument in the signature serves only for debugging purposes. On the contrary, shm_open generates segments that map to a file in /dev/shm: therefore, segments each have unique names. Should we decide to switch from shm_open to memfd_create, the name behavior will also change. How big of a deal would that be? ---------- messages: 344881 nosy: davin, pierreglaser, pitrou priority: normal severity: normal status: open title: use os.memfd_create in multiprocessing.shared_memory?
type: resource usage versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 20:03:04 2019 From: report at bugs.python.org (Roffild) Date: Fri, 07 Jun 2019 00:03:04 +0000 Subject: [New-bugs-announce] [issue37186] Everyone uses GIL wrong! = DEADLOCK Message-ID: <1559865784.82.0.0210813088917.issue37186@roundup.psfhosted.org> New submission from Roffild : Everyone uses GIL wrong! = DEADLOCK I used sub-interpreters in embedded Python: https://github.com/Roffild/RoffildLibrary/blob/35ef39fafc164d260396b39b28ff897d44cf0adb/Libraries/Roffild/PythonDLL/private.h#L44 https://github.com/Roffild/RoffildLibrary/blob/35ef39fafc164d260396b39b28ff897d44cf0adb/Libraries/Roffild/PythonDLL/mql_class.c#L142 PyEval_AcquireThread(__interp->interp); ... PyGILState_Ensure() = DEADLOCK ... PyEval_ReleaseThread(__interp->interp); A deadlock happens in the line: https://github.com/python/cpython/blob/7114c6504a60365b8b0cd718da0ec8a737599fb9/Python/pystate.c#L1313 Of course in the help there is the note: Note that the PyGILState_() functions assume there is only one global interpreter (created automatically by Py_Initialize()). Python supports the creation of additional interpreters (using Py_NewInterpreter()), but mixing multiple interpreters and the PyGILState_() API is unsupported. But functions PyGILState_() are used in third-party libraries. Most often, these functions are used without checking that GIL is already locked. Often, these functions are added to the code for reinsurance only and this can affect performance. Numpy: https://github.com/numpy/numpy/blob/2d4975e75c210202293b894bf98faf12f4697a31/numpy/core/include/numpy/ndarraytypes.h#L987 https://github.com/numpy/numpy/search?q=NPY_ALLOW_C_API&unscoped_q=NPY_ALLOW_C_API Pytorch: https://github.com/pytorch/pytorch/blob/0a3fb45d3d2cfacbd0469bbdba0e6cb1a2cd1bbe/torch/csrc/utils/auto_gil.h#L9 https://github.com/pytorch/pytorch/search?q=AutoGIL&unscoped_q=AutoGIL Pybind11 developers have already fixed this problem: https://github.com/pybind/pybind11/blob/97784dad3e518ccb415d5db57ff9b933495d9024/include/pybind11/pybind11.h#L1846 It is necessary to change the code of PyGILState_() functions to support sub-interpreters. Or add to https://docs.python.org/3/c-api/init.html#thread-state-and-the-global-interpreter-lock warning: Some Python libraries cannot be used in a sub-interpreter due to the likelihood of deadlock. For me, this is a critical vulnerability! There is another problem: Calling PyEval_AcquireThread() again results in a deadlock. This can be controlled in your code, but not in a third-party library. ---------- components: Interpreter Core messages: 344891 nosy: Roffild priority: normal severity: normal status: open title: Everyone uses GIL wrong! 
= DEADLOCK type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 6 23:46:36 2019 From: report at bugs.python.org (Eric Wieser) Date: Fri, 07 Jun 2019 03:46:36 +0000 Subject: [New-bugs-announce] [issue37187] CField.size from the ctypes module does not behave as documented on bitfields Message-ID: <1559879196.85.0.0861364976154.issue37187@roundup.psfhosted.org> New submission from Eric Wieser : This behavior is pretty surprising: ```python import ctypes class Simple(ctypes.Structure): _fields_ = [ ('a', ctypes.c_uint8), ('b', ctypes.c_uint8), ] class Bitfields(ctypes.Structure): _fields_ = [ ('a', ctypes.c_uint8, 8), ('b', ctypes.c_uint8, 8), ] print(Simple.b.size) # 1 print(Bitfields.b.size) # 262148 ``` The docstring for this field, from `help(type(Bitfields.b).size)`, is: > Help on getset descriptor _ctypes.CField.size: > > size > size in bytes of this field So either the behavior or the docstring needs to change. ---------- assignee: docs at python components: Documentation messages: 344895 nosy: Eric Wieser, docs at python priority: normal severity: normal status: open title: CField.size from the ctypes module does not behave as documented on bitfields _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 02:41:34 2019 From: report at bugs.python.org (Eric Wieser) Date: Fri, 07 Jun 2019 06:41:34 +0000 Subject: [New-bugs-announce] [issue37188] Creating a ctypes array of an element with size zero causes "Fatal Python error: Floating point exception" Message-ID: <1559889694.06.0.92762632537.issue37188@roundup.psfhosted.org> New submission from Eric Wieser : Introduced in the fix to bpo-36504, GH-12660. ```python >>> (ctypes.c_uint8 * 0 * 2)() Fatal Python error: Floating point exception >>> class Empty(ctypes.Structure): _fields_ = [] >>> (Empty * 2)() Fatal Python error: Floating point exception ``` This used to work just fine. ---------- messages: 344901 nosy: Eric Wieser, ZackerySpytz priority: normal severity: normal status: open title: Creating a ctypes array of an element with size zero causes "Fatal Python error: Floating point exception" type: crash versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 03:55:23 2019 From: report at bugs.python.org (Christoph Gohlke) Date: Fri, 07 Jun 2019 07:55:23 +0000 Subject: [New-bugs-announce] [issue37189] PyRun_String not exported in python38.dll Message-ID: <1559894123.98.0.3282613922.issue37189@roundup.psfhosted.org> New submission from Christoph Gohlke : While testing third party packages on Python 3.8.0b1 for Windows, I noticed that the `PyRun_String` function is no longer exported from `python38.dll`. Is this intentional? I can't see this mentioned at or This change breaks existing code. But then `PyRun_String` is easy to replace with `PyRun_StringFlags`.
---------- components: Windows messages: 344905 nosy: cgohlke, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: PyRun_String not exported in python38.dll versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 04:57:50 2019 From: report at bugs.python.org (=?utf-8?q?Radu_Matei_L=C4=83craru?=) Date: Fri, 07 Jun 2019 08:57:50 +0000 Subject: [New-bugs-announce] [issue37190] asyncio.iscoroutinefunction(.asend) returns False Message-ID: <1559897870.46.0.572165023126.issue37190@roundup.psfhosted.org> New submission from Radu Matei L?craru : asyncio.iscoroutinefunction(.__anext__) asyncio.iscoroutinefunction(.asend) asyncio.iscoroutinefunction(.athrow) asyncio.iscoroutinefunction(.aclose) All of these return False, is this the intended behavior? Aren't all of these in fact coroutine functions? ---------- components: asyncio messages: 344910 nosy: Radu Matei L?craru, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio.iscoroutinefunction(.asend) returns False type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 05:13:57 2019 From: report at bugs.python.org (Petr Viktorin) Date: Fri, 07 Jun 2019 09:13:57 +0000 Subject: [New-bugs-announce] [issue37191] Python.h contains intermingled declarations Message-ID: <1559898837.19.0.323720820169.issue37191@roundup.psfhosted.org> New submission from Petr Viktorin : When compiled with GCC's -Werror=declaration-after-statement ("intermingled declarations" in PEP7), cpython/abstract.h (included from ) errors on vectorcall helper functions added in 3.8.0 Beta 1. It's well within our rights to ignore this: since 3.6 we require intermingled declarations. But, when re-compiling Fedora we've seen several projects fail with this warning (so far: pygobject3, python-dbus, xen; more will likely come). Dear Release Manager, should we patch 3.8 to avoid this? The patch is simple, and it would give projects that we(re) dutifully tested with the Alphas one more release to adapt. I don't think it's worth changing for 3.9 (but if we do we should test it). ---------- messages: 344912 nosy: lukasz.langa, petr.viktorin priority: normal severity: normal status: open title: Python.h contains intermingled declarations versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 07:39:17 2019 From: report at bugs.python.org (Alen Kolman) Date: Fri, 07 Jun 2019 11:39:17 +0000 Subject: [New-bugs-announce] [issue37192] pip instal math3d - EROR Message-ID: <1559907557.33.0.540342882272.issue37192@roundup.psfhosted.org> New submission from Alen Kolman : I am geting constant eror while trying to isntal math3d or urx libary for Python using pip install. I am using Windows 10 and Python 3.7.3 I tried to: - unistall/instal python and also try older version 3.4.4.. nothing work - upgrade setuptools - upgrade pip (then I get even more errors) Here is my error: C:\Python34\Scripts>pip install math3d Collecting math3d Using cached https://files.pythonhosted.org/packages/9a/33/72ac95bb4ac11a2b13e033d90f84430dc23fc815124d9303dffca8789a75/math3d-3.3.4.tar.gz Installing collected packages: math3d Running setup.py install for math3d ... 
error Complete output from command c:\python34\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\ALEN~1.KOL\\AppData\\Local\\Temp\\pip-install-duiwypg9\\math3d\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\ALEN~1.KOL\AppData\Local\Temp\pip-record-cmd8m8g6\install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build\lib creating build\lib\math3d copying math3d\orientation.py -> build\lib\math3d copying math3d\orientation_computer.py -> build\lib\math3d copying math3d\quaternion.py -> build\lib\math3d copying math3d\transform.py -> build\lib\math3d copying math3d\utils.py -> build\lib\math3d copying math3d\vector.py -> build\lib\math3d copying math3d\__init__.py -> build\lib\math3d creating build\lib\math3d\interpolation copying math3d\interpolation\r3interpolation.py -> build\lib\math3d\interpolation copying math3d\interpolation\se3interpolation.py -> build\lib\math3d\interpolation copying math3d\interpolation\so3interpolation.py -> build\lib\math3d\interpolation copying math3d\interpolation\__init__.py -> build\lib\math3d\interpolation creating build\lib\math3d\reference_system copying math3d\reference_system\frame.py -> build\lib\math3d\reference_system copying math3d\reference_system\free_vector.py -> build\lib\math3d\reference_system copying math3d\reference_system\point.py -> build\lib\math3d\reference_system copying math3d\reference_system\reference_system.py -> build\lib\math3d\reference_system copying math3d\reference_system\__init__.py -> build\lib\math3d\reference_system creating build\lib\math3d\dynamics copying math3d\dynamics\twist.py -> build\lib\math3d\dynamics copying math3d\dynamics\wrench.py -> build\lib\math3d\dynamics copying math3d\dynamics\__init__.py -> build\lib\math3d\dynamics creating build\lib\math3d\geometry copying math3d\geometry\line.py -> build\lib\math3d\geometry copying math3d\geometry\plane.py -> build\lib\math3d\geometry copying math3d\geometry\__init__.py -> build\lib\math3d\geometry running install_lib running install_data Traceback (most recent call last): File "", line 1, in File "C:\Users\ALEN~1.KOL\AppData\Local\Temp\pip-install-duiwypg9\math3d\setup.py", line 23, in data_files=[('share/doc/pymath3d/', ['README.md', 'COPYING'])] File "c:\python34\lib\distutils\core.py", line 148, in setup dist.run_commands() File "c:\python34\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "c:\python34\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "c:\python34\lib\site-packages\setuptools\command\install.py", line 61, in run return orig.install.run(self) File "c:\python34\lib\distutils\command\install.py", line 557, in run self.run_command(cmd_name) File "c:\python34\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "c:\python34\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "c:\python34\lib\distutils\command\install_data.py", line 56, in run dir = convert_path(f[0]) File "c:\python34\lib\distutils\util.py", line 112, in convert_path raise ValueError("path '%s' cannot end with '/'" % pathname) ValueError: path 'share/doc/pymath3d/' cannot end with '/' ---------------------------------------- Command "c:\python34\python.exe -u -c "import setuptools, 
tokenize;__file__='C:\\Users\\ALEN~1.KOL\\AppData\\Local\\Temp\\pip-install-duiwypg9\\math3d\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\ALEN~1.KOL\AppData\Local\Temp\pip-record-cmd8m8g6\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\ALEN~1.KOL\AppData\Local\Temp\pip-install-duiwypg9\math3d\ ---------- components: Installation messages: 344924 nosy: Alen Kolman priority: normal severity: normal status: open title: pip instal math3d - EROR type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 07:53:39 2019 From: report at bugs.python.org (Norihiro Maruyama) Date: Fri, 07 Jun 2019 11:53:39 +0000 Subject: [New-bugs-announce] [issue37193] Memory leak while running TCP/UDPServer with socketserver.ThreadingMixIn Message-ID: <1559908419.41.0.390422965351.issue37193@roundup.psfhosted.org> New submission from Norihiro Maruyama : UDP/TCPServer with socketserver.ThreadingMixin class (also ThreadingTCPServer and ThreadingUDPServer class) seems to be memory leak while running the server. https://docs.python.org/3/library/socketserver.html#socketserver.ThreadingMixIn My code which wrote to check this is the following. ``` class ThreadedTCPRequestHandler(socketserver.BaseRequestHandler): def handle(self): data = str(self.request.recv(1024), 'ascii') cur_thread = threading.current_thread() response = bytes("{}: {}".format(cur_thread.name, data), 'ascii') self.request.sendall(response) if __name__ == "__main__": HOST, PORT = "localhost", 0 server = socketserver.ThreadingTCPServer((HOST, PORT), ThreadedTCPRequestHandler) server.daemon_threads = False server.block_on_close = True with server: ip, port = server.server_address server_thread = threading.Thread(target=server.serve_forever) server_thread.daemon = True server_thread.start() for i in range(1000): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: sock.connect(server.server_address) sock.sendall(bytes("hello", 'ascii')) response = str(sock.recv(1024), 'ascii') print("Received: {}".format(response)) time.sleep(0.01) server.server_close() server.shutdown() ``` ( I wrote this based on https://docs.python.org/3/library/socketserver.html#asynchronous-mixins) Then I checked memory usage with profiling tool. (I used memory-profiler module https://pypi.org/project/memory-profiler/) $ mprof run python mycode.py $ mprof plot I attached result plot. And also I checked this also more long time and I found memory usage was increased endlessly. My environment is Hardware: MacBook Pro (15-inch, 2018) OS: MacOS 10.14 Python 3.7.1 I guess it caused by a thread object is not released in spite of the thread finished to process request and thread object will be made infinitely until server_close() is called. 
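A quick way to double-check that suspicion without a memory profiler (the helper below is illustrative only and is not part of the report or its attachment): count the Thread objects that are still reachable after the client loop has finished, and again after server_close()/shutdown(), to see whether they are released.

```
# Illustrative check, not from the report: count reachable Thread objects.
import gc
import threading

def count_thread_objects():
    gc.collect()
    return sum(1 for obj in gc.get_objects()
               if isinstance(obj, threading.Thread))

# In the reproducer above, call this after the "for i in range(1000)" loop
# and again after server.server_close(); the first count is expected to grow
# with the number of handled requests if the handler threads are retained.
print("reachable Thread objects:", count_thread_objects())
```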
---------- components: Library (Lib) files: threadingmixin_memory_usage.png messages: 344926 nosy: maru-n priority: normal severity: normal status: open title: Memory leak while running TCP/UDPServer with socketserver.ThreadingMixIn type: resource usage versions: Python 3.7 Added file: https://bugs.python.org/file48399/threadingmixin_memory_usage.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 07:54:00 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 07 Jun 2019 11:54:00 +0000 Subject: [New-bugs-announce] [issue37194] Move new vector headers to the internal C API Message-ID: <1559908440.37.0.746705371355.issue37194@roundup.psfhosted.org> New submission from STINNER Victor : bpo-37191: the new vector APIs declare "static inline" functions which are no C89 compatible and so cause compilation issues on pygobject3, python-dbus, xen (for example). I propose to move the new *private* declarations to the internal C API. I started to work on an application. The blocker issue is _PyObject_CallNoArg() which is now commonly used in CPython code base for best performances. It is used the _testcapi which must *not* be compiled with the internal C API. So I suggest to first add a new public PyObject_CallNoArg() function. It would be different than _PyObject_CallNoArg() static inline function: PyObject_CallNoArg() would be a regular function and so fit better with ABI issues. ---------- components: Interpreter Core messages: 344927 nosy: vstinner priority: normal severity: normal status: open title: Move new vector headers to the internal C API versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 09:04:54 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 07 Jun 2019 13:04:54 +0000 Subject: [New-bugs-announce] [issue37195] test_utime fails on MacOS Mojave (Kernel Version 18.6.0:) Message-ID: <1559912694.09.0.24854010798.issue37195@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : ./python.exe -m test test_os -R : -v ====================================================================== FAIL: test_utime (test.test_os.UtimeTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/pgalindo3/github/cpython/Lib/test/test_os.py", line 591, in test_utime self._test_utime(set_time) File "/Users/pgalindo3/github/cpython/Lib/test/test_os.py", line 579, in _test_utime self.assertAlmostEqual(st.st_atime, atime_ns * 1e-9, delta=1e-6) AssertionError: 1559912609.2612822 != 1.002003 within 1e-06 delta (1559912608.2592793 difference) ====================================================================== FAIL: test_utime_by_indexed (test.test_os.UtimeTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/pgalindo3/github/cpython/Lib/test/test_os.py", line 609, in test_utime_by_indexed self._test_utime(set_time) File "/Users/pgalindo3/github/cpython/Lib/test/test_os.py", line 579, in _test_utime self.assertAlmostEqual(st.st_atime, atime_ns * 1e-9, delta=1e-6) AssertionError: 1559912609.2679918 != 1.002003 within 1e-06 delta (1559912608.2659888 difference) ====================================================================== FAIL: test_utime_dir_fd (test.test_os.UtimeTests) 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/pgalindo3/github/cpython/Lib/test/test_os.py", line 651, in test_utime_dir_fd self._test_utime(set_time) File "/Users/pgalindo3/github/cpython/Lib/test/test_os.py", line 579, in _test_utime self.assertAlmostEqual(st.st_atime, atime_ns * 1e-9, delta=1e-6) AssertionError: 1559912609.2721212 != 1.002003 within 1e-06 delta (1559912608.2701182 difference) ---------------------------------------------------------------------- ---------- components: Tests messages: 344933 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_utime fails on MacOS Mojave (Kernel Version 18.6.0:) versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 10:05:15 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Fri, 07 Jun 2019 14:05:15 +0000 Subject: [New-bugs-announce] [issue37196] Allowing arbitrary expressions in the @expression syntax Message-ID: <1559916315.57.0.952678742799.issue37196@roundup.psfhosted.org> New submission from G?ry : Could we allow arbitrary expressions in the @expression syntax for applying decorators to functions/classes? The current grammar restriction to: decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE is very surprising and I don't understand the real motivation. I find it weird that you are not able to do that: def f(): def g(): def h(x): pass return h return g @f()() def i(): pass since you get: @f()() ^ SyntaxError: invalid syntax but the following is perfectly valid: def f(): def g(): def h(x): pass return h return g def g(x): def h(x): pass return g @g(f()()) def h(): pass See this post for more details: https://stackoverflow.com/questions/56490579 ---------- components: Interpreter Core messages: 344939 nosy: bob.ippolito, exarkun, gvanrossum, j1m, maggyero, mwh, rhettinger priority: normal severity: normal status: open title: Allowing arbitrary expressions in the @expression syntax type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 12:26:13 2019 From: report at bugs.python.org (Irv Kalb) Date: Fri, 07 Jun 2019 16:26:13 +0000 Subject: [New-bugs-announce] [issue37197] [Idle-dev] Feedback appreciated for two suggested new features In-Reply-To: Message-ID: <183F716E-E3A9-46DE-879A-CC7D02870ECB@furrypants.com> New submission from Irv Kalb : [I'm not sure of the proper protocol here, I hope sending a message this way is OK.] Hi, I am writing about the potential fix to the Search Dialog box problem in IDLE that I reported a while back. I can see that there is a flurry of activity on this bug. I saw that there was a code change, but since I have never looked at IDLE source code, I really cannot comment on the fix. However, I did see comments that it might be difficult to reproduce, so I made a short video to clearly demonstrate the bug. (This is on IDLE/Python 3.6.1 Mac) You can see it here: http://www.youtube.com/watch?v=YWDsOEN8qsE I did not get a chance to test this out on Window, but the bug is easily reproducible on a Mac. If the change is to leave the dialog box in front of the editing window, that would be be fine. But long term, if it is possible to incorporate a search bar like the one in PyCharm, that would be even better. Thanks for addressing the issue and I hope this video is helpful. 
Irv > On Jun 6, 2019, at 9:28 AM, Tal Einat wrote: > > After a great delay, I've created an issue[0] about the search dialogs being hidden behind the main window, and have a PR[1] up with a proposed fix. > > [0] https://bugs.python.org/issue37177 > [1] https://github.com/python/cpython/pull/13869 > On Thu, Oct 18, 2018 at 8:13 PM Irv Kalb > wrote: > Hi, > > I don't have a strong opinion about the details of the search and replace dialogs. However, I can tell you that a change replacing the dialog box would be greatly appreciated. > > I teach Python classes using IDLE. The search dialog is a real a problem for new students and me as the teacher. My students and I often get into a scenario where the search dialog gets hidden behind a editing window. When that happens, then you try to make edits in code and nothing happens. This happens to very often. I have learned to look for the hidden search dialog window, but my students get very flustered. > > I think that a search bar would be a great improvement. If you can model it similar to the one in PyCharm, that would be wonderful. > > Thanks for looking into this issue. > > Irv > > >> On Oct 13, 2018, at 2:51 PM, Tal Einat > wrote: >> >> Hi, >> >> I've recently been working on two new features, for which I'd like to discuss whether it would be wise to include in IDLE. Each has a working implementation in a PR to make it easy to give it a try. >> >> 1. Replace the search and replace dialogs with a search bar >> https://bugs.python.org/issue34976 >> https://github.com/python/cpython/pull/9855 >> >> 2. Ability to run 3rd party code checkers >> https://bugs.python.org/issue21880 >> https://github.com/python/cpython/pull/9802 >> >> Any thoughts would be greatly appreciated! >> >> - Tal Einat >> _______________________________________________ >> IDLE-dev mailing list >> IDLE-dev at python.org >> https://mail.python.org/mailman/listinfo/idle-dev > > _______________________________________________ > IDLE-dev mailing list > IDLE-dev at python.org > https://mail.python.org/mailman/listinfo/idle-dev ---------- messages: 344963 nosy: IrvKalb, taleinat priority: normal severity: normal status: open title: [Idle-dev] Feedback appreciated for two suggested new features _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 12:33:21 2019 From: report at bugs.python.org (hodai goldman) Date: Fri, 07 Jun 2019 16:33:21 +0000 Subject: [New-bugs-announce] [issue37198] _parse_localename fail to parse 'US_IL' Message-ID: <1559925201.97.0.400746291459.issue37198@roundup.psfhosted.org> New submission from hodai goldman : _parse_localename fail to parse 'US_IL': Traceback (most recent call last): File "/usr/bin/flowblade", line 78, in app.main(modules_path) File "/usr/share/flowblade/Flowblade/app.py", line 194, in main translations.init_languages() File "/usr/share/flowblade/Flowblade/translations.py", line 39, in init_languages lc, encoding = locale.getdefaultlocale() File "/usr/lib/python2.7/locale.py", line 545, in getdefaultlocale return _parse_localename(localename) File "/usr/lib/python2.7/locale.py", line 477, in _parse_localename raise ValueError, 'unknown locale: %s' % localename ValueError: unknown locale: en_IL need to add another check for '_' separator, code: if '.' 
in code: return tuple(code.split('.')[:2]) ---------- messages: 344966 nosy: hodai goldman priority: normal severity: normal status: open title: _parse_localename fail to parse 'US_IL' type: crash versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 13:00:30 2019 From: report at bugs.python.org (N.P. Khelili) Date: Fri, 07 Jun 2019 17:00:30 +0000 Subject: [New-bugs-announce] [issue37199] Test suite fails when Ipv6 is unavailable Message-ID: <1559926830.72.0.215725323123.issue37199@roundup.psfhosted.org> New submission from N.P. Khelili : The test suite does not handle an OS that does not have IPv6. When Linux kernel is started with argument: ipv6.disable=1 The test suite fails. ( See attached file ) 5 tests failed: test_asyncio test_imaplib test_importlib test_socket test_zipapp ---------- files: test_suite_fail.txt messages: 344970 nosy: Nophke priority: normal severity: normal status: open title: Test suite fails when Ipv6 is unavailable type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file48402/test_suite_fail.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 15:32:16 2019 From: report at bugs.python.org (Neil Schemenauer) Date: Fri, 07 Jun 2019 19:32:16 +0000 Subject: [New-bugs-announce] [issue37200] PyType_GenericAlloc might over-allocate memory Message-ID: <1559935936.79.0.824964365383.issue37200@roundup.psfhosted.org> New submission from Neil Schemenauer : In the process of working on some garbage collector/obmalloc experiments, I noticed what seems to be a quirk about PyType_GenericAlloc(). It calls: size = _PyObject_VAR_SIZE(type, nitems+1); Note the "+1" which is documented as "for the sentinel". That code dates back to change "e5c691abe3946ddbaa00730b92f3b96f96903f7d" when Guido added support for heap types. This extra item is not added by _PyObject_GC_NewVar(). Also, the documentation for tp_alloc says that the size of the allocated block should be: tp_basicsize + nitems*tp_itemsize, rounded up to a multiple of sizeof(void*); The "+1" for the sentinel is definitely needed in certain cases. I think it might only be needed if 'type' is a subtype of 'type'. I.e. if Py_TPFLAGS_TYPE_SUBCLASS is set on 'type'. I haven't done enough analysis to fully understand this quirk yet but I think we are allocating extra memory quite regularly. Quite a lot of types use tp_alloc = PyType_GenericAlloc. E.g. the 'list' type or a subclass of the tuple type. It seems with the attached patch, unit tests still pass. Perhaps the +1 could be removed on the non-GC branch of the code as well. 
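To make the size difference concrete, here is the arithmetic only (the layout numbers below are hypothetical, chosen just to illustrate the documented tp_alloc formula versus the "+1 for the sentinel" request; this is not CPython source):

```
# Illustration only -- hypothetical basicsize/itemsize values.
def var_size(basicsize, itemsize, nitems, align=8):
    # documented tp_alloc size: tp_basicsize + nitems*tp_itemsize,
    # rounded up to a multiple of sizeof(void*)
    return (basicsize + nitems * itemsize + align - 1) & ~(align - 1)

basicsize, itemsize, nitems = 48, 8, 10   # made-up variable-size layout
print(var_size(basicsize, itemsize, nitems))      # 128: documented size
print(var_size(basicsize, itemsize, nitems + 1))  # 136: with the extra item
```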
---------- components: Interpreter Core files: generic_alloc_sentinel.txt messages: 345002 nosy: nascheme priority: normal severity: normal status: open title: PyType_GenericAlloc might over-allocate memory type: performance Added file: https://bugs.python.org/file48403/generic_alloc_sentinel.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 7 16:47:57 2019 From: report at bugs.python.org (Paul Monson) Date: Fri, 07 Jun 2019 20:47:57 +0000 Subject: [New-bugs-announce] [issue37201] fix test_distutils failures for Windows ARM64 Message-ID: <1559940477.37.0.824825377046.issue37201@roundup.psfhosted.org> New submission from Paul Monson : There are a few places where ARM64 is not correctly specified in order for distutils to work for on-target builds using Visual Studio (32-bit x86 emulation). ---------- components: Tests, Windows messages: 345008 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: fix test_distutils failures for Windows ARM64 type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 8 07:07:20 2019 From: report at bugs.python.org (Roland Netzsch) Date: Sat, 08 Jun 2019 11:07:20 +0000 Subject: [New-bugs-announce] [issue37202] Future.cancelled is not set to true immediately after calling Future.cancel Message-ID: <1559992040.91.0.159243641871.issue37202@roundup.psfhosted.org> New submission from Roland Netzsch : The attached file produces the following output: wait is still running wait is not set to cancelled! Awaiting cancelled future produced a CancelledError. A look a the documentation does not suggest a need to await the future in order to make sure the cancelled-flag is being set. ---------- components: asyncio files: test.py messages: 345029 nosy: Roland Netzsch, asvetlov, yselivanov priority: normal severity: normal status: open title: Future.cancelled is not set to true immediately after calling Future.cancel type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48405/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 8 07:51:27 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Sat, 08 Jun 2019 11:51:27 +0000 Subject: [New-bugs-announce] [issue37203] Correct classmethod emulation in Descriptor HowTo Guide Message-ID: <1559994687.73.0.550711459072.issue37203@roundup.psfhosted.org> New submission from G?ry : With the current Python equivalent `ClassMethod` implementation of `classmethod` given in Raymond Hettinger's _Descriptor HowTo Guide_, the following code snippet: ``` class A: @ClassMethod def f(cls, *, x): pass print(A.f) A.f(x=3) ``` prints: > .newfunc at 0x106b76268> and raises: > TypeError: newfunc() got an unexpected keyword argument 'x' instead of only printing: > > like the `@classmethod` decorator would do. So the `ClassMethod` implementation fails in two regards: * it does not return a bound method to a class; * it does not handle keyword-only arguments. With this PR `ClassMethod` will correctly emulate `classmethod`. This approach (`types.MethodType`) is already used in the Python equivalent `Function` implementation of functions given earlier in the same guide. 
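A sketch of the corrected emulation along those lines (assuming the fix mirrors the `Function` example's use of `types.MethodType`; this is not the exact text of the PR):

```
import types

class ClassMethod:
    "Emulate PyClassMethod_Type() in Objects/funcobject.c"

    def __init__(self, f):
        self.f = f

    def __get__(self, obj, cls=None):
        if cls is None:
            cls = type(obj)
        # Bind to the class instead of wrapping in a *args/**kwargs closure,
        # so keyword-only arguments pass through unchanged.
        return types.MethodType(self.f, cls)


class A:
    @ClassMethod
    def f(cls, *, x):
        return (cls, x)

print(A.f)       # a bound method of class A, like the real @classmethod
print(A.f(x=3))  # keyword-only argument now accepted
```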
---------- assignee: docs at python components: Documentation messages: 345031 nosy: docs at python, eric.araujo, ezio.melotti, maggyero, mdk, rhettinger, willingc priority: normal pull_requests: 13785 severity: normal status: open title: Correct classmethod emulation in Descriptor HowTo Guide type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 8 09:56:52 2019 From: report at bugs.python.org (Vlad Shcherbina) Date: Sat, 08 Jun 2019 13:56:52 +0000 Subject: [New-bugs-announce] [issue37204] Scripts and binaries in PYTHON_PREFIX/Scripts contain unnecessarily lowercased path to Python location on Windows Message-ID: <1560002212.48.0.983837427907.issue37204@roundup.psfhosted.org> New submission from Vlad Shcherbina : To reproduce: 1. Download and run Python installer (I used python-3.7.3-amd64-webinstall.exe). 2. Modify install settings: - Install for all users: yes - Customize install location: "C:\Python37" (don't know if it's relevant, mentioning it just in case) 3. Install. 4. Run the following commands: pip --version py -m pip --version Expected result: > pip --version pip 19.0.3 from C:\Python37\lib\site-packages\pip (python 3.7) > py -m pip --version pip 19.0.3 from C:\Python37\lib\site-packages\pip (python 3.7) Actual result: > pip --version pip 19.0.3 from c:\python37\lib\site-packages\pip (python 3.7) > py -m pip --version pip 19.0.3 from C:\Python37\lib\site-packages\pip (python 3.7) Same happens for all binaries and scripts in PYTHON_PREFIX/Scripts, not only for pip.exe. For example, if I install pytest (using "pip install pytest" or "py -m pip install pytest", doesn't matter), I get the following: > pytest -v ============================================================ test session starts ============================================================ platform win32 -- Python 3.7.3, pytest-4.6.2, py-1.8.0, pluggy-0.12.0 -- c:\python37\python.exe cachedir: .pytest_cache rootdir: C:\temp\qqq collected 0 items ======================================================= no tests ran in 0.02 seconds ======================================================== > py -m pytest -v ============================================================ test session starts ============================================================ platform win32 -- Python 3.7.3, pytest-4.6.2, py-1.8.0, pluggy-0.12.0 -- C:\Python37\python.exe cachedir: .pytest_cache rootdir: C:\temp\qqq collected 0 items ======================================================= no tests ran in 0.02 seconds ======================================================== ---------- components: Distutils, Installation, Windows messages: 345037 nosy: dstufft, eric.araujo, paul.moore, steve.dower, tim.golden, vlad, zach.ware priority: normal severity: normal status: open title: Scripts and binaries in PYTHON_PREFIX/Scripts contain unnecessarily lowercased path to Python location on Windows type: behavior versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 8 14:15:00 2019 From: report at bugs.python.org (Ken Healy) Date: Sat, 08 Jun 2019 18:15:00 +0000 Subject: [New-bugs-announce] [issue37205] time.perf_counter() is not system-wide on Windows, in disagreement with documentation Message-ID: <1560017700.11.0.127727162727.issue37205@roundup.psfhosted.org> New submission from Ken Healy : 
The documentation for time.perf_counter() indicates it should return a system-wide value: https://docs.python.org/3/library/time.html#time.perf_counter This is not true on Windows, as a process-specific offset is subtracted from the underlying QueryPerformanceCounter() value. The code comments indicate this is to reduce precision loss: https://github.com/python/cpython/blob/master/Python/pytime.c#L995-L997 This is relevant in multiprocess applications, where accurate timing is required across multiple processes. Here is a simple test case: ----------------------------------------------------------- import concurrent.futures import time def worker(): return time.perf_counter() if __name__ == '__main__': pool = concurrent.futures.ProcessPoolExecutor() futures = [] for i in range(3): print('Submitting worker {:d} at time.perf_counter() == {:.3f}'.format(i, time.perf_counter())) futures.append(pool.submit(worker)) time.sleep(1) for i, f in enumerate(futures): print('Worker {:d} started at time.perf_counter() == {:.3f}'.format(i, f.result())) ----------------------------------------------------------- Output: ----------------------------------------------------------- C:\...>Python37\python.exe -VV Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)] C:\...>Python37\python.exe perf_counter_across_processes.py Submitting worker 0 at time.perf_counter() == 0.376 Submitting worker 1 at time.perf_counter() == 1.527 Submitting worker 2 at time.perf_counter() == 2.529 Worker 0 started at time.perf_counter() == 0.380 Worker 1 started at time.perf_counter() == 0.956 Worker 2 started at time.perf_counter() == 1.963 ----------------------------------------------------------- See my stackoverflow question for a comparison with Linux: https://stackoverflow.com/questions/56502111/should-time-perf-counter-be-consistent-across-processes-in-python-on-windows ---------- assignee: docs at python components: Documentation, Library (Lib), Windows messages: 345057 nosy: docs at python, kh90909, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 8 14:48:05 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 08 Jun 2019 18:48:05 +0000 Subject: [New-bugs-announce] [issue37206] Incorrect application of Argument Clinic to dict.pop() Message-ID: <1560019685.35.0.640301237908.issue37206@roundup.psfhosted.org> New submission from Raymond Hettinger : help(dict.pop) was correct in 3.7: pop(...) D.pop(k[,d]) -> v and incorrect for 3.8: pop(self, key, default=None, /) This happened in: https://github.com/python/cpython/commit/9e4f2f3a6b8ee995c365e86d976937c141d867f8 https://github.com/python/cpython/pull/12792 We've long known that the Argument Clinic was not applicable to all functions and methods, including this one in particular. The issue is that the one argument form does not set the default to None; rather, it triggers a KeyError when the key is missing. In other words, there is an important and long-standing semantic difference between d.pop(k) and d.pop(k,None). When reverting this change, please add a note about why the ArgumentClinic is not being applied to this function until its capabilities have been built-out to handle functions that have different behaviors depending on the number of arguments. 
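For reference, the semantic difference is easy to demonstrate (a short illustration, not part of the original report):

```
d = {}
print(d.pop('k', None))   # None -- an explicit default suppresses the error
try:
    d.pop('k')            # one-argument form: a missing key raises
except KeyError as exc:
    print('KeyError:', exc)
```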
Also, we should review other recent applications of the ArgumentClinic to make sure they didn't make the same mistake. ---------- components: Argument Clinic keywords: 3.8regression messages: 345062 nosy: larry, rhettinger priority: high severity: normal status: open title: Incorrect application of Argument Clinic to dict.pop() versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 9 05:23:47 2019 From: report at bugs.python.org (Mark Shannon) Date: Sun, 09 Jun 2019 09:23:47 +0000 Subject: [New-bugs-announce] [issue37207] Use PEP 590 vectorcall to speed up calls to range(), list() and dict() Message-ID: <1560072227.25.0.180298594259.issue37207@roundup.psfhosted.org> New submission from Mark Shannon : PEP 590 allows us the short circuit the __new__, __init__ slow path for commonly created builtin types. As an initial step, we can speed up calls to range, list and dict by about 30%. See https://gist.github.com/markshannon/5cef3a74369391f6ef937d52cca9bfc8 ---------- components: Interpreter Core messages: 345077 nosy: Mark.Shannon priority: normal severity: normal status: open title: Use PEP 590 vectorcall to speed up calls to range(), list() and dict() type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 9 07:04:00 2019 From: report at bugs.python.org (Iceflower) Date: Sun, 09 Jun 2019 11:04:00 +0000 Subject: [New-bugs-announce] [issue37208] Weird exception behaviour in ProcessPoolExecutor Message-ID: <1560078240.39.0.310143322777.issue37208@roundup.psfhosted.org> New submission from Iceflower : I don't really know where this belongs, but it is at least for me not an expected behaviour. It is weird for me at all. Please take a look if this is an intended behaviour and why it is like that. Tested with py3.7.3, py3.8.0b1 ``` A process in the process pool was terminated abruptly while the future was running or pending. ``` Tested with py 3.6.8: ``` Exception in thread Thread-1: Traceback (most recent call last): File "C:\Python36\lib\threading.py", line 916, in _bootstrap_inner self.run() File "C:\Python36\lib\threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "C:\Python36\lib\concurrent\futures\process.py", line 272, in _queue_management_worker result_item = reader.recv() File "C:\Python36\lib\multiprocessing\connection.py", line 251, in recv return _ForkingPickler.loads(buf.getbuffer()) TypeError: __init__() missing 1 required positional argument: 'num' ``` I expect that the PoolBreaker exception would work too. Code to test: As attachment. 
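The attachment itself is not reproduced in this archive; the sketch below is only a guess at a minimal reproducer consistent with the quoted tracebacks (the `PoolBreaker` name and the `num` argument are taken from the report, everything else is assumed): an exception whose required constructor argument is not recorded in `args`, so it cannot be re-created when the result is unpickled in the parent process.

```
# Guessed reproducer -- not the report's actual attachment.
from concurrent.futures import ProcessPoolExecutor

class PoolBreaker(Exception):
    def __init__(self, num):
        super().__init__()   # args stays empty, so unpickling calls PoolBreaker()
        self.num = num

def worker():
    raise PoolBreaker(21)

if __name__ == '__main__':
    with ProcessPoolExecutor() as pool:
        future = pool.submit(worker)
        # 3.6: TypeError raised while unpickling the exception;
        # 3.7/3.8: the pool is reported as broken instead, as described above.
        future.result()
```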
---------- components: Interpreter Core files: ProcPoolEx-exception.py messages: 345079 nosy: Iceflower priority: normal severity: normal status: open title: Weird exception behaviour in ProcessPoolExecutor type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48407/ProcPoolEx-exception.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 9 08:38:47 2019 From: report at bugs.python.org (Antoine Pitrou) Date: Sun, 09 Jun 2019 12:38:47 +0000 Subject: [New-bugs-announce] [issue37209] Add what's new entries for pickle enhancements Message-ID: <1560083927.43.0.0939739725641.issue37209@roundup.psfhosted.org> New submission from Antoine Pitrou : The various pickle enhancements in 3.8 (apart from PEP 574) need to be mentioned in the What's New document (`Doc/whatsnew/3.8.rst`). ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 345080 nosy: docs at python, ogrisel, pierreglaser, pitrou priority: normal severity: normal stage: needs patch status: open title: Add what's new entries for pickle enhancements type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 9 16:23:32 2019 From: report at bugs.python.org (Christian Heimes) Date: Sun, 09 Jun 2019 20:23:32 +0000 Subject: [New-bugs-announce] [issue37210] Pure Python pickle module should not depend on _pickle.PickleBuffer Message-ID: <1560111812.29.0.301182333638.issue37210@roundup.psfhosted.org> New submission from Christian Heimes : In https://twitter.com/moreati/status/1137815804530049028 Alex Willmer wrote: > In CPython 3.8dev the pure http://pickle.py module now depends on the C _pickle module, but only for one class https://github.com/python/cpython/blob/c879ff247ae1b67a790ff98d2d59145302cd4e4e/Lib/pickle.py#L39 Antoine, would it make sense turn _pickle.PickleBuffer into an optional dependency? The class is only used for pickle 5 as described in PEP 574, https://www.python.org/dev/peps/pep-0574/ ---------- components: Library (Lib) keywords: 3.8regression messages: 345088 nosy: christian.heimes, pitrou priority: normal severity: normal status: open title: Pure Python pickle module should not depend on _pickle.PickleBuffer type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 9 20:36:19 2019 From: report at bugs.python.org (Tim Peters) Date: Mon, 10 Jun 2019 00:36:19 +0000 Subject: [New-bugs-announce] [issue37211] obmalloc: eliminate limit on pool size Message-ID: <1560126979.94.0.247484308186.issue37211@roundup.psfhosted.org> New submission from Tim Peters : On 64-bit Python, many object sizes essentially doubled over 32-bit Python, because Python objects are so heavy with pointers. More recently, forcing alignment to 16 bytes on 64-bit boxes boosted the memory requirements more modestly. But obmalloc's 256 KiB arenas and 4 KiB pools haven't changed since obmalloc was first written, and its `address_in_range()` machinery cannot deal with pools bigger than that (they're segfault factories, because the machinery relies on that a pool is no larger than a system page). obmalloc's fastest paths are those that stay within a pool. Whenever a pool boundary is hit, it necessarily gets slower, then slower still if an arena boundary is hit. 
So I propose to: - Remove the 4 KiB pool limit, by making `address_in_range()` page-based rather than pool-based. Pools should be able to span any power-of-2 number of pages. Then a pool for a given size class will be able to hold that many more times as many objects too, and so stay in the fastest paths more often. - On 64-bit boxes, increase both POOL_SIZE and ARENA_SIZE. ---------- components: Interpreter Core messages: 345097 nosy: tim.peters priority: normal severity: normal status: open title: obmalloc: eliminate limit on pool size type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 03:53:37 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Mon, 10 Jun 2019 07:53:37 +0000 Subject: [New-bugs-announce] [issue37212] ordered keyword arguments in unittest.mock.call repr and error messages Message-ID: <1560153217.35.0.990763770677.issue37212@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : With PEP 468 implemented in Python 3.6 the order of keyword arguments are preserved. In mock.call the arguments are sorted [0]. This makes the output different from the expectation that order should be same as the one passed. This was implemented in issue21256 where ordered kwargs was suggested but I guess it was not implemented and hence sort was used for deterministic output. The PEP also links to the discussion where ordered kwargs was discussed [1] in 2009 before ordered dict implementation. Given that keyword argument dictionary is now deterministic is it okay to drop sorting in 3.9? This is backwards incompatible with 3.8 where code might have depended upon the sorted output so if this requires a python-dev discussion I would be happy to start one. # (b=1, a=2) is the actual call but "actual" in error message is noted as (a=2, b=1) >>> from unittest.mock import call, Mock >>> call(b=1, a=2) call(a=2, b=1) >>> m = Mock() >>> m(b=1, a=2) >>> m.assert_called_with(c=3) Traceback (most recent call last): File "", line 1, in File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/unittest/mock.py", line 870, in assert_called_with raise AssertionError(_error_message()) from cause AssertionError: expected call not found. Expected: mock(c=3) Actual: mock(a=2, b=1) # With proposed change removing sorting >>> from unittest.mock import call, Mock >>> call(b=1, a=2) call(b=1, a=2) >>> m = Mock() >>> m(b=1, a=2) >>> m.assert_called_with(c=3) Traceback (most recent call last): File "", line 1, in File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/unittest/mock.py", line 870, in assert_called_with raise AssertionError(_error_message()) from cause AssertionError: expected call not found. 
Expected: mock(c=3) Actual: mock(b=1, a=2) [0] https://github.com/python/cpython/blob/c879ff247ae1b67a790ff98d2d59145302cd4e4e/Lib/unittest/mock.py#L2284 [1] https://mail.python.org/pipermail/python-ideas/2009-April/004163.html ---------- components: Library (Lib) messages: 345106 nosy: cjw296, mariocj89, michael.foord, xtreak priority: normal severity: normal status: open title: ordered keyword arguments in unittest.mock.call repr and error messages type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 04:24:23 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 10 Jun 2019 08:24:23 +0000 Subject: [New-bugs-announce] [issue37213] Peeephole optimizer does not optimize functions with multiline expressions Message-ID: <1560155063.44.0.426038128869.issue37213@roundup.psfhosted.org> New submission from Serhiy Storchaka : The optimization is skipped if lnotab contains 255. It was very uncommon in older versions (only when the function contains very large expressions, larger than hundreds of lines or bytecode instructions), but in 3.8 this situation is common. For example: [x for x in a if x] 1 0 BUILD_LIST 0 2 LOAD_FAST 0 (.0) >> 4 FOR_ITER 12 (to 18) 2 6 STORE_FAST 1 (x) 8 LOAD_FAST 1 (x) 10 POP_JUMP_IF_FALSE 16 1 12 LOAD_FAST 1 (x) 14 LIST_APPEND 2 >> 16 JUMP_ABSOLUTE 4 >> 18 RETURN_VALUE if x: if (y and z): foo() else: bar() 1 0 LOAD_NAME 0 (x) 2 POP_JUMP_IF_FALSE 20 2 4 LOAD_NAME 1 (y) 6 POP_JUMP_IF_FALSE 18 3 8 LOAD_NAME 2 (z) 2 10 POP_JUMP_IF_FALSE 18 4 12 LOAD_NAME 3 (foo) 14 CALL_FUNCTION 0 16 POP_TOP >> 18 JUMP_FORWARD 6 (to 26) 6 >> 20 LOAD_NAME 4 (bar) 22 CALL_FUNCTION 0 24 POP_TOP >> 26 LOAD_CONST 0 (None) 28 RETURN_VALUE You can see non-optimized jumps to jumps (from 10 to 16 and from 6 and 10 to 16 correspondingly). This is a consequence of two features: ability to encode negative line differences in lnotab and setting lines for both outer and inner expressions. Two ways to solve this issue: 1. Move optimizations from Python/peephole.c to Python/compile.c (see issue32477 and issue33318). This is a new feature and it is too late for 3.8. 2. Make the peepholer to work with lnotab containing 255. Pablo, are you interesting? ---------- components: Interpreter Core messages: 345108 nosy: benjamin.peterson, pablogsal, serhiy.storchaka, vstinner, yselivanov priority: normal severity: normal status: open title: Peeephole optimizer does not optimize functions with multiline expressions type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 06:39:37 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 10 Jun 2019 10:39:37 +0000 Subject: [New-bugs-announce] [issue37214] Add new EncodingWarning warning category: emitted when the locale encoding is used implicitly Message-ID: <1560163177.73.0.639513289421.issue37214@roundup.psfhosted.org> New submission from STINNER Victor : Spin-off of INADA-san's PEP 597: I propose to emit a new EncodingWarning warning when the locale encoding is used implicitly. I propose to add a new warning to be able to control how they are handled: either make them hard error (python3.9 -W error::EncodingWarning) or ignore them (python3.9 -W ignore::EncodingWarning). 
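A rough sketch of how those knobs would look from Python code (EncodingWarning does not exist yet, so a stand-in class is defined here purely for illustration):

```
import warnings

class EncodingWarning(Warning):   # hypothetical category, not in the stdlib
    pass

warnings.simplefilter("error", EncodingWarning)    # like -W error::EncodingWarning
try:
    warnings.warn("locale encoding used implicitly", EncodingWarning)
except EncodingWarning as warning:
    print("hard error:", warning)

warnings.simplefilter("ignore", EncodingWarning)   # like -W ignore::EncodingWarning
warnings.warn("locale encoding used implicitly", EncodingWarning)  # silenced
```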
Displaying these warnings by default or not is an open question discussed in PEP 597 discussion: https://discuss.python.org/t/pep-597-use-utf-8-for-default-text-file-encoding/1819 I propose to only emit a warning which can be turned off easily to not bother people who don't care of encodings. There are many legit cases where a Python application works well in a control environment whereas it would break on any other environment, but the application is only used in one specific environment. For example, an application may only be run in a container which has a well defined locale and locale encoding, but will never be run on Windows, macOS, FreeBSD or HP-UX. Writing portable code is *great*. But I'm not sure that *all* code must be portable. For example, it's also fine to use Python to write applications specific to Linux, again, for code only run in a controlled environment. Many companies have private projects which will never run outside their "walls" (outside their datacenters ;-)). ---------- components: Library (Lib) messages: 345121 nosy: vstinner priority: normal severity: normal status: open title: Add new EncodingWarning warning category: emitted when the locale encoding is used implicitly type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 09:22:54 2019 From: report at bugs.python.org (Jakub Kulik) Date: Mon, 10 Jun 2019 13:22:54 +0000 Subject: [New-bugs-announce] [issue37215] Build with dtrace is broken on some systems Message-ID: <1560172974.02.0.554258836838.issue37215@roundup.psfhosted.org> New submission from Jakub Kulik : After the integration of https://bugs.python.org/issue36842, build with dtrace is broken on systems where files that reference DTrace probes need to be modified in-place by dtrace (e.g., Solaris). The reason for that is that Python/sysmodule.o, which newly contains some dtrace references, was not added to DTRACE_DEPS. ---------- components: Build messages: 345128 nosy: kulikjak priority: normal severity: normal status: open title: Build with dtrace is broken on some systems versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 11:40:22 2019 From: report at bugs.python.org (makdon) Date: Mon, 10 Jun 2019 15:40:22 +0000 Subject: [New-bugs-announce] [issue37216] mac installation document wrong for python 3.7.3 Message-ID: <1560181222.28.0.0831029476518.issue37216@roundup.psfhosted.org> New submission from makdon : According the mail in docs mailing list, there's a typo in python installation document on Macos. I will check it and create a PR if necessary. The email says: https://docs.python.org/3/using/mac.html#getting-and-installing-macpython says "What you get after installing is a number of things: A MacPython 3.6 folder in your Applications folder." 
However, after installing just now $ /usr/local/bin/python3 -V Python 3.7.3 I instead found a folder called /Applications/Python 3.7 ---------- assignee: docs at python components: Documentation messages: 345134 nosy: docs at python, makdon priority: normal severity: normal status: open title: mac installation document wrong for python 3.7.3 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 16:03:59 2019 From: report at bugs.python.org (kmille) Date: Mon, 10 Jun 2019 20:03:59 +0000 Subject: [New-bugs-announce] [issue37217] SSL socket segfaults during a connect() using a punycode domain containg a umlaut Message-ID: <1560197039.5.0.767955558559.issue37217@roundup.psfhosted.org> New submission from kmille : Hey, chs at gw-sss-nb8:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 16.04.4 LTS Release: 16.04 Codename: xenial chs at gw-sss-nb8:~$ python3 --version Python 3.5.2 chs at gw-sss-nb8:~$ cat segfault.py import ssl import socket hostname = "www.xn--b.buchhandlunggr?nden.de" ctx = ssl.create_default_context() s = ctx.wrap_socket(socket.socket(), server_hostname=hostname) s.check_hostname = True try: s.connect((hostname, 443)) except UnicodeError as incorrect_punycode: pass chs at gw-sss-nb8:~$ python3 segfault.py Segmentation fault The problem does not occur if I remove the ? in www.xn--b.buchhandlunggr?nden.de On my Arch the DNS fails (above the name doesn't resolve too but I seems like it doesn't matter): kmille at linbox timetracking master % python3 omg.py Traceback (most recent call last): File "omg.py", line 10, in s.connect((hostname, 443)) File "/usr/lib/python3.7/ssl.py", line 1150, in connect self._real_connect(addr, False) File "/usr/lib/python3.7/ssl.py", line 1137, in _real_connect super().connect(addr) socket.gaierror: [Errno -2] Name or service not known kmille at linbox timetracking master % python3 --version Python 3.7.3 If you need further help please ask. Thank you for python <3 kmille ---------- components: ctypes messages: 345140 nosy: kmille priority: normal severity: normal status: open title: SSL socket segfaults during a connect() using a punycode domain containg a umlaut versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 17:02:15 2019 From: report at bugs.python.org (Alex Willmer) Date: Mon, 10 Jun 2019 21:02:15 +0000 Subject: [New-bugs-announce] [issue37218] Default hmac.new() digestmod has not been removed from documentation Message-ID: <1560200535.68.0.253007657272.issue37218@roundup.psfhosted.org> New submission from Alex Willmer : Until Python 3.8 hmc.new() defaulted the digestmod argument to 'hmac-md5'. This was deperecated, to be removed in Python 3.8. In Python 3.8.0b1 it is gone, e.g. Python 3.8.0b1 (default, Jun 6 2019, 03:44:52) [GCC 7.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import hmac >>> hmac.new(b'qwertyuiop').name Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/hmac.py", line 146, in new return HMAC(key, msg, digestmod) File "/usr/lib/python3.8/hmac.py", line 49, in __init__ raise ValueError('`digestmod` is required.') ValueError: `digestmod` is required. but the deprecation note, and the documented signature haven't been updated. 
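For completeness, the form that works on 3.8 (choosing SHA-256 here is only an example; any supported digest can be passed as digestmod):

```
import hashlib
import hmac

mac = hmac.new(b'qwertyuiop', b'message', digestmod=hashlib.sha256)
print(mac.name)         # 'hmac-sha256'
print(mac.hexdigest())
```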
PR incoming ---------- assignee: docs at python components: Documentation messages: 345144 nosy: Alex.Willmer, docs at python priority: normal severity: normal status: open title: Default hmac.new() digestmod has not been removed from documentation versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 18:01:36 2019 From: report at bugs.python.org (Anthony Sottile) Date: Mon, 10 Jun 2019 22:01:36 +0000 Subject: [New-bugs-announce] [issue37219] empty set difference does not check types of values Message-ID: <1560204096.52.0.24651391919.issue37219@roundup.psfhosted.org> New submission from Anthony Sottile : This is a regression from python2.x behaviour: $ python2 Python 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> x = set() >>> x.difference(123) Traceback (most recent call last): File "", line 1, in TypeError: 'int' object is not iterable $ python3.8 Python 3.8.0b1 (default, Jun 6 2019, 03:44:52) [GCC 7.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> x = set() >>> x.difference(123) set() The regression appears to be introduced in this patch: 4103e4dfbce https://github.com/python/cpython/commit/4103e4dfbce ---------- components: Library (Lib) messages: 345147 nosy: Anthony Sottile, rhettinger priority: normal severity: normal status: open title: empty set difference does not check types of values versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 18:49:49 2019 From: report at bugs.python.org (Zachary Ware) Date: Mon, 10 Jun 2019 22:49:49 +0000 Subject: [New-bugs-announce] [issue37220] test_idle crash on Windows when run with -R: Message-ID: <1560206989.94.0.836960275888.issue37220@roundup.psfhosted.org> New submission from Zachary Ware : See for example https://buildbot.python.org/all/#/builders/33/builds/613 This build ended when I logged into the machine and clicked the "Close the program" button on the "python_d.exe crashed" dialog box. Here's the details from that dialog: Problem signature: Problem Event Name: APPCRASH Application Name: python_d.exe Application Version: 0.0.0.0 Application Timestamp: 5cfe5fd5 Fault Module Name: tk85g.dll Fault Module Version: 8.5.2.19 Fault Module Timestamp: 5bf46c38 Exception Code: c0000005 Exception Offset: 000000000002ef7f OS Version: 6.3.9600.2.0.0.256.27 Locale ID: 1033 Additional Information 1: 0d70 Additional Information 2: 0d701a26b222a63773c5247c2c1f6fa1 Additional Information 3: 044d Additional Information 4: 044dd28b2ab7dea188c1da19a590076d Bisection shows that the issue showed up with 1b57ab5c6478b93cf4150bd8c475022252776598 in bpo-37177. 
---------- assignee: terry.reedy components: IDLE, Windows keywords: buildbot messages: 345148 nosy: paul.moore, steve.dower, taleinat, terry.reedy, tim.golden, zach.ware priority: normal severity: normal stage: test needed status: open title: test_idle crash on Windows when run with -R: versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 19:24:27 2019 From: report at bugs.python.org (Nick Coghlan) Date: Mon, 10 Jun 2019 23:24:27 +0000 Subject: [New-bugs-announce] [issue37221] PyCode_New API change breaks backwards compatibility policy Message-ID: <1560209067.42.0.855519802872.issue37221@roundup.psfhosted.org> New submission from Nick Coghlan : The Porting section of the What's New guide is for changes where the old behaviour was at best arguably correct, but it's still possible someone was relying on it behaving exactly the way it used to. It isn't for us to say "We broke all extensions that use this existing public C API by adding a new parameter to its signature". For 3.8b2, the function with the extra parameter should be renamed to PyCode_NewEx, and a PyCode_New compatibility wrapper added that calls it with the extra parameter set to zero. ---------- keywords: 3.8regression messages: 345153 nosy: lukasz.langa, ncoghlan, pablogsal priority: release blocker severity: normal stage: needs patch status: open title: PyCode_New API change breaks backwards compatibility policy type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 19:38:21 2019 From: report at bugs.python.org (Dan Hemberger) Date: Mon, 10 Jun 2019 23:38:21 +0000 Subject: [New-bugs-announce] [issue37222] urllib missing voidresp breaks CacheFTPHandler Message-ID: <1560209901.95.0.0637922149523.issue37222@roundup.psfhosted.org> New submission from Dan Hemberger : When using the CacheFTPHandler in the most basic of contexts, a URLError will be thrown if you try to reuse any of the FTP instances stored in the handler. This makes CacheFTPHandler unusable for its intended purpose. Note that the default FTPHandler circumvents this issue by creating a new ftplib.FTP instance for each connection (and thus never reuses any of them). Here is a simple example illustrating the problem: """ import urllib.request as req import ftplib opener = req.build_opener(req.CacheFTPHandler) req.install_opener(opener) ftplib.FTP.debugging = 2 for _ in range(2): req.urlopen("ftp://www.pythontest.net/README", timeout=10) """ >From the ftplib debugging output, we see the following communication between the client and server: """ *cmd* 'TYPE I' *put* 'TYPE I\r\n' *get* '200 Switching to Binary mode.\n' *resp* '200 Switching to Binary mode.' *cmd* 'PASV' *put* 'PASV\r\n' *get* '227 Entering Passive Mode (159,89,235,38,39,111).\n' *resp* '227 Entering Passive Mode (159,89,235,38,39,111).' *cmd* 'RETR README' *put* 'RETR README\r\n' *get* '150 Opening BINARY mode data connection for README (123 bytes).\n' *resp* '150 Opening BINARY mode data connection for README (123 bytes).' *cmd* 'TYPE I' *put* 'TYPE I\r\n' *get* '226 Transfer complete.\n' *resp* '226 Transfer complete.' *cmd* 'PASV' *put* 'PASV\r\n' *get* '200 Switching to Binary mode.\n' *resp* '200 Switching to Binary mode.' ftplib.error_reply: 200 Switching to Binary mode. """ The client and the server have gotten out of sync due to the missing voidresp() call, i.e. 
the client sends 'Type I' but receives the response from the previous 'RETR README' command. When ftp.voidresp() is added anywhere between after the ftp.ntransfercmd() and before the next command is sent (i.e. by reverting 2d51f687e133fb8141f1a6b5a6ac51c9d5eddf58), we see the correct send/receive pattern: """ *cmd* 'TYPE I' *put* 'TYPE I\r\n' *get* '200 Switching to Binary mode.\n' *resp* '200 Switching to Binary mode.' *cmd* 'PASV' *put* 'PASV\r\n' *get* '227 Entering Passive Mode (159,89,235,38,39,107).\n' *resp* '227 Entering Passive Mode (159,89,235,38,39,107).' *cmd* 'RETR README' *put* 'RETR README\r\n' *get* '150 Opening BINARY mode data connection for README (123 bytes).\n' *resp* '150 Opening BINARY mode data connection for README (123 bytes).' *get* '226 Transfer complete.\n' *resp* '226 Transfer complete.' *cmd* 'TYPE I' *put* 'TYPE I\r\n' *get* '200 Switching to Binary mode.\n' *resp* '200 Switching to Binary mode.' *cmd* 'PASV' *put* 'PASV\r\n' *get* '227 Entering Passive Mode (159,89,235,38,39,107).\n' *resp* '227 Entering Passive Mode (159,89,235,38,39,107).' *cmd* 'RETR README' *put* 'RETR README\r\n' *get* '150 Opening BINARY mode data connection for README (123 bytes).\n' *resp* '150 Opening BINARY mode data connection for README (123 bytes).' *get* '226 Transfer complete.\n' *resp* '226 Transfer complete.' """ By inspecting the methods of ftplib.FTP, we can see that every use of ntransfercmd() is followed by a voidresp(), see e.g. retrbinary, retrlines, storbinary, storlines, and voidcmd. I hope that some experts in urllib and ftplib can weigh in on any of the subtleties of this issue, but I think it's clear that the missing ftp.voidresp() call is a significant bug. -------------------------------------- Some historical notes about this issue -------------------------------------- This issue has been documented in a number of other bug reports, but I don't think any have addressed the complete breakage of the CachedFTPHandler that it causes. The breaking change was originally introduced as a resolution to issue16270. However, it's not clear from the comments why it was believed that removing ftp.voidresp() from endtransfer() was the correct solution. In either case, with this commit reverted, both the test outlined in this report and in issue16270 work correctly. @orsenthil has suggested this fix (to revert the change to endtransfer) in msg286020, and has explained his reasoning in detail in msg286016. @Ivan.Pozdeev has also explained this issue in msg282797 and provided a similar patch in issue28931, though it does more than just revert the breaking commit and I'm not sure what the additional changes are intending to fix. ---------- components: Library (Lib) messages: 345155 nosy: danh priority: normal severity: normal status: open title: urllib missing voidresp breaks CacheFTPHandler type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 19:45:19 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 10 Jun 2019 23:45:19 +0000 Subject: [New-bugs-announce] [issue37223] test_io logs Exception ignored in: warnings Message-ID: <1560210319.85.0.887484202288.issue37223@roundup.psfhosted.org> New submission from STINNER Victor : bpo-18748 modified io.IOBase finalizer to no longer silence close() exception in develoment and in debug mode. 
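For readers unfamiliar with the mechanism, here is a minimal, made-up sketch (BrokenFile is not from the test suite) of the pattern that now produces these "Exception ignored in:" messages: IOBase.__del__ calls close(), and in development mode an exception escaping close() is reported instead of being swallowed.

import io

class BrokenFile(io.RawIOBase):
    def close(self):
        super().close()
        raise OSError("simulated close() failure")

f = BrokenFile()
del f  # under "python -X dev" this logs "Exception ignored in: ..." with the traceback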
The commit 472f794a33221ea835a2fbf6c9f12aa2bd66d1b0 fixed a few destructor errors in test_io, but there are still a few: test_uninitialized (test.test_io.PyBufferedReaderTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 836, in raw return self._raw AttributeError: 'BufferedReader' object has no attribute '_raw' ok test_max_buffer_size_removal (test.test_io.PyBufferedWriterTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1299, in close with self._write_lock: AttributeError: 'BufferedWriter' object has no attribute '_write_lock' ok test_misbehaved_io (test.test_io.PyBufferedWriterTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1308, in close self.flush() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1269, in flush self._flush_unlocked() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1285, in _flush_unlocked raise OSError("write() returned incorrect number of bytes") OSError: write() returned incorrect number of bytes ok test_uninitialized (test.test_io.PyBufferedWriterTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1299, in close with self._write_lock: AttributeError: 'BufferedWriter' object has no attribute '_write_lock' ok test_writer_close_error_on_close (test.test_io.CBufferedRWPairTest) ... Exception ignored in: <_io.BufferedWriter> Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/unittest/case.py", line 611, in _callTestMethod method() ValueError: flush of closed file ok test_constructor_max_buffer_size_removal (test.test_io.PyBufferedRWPairTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1377, in close self.reader.close() AttributeError: 'BufferedRWPair' object has no attribute 'reader' ok test_constructor_with_not_readable (test.test_io.PyBufferedRWPairTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1377, in close self.reader.close() AttributeError: 'BufferedRWPair' object has no attribute 'reader' ok test_constructor_with_not_writeable (test.test_io.PyBufferedRWPairTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1377, in close self.reader.close() AttributeError: 'BufferedRWPair' object has no attribute 'reader' ok test_isatty (test.test_io.PyBufferedRWPairTest) ... 
Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1377, in close self.reader.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1300, in close if self.raw is None or self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1377, in close self.reader.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1300, in close if self.raw is None or self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1377, in close self.reader.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: 
File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1300, in close if self.raw is None or self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1377, in close self.reader.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 814, in close if self.raw is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1300, in close if self.raw is None or self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'SelectableIsAtty' object has no attribute 'closed' ok test_uninitialized (test.test_io.PyBufferedRWPairTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1377, in close self.reader.close() AttributeError: 'BufferedRWPair' object has no attribute 'reader' ok test_destructor (test.test_io.CBufferedRandomTest) ... Exception ignored in: <_io.BufferedRWPair object at 0x7f2f3c5842f0> Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/support/__init__.py", line 1627, in gc_collect gc.collect() ValueError: flush of closed file Exception ignored in: <_io.BufferedWriter> Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/support/__init__.py", line 1627, in gc_collect gc.collect() ValueError: flush of closed file ok test_misbehaved_io (test.test_io.CBufferedRandomTest) ... Exception ignored in: <_io.BufferedRandom> Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/test_io.py", line 2306, in test_misbehaved_io BufferedReaderTest.test_misbehaved_io(self) OSError: Raw stream returned invalid position -123 Exception ignored in: <_io.BufferedRandom> Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/test_io.py", line 2307, in test_misbehaved_io BufferedWriterTest.test_misbehaved_io(self) OSError: Raw stream returned invalid position -123 ok test_write_non_blocking (test.test_io.CBufferedRandomTest) ... 
Exception ignored in: <_io.BufferedRandom> Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/unittest/case.py", line 611, in _callTestMethod method() io.UnsupportedOperation: seek ok test_max_buffer_size_removal (test.test_io.PyBufferedRandomTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1299, in close with self._write_lock: AttributeError: 'BufferedRandom' object has no attribute '_write_lock' ok test_misbehaved_io (test.test_io.PyBufferedRandomTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1308, in close self.flush() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1269, in flush self._flush_unlocked() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1285, in _flush_unlocked raise OSError("write() returned incorrect number of bytes") OSError: write() returned incorrect number of bytes ok test_uninitialized (test.test_io.PyBufferedRandomTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1299, in close with self._write_lock: AttributeError: 'BufferedRandom' object has no attribute '_write_lock' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 1299, in close with self._write_lock: AttributeError: 'BufferedRandom' object has no attribute '_write_lock' ok test_destructor (test.test_io.PyTextIOWrapperTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/test/test_io.py", line 2817, in close l.append(self.getvalue()) File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 896, in getvalue raise ValueError("getvalue on closed file") ValueError: getvalue on closed file ok test_issue22849 (test.test_io.PyTextIOWrapperTest) ... 
Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File 
"/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed AttributeError: 'F' object has no attribute 'closed' ok test_repr (test.test_io.PyTextIOWrapperTest) ... Exception ignored in: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 409, in __del__ self.close() File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2161, in close if self.buffer is not None and not self.closed: File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 2169, in closed return self.buffer.closed File "/home/vstinner/prog/python/master/Lib/_pyio.py", line 840, in closed return self.raw.closed AttributeError: 'NoneType' object has no attribute 'closed' ok List of failing tests: test.test_io.CBufferedRWPairTest.test_writer_close_error_on_close test.test_io.CBufferedRandomTest.test_destructor test.test_io.CBufferedRandomTest.test_misbehaved_io test.test_io.CBufferedRandomTest.test_write_non_blocking test.test_io.PyBufferedRWPairTest.test_constructor_max_buffer_size_removal test.test_io.PyBufferedRWPairTest.test_constructor_with_not_readable test.test_io.PyBufferedRWPairTest.test_constructor_with_not_writeable test.test_io.PyBufferedRWPairTest.test_isatty test.test_io.PyBufferedRWPairTest.test_uninitialized test.test_io.PyBufferedRandomTest.test_max_buffer_size_removal test.test_io.PyBufferedRandomTest.test_misbehaved_io test.test_io.PyBufferedRandomTest.test_uninitialized test.test_io.PyBufferedReaderTest.test_uninitialized test.test_io.PyBufferedWriterTest.test_max_buffer_size_removal test.test_io.PyBufferedWriterTest.test_misbehaved_io test.test_io.PyBufferedWriterTest.test_uninitialized test.test_io.PyTextIOWrapperTest.test_destructor test.test_io.PyTextIOWrapperTest.test_issue22849 test.test_io.PyTextIOWrapperTest.test_repr ---------- components: Tests messages: 345156 nosy: vstinner priority: normal severity: normal status: open title: test_io logs Exception ignored in: warnings versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 10 20:58:01 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 11 Jun 2019 00:58:01 +0000 Subject: [New-bugs-announce] [issue37224] test__xxsubinterpreters failed on AMD64 Windows8.1 Refleaks 3.8 Message-ID: <1560214681.61.0.906498246375.issue37224@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/224/builds/5 0:46:02 load avg: 1.53 [259/423/1] 
test__xxsubinterpreters failed -- running: test_httplib (1 min 6 sec), test_mmap (16 min 42 sec) beginning 6 repetitions 123456 ..Exception in thread Thread-28: Traceback (most recent call last): File "D:\buildarea\3.8.ware-win81-release.refleak\build\lib\threading.py", line 923, in _bootstrap_inner self.run() File "D:\buildarea\3.8.ware-win81-release.refleak\build\lib\threading.py", line 865, in run self._target(*self._args, **self._kwargs) File "D:\buildarea\3.8.ware-win81-release.refleak\build\lib\test\test__xxsubinterpreters.py", line 51, in run interpreters.run_string(interp, dedent(f""" RuntimeError: unrecognized interpreter ID 182 test test__xxsubinterpreters failed -- Traceback (most recent call last): File "D:\buildarea\3.8.ware-win81-release.refleak\build\lib\test\test__xxsubinterpreters.py", line 492, in test_subinterpreter self.assertTrue(interpreters.is_running(interp)) AssertionError: False is not true (...) FAIL: test_still_running (test.test__xxsubinterpreters.DestroyTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\buildarea\3.8.ware-win81-release.refleak\build\lib\test\test__xxsubinterpreters.py", line 765, in test_still_running interpreters.destroy(interp) AssertionError: RuntimeError not raised ---------- components: Tests messages: 345162 nosy: eric.snow, vstinner priority: normal severity: normal status: open title: test__xxsubinterpreters failed on AMD64 Windows8.1 Refleaks 3.8 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 03:11:42 2019 From: report at bugs.python.org (Hong Xu) Date: Tue, 11 Jun 2019 07:11:42 +0000 Subject: [New-bugs-announce] [issue37225] Signatures of Exceptions not documented Message-ID: <1560237102.52.0.133464005619.issue37225@roundup.psfhosted.org> New submission from Hong Xu : The "Builtin Exceptions" page does not document the constructors of the listed exception classes. All it says is > The tuple of arguments given to the exception constructor. Some built-in exceptions (like OSError) expect a certain number of arguments and assign a special meaning to the elements of this tuple, while others are usually called only with a single string giving an error message. This is quite vague and does not really guide users for individual exception classes. ---------- assignee: docs at python components: Documentation messages: 345195 nosy: Hong Xu, docs at python priority: normal severity: normal status: open title: Signatures of Exceptions not documented versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 04:44:59 2019 From: report at bugs.python.org (Ben Brown) Date: Tue, 11 Jun 2019 08:44:59 +0000 Subject: [New-bugs-announce] [issue37226] Asyncio Fatal Error on SSL Transport - IndexError Deque Index Out Of Range Message-ID: <1560242699.55.0.804750809475.issue37226@roundup.psfhosted.org> New submission from Ben Brown : I have been getting an intermittent errors when using asyncio with SSL. The error always occurs in the _process_write_backlog method in asyncio's sslproto.py file. 
I have looked at many possibilities as to what the cause is and found that, for some reason, the deque appears to be empty while _process_write_backlog's loop is still running. I added some quick, hacky code to confirm this: checking at each point the deque is used whether it is empty fixes the issue. I am unsure what causes it to become empty while the loop still runs. It happens most often after a successful message exchange, when the client sends a request to join a data stream; that request usually triggers the error, but sometimes it happens while the client is receiving data. I am currently using Python 3.7.1 but have also tested my code on 3.7.3 with the same result. NOTE: I am currently working on a minimal sample to demonstrate the issue more easily.

Fatal error on SSL transport protocol: transport: <_SelectorSocketTransport fd=38 read=polling write=>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/asyncio/sslproto.py", line 689, in _process_write_backlog
    del self._write_backlog[0]
IndexError: deque index out of range

Fatal error on SSL transport protocol: transport: <_SelectorSocketTransport fd=29 read=polling write=>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/asyncio/sslproto.py", line 664, in _process_write_backlog
    data, offset = self._write_backlog[0]
IndexError: deque index out of range

---------- assignee: christian.heimes components: SSL messages: 345201 nosy: ben.brown, christian.heimes priority: normal severity: normal status: open title: Asyncio Fatal Error on SSL Transport - IndexError Deque Index Out Of Range type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 04:53:06 2019 From: report at bugs.python.org (chr0139) Date: Tue, 11 Jun 2019 08:53:06 +0000 Subject: [New-bugs-announce] [issue37227] Wrong parse long argument Message-ID: <1560243186.93.0.426375916497.issue37227@roundup.psfhosted.org> New submission from chr0139 : I have this script:

import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('-l', '--list', action='store_true', help="show help")
group.add_argument('-u', '--upgrade', action='store_true', help="show help")
parser.parse_args(['--li'])

and this is the unexpected result:

Namespace(list=True, upgrade=False)

---------- components: Library (Lib) messages: 345203 nosy: chr0139 priority: normal severity: normal status: open title: Wrong parse long argument type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 05:36:54 2019 From: report at bugs.python.org (=?utf-8?q?Jukka_V=C3=A4is=C3=A4nen?=) Date: Tue, 11 Jun 2019 09:36:54 +0000 Subject: [New-bugs-announce] [issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port Message-ID: <1560245814.37.0.496048869185.issue37228@roundup.psfhosted.org> New submission from Jukka Väisänen : When using loop.create_datagram_endpoint(), the default is reuse_address=True, which sets the SO_REUSEADDR sockopt for the UDP socket. This is a dangerous and unreasonable default for UDP, because on Linux it allows multiple processes to create listening sockets for the same UDP port and the kernel will randomly give incoming packets to these processes.
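To make the hazard concrete, here is a minimal sketch of the underlying socket behaviour on Linux, outside of asyncio (the port number is arbitrary and the helper name is made up); both bind() calls succeed because both sockets set SO_REUSEADDR, which is what create_datagram_endpoint() does by default:

import socket

def bound_udp_socket(port):
    # Create a UDP socket with SO_REUSEADDR set, then bind it.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('127.0.0.1', port))
    return s

a = bound_udp_socket(9999)
b = bound_udp_socket(9999)  # no "Address already in use" error on Linux
print(a.getsockname(), b.getsockname())
a.close()
b.close()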
I discovered this by accidentally starting two Python asyncio programs with the same UDP port and instead of getting an exception that the address is already in use, everything looked to be working except half my packets went to the wrong process. The documentation also refers to behavior with TCP sockets: https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.create_datagram_endpoint "reuse_address tells the kernel to reuse a local socket in TIME_WAIT state, without waiting for its natural timeout to expire. If not specified will automatically be set to True on Unix." This is true for TCP but not UDP. Either the documentation should reflect the current (dangerous) behavior or the behavior should be changed for UDP sockets. Workaround is of course to create your own socket without SO_REUSEADDR and pass it to create_datagram_endpoint(). ---------- components: asyncio messages: 345209 nosy: Jukka V?is?nen, asvetlov, yselivanov priority: normal severity: normal status: open title: UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 06:36:53 2019 From: report at bugs.python.org (G) Date: Tue, 11 Jun 2019 10:36:53 +0000 Subject: [New-bugs-announce] [issue37229] bisect: Allow a custom compare function Message-ID: <1560249413.65.0.773454068732.issue37229@roundup.psfhosted.org> New submission from G : All of bisect's functions (insort_{left,right}, bisect_{left,right}) can only use comparison of objects via __lt__. They should support providing a custom comparison function. ---------- components: Library (Lib) messages: 345216 nosy: gpery priority: normal severity: normal status: open title: bisect: Allow a custom compare function type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 07:25:36 2019 From: report at bugs.python.org (pallavi shete) Date: Tue, 11 Jun 2019 11:25:36 +0000 Subject: [New-bugs-announce] [issue37230] Why Highly Skilled Network Engineers prefer On-Demand work? Message-ID: <1560252336.74.0.292651844665.issue37230@roundup.psfhosted.org> New submission from pallavi shete : What Is On-Demand Work? The on-demand workforce describes a shift towards an open talent economy. Rather than working as a traditional employee, skilled workers can take on contracted projects. They could be temporary jobs or ongoing agreements. While the idea of contracting is nothing new, this is a new(ish) approach. Contractors find their projects on a marketplace, like FieldEngineer, to make landing work easier than ever. This ensures that their work is as secure as traditional employment. https://www.fieldengineer.com/blogs/network-engineers-prefer-on-demand-work ---------- components: Windows messages: 345224 nosy: pallavishete, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Why Highly Skilled Network Engineers prefer On-Demand work? 
versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 09:07:40 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Tue, 11 Jun 2019 13:07:40 +0000 Subject: [New-bugs-announce] [issue37231] Optimize calling special methods Message-ID: <1560258460.83.0.570346812053.issue37231@roundup.psfhosted.org> New submission from Jeroen Demeyer : Change call_method() and related functions in Objects/typeobject.c to allow profiting from the PY_VECTORCALL_ARGUMENTS_OFFSET optimization: instead of passing "self" as separate argument, put it inside the args vector. ---------- components: Interpreter Core messages: 345231 nosy: jdemeyer priority: normal severity: normal status: open title: Optimize calling special methods type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 09:10:04 2019 From: report at bugs.python.org (Jakub Kulik) Date: Tue, 11 Jun 2019 13:10:04 +0000 Subject: [New-bugs-announce] [issue37232] Parallel compilation fails because of low ulimit. Message-ID: <1560258604.43.0.910483303566.issue37232@roundup.psfhosted.org> New submission from Jakub Kulik : When building and installing Python 3.8 on our sparc machine, the build breaks during the compileall stage with [Error 24] Too many open files. The problem is due to the recently enabled parallel compilation (issue36786). When -j0 is passed to the compileall, executor starts as many threads as there are cpus, which is problem on our sparc machine with 512 CPUs and rather low ulimit on number of opened files. Compilation fails and is not able to recover from that. For the same reason, test_compileall also breaks and never ends. I am able to fix both of these by patching both Makefile.pre.in and test_compileall.py with -j32 (instead of -j0), but that is not the best solution. I think that the best solution to this would be to make compileall.py check for this limit and consider that when creating ProcessPoolExecutor (or, maybe even better, to make it recover when this problem occurs). ---------- components: Build messages: 345232 nosy: kulikjak priority: normal severity: normal status: open title: Parallel compilation fails because of low ulimit. versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 10:43:31 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Tue, 11 Jun 2019 14:43:31 +0000 Subject: [New-bugs-announce] [issue37233] Use _PY_FASTCALL_SMALL_STACK for method_vectorcall Message-ID: <1560264211.71.0.0575753966343.issue37233@roundup.psfhosted.org> New submission from Jeroen Demeyer : The PEP 590 implementation for type "method" caused a minor regression: instead of using _PyObject_Call_Prepend(), method_vectorcall manually allocates and fills a newly allocated vector. This does NOT use the _PY_FASTCALL_SMALL_STACK optimization, but it should. 
---------- components: Interpreter Core messages: 345234 nosy: jdemeyer priority: normal severity: normal status: open title: Use _PY_FASTCALL_SMALL_STACK for method_vectorcall type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 11:26:26 2019 From: report at bugs.python.org (Mahdi Jafary) Date: Tue, 11 Jun 2019 15:26:26 +0000 Subject: [New-bugs-announce] [issue37234] error in variable Message-ID: <1560266786.16.0.932165098203.issue37234@roundup.psfhosted.org> New submission from Mahdi Jafary : this code is ok def test(t): n = 0 def f(t): print(t+str(N)) f(t) test('test') but this code have error def test(t): n = 0 def f(t): n = n+1 print(t+str(n)) f(t) test('test') we can fix this by edit code to: def test(t): n = 0 def f(t): N = n+1 print(t+str(N)) f(t) test('test') ---------- components: Windows messages: 345238 nosy: Mahdi Jafary, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: error in variable versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 11:45:18 2019 From: report at bugs.python.org (Matthew Kenigsberg) Date: Tue, 11 Jun 2019 15:45:18 +0000 Subject: [New-bugs-announce] [issue37235] urljoin behavior unclear/not following RFC 3986 Message-ID: <1560267918.98.0.870769323469.issue37235@roundup.psfhosted.org> New submission from Matthew Kenigsberg : Was trying to figure out the exact behavior of urljoin. As far as I can tell (see https://bugs.python.org/issue22118) it should follow RFC 3986. According to the algorithm in 5.2.2, I think this is wrong: >>> urljoin("ftp://netloc", "http://a/b/../c/d") 'http://a/b/../c/d' And the .. should get removed. Might be a separate issue, but at the very least, I think the docs should be updated to describe the exact behavior, or at least more directly state that the behavior defined in RFC 3986 is followed. Would be happy to write a patch if a change is needed. ---------- messages: 345243 nosy: Matthew Kenigsberg priority: normal severity: normal status: open title: urljoin behavior unclear/not following RFC 3986 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 13:19:02 2019 From: report at bugs.python.org (Paul Monson) Date: Tue, 11 Jun 2019 17:19:02 +0000 Subject: [New-bugs-announce] [issue37236] fix test_complex for Windows arm64 Message-ID: <1560273542.8.0.377927810639.issue37236@roundup.psfhosted.org> New submission from Paul Monson : There is a compiler optimization error on Windows ARM64 that causes test_truediv (test.test_complex.ComplexTest) to fail with ZeroDivisionError: complex division by zero. Adding a pragma optimize around the affected function fixes the issue. 
I am also submitting a report to the compiler team ---------- components: Windows messages: 345254 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: fix test_complex for Windows arm64 type: compile error versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 13:20:41 2019 From: report at bugs.python.org (Jilguero ostras) Date: Tue, 11 Jun 2019 17:20:41 +0000 Subject: [New-bugs-announce] [issue37237] python 2.16 from source on Ubuntu 18.04 Message-ID: <1560273641.16.0.656737490919.issue37237@roundup.psfhosted.org> New submission from Jilguero ostras : -Trying to install Python 2.16 from source on Ubuntu 18.04. Latest version is not in repositories. -Installation completes but I need to install pip from repositories -modules (e.g. numpy) installed by pip install on /usr/local/lib/python2.7/dist-packages -installed modules can not be imported on Python -path /usr/local/lib/python2.7/dist-packages is not listed on sys.path -when added manually, modules are imported with errors and they don't work -I am installing modules with -t option of pip, to be installed on /home/user01/.local/lib/python2.7/site-packages, because it is included on sys.path -modules are imported to Python with errors In addition to numpy, other modules fail. I think there is a problem with Python/pip after installation from source. ---------- messages: 345255 nosy: Jilguero ostras priority: normal severity: normal status: open title: python 2.16 from source on Ubuntu 18.04 type: resource usage versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 17:09:26 2019 From: report at bugs.python.org (Paul Monson) Date: Tue, 11 Jun 2019 21:09:26 +0000 Subject: [New-bugs-announce] [issue37238] Enable building for Windows using Visual Studio 2019 Message-ID: <1560287366.71.0.530978180943.issue37238@roundup.psfhosted.org> New submission from Paul Monson : What is the normal process for lighting up a new Visual Studio? 
---------- components: Build, Windows messages: 345269 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Enable building for Windows using Visual Studio 2019 versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 17:55:57 2019 From: report at bugs.python.org (Paul Monson) Date: Tue, 11 Jun 2019 21:55:57 +0000 Subject: [New-bugs-announce] [issue37239] Add headless development preset layout Message-ID: <1560290157.09.0.169248925355.issue37239@roundup.psfhosted.org> New submission from Paul Monson : Add preset-headless preset which has everything that preset-default has minus tcltk and idle ---------- components: Build, Windows messages: 345275 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Add headless development preset layout type: enhancement versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 18:47:11 2019 From: report at bugs.python.org (Agrim Sachdeva) Date: Tue, 11 Jun 2019 22:47:11 +0000 Subject: [New-bugs-announce] [issue37240] Filename http.py breaks calls to urllib Message-ID: <1560293231.78.0.659598330159.issue37240@roundup.psfhosted.org> New submission from Agrim Sachdeva : If a script that uses urllib is named http.py, the following error occurs: Traceback (most recent call last): File ".\http.py", line 1, in import urllib.request, urllib.parse, urllib.error File "C:\Users\grim\AppData\Local\Programs\Python\Python37\lib\urllib\request.py", line 88, in import http.client File "C:\Python\http.py", line 11, in html = urllib.request.urlopen(url).read() AttributeError: module 'urllib' has no attribute 'request' ---------- components: Windows messages: 345282 nosy: Agrim Sachdeva2, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Filename http.py breaks calls to urllib type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 11 20:13:05 2019 From: report at bugs.python.org (Jesse Bacon) Date: Wed, 12 Jun 2019 00:13:05 +0000 Subject: [New-bugs-announce] [issue37241] Item Count Error in Shelf Message-ID: <1560298385.81.0.460821118129.issue37241@roundup.psfhosted.org> New submission from Jesse Bacon : I have loaded the National Vulnerability Database from NIST for 2019 and it includes 3989 JSON Documents. This data I have placed in a shelf. when I run len(db.keys()) I get 3658. len(set(cves)) == 3989 : True When I extract the data from the shelf I have the right amount of records, 3989. I tested on python 3.7.3 and Python 3.6.5. I am concerned this is going to ruin a metric in a security report. For example, A risk exposure report may use the number of keys in a yearly vulnerability db as the baseline for a risk calculation which contrasts the number of patched CVE's. 
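For context, a rough sketch of the kind of cross-check described above (the feed layout and key names below are assumptions based on the public NVD 1.0 JSON schema, not taken from the attached data):

import json
import shelve

# Load the yearly feed and collect the CVE identifiers.
with open('nvdcve-1.0-2019.json') as fh:
    items = json.load(fh)['CVE_Items']                            # assumed schema key
ids = [item['cve']['CVE_data_meta']['ID'] for item in items]      # assumed path

# Store one record per CVE in a shelf, then compare the counts.
with shelve.open('nvd2019') as db:
    for cve_id in ids:
        db[cve_id] = True                                         # placeholder payload
with shelve.open('nvd2019') as db:
    print(len(set(ids)), len(list(db.keys())))                    # expected to be equal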
nvdcve-1.0-2019.json ---------- components: Library (Lib) files: KeyCount.png messages: 345290 nosy: jessembacon priority: normal severity: normal status: open title: Item Count Error in Shelf type: behavior versions: Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48411/KeyCount.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 04:35:31 2019 From: report at bugs.python.org (mrqianjinsi) Date: Wed, 12 Jun 2019 08:35:31 +0000 Subject: [New-bugs-announce] [issue37242] sub-process would be terminated when registered finalizer are working Message-ID: <1560328531.7.0.625749907508.issue37242@roundup.psfhosted.org> New submission from mrqianjinsi : Hi guys. I'm using the multiprocessing module to accelerate my program, and I want to do some cleanup work when a sub-process exits, but I found that the sub-process is terminated while the registered finalizer is still working. Is this behavior designed intentionally?

Code example:

import multiprocessing as mp
from multiprocessing.util import Finalize
import os
import time

def finalizer():
    time.sleep(0.2)  # some time consuming work
    print('do cleanup work: {}'.format(os.getpid()))

def worker(_):
    print('do some work: {}'.format(os.getpid()))

def initializer():
    # ref: https://github.com/python/cpython/blob/master/Lib/multiprocessing/util.py#L147
    Finalize(None, finalizer, exitpriority=1)
    # atexit doesn't work along with the multiprocessing module
    # because sub-processes exit via os._exit()
    # ref: https://docs.python.org/3/library/atexit.html
    # atexit.register(finalizer)  # doesn't work

print('main process ID: {}'.format(os.getpid()))
with mp.Pool(4, initializer=initializer) as executor:
    executor.map(worker, range(20))

gist link: https://gist.github.com/MrQianJinSi/2daf5b6a9ef08b00facdfbea5200dd28 ---------- components: Library (Lib) messages: 345310 nosy: mrqianjinsi priority: normal severity: normal status: open title: sub-process would be terminated when registered finalizer are working type: behavior versions: Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 06:05:34 2019 From: report at bugs.python.org (Michael Felt) Date: Wed, 12 Jun 2019 10:05:34 +0000 Subject: [New-bugs-announce] [issue37243] test_sendfile in asyncio crashes when os.sendfile() is not supported Message-ID: <1560333934.74.0.973153336061.issue37243@roundup.psfhosted.org> New submission from Michael Felt : issue34655 added sendfile support to asyncio. However, the `test_sendfile` test fails when called if there is no os.sendfile support.
This patch will skip the test when @unittest.skipUnless(hasattr(os, 'sendfile'), 'test needs os.sendfile()') ---------- messages: 345314 nosy: Michael.Felt priority: normal severity: normal status: open title: test_sendfile in asyncio crashes when os.sendfile() is not supported _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 07:45:11 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Jun 2019 11:45:11 +0000 Subject: [New-bugs-announce] [issue37244] test_multiprocessing_forkserver: test_resource_tracker() failed on x86 Gentoo Refleaks 3.8 Message-ID: <1560339911.07.0.931464630019.issue37244@roundup.psfhosted.org> New submission from STINNER Victor : x86 Gentoo Refleaks 3.8: https://buildbot.python.org/all/#/builders/223/builds/9 3:22:46 load avg: 5.83 [423/423/2] test_multiprocessing_forkserver failed (5 min 10 sec) beginning 6 repetitions 123456 ./buildbot/buildarea/cpython/3.8.ware-gentoo-x86.refleak/build/Lib/test/_test_multiprocessing.py:5002: ResourceWarning: unclosed file <_io.BufferedReader name=10> p = subprocess.Popen([sys.executable, ResourceWarning: Enable tracemalloc to get the object allocation traceback test test_multiprocessing_forkserver failed -- Traceback (most recent call last): File "/buildbot/buildarea/cpython/3.8.ware-gentoo-x86.refleak/build/Lib/test/_test_multiprocessing.py", line 5015, in test_resource_tracker _resource_unlink(name2, rtype) AssertionError: OSError not raised /buildbot/buildarea/cpython/3.8.ware-gentoo-x86.refleak/build/Lib/multiprocessing/resource_tracker.py:203: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' /buildbot/buildarea/cpython/3.8.ware-gentoo-x86.refleak/build/Lib/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: '//psm_dfd85228': [Errno 2] No such file or directory: '//psm_dfd85228' warnings.warn('resource_tracker: %r: %s' % (name, e)) --- Test added by: commit f22cc69b012f52882d434a5c44a004bc3aa5c33c Author: Pierre Glaser Date: Fri May 10 22:59:08 2019 +0200 bpo-36867: Make semaphore_tracker track other system resources (GH-13222) The multiprocessing.resource_tracker replaces the multiprocessing.semaphore_tracker module. Other than semaphores, resource_tracker also tracks shared_memory segment s. Patch by Pierre Glaser. ---------- components: Tests messages: 345320 nosy: vstinner priority: normal severity: normal status: open title: test_multiprocessing_forkserver: test_resource_tracker() failed on x86 Gentoo Refleaks 3.8 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 07:57:02 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Jun 2019 11:57:02 +0000 Subject: [New-bugs-announce] [issue37245] Azure Pipeline: sick macOS job on Python 3.8? 
Message-ID: <1560340622.3.0.842050597369.issue37245@roundup.psfhosted.org> New submission from STINNER Victor : I backported a change to 3.8: https://github.com/python/cpython/pull/14000 The macOS job of Azure Pipelines failed badly: https://dev.azure.com/Python/cpython/_build/results?buildId=45071&view=results * test_importlib/test_locks.py: test_deadlock() => TIMEOUT * test_multiprocessing_spawn: test_thread_safety() => TIMEOUT * test_concurrent_futures: test_pending_calls_race() => TIMEOUT * test_functools: test_threaded() => TIMEOUT * test_multiprocessing_forkserver: test_thread_safety() => TIMEOUT * test_threading: test_is_alive_after_fork() => TIMEOUT 0:20:21 load avg: 4.55 [155/423/1] test_importlib crashed (Exit code 1) -- running: test_concurrent_futures (16 min 12 sec), test_functools (13 min 30 sec), test_multiprocessing_spawn (18 min 51 sec) Timeout (0:20:00)! Thread 0x0000700004627000 (most recent call first): Thread 0x00007fff96f1a380 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/lock_tests.py", line 49 in __init__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/test_importlib/test_locks.py", line 84 in run_deadlock_avoidance_test File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/test_importlib/test_locks.py", line 89 in test_deadlock ... 0:21:30 load avg: 4.76 [160/423/2] test_multiprocessing_spawn crashed (Exit code 1) -- running: test_concurrent_futures (17 min 21 sec), test_functools (14 min 39 sec), test_threading (35 sec 923 ms) Timeout (0:20:00)! Thread 0x0000700000ff1000 (most recent call first): Thread 0x00007fff96f1a380 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 847 in start File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/support/__init__.py", line 2299 in start_threads File "/Users/vsts/agent/2.152.1/work/1/s/Lib/contextlib.py", line 113 in __enter__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/_test_multiprocessing.py", line 4138 in test_thread_safety ... 0:24:09 load avg: 4.11 [207/423/3] test_concurrent_futures crashed (Exit code 1) -- running: test_functools (17 min 18 sec), test_timeout (34 sec 14 ms), test_threading (3 min 14 sec) Timeout (0:20:00)! 
Thread 0x0000700006644000 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/concurrent/futures/thread.py", line 78 in _worker File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 865 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 923 in _bootstrap_inner File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 885 in _bootstrap Thread 0x0000700006141000 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/concurrent/futures/thread.py", line 78 in _worker File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 865 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 923 in _bootstrap_inner File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 885 in _bootstrap Thread 0x0000700005c3e000 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/concurrent/futures/thread.py", line 78 in _worker File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 865 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 923 in _bootstrap_inner File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 885 in _bootstrap Thread 0x000070000573b000 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/concurrent/futures/thread.py", line 78 in _worker File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 865 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 923 in _bootstrap_inner File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 885 in _bootstrap Thread 0x0000700005238000 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/concurrent/futures/thread.py", line 78 in _worker File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 865 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 923 in _bootstrap_inner File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 885 in _bootstrap Thread 0x00007fff96f1a380 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/setup.py", line 92 in _test_audit_hook File "/Users/vsts/agent/2.152.1/work/1/s/Lib/concurrent/futures/_base.py", line 146 in __init__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/concurrent/futures/_base.py", line 288 in wait File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/test_concurrent_futures.py", line 565 in test_pending_calls_race File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/case.py", line 628 in _callTestMethod File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/case.py", line 671 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/case.py", line 731 in __call__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/suite.py", line 122 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/suite.py", line 84 in __call__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/suite.py", line 122 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/suite.py", line 84 in __call__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/suite.py", line 122 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/suite.py", line 84 in __call__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/unittest/runner.py", line 176 in run File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/support/__init__.py", line 1984 in _run_suite File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/support/__init__.py", line 2080 in run_unittest File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/test_concurrent_futures.py", line 1300 in 
test_main File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/support/__init__.py", line 2212 in decorator File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/runtest.py", line 228 in _runtest_inner2 File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/runtest.py", line 264 in _runtest_inner File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/runtest.py", line 135 in _runtest File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/runtest.py", line 187 in runtest File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/runtest_mp.py", line 66 in run_tests_worker File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/main.py", line 611 in _main File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/main.py", line 588 in main File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/libregrtest/main.py", line 663 in main File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/regrtest.py", line 46 in _main File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/regrtest.py", line 50 in File "/Users/vsts/agent/2.152.1/work/1/s/Lib/runpy.py", line 85 in _run_code File "/Users/vsts/agent/2.152.1/work/1/s/Lib/runpy.py", line 192 in _run_module_as_main 0:26:51 load avg: 5.14 [259/423/4] test_functools crashed (Exit code 1) -- running: test_io (1 min 11 sec), test_threading (5 min 56 sec) Timeout (0:20:00)! Thread 0x000070000666f000 (most recent call first): Thread 0x00007fff96f1a380 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 847 in start File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/support/__init__.py", line 2299 in start_threads File "/Users/vsts/agent/2.152.1/work/1/s/Lib/contextlib.py", line 113 in __enter__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/test_functools.py", line 2464 in test_threaded ... 0:47:26 load avg: 1.71 [423/423/6] test_multiprocessing_forkserver crashed (Exit code 1) Timeout (0:20:00)! Thread 0x0000700009471000 (most recent call first): Thread 0x00007fff96f1a380 (most recent call first): File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 246 in __enter__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 555 in wait File "/Users/vsts/agent/2.152.1/work/1/s/Lib/threading.py", line 852 in start File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/support/__init__.py", line 2299 in start_threads File "/Users/vsts/agent/2.152.1/work/1/s/Lib/contextlib.py", line 113 in __enter__ File "/Users/vsts/agent/2.152.1/work/1/s/Lib/test/_test_multiprocessing.py", line 4138 in test_thread_safety ... 6 tests failed: test_concurrent_futures test_functools test_importlib test_multiprocessing_forkserver test_multiprocessing_spawn test_threading -- pythoninfo: 2019-06-12T02:45:41.9759180Z Py_DEBUG: Yes (sys.gettotalrefcount() present) 2019-06-12T02:45:41.9784500Z datetime.datetime.now: 2019-06-12 02:45:40.875495 2019-06-12T02:45:41.9806170Z platform.architecture: 64bit 2019-06-12T02:45:41.9797810Z os.environ[MACOSX_DEPLOYMENT_TARGET]: 10.13 2019-06-12T02:45:41.9806990Z platform.platform: macOS-10.13.6-x86_64-i386-64bit 2019-06-12T02:45:41.9801150Z os.login: _spotlight 2019-06-12T02:45:41.9817220Z socket.hostname: Mac-483.local 2019-06-12T02:45:41.9835490Z sys.version: 3.8.0b1+ (remotes/pull/14000/merge:ce71235d6, Jun 12 2019, 02:45:09) [Clang 10.0.0 (clang-1000.11.45.5)] ---------- components: Tests, macOS messages: 345325 nosy: ned.deily, ronaldoussoren, vstinner priority: normal severity: normal status: open title: Azure Pipeline: sick macOS job on Python 3.8? 
versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 07:59:44 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Wed, 12 Jun 2019 11:59:44 +0000 Subject: [New-bugs-announce] [issue37246] http.cookiejar.DefaultCookiePolicy should use current timestamp instead of last updated timestamp value for checking expiry Message-ID: <1560340784.82.0.908352351929.issue37246@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : In http.cookiejar module's DefaultCookiePolicy checking for cookie expiry uses the last updated value for _now [0] which is in three actions below. So if the cookies are extracted using the policy and the policy is used for expiry check later it could give incorrect values for expiry check. Like in below example I create a cookie with 1 second from now as expiry date. Sleeping for 1 second and using the same policy causing is_expired to work correctly but return_ok uses self._now with the older value returning the cookie. I propose using time.time() and update self._now or to change cookie.is_expired(self._now) to cookie.is_expired() where is_expired uses time.time() internally I think it's better to use the current time while using return_ok to compare the cookie expiry. One another error is that self._now is set only when one of the methods is called so evaluating a new policy against a cookie with expiry causes AttributeError since self._now is not yet set. Actions where self._now is updated * add_cookie_header * extract_cookies * set_cookie_if_ok import time from http.cookiejar import CookieJar, DefaultCookiePolicy, time2netscape from test.test_http_cookiejar import FakeResponse from urllib.request import Request delay = 1 now = time.time() + delay future = time2netscape(now) req = Request("https://example.com") headers = [f"Set-Cookie: a=b; expires={future};"] res = FakeResponse(headers, "https://example.com/") policy = DefaultCookiePolicy() jar = CookieJar(policy=policy) jar.extract_cookies(res, req) cookie = jar.make_cookies(res, req)[0] print(f"{cookie.expires = }") print(f"{policy.return_ok(cookie, req) = }") time.sleep(delay + 1) # Check for cookie expiry where is_expired() returns true print(f"Current time : {int(time.time())}") print(f"{cookie.is_expired() = }") # is_expired uses current timestamp so it returns True print(f"{policy.return_ok(cookie, req) = }") # Return now uses older timestamp and still returns the cookie as valid cookie # Output cookie.expires = 1560339939 policy.return_ok(cookie, req) = True Current time : 1560339940 cookie.is_expired() = True policy.return_ok(cookie, req) = True # Using above cookie variable against a new policy would throw AttributeError since self._now is not yet set req = Request("https://example.com") policy1 = DefaultCookiePolicy() print(policy1.return_ok(cookie, req)) # Output Traceback (most recent call last): File "/tmp/bar.py", line 31, in print(policy1.return_ok(cookie, req)) File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/http/cookiejar.py", line 1096, in return_ok if not fn(cookie, request): File "/Users/karthikeyansingaravelan/stuff/python/cpython/Lib/http/cookiejar.py", line 1128, in return_ok_expires if cookie.is_expired(self._now): AttributeError: 'DefaultCookiePolicy' object has no attribute '_now' [0] https://github.com/python/cpython/blob/a6e190e94b47324f14e22a09200c68b722d55699/Lib/http/cookiejar.py#L1128 ---------- components: Library (Lib) messages: 345327 
nosy: martin.panter, orsenthil, serhiy.storchaka, xtreak priority: normal severity: normal status: open title: http.cookiejar.DefaultCookiePolicy should use current timestamp instead of last updated timestamp value for checking expiry type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 08:11:36 2019 From: report at bugs.python.org (Jeroen van den Hout) Date: Wed, 12 Jun 2019 12:11:36 +0000 Subject: [New-bugs-announce] [issue37247] swap distutils build_ext and build_py commands to allow proper SWIG extension installation Message-ID: <1560341496.09.0.819646109086.issue37247@roundup.psfhosted.org> New submission from Jeroen van den Hout : Currently when building an extension which lists a SWIG interface (.i) file as a source file, SWIG creates the files correctly in the build_ext command. Unfortunately this command is run after the build_py command, so the python files (.py) created in the build_ext command are never copied to the installation. To fix this, swap build_ext and build_py commands in the sub_commands attribute of the build command. ---------- components: Distutils messages: 345330 nosy: Jeroen van den Hout, dstufft, eric.araujo priority: normal severity: normal status: open title: swap distutils build_ext and build_py commands to allow proper SWIG extension installation type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 08:15:53 2019 From: report at bugs.python.org (Shen Han) Date: Wed, 12 Jun 2019 12:15:53 +0000 Subject: [New-bugs-announce] [issue37248] support conversion of `func(**{} if a else b)` Message-ID: <1560341753.77.0.566490049047.issue37248@roundup.psfhosted.org> Change by Shen Han : ---------- components: 2to3 (2.x to 3.x conversion tool) nosy: Shen Han priority: normal severity: normal status: open title: support conversion of `func(**{} if a else b)` type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 08:54:43 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Wed, 12 Jun 2019 12:54:43 +0000 Subject: [New-bugs-announce] [issue37249] Missing declaration of _PyObject_GetMethod Message-ID: <1560344083.09.0.0843105430865.issue37249@roundup.psfhosted.org> New submission from Jeroen Demeyer : The function _PyObject_GetMethod is internal and private, but it should still be declared properly. Currently, it is "manually" declared in two .c files where it's used. 
---------- components: Interpreter Core messages: 345336 nosy: jdemeyer priority: normal severity: normal status: open title: Missing declaration of _PyObject_GetMethod versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 09:08:30 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Jun 2019 13:08:30 +0000 Subject: [New-bugs-announce] [issue37250] C files generated by Cython set tp_print to NULL Message-ID: <1560344910.52.0.698180941152.issue37250@roundup.psfhosted.org> New submission from STINNER Victor : Copy of Stefan Behnel's msg345305 in bpo-37221: """ Note that PyCode_New() is not the only change in 3.8 beta1 that breaks Cython generated code. The renaming of "tp_print" to "tp_vectorcall" is equally disruptive, because Cython has (or had) a work-around for CPython (mis-)behaviour that reset the field explicitly to NULL after calling PyType_Ready(), which could set it arbitrarily without good reason. So, either revert that field renaming, too, or ignore Cython generated modules for the reasoning about the change in this ticket. I'm fine with keeping things as they are now in beta-1, but we could obviously adapt to whatever beta-2 wants to change again. """ There are 2 problems: * source compatibility * ABI compatibility The following change removed PyTypeObject.tp_print and replaced it with PyTypeObject.tp_vectorcall_offset: commit aacc77fbd77640a8f03638216fa09372cc21673d Author: Jeroen Demeyer Date: Wed May 29 20:31:52 2019 +0200 bpo-36974: implement PEP 590 (GH-13185) Co-authored-by: Jeroen Demeyer Co-authored-by: Mark Shannon == ABI compatibility == In term of ABI, it means that C extensions using static type ("static PyTypeObject mytype = { .... };") is broken by this change. Honestly, I'm not sure if we ever provided any forward compatibility for static types. bpo-32388 removed "cross-version binary compatibility" on purpose in Python 3.8. It's an on-going topic, see also my notes about ABI and PyTypeObject: https://pythoncapi.readthedocs.io/type_object.html Maybe we can simply ignore this part of the problem. == Source compatibility == Cython generates C code setting tp_print explicitly to NULL. To avoid depending on Cython at installation, most (if not all) projects using Cython include C files generated by Cython in files they distribute (like tarballs). Because of that, the commit aacc77fbd77640a8f03638216fa09372cc21673d requires all these projects to regenerate their C files using Cython. In Fedora, we fixed many Python packages to always re-run Cython to regenerate all C files. But Fedora is just one way to distribute packages, it doesn't solve the problem of files distributed on PyPI, nor other Linux distribution (for example). 
Jeroen Demeyer proposed PR 14009 to fix the source compatibility: #define tp_print tp_vectorcall ---------- components: Interpreter Core messages: 345337 nosy: vstinner priority: normal severity: normal status: open title: C files generated by Cython set tp_print to NULL versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 10:00:18 2019 From: report at bugs.python.org (Jeremy Cline) Date: Wed, 12 Jun 2019 14:00:18 +0000 Subject: [New-bugs-announce] [issue37251] Mocking a MagicMock with a function spec results in an AsyncMock Message-ID: <1560348018.57.0.546920052201.issue37251@roundup.psfhosted.org> New submission from Jeremy Cline : This is related to the new AsyncMock[0] class in Python 3.8b1. A simple reproducer is: from unittest import mock mock_obj = mock.MagicMock() mock_obj.mock_func = mock.MagicMock(spec=lambda x: x) with mock.patch.object(mock_obj, "mock_func") as nested: print(type(nested)) Instead of a MagicMock (the behavior in Python 3.7) in Python 3.8b1 this results in an AsyncMock. [0]https://github.com/python/cpython/pull/9296 ---------- components: Library (Lib) messages: 345358 nosy: jcline priority: normal severity: normal status: open title: Mocking a MagicMock with a function spec results in an AsyncMock type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 10:20:14 2019 From: report at bugs.python.org (Jakub Kulik) Date: Wed, 12 Jun 2019 14:20:14 +0000 Subject: [New-bugs-announce] [issue37252] devpoll test failures on Solaris Message-ID: <1560349214.7.0.192767880732.issue37252@roundup.psfhosted.org> New submission from Jakub Kulik : test_devpoll currently ends with two failures with Python 3.8 on Solaris. First one is wrong number of arguments to devpoll.register function (which thrown the same error as expected in 3.7 but now acts differently). Second one is that register and modify no longer throw OverflowError when negative number is given as second argument, but rather a ValueError. I am not sure whether this is just a small semantics change or some bigger problem (documentation doesn't mention what error should be thrown). So I fixed it in attached pull request by changing the expected thrown error but there might be other problem as well. ---------- components: Extension Modules messages: 345365 nosy: kulikjak priority: normal severity: normal status: open title: devpoll test failures on Solaris versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 11:15:01 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 12 Jun 2019 15:15:01 +0000 Subject: [New-bugs-announce] [issue37253] PyCompilerFlags got a new cf_feature_version field Message-ID: <1560352501.33.0.419965733273.issue37253@roundup.psfhosted.org> New submission from STINNER Victor : The commit dcfcd146f8e6fc5c2fc16a4c192a0c5f5ca8c53c of bpo-35766 added a new cf_feature_version field to PyCompilerFlags. Each PyCompilerFlags variable must properly initialize this new field to PY_MINOR_VERSION. I propose to add a new _PyCompilerFlags_INIT macro to statically initialize such variable. I'm not sure if it should be public or not. 
The PyCompilerFlags structure is excluded from the stable ABI (PEP 384), but it's documented in the "The Very High Level Layer" C API documentation: https://docs.python.org/dev/c-api/veryhigh.html#c.PyCompilerFlags Structure fields are documented there: struct PyCompilerFlags { int cf_flags; } The doc is outdated. I'm not sure if it's on purpose or not. Moreover, the new PyCompilerFlags.cf_feature_version field is not documented in https://docs.python.org/dev/whatsnew/3.8.html#changes-in-the-c-api whereas C extensions using PyCompilerFlags should initialize cf_feature_version to PY_MINOR_VERSION? I'm not sure if C extensions really *must* initialize cf_feature_version, since the field is only used if PyCF_ONLY_AST flag is set in the cf_flags field. Something else, ast.parse() has been modified to use a (major, minor) version tuple rather an integer to specify the Python version in feature_version, but PyCompilerFlags still only uses the minor major. This API will be broken once the Python major version will be increased to 4, no? Would it make sense to use PY_VERSION_HEX format which includes the major version instead? Or cf_feature_version could be a structure with major and minor fields, but that might be overkill? ---------- components: Interpreter Core messages: 345368 nosy: vstinner priority: normal severity: normal status: open title: PyCompilerFlags got a new cf_feature_version field versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 11:24:48 2019 From: report at bugs.python.org (shajianrui) Date: Wed, 12 Jun 2019 15:24:48 +0000 Subject: [New-bugs-announce] [issue37254] POST large file to server (using http.server.CGIHTTPRequestHandler), always reset by server. Message-ID: <1560353088.37.0.536818039185.issue37254@roundup.psfhosted.org> New submission from shajianrui : Windows 10, python 3.7 I met a problem when using the http.server module. I set up a base server with class HTTPServer and CGIHTTPRequestHandler(Not using thread or fork) and tried to POST a large file (>2MB), then I find the server always reset the connection. In some very rare situation the post operation could be finished(Very slow) but the CGI script I'm posting to always show that an incomplete file is received(Called "incomplete file issue"). ==========First Try=========== At first I think (Actually a misunderstanding but lead to a passable walkaround) that "self.rfile.read(nbytes) " at LINE 1199 is not blocking, so it finish receiving just before the POST operation finished. Then I modify the line like this below: 1198 if self.command.lower() == "post" and nbytes > 0: 1199 #data = self.rfile.read(nbytes) ?The original line, I comment out it.? databuf = bytearray(nbytes) datacount = 0 while datacount + 1 < nbytes: buf = self.rfile.read(self.request.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF) #print("Get " + str(len(buf)) + " bytes.") for i in range(len(buf)): databuf[datacount] = buf[i] datacount += 1 if datacount == nbytes: #print("Done.") break data = bytes(databuf) ?Now get the data.? In this modification I just try to repeatedly read 65536(Default number of socket) bytes from rfile until I get nbytes of bytes. Now it works well(Correct file received), and is much faster then the POSTing process when using the original http.server module(If "incomplete file issue" appear). 
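For reference, the chunked-read workaround described above can be written more compactly. This is only a sketch with illustrative names; the 64 KiB chunk size is an assumption, not something the http.server module itself uses:

```python
def _read_exact(rfile, nbytes, chunk_size=65536):
    """Read exactly nbytes from rfile in bounded chunks (or less if the peer closes)."""
    chunks = []
    remaining = nbytes
    while remaining > 0:
        chunk = rfile.read(min(chunk_size, remaining))
        if not chunk:          # connection closed before the full body arrived
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# inside run_cgi(), instead of:  data = self.rfile.read(nbytes)
# data = _read_exact(self.rfile, nbytes)
```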
==========Second Try========== However, now I know that there is no problem with "whether it is blocking" because "self.rfile.read()" should be blocked if the file is not POSTed completely. I check the tcp stream with wireshark and find that in the middle of the transfer, the recv window of server is always 256, so I think that the problem is at the variable "rbufsize", which is transfered to makefile() when the rfile of the XXXRequestHandler Object is created. At least it is the problem of the low speed. But I dont know whether it lead to the reset operation and the incomplete file issue. I go back to the original version of the http.server module. Then I make a subclass of socketserver.StreamRequestHandler, override its setup() method(firstly I copy the codes of setup() from StreamRequestHandler, and modify Line770)(770 is the line number in socketserver module, but I create the new subclass in a new file.): 770 #self.rfile = self.connection.makefile('rb', self.rbufsize) self.rfile = self.connection.makefile('rb', 65536) Then the POST process become much faster(Then my first modification)! But the server print Error: File "c:\Users\Administrator\Desktop\cgi-server-test\modified_http_server_bad.py", line 1204, in run_cgi ?A copy of http.server module? while select.select([self.rfile._sock], [], [], 0)[0]: ?at line 1204? AttributeError: '_io.BufferedReader' object has no attribute '_sock' Because I know it want to get the socket of the current RequestHandler, I just modify http.server module and change "self.rfile._sock" into "self.connection"(I dont know if it would cause problem, it is just a walkaround). OK, It now work well again. The CGI script can get the correct file(return the correct SHA1 of the file uploaded), and the POST process is REALLY MUCH FASTER! ========= Question ========= So here is the problem: 1- What cause the server resetting the connection? Seem it is because the default buffer size of the rfile is too small. 2- What cause the cgi script getting the incomplete file? I really have no idea about it. Seems this problem also disappear if I enlarge the buffer. Other information: 1- The "incomplete file issue" usually appear at the first POST to the server, and almost all of the other POST connections are reset. 2- If the server start resetting connections, another "incomplete file issue" will never appear anymore (Actually it happen, but Chrome only show a RESET page, see 4- below.). 3- If the server start resetting connections, it take a long time to terminate the server with Ctrl+C. 4- When the connection is reset, the response printed by the cgi script is received correctly and it show that cgi script receive an incomplete file, the byte count is much fewer than correct number.(I use Chrome to do the POST, so it just show a reset message and the real response is ignored) Please help. ---------- components: Library (Lib) messages: 345370 nosy: shajianrui priority: normal severity: normal status: open title: POST large file to server (using http.server.CGIHTTPRequestHandler), always reset by server. 
type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 12:49:14 2019 From: report at bugs.python.org (Andrea Moro) Date: Wed, 12 Jun 2019 16:49:14 +0000 Subject: [New-bugs-announce] [issue37255] Pathlib: Add an expandUserPath method or argument Message-ID: <1560358154.87.0.642844029976.issue37255@roundup.psfhosted.org> New submission from Andrea Moro : Assuming the given path contains a '~' character, it would be nice to have a function to expand the given path so any further calls to an .exists doesn't fail. A prototype of the function could be # Expand the home path in *ix based systems if any if '~' in s: x = [x for x in Path(s).parts if x not in '~'] p = Path.home() for item in x: p = p.joinpath(item) ---------- components: Library (Lib) messages: 345379 nosy: Andrea Moro priority: normal severity: normal status: open title: Pathlib: Add an expandUserPath method or argument type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 15:12:35 2019 From: report at bugs.python.org (Alan De Smet) Date: Wed, 12 Jun 2019 19:12:35 +0000 Subject: [New-bugs-announce] [issue37256] urllib.request.Request documentation erroneously refers to the "final two" Message-ID: <1560366755.97.0.221740241329.issue37256@roundup.psfhosted.org> New submission from Alan De Smet : In Doc/library/urllib.request.rst, in the documentation for the class `Request`, it says ``` The final two arguments are only of interest for correct handling of third-party HTTP cookies: ``` However, three arguments follow, not two, and the last is not necessarily related to third-party cookie handling. I believe replacing "final" with "next" will correct the sentence: ``` The next two arguments are only of interest for correct handling of third-party HTTP cookies: ``` Verified present in the source releases for CPython 3.5.7, 3.6.8, 3.7.3, 3.8.0b1. It is _not_ present in 2.7.16 (as the third parameter didn't yet exist, so the existing phrasing is correct.) ---------- assignee: docs at python components: Documentation messages: 345403 nosy: Alan De Smet, docs at python priority: normal severity: normal status: open title: urllib.request.Request documentation erroneously refers to the "final two" type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 16:55:31 2019 From: report at bugs.python.org (Tim Peters) Date: Wed, 12 Jun 2019 20:55:31 +0000 Subject: [New-bugs-announce] [issue37257] obmalloc: stop simple arena thrashing Message-ID: <1560372931.23.0.979771460507.issue37257@roundup.psfhosted.org> New submission from Tim Peters : Scenario: all arenas are fully used. A program then runs a loop like: while whatever: p = malloc(n) ... free(p) At the top, a new arena has to be created, and a single object is taken out of a single pool. At the bottom, that object is freed, so the arena is empty again, and so is returned to the system. Which cycle continues so long as the loop runs. Very expensive. This is "something like" what happens in practice, and has been reported anecdotally for years, but I've never been clear on _exactly_ what programs were doing in such cases. 
Neil S pointed out this recent report here: https://mail.python.org/pipermail/python-dev/2019-February/156529.html Which may or may not be relevant. Inada? The "fix" proposed there: - if (nf == ao->ntotalpools) { + if (nf == ao->ntotalpools && ao != usable_arenas) { doesn't appeal to me, because it can lead to states where obmalloc never returns empty arenas, no matter how many pile up. For example, picture a thousand arenas each with just one used pool. The head of the arena list becomes free, so is left alone, but moves to the end of the list (which is sorted by number of free pools). Then the new head of the list becomes free, and ditto. On & on. We're left with a list containing a thousand wholly unused arenas. So I suggest instead: + if (nf == ao->ntotalpools && ao->nextarena != NULL) { That is, instead of exempting the head of the list, exempt the tail of the list. In the example above, the first 999 arenas are returned to the system, but the last one remains in the list for reuse. In general, the change would allow for at most one empty arena in the list. We can't in general predict the future, but this would be enough to stop the thrashing in the specific scenario given at the top, with no apparent serious failure modes (potentially "wasting" one arena is minor). ---------- assignee: tim.peters components: Interpreter Core messages: 345407 nosy: inada.naoki, nascheme, tim.peters priority: normal severity: normal stage: test needed status: open title: obmalloc: stop simple arena thrashing versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 17:48:28 2019 From: report at bugs.python.org (David Wang) Date: Wed, 12 Jun 2019 21:48:28 +0000 Subject: [New-bugs-announce] [issue37258] Logging cache not cleared properly when setting level Message-ID: <1560376108.37.0.126278550075.issue37258@roundup.psfhosted.org> New submission from David Wang : If you call setLevel() on a subclass of logging.Logger, it does not reset the cache for that logger. This mean that if you make some logging calls, then call setLevel(), the logger still acts like it still has its old level. See the attached python file for a reference. Currently, the user has to call logger._cache.clear() to manually clear the cache after calling setLevel(). To fix this in Python, we would have to change Logger.setLevel() in /logging/__init__.py to have the following code ``` self.level = _checkLevel(level) self.manager._clear_cache() self._cache.clear() ``` Note the following: - I made sure the subclass has a handler attached so setLevel() should work - This bug does not occur if you use logging.getLogger(). This is because logging.getLogger() returns the root logger, and the cache clear specifically targets the root logger's cache to be cleared. It occurs when the logger is specifically subclassed from logging.getLoggerClass() - The cache was added in Python 3.7, so this bug is specific to this version of python. 
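A minimal stand-alone sketch of the behaviour described above, assuming the subclass instance is created directly (so it is never registered in the manager's loggerDict and Manager._clear_cache() cannot reach it):

```python
import logging

class MyLogger(logging.Logger):
    pass

log = MyLogger("demo")                  # created directly, not via logging.getLogger()
log.addHandler(logging.StreamHandler())
log.setLevel(logging.INFO)

log.debug("hidden")                     # populates the level cache: DEBUG -> False
log.setLevel(logging.DEBUG)             # manager._clear_cache() never sees this logger
print(log.isEnabledFor(logging.DEBUG))  # on 3.7 this can still print False (stale cache)

log._cache.clear()                      # the manual workaround mentioned above
print(log.isEnabledFor(logging.DEBUG))  # True
```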
---------- components: Library (Lib) files: test.py messages: 345414 nosy: David Wang priority: normal severity: normal status: open title: Logging cache not cleared properly when setting level type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48416/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 20:14:31 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 13 Jun 2019 00:14:31 +0000 Subject: [New-bugs-announce] [issue37259] Missing Doc/whatsnew/3.9.rst file Message-ID: <1560384871.35.0.2265684667.issue37259@roundup.psfhosted.org> New submission from STINNER Victor : There is no Doc/whatsnew/3.9.rst file yet. How should new features of Python 3.9 be documented? Does someone know how to create a template for this file? ---------- assignee: docs at python components: Documentation messages: 345433 nosy: docs at python, lukasz.langa, ned.deily, vstinner priority: normal severity: normal status: open title: Missing Doc/whatsnew/3.9.rst file versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 21:10:41 2019 From: report at bugs.python.org (Jeffrey Kintscher) Date: Thu, 13 Jun 2019 01:10:41 +0000 Subject: [New-bugs-announce] [issue37260] shutil.rmtree() FileNotFoundError race condition Message-ID: <1560388241.37.0.285109327745.issue37260@roundup.psfhosted.org> New submission from Jeffrey Kintscher : shutil.rmtree() is susceptible to a race condition that can needlessly raise OSError: 1. os.scandir() returns the list of entries in a directory 2. while iterating over the list, another thread or process deletes one or more of the entries not yet iterated 3. os.unlink() or stat() raises OSError for the already deleted entry It should check for and ignore, at a minimum, FileNotFoundError because we were going to delete the entry anyways. For comparison, the 'rm -r' shell command handles this race condition by ignoring entries deleted from under it. I will submit a PR when I work out some test cases to include. ---------- components: Library (Lib) messages: 345445 nosy: Jeffrey.Kintscher, giampaolo.rodola, tarek priority: normal severity: normal status: open title: shutil.rmtree() FileNotFoundError race condition versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 12 21:43:10 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 13 Jun 2019 01:43:10 +0000 Subject: [New-bugs-announce] [issue37261] test_io leaks references on AMD64 Fedora Rawhide Refleaks 3.8 Message-ID: <1560390190.75.0.577750929769.issue37261@roundup.psfhosted.org> New submission from STINNER Victor : test_io leaks references on AMD64 Fedora Rawhide Refleaks 3.8: https://buildbot.python.org/all/#/builders/229/builds/10 test_io leaked [23208, 23204, 23208] references, sum=69620 test_io leaked [7657, 7655, 7657] memory blocks, sum=22969 The issue has been introduced by my change: commit c15a682603a47f5aef5025f6a2e3babb699273d6 Author: Victor Stinner Date: Thu Jun 13 00:23:49 2019 +0200 bpo-37223: test_io: silence destructor errors (GH-14031) * bpo-18748: Fix _pyio.IOBase destructor (closed case) (GH-13952) _pyio.IOBase destructor now does nothing if getting the closed attribute fails to better mimick _io.IOBase finalizer. 
(cherry picked from commit 4f6f7c5a611905fb6b81671547f268c226bc646a) * bpo-37223: test_io: silence destructor errors (GH-13954) Implement also MockNonBlockWriterIO.seek() method. (cherry picked from commit b589cef9c4dada2fb84ce0fae5040ecf16d9d5ef) * bpo-37223, test_io: silence last 'Exception ignored in:' (GH-14029) Use catch_unraisable_exception() to ignore 'Exception ignored in:' error when the internal BufferedWriter of the BufferedRWPair is destroyed. The C implementation doesn't give access to the internal BufferedWriter, so just ignore the warning instead. (cherry picked from commit 913fa1c8245d1cde6edb4254f4fb965cc91786ef) It seems like the root issue is the usage of catch_unraisable_exception() in test_io. class catch_unraisable_exception: def __init__(self): self.unraisable = None self._old_hook = None def _hook(self, unraisable): self.unraisable = unraisable def __enter__(self): self._old_hook = sys.unraisablehook sys.unraisablehook = self._hook return self def __exit__(self, *exc_info): # Clear the unraisable exception to explicitly break a reference cycle del self.unraisable sys.unraisablehook = self._old_hook *Sometimes* "del self.unraisable" of __exit__() triggers a new unraisable exception which calls again the _hook() method which sets again the unraisable attribute. catch_unraisable_exception resurects _io.BufferedWriter objects, and then "del self.unraisable" calls again their finalizer: iobase_finalize() is called again, and iobase_finalize() calls PyErr_WriteUnraisable() on close() failure. ---------- messages: 345449 nosy: vstinner priority: normal severity: normal status: open title: test_io leaks references on AMD64 Fedora Rawhide Refleaks 3.8 versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 03:15:04 2019 From: report at bugs.python.org (Pascal Chambon) Date: Thu, 13 Jun 2019 07:15:04 +0000 Subject: [New-bugs-announce] [issue37262] Make unittest assertions staticmethods/classmethods Message-ID: <1560410104.79.0.196722798693.issue37262@roundup.psfhosted.org> New submission from Pascal Chambon : Is there any reasons why assertXXX methods in TestCase are instance methods and not staticmethods/classmethods? Since they (to my knowledge) don't need to access an instance dict, they could be turned into instance-less methods, and thus be usable from other testing frameworks (like pytest, for those who want to use all the power of fixtures and yet benefit from advanced assertions, like Django's TestCase's assertXXX). Am I missing something here? 
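For context, the workaround commonly used today is to borrow the assertions from a throwaway TestCase instance, which works because the assert* methods never need a running test. A sketch, not a proposed API:

```python
import unittest

_assert = unittest.TestCase()        # throwaway instance, just to reach the assert* methods

def test_addition():                 # e.g. a plain pytest-style test function
    _assert.assertEqual(2 + 2, 4)
    with _assert.assertRaises(ZeroDivisionError):
        1 / 0
```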
---------- components: Tests messages: 345463 nosy: pakal priority: normal severity: normal status: open title: Make unittest assertions staticmethods/classmethods type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 03:58:27 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Thu, 13 Jun 2019 07:58:27 +0000 Subject: [New-bugs-announce] [issue37263] spawn asyncio subprocesses in a thread pool Message-ID: <1560412707.67.0.811242566129.issue37263@roundup.psfhosted.org> New submission from Andrew Svetlov : Subprocess starting can be a long blocking operation, see https://github.com/python-trio/trio/issues/1109 for discussion ---------- messages: 345472 nosy: asvetlov priority: normal severity: normal status: open title: spawn asyncio subprocesses in a thread pool _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 04:21:40 2019 From: report at bugs.python.org (Massimo Fidanza) Date: Thu, 13 Jun 2019 08:21:40 +0000 Subject: [New-bugs-announce] [issue37264] Python 3.7.3 win 64bit - unresolved external symbol PyOS_AfterFork_Child Message-ID: <1560414100.26.0.196764764517.issue37264@roundup.psfhosted.org> New submission from Massimo Fidanza : I need to build mod_wsgi under Windows 10 64bit, but I get a linking error mod_wsgi.obj : error LNK2001: unresolved external symbol PyOS_AfterFork_Child build\lib.win-amd64-3.7\mod_wsgi\server\mod_wsgi.cp37-win_amd64.pyd : fatal error LNK1120: 1 unresolved externals error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit status 1120 I checked my installation and found that inside include\intrcheck.h, PyOS_AfterFork_Child is declared, but if I run dumpbin /exports on libs\* (libpython37.a, python3.lib and python37.lib) there is only PyOS_AfterFork exported, and not PyOS_AfterFork_Child, PyOS_AfterFork_Parent and PyOS_BeforeFork. 
I have installed Python3.7.3 using "Windows x86-64 executable installer" (python-3.7.3-amd64.exe) downloaded from python.org ---------- components: Library (Lib) messages: 345474 nosy: Massimo Fidanza priority: normal severity: normal status: open title: Python 3.7.3 win 64bit - unresolved external symbol PyOS_AfterFork_Child type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 04:48:58 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 13 Jun 2019 08:48:58 +0000 Subject: [New-bugs-announce] [issue37265] Memory leaks regression caused by: Fix threading._shutdown() race condition Message-ID: <1560415738.53.0.302497815956.issue37265@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/189/builds/60 Examples: test_ssl leaked [3, 3, 3] references, sum=9 test_ssl leaked [3, 3, 3] memory blocks, sum=9 test_decimal leaked [4, 4, 4] references, sum=12 test_decimal leaked [4, 4, 4] memory blocks, sum=12 test_httplib leaked [1, 1, 1] references, sum=3 test_httplib leaked [1, 1, 1] memory blocks, sum=3 test_multiprocessing_fork leaked [27, 27, 27] references, sum=81 test_multiprocessing_fork leaked [27, 27, 27] memory blocks, sum=81 test_multiprocessing_forkserver leaked [27, 27, 27] references, sum=81 test_multiprocessing_forkserver leaked [27, 27, 27] memory blocks, sum=81 test_socketserver leaked [15, 15, 15] references, sum=45 test_socketserver leaked [15, 15, 15] memory blocks, sum=45 test_capi leaked [17, 17, 17] references, sum=51 test_capi leaked [17, 17, 17] memory blocks, sum=51 test_smtplib leaked [47, 47, 47] references, sum=141 test_smtplib leaked [47, 47, 47] memory blocks, sum=141 test_io leaked [23254, 23250, 23254] references, sum=69758 test_io leaked [7703, 7701, 7703] memory blocks, sum=23107 test_pickle leaked [6, 6, 6] references, sum=18 test_pickle leaked [6, 6, 6] memory blocks, sum=18 test_httpservers leaked [43, 43, 43] references, sum=129 test_httpservers leaked [43, 43, 43] memory blocks, sum=129 test_nntplib leaked [1, 1, 1] references, sum=3 test_nntplib leaked [1, 1, 1] memory blocks, sum=3 test_os leaked [5, 5, 5] references, sum=15 test_os leaked [5, 5, 5] memory blocks, sum=15 test_subprocess leaked [4, 4, 4] references, sum=12 test_subprocess leaked [4, 4, 4] memory blocks, sum=12 test_socket leaked [7, 7, 7] references, sum=21 test_socket leaked [7, 7, 7] memory blocks, sum=21 test_multiprocessing_spawn leaked [27, 27, 27] references, sum=81 test_multiprocessing_spawn leaked [27, 27, 27] memory blocks, sum=81 test_concurrent_futures leaked [3, 3, 3] references, sum=9 test_concurrent_futures leaked [3, 3, 3] memory blocks, sum=9 test_asyncio leaked [62, 62, 62] references, sum=186 test_asyncio leaked [62, 62, 62] memory blocks, sum=186 I used git bisect and I found... my own change :-) commit 468e5fec8a2f534f1685d59da3ca4fad425c38dd Author: Victor Stinner Date: Thu Jun 13 01:30:17 2019 +0200 bpo-36402: Fix threading._shutdown() race condition (GH-13948) Fix a race condition at Python shutdown when waiting for threads. Wait until the Python thread state of all non-daemon threads get deleted (join all non-daemon threads), rather than just wait until Python threads complete. * Add threading._shutdown_locks: set of Thread._tstate_lock locks of non-daemon threads used by _shutdown() to wait until all Python thread states get deleted. See Thread._set_tstate_lock(). 
* Add also threading._shutdown_locks_lock to protect access to threading._shutdown_locks. * Add test_finalization_shutdown() test. ---------- components: Library (Lib) messages: 345476 nosy: vstinner priority: normal severity: normal status: open title: Memory leaks regression caused by: Fix threading._shutdown() race condition versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 05:36:05 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 13 Jun 2019 09:36:05 +0000 Subject: [New-bugs-announce] [issue37266] Daemon threads must be forbidden in subinterpreters Message-ID: <1560418565.72.0.288239825756.issue37266@roundup.psfhosted.org> New submission from STINNER Victor : Py_EndInterpreter() calls threading._shutdown() which waits for non-daemon threads spawned in the subinterpreters. Problem: daemon threads continue to run after threading._shutdown(), but they rely on an interpreter which is being finalized and then deleted. Attached example shows the problem: $ ./python subinterp_daemon_thread.py hello from daemon thread Fatal Python error: Py_EndInterpreter: not the last thread Current thread 0x00007f13e5926740 (most recent call first): File "subinterp_daemon_thread.py", line 23 in Aborted (core dumped) Catching the bug in Py_EndInterpreter() is too late. IMHO we must simply deny daemon threads by design in subinterpreters for safety. In the main interpreter, we provide best effort to prevent crash at exit, but IMHO the implementation is ugly :-( ceval.c uses exit_thread_if_finalizing(): it immediately exit the current daemon thread if the threads attempts to acquire or release the GIL, whereas the interpreter is gone. Problem: we cannot release/clear some data structure at Python exit because of that. So Py_Finalize() may leak some memory by design, because of daemon threads. IMHO we can be way stricter in subinterpreters. I suggest to only modify Python 3.9. ---------- components: Interpreter Core files: subinterp_daemon_thread.py messages: 345485 nosy: vstinner priority: normal severity: normal status: open title: Daemon threads must be forbidden in subinterpreters versions: Python 3.9 Added file: https://bugs.python.org/file48417/subinterp_daemon_thread.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 06:12:30 2019 From: report at bugs.python.org (Zackery Spytz) Date: Thu, 13 Jun 2019 10:12:30 +0000 Subject: [New-bugs-announce] [issue37267] os.dup() creates an inheritable fd when handling a character file on Windows Message-ID: <1560420750.69.0.156424123047.issue37267@roundup.psfhosted.org> New submission from Zackery Spytz : In PR 13739, Eryk Sun mentioned that the Windows implementation of os.dup() returns an inheritable fd when handling a character file. A comment in _Py_dup() makes it seem as though this is due to a belief that handles for character files cannot be made non-inheritable (which is wrong). 
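A small check that shows the discrepancy. This is a sketch: on POSIX, and for regular files on Windows, it prints False as documented, while for a console/character file on Windows it was reported to print True:

```python
import os
import sys

fd = sys.stdout.fileno()          # a character device when attached to a console
dup_fd = os.dup(fd)
try:
    # os.dup() is documented to return a non-inheritable descriptor
    print(os.get_inheritable(dup_fd))
finally:
    os.close(dup_fd)
```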
---------- components: Extension Modules, Windows messages: 345494 nosy: ZackerySpytz, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.dup() creates an inheritable fd when handling a character file on Windows type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 08:11:39 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 13 Jun 2019 12:11:39 +0000 Subject: [New-bugs-announce] [issue37268] Deprecate the parser module Message-ID: <1560427899.74.0.436857498848.issue37268@roundup.psfhosted.org> New submission from STINNER Victor : The parser module should be deprecated as soon as possible according to Pablo Galindo Salgo and Guido van Rossum: * https://mail.python.org/pipermail/python-dev/2019-May/157464.html * https://bugs.python.org/issue37253#msg345398 I propose to deprecate it in Python 3.8: add a note in the documentation and emit a DeprecationWarning on "import parser". ---------- components: Library (Lib) messages: 345504 nosy: vstinner priority: normal severity: normal status: open title: Deprecate the parser module versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 14:56:04 2019 From: report at bugs.python.org (Marius Gedminas) Date: Thu, 13 Jun 2019 18:56:04 +0000 Subject: [New-bugs-announce] [issue37269] Python 3.8b1 miscompiles conditional expressions containing __debug__ Message-ID: <1560452164.81.0.0834317425739.issue37269@roundup.psfhosted.org> New submission from Marius Gedminas : Python 3.8 miscompiles the following code: $ cat /tmp/wat.py enable_debug = False if not enable_debug or not __debug__: print("you shall not pass!") $ python3.7 /tmp/wat.py you shall not pass! $ python3.8 /tmp/wat.py (no output is produced.) This is a distilled example from zope.traversing's codebase (https://github.com/zopefoundation/zope.traversing/issues/13). ---------- components: Interpreter Core messages: 345533 nosy: mgedmin priority: normal severity: normal status: open title: Python 3.8b1 miscompiles conditional expressions containing __debug__ versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 17:31:34 2019 From: report at bugs.python.org (Joe Jevnik) Date: Thu, 13 Jun 2019 21:31:34 +0000 Subject: [New-bugs-announce] [issue37270] Manage memory lifetime for all type-related objects. Message-ID: <1560461494.72.0.39613394712.issue37270@roundup.psfhosted.org> New submission from Joe Jevnik : When using PyType_FromSpec, the memory for PyType_Spec.name, Py_tp_methods, and Py_tp_members needs to somehow outlive the resulting type. This makes it hard to use this interface to generate types without just leaking the memory for these arrays, which is bad if you are programatically creating short-lived types. I posted about this on capi-sig, and the response seemed to be that it would be to replace the things that currently hold pointers into the array with copies of the data. Remove internal usages of PyMethodDef and PyGetSetDef. For PyMethodDef, change PyCFunctionObject to replace the PyMethodDef* member with a PyCFunctionBase member. The PyCFunctionBase is a new struct to hold the managed values of a PyMethodDef. 
This type is shared between PyCFunction and the various callable descriptor objects. A PyCFunctionBase is like a PyMethodDef but replaces the char* members with PyObject* members. For PyGetSetDef, inline the members on the resulting PyGetSetDescrObject, replacing all char* members with PyObject* members. The memory for the closure is *not* managed, adding support for that would likely require an API change and can be done in a future change. For the tp_name field, instead of setting it directly to the value of PyType_Spec.name, set it to the result of PyUnicode_AsUTF8(ht_name), where ht_name is the PyUnicode object created from the original spec name. This is the same trick used to properly manage this pointer for heap types when the __name__ is reassigned. ---------- components: Extension Modules messages: 345539 nosy: llllllllll priority: normal severity: normal status: open title: Manage memory lifetime for all type-related objects. versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 21:58:30 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 14 Jun 2019 01:58:30 +0000 Subject: [New-bugs-announce] [issue37271] Make multiple passes of the peephole optimizer until bytecode cannot be optimized further Message-ID: <1560477510.59.0.491449903314.issue37271@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : The peephole optimizer currently does not optimize the result of its own optimization when its possible. For example, consider the following code: if a: if b: if (c or d): foo() else: bar() else: baz() The bytecode for this is: 2 0 LOAD_GLOBAL 0 (a) 2 POP_JUMP_IF_FALSE 32 4 4 LOAD_GLOBAL 1 (b) 6 POP_JUMP_IF_FALSE 24 5 8 LOAD_GLOBAL 2 (c) 10 POP_JUMP_IF_TRUE 16 6 12 LOAD_GLOBAL 3 (d) 14 POP_JUMP_IF_FALSE 30 7 >> 16 LOAD_GLOBAL 4 (foo) 18 CALL_FUNCTION 0 20 POP_TOP 22 JUMP_ABSOLUTE 38 9 >> 24 LOAD_GLOBAL 5 (bar) 26 CALL_FUNCTION 0 28 POP_TOP >> 30 JUMP_FORWARD 6 (to 38) 11 >> 32 LOAD_GLOBAL 6 (baz) 34 CALL_FUNCTION 0 36 POP_TOP >> 38 LOAD_CONST 0 (None) 40 RETURN_VALUE Notice that the 14 POP_JUMP_IF_FALSE 30 jumps to another jump (30 JUMP_FORWARD 6 (to 38)). If we repeat the optimizations until the resulting bytecode does not change more we get: 2 0 LOAD_GLOBAL 0 (a) 2 POP_JUMP_IF_FALSE 32 4 4 LOAD_GLOBAL 1 (b) 6 POP_JUMP_IF_FALSE 24 5 8 LOAD_GLOBAL 2 (c) 10 POP_JUMP_IF_TRUE 16 6 12 LOAD_GLOBAL 3 (d) 5 14 POP_JUMP_IF_FALSE 38 7 >> 16 LOAD_GLOBAL 4 (foo) 18 CALL_FUNCTION 0 20 POP_TOP 22 JUMP_ABSOLUTE 38 9 >> 24 LOAD_GLOBAL 5 (bar) 26 CALL_FUNCTION 0 28 POP_TOP 30 JUMP_FORWARD 6 (to 38) 11 >> 32 LOAD_GLOBAL 6 (baz) 34 CALL_FUNCTION 0 36 POP_TOP >> 38 LOAD_CONST Notice that in this example the original instruction now is (14 POP_JUMP_IF_FALSE 38). 
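The jump-to-jump can be reproduced directly with the dis module (a sketch; exact offsets and opcodes depend on the interpreter version):

```python
import dis

src = """\
if a:
    if b:
        if (c or d):
            foo()
        else:
            bar()
else:
    baz()
"""

# Compiling at module level keeps a, b, c, d, foo, bar, baz as globals,
# matching the LOAD_GLOBAL instructions shown above.
dis.dis(compile(src, "<example>", "exec"))
```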
---------- components: Interpreter Core messages: 345543 nosy: pablogsal priority: normal severity: normal status: open title: Make multiple passes of the peephole optimizer until bytecode cannot be optimized further versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 23:34:40 2019 From: report at bugs.python.org (Dull Bananas) Date: Fri, 14 Jun 2019 03:34:40 +0000 Subject: [New-bugs-announce] [issue37272] pygit2 won't build Message-ID: <1560483280.0.0.881909893602.issue37272@roundup.psfhosted.org> New submission from Dull Bananas : I am not sure if this is an issue with Python or pygit2, but Pyhton 3.8 and 3.9 are unable to build the pygit2 package, and an error with gcc occurs. This is causing errors on my Travis CI build; here is a link where you can see build logs for my project that depends on pygit2: https://travis-ci.org/dullbananas/shellp ---------- components: Extension Modules messages: 345544 nosy: Dull Bananas priority: normal severity: normal status: open title: pygit2 won't build type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 13 23:49:16 2019 From: report at bugs.python.org (Luiz Amaral) Date: Fri, 14 Jun 2019 03:49:16 +0000 Subject: [New-bugs-announce] [issue37273] from pickle import rick Message-ID: <1560484156.23.0.547175119772.issue37273@roundup.psfhosted.org> New submission from Luiz Amaral : from pickle import rick print(rick) ---------- components: Library (Lib) messages: 345546 nosy: luxedo priority: normal pull_requests: 13928 severity: normal status: open title: from pickle import rick versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 00:50:52 2019 From: report at bugs.python.org (Hansol Moon) Date: Fri, 14 Jun 2019 04:50:52 +0000 Subject: [New-bugs-announce] [issue37274] Scripts folder is empty in python 3.7.3 for windows. Message-ID: <1560487852.05.0.608471969466.issue37274@roundup.psfhosted.org> New submission from Hansol Moon : I downloaded python 3.7.3 which is the latest version of python for windows. And I was going to download Pygame via pip in Python. and I found out that the scripts folder in Python is literally empty. so I was not able to use pip to install Pygame.. I tried reinstalling over and over, but I have kept getting the empty scripts folder. I have no idea how to fix this. ---------- components: Installation messages: 345550 nosy: Hansol Moon priority: normal severity: normal status: open title: Scripts folder is empty in python 3.7.3 for windows. 
type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 01:46:30 2019 From: report at bugs.python.org (Inada Naoki) Date: Fri, 14 Jun 2019 05:46:30 +0000 Subject: [New-bugs-announce] [issue37275] GetConsole(Output)CP is used even when stdin/stdout is redirected Message-ID: <1560491190.29.0.949105060129.issue37275@roundup.psfhosted.org> New submission from Inada Naoki : When stdout is redirected to file and cp65001 is used, stdout encoding is unexpectable: # Power Shell 6 (use cp65001 by default) PS C:?> python3 -c "print('????')" > ps.txt # cmd.exe C:?> chcp 65001 C:?> python3 -c "print('????')" > cmd.txt Now, ps.txt is encoded by UTF-8, but cmd.txt is encoded by cp932 (ACP). This is because: * TextIOWrapper tries `os.device_encoding(1)` * `os.device_encoding(1)` use GetConsoleOutputCP() without checking stdout is console In the example above, a console is attached when python is called from Power Shell 6, but it is not attached when python is called from cmd.exe. I think using GetConsoleOutputCP() for non console is abusing. --- There is a relating issue: UTF-8 mode doesn't override stdin,stdout,stderr encoding when console is attached. On Unix, os.device_encoding() uses locale encoding and UTF-8 mode overrides locale encoding. Good. But on Windows, os.device_encoding() uses GetConsole(Output)CP(). UTF-8 mode doesn't override it. If we stop abusing GetConsoleOutputCP(), this issue is fixed automatically. But if we keep using GetConsoleOutputCP() for stdout which is not a console, UTF-8 mode should override it. ---------- components: Windows messages: 345551 nosy: inada.naoki, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: GetConsole(Output)CP is used even when stdin/stdout is redirected type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 02:08:05 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Fri, 14 Jun 2019 06:08:05 +0000 Subject: [New-bugs-announce] [issue37276] Incorrect number of running calls in ProcessPoolExecutor Message-ID: <1560492485.93.0.0515191207525.issue37276@roundup.psfhosted.org> New submission from G?ry : In the `concurrent.futures` standard module, the number of running calls in a `ProcessPoolExecutor` is `max_workers + 1` (unexpected) instead of `max_workers` (expected) like in a `ThreadingPoolExecutor`. The following code snippet which submits 8 calls to 2 workers in a `ProcessPoolExecutor`: import concurrent.futures import time def call(): while True: time.sleep(1) if __name__ == "__main__": with concurrent.futures.ProcessPoolExecutor(max_workers=2) as executor: futures = [executor.submit(call) for _ in range(8)] for future in futures: print(future.running()) prints this (3 running calls; unexpected since there are 2 workers): > True > True > True > False > False > False > False > False while using a `ThreadPoolExecutor` prints this (2 running calls; expected): > True > True > False > False > False > False > False > False Tested on both Windows 10 and MacOS 10.14. 
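The extra call appears to come from the executor's bounded internal call queue: work items are flipped to RUNNING when they are moved onto that queue, not when a worker process actually starts executing them, and the queue is sized max_workers plus one. A quick way to see the sizing; this relies on a private constant in Lib/concurrent/futures/process.py, so treat the name as an implementation detail that may change:

```python
import concurrent.futures.process as process

max_workers = 2
# futures are marked running once they enter this bounded queue
print(max_workers + process.EXTRA_QUEUED_CALLS)   # 3
```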
---------- components: Library (Lib) messages: 345553 nosy: asvetlov, bquinlan, inada.naoki, lukasz.langa, maggyero, ned.deily, pitrou, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Incorrect number of running calls in ProcessPoolExecutor type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 04:41:30 2019 From: report at bugs.python.org (Konstantin Enchant) Date: Fri, 14 Jun 2019 08:41:30 +0000 Subject: [New-bugs-announce] [issue37277] http.cookies.SimpleCookie does not parse attribute without value (rfc2109) Message-ID: <1560501690.14.0.434114387204.issue37277@roundup.psfhosted.org> New submission from Konstantin Enchant : Very strange case but https://www.ietf.org/rfc/rfc2109.txt (see 4.1 Syntax: General) defines that "= value" is optional for attribute-value pairs for header Cookie. And SimpleCookie fully broken if meets attribute without value, example: ``` >>> from http.cookies import SimpleCookie # all ok >>> SimpleCookie('a=1') # parse fully broken and does not parse not only `test` but `a` too >>> SimpleCookie('test; a=1') # or >>> SimpleCookie('a=1; test; b=2') ``` I think the problem hasn't been noticed for so long because people usually use frameworks, for example, Django parse it correctly because has workaround - https://github.com/django/django/blob/master/django/http/cookie.py#L20. Also Go Lang handle that case too, example - https://play.golang.org/p/y0eFXVq6byK (How can you see Go Lang and Django has different behavior for that case and I think Go Lang more better do it.) The problem seems minor not but aiohttp use SimpleCookie as is (https://github.com/aio-libs/aiohttp/blob/3.5/aiohttp/web_request.py#L482) and if request has that strange cookie value mixed with other normal values - all cookies can not be parsed by aiohttp (just request.cookies is empty). In real world in my web application (based on aiohttp) it fully break authentication for request based on cookies. I hope that will be fixed for SimpleCookie without implement workaround for aiohttp like Django. ---------- messages: 345563 nosy: sirkonst priority: normal severity: normal status: open title: http.cookies.SimpleCookie does not parse attribute without value (rfc2109) versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 06:19:11 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 14 Jun 2019 10:19:11 +0000 Subject: [New-bugs-announce] [issue37278] test_asyncio: ProactorLoopCtrlC leaks one reference Message-ID: <1560507551.61.0.999190887785.issue37278@roundup.psfhosted.org> New submission from STINNER Victor : vstinner at WIN C:\vstinner\python\master>python -m test -R 3:3 test_asyncio -m test.test_asyncio.test_windows_events.ProactorLoopCtrlC.test_ctrl_c Running Debug|x64 interpreter... Run tests sequentially 0:00:00 load avg: 0.00 [1/1] test_asyncio beginning 6 repetitions 123456 ...... test_asyncio leaked [1, 1, 1] references, sum=3 test_asyncio leaked [2, 1, 1] memory blocks, sum=4 test_asyncio failed == Tests result: FAILURE == 1 test failed: test_asyncio Total duration: 13 sec 391 ms Tests result: FAILURE Attached PR fix the issue by joining the thread. 
---------- components: Tests, asyncio messages: 345570 nosy: asvetlov, vstinner, yselivanov priority: normal severity: normal status: open title: test_asyncio: ProactorLoopCtrlC leaks one reference versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 06:25:04 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Fri, 14 Jun 2019 10:25:04 +0000 Subject: [New-bugs-announce] [issue37279] asyncio sendfile sends extra data in the last chunk in fallback mode Message-ID: <1560507904.77.0.354970791225.issue37279@roundup.psfhosted.org> New submission from Andrew Svetlov : My fault introduced in 3.7 in initial async sendfile implementation ---------- components: asyncio messages: 345571 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio sendfile sends extra data in the last chunk in fallback mode type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 06:29:49 2019 From: report at bugs.python.org (Andrew Svetlov) Date: Fri, 14 Jun 2019 10:29:49 +0000 Subject: [New-bugs-announce] [issue37280] Use threadpool for reading from file for sendfile fallback mode Message-ID: <1560508189.12.0.504777268427.issue37280@roundup.psfhosted.org> New submission from Andrew Svetlov : We use thread pool executor for loop.sock_sendfile(), there is no reason to call blocking code for loop.sendfile() ---------- components: asyncio messages: 345574 nosy: asvetlov, yselivanov priority: normal severity: normal status: open title: Use threadpool for reading from file for sendfile fallback mode type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 06:32:18 2019 From: report at bugs.python.org (=?utf-8?b?0JLQsNC70LXQvdGC0LjQvSDQkdGD0YDRh9C10L3Rjw==?=) Date: Fri, 14 Jun 2019 10:32:18 +0000 Subject: [New-bugs-announce] [issue37281] asyncio Task._fut_waiter done callback Message-ID: <1560508338.36.0.821334369353.issue37281@roundup.psfhosted.org> New submission from ???????? ??????? : Future has a done_callback, but Task not, why ? Is a safe to use Task._fut_waiter future done_callback? ---------- messages: 345575 nosy: ???????? ??????? priority: normal severity: normal status: open title: asyncio Task._fut_waiter done callback type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 07:01:44 2019 From: report at bugs.python.org (=?utf-8?q?Jonat=C3=A3_Bolzan_Loss?=) Date: Fri, 14 Jun 2019 11:01:44 +0000 Subject: [New-bugs-announce] [issue37282] os problems on absolute paths containing unicode characters on windows Message-ID: <1560510104.32.0.366432757935.issue37282@roundup.psfhosted.org> New submission from Jonat? Bolzan Loss : If a absolute path is provided for some function on os module, it returns "WinError 3". Example (considering you are on C:\Users\username): import os os.mkdir(u'Exam?pl?') os.listdir(u'C:\\Users\username\Exam?pl?') Result: Traceback (most recent call last): File "", line 1, in FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\username\\Exam?pl?' ---------- components: Windows messages: 345580 nosy: Jonat? 
Bolzan Loss, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os problems on absolute paths containing unicode characters on windows versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 08:09:47 2019 From: report at bugs.python.org (=?utf-8?q?J=C3=B6rn_Jacobi?=) Date: Fri, 14 Jun 2019 12:09:47 +0000 Subject: [New-bugs-announce] [issue37283] Unexpected behavior when running installer a second time with the same arguments or unattend.xml Message-ID: <1560514187.3.0.83059947509.issue37283@roundup.psfhosted.org> New submission from J?rn Jacobi : When executing the installer with arguments or using the unattend.xml to run it without UI as documented in https://python.readthedocs.io/en/stable/using/windows.html#installing-without-ui the installer ignores the arguments or unattend.xml when it's executed again. The second time, the installer (running i modify mode) uses the default values (hereby removing the Prepenpath if set the first time) (testet with version 3.6.6-amd64 to 3.7.3-amd64 of the executable installer) To recreate, run the following : step 1 : python-3.6.6-amd64.exe /passive InstallAllUsers=1 PrependPath=1 Include_doc=0 Include_dev=0 Include_tcltk=0 Include_test=0 SimpleInstall=1 now python is install as expected. step 2 : python-3.6.6-amd64.exe /passive InstallAllUsers=1 PrependPath=1 Include_doc=0 Include_dev=0 Include_tcltk=0 Include_test=0 SimpleInstall=1 even though the options are exactly the same, the installer now removes python from path and installs tcltk, documentation, test and dev. step 3 : python-3.6.6-amd64.exe /passive InstallAllUsers=1 PrependPath=1 Include_doc=0 Include_dev=0 Include_tcltk=0 Include_test=0 SimpleInstall=1 now the installer only makes a quick check, what i would have expected in step 2, but hare only the default values are used. ---------- components: Installation messages: 345589 nosy: J?rn Jacobi priority: normal severity: normal status: open title: Unexpected behavior when running installer a second time with the same arguments or unattend.xml type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 14:02:41 2019 From: report at bugs.python.org (Eric Snow) Date: Fri, 14 Jun 2019 18:02:41 +0000 Subject: [New-bugs-announce] [issue37284] Not obvious that new required attrs of sys.implementation must have a PEP. Message-ID: <1560535361.35.0.573554677552.issue37284@roundup.psfhosted.org> New submission from Eric Snow : PEP 421 added sys.implementation for Python implementors to provide values required by stdlib code (e.g. importlib). That PEP indicates that any new required attributes must go through the PEP process. [1] That requirement isn't obvious. To fix that we should add a brief note to the sys.implementation docs [2] identifying the requirement. [1] https://www.python.org/dev/peps/pep-0421/#adding-new-required-attributes [2] https://docs.python.org/3/library/sys.html#sys.implementation ---------- assignee: docs at python components: Documentation keywords: easy messages: 345624 nosy: cheryl.sabella, docs at python, eric.snow priority: normal severity: normal stage: needs patch status: open title: Not obvious that new required attrs of sys.implementation must have a PEP. 
versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 14:29:57 2019 From: report at bugs.python.org (=?utf-8?q?Misty_De_M=C3=A9o?=) Date: Fri, 14 Jun 2019 18:29:57 +0000 Subject: [New-bugs-announce] [issue37285] Python 2.7 setup.py incorrectly double-joins SDKROOT Message-ID: <1560536997.94.0.77736391692.issue37285@roundup.psfhosted.org> New submission from Misty De M?o : Python 2.7's setup.py has incorrect behaviour when adding the SDKROOT to the beginning of the include path while searching. When searching paths, find_file first checks is_macosx_sdk_path to see if it needs to add the sdk_root: https://github.com/python/cpython/blob/2.7/setup.py#L87-L88 This is mostly correct, except one of the path prefixes it checks is /Library: https://github.com/python/cpython/blob/2.7/setup.py#L64 The Xcode CLT path is located in /Library, so this check passes. That means the /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk portion of the path gets repeated, leading to an invalid path. I recognize Python 2.7 isn't in active development, but I have a minimal patch to fix this that I will be submitting. ---------- components: Build messages: 345626 nosy: mistydemeo priority: normal severity: normal status: open title: Python 2.7 setup.py incorrectly double-joins SDKROOT type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 14:53:22 2019 From: report at bugs.python.org (=?utf-8?q?Mi=C5=82osz?=) Date: Fri, 14 Jun 2019 18:53:22 +0000 Subject: [New-bugs-announce] [issue37286] Pasting emoji crashes IDLE Message-ID: <1560538402.62.0.439555493533.issue37286@roundup.psfhosted.org> New submission from Mi?osz : On Windows 10 ---------- assignee: terry.reedy components: IDLE messages: 345630 nosy: md37, terry.reedy priority: normal severity: normal status: open title: Pasting emoji crashes IDLE type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 19:30:45 2019 From: report at bugs.python.org (Brian Quinlan) Date: Fri, 14 Jun 2019 23:30:45 +0000 Subject: [New-bugs-announce] [issue37287] picke cannot dump exceptions subclasses with different super() args Message-ID: <1560555045.5.0.822054090592.issue37287@roundup.psfhosted.org> New submission from Brian Quinlan : $ ./python.exe nopickle.py TypeError: __init__() missing 1 required positional argument: 'num' The issue is that the arguments passed to Exception.__init__ (via `super()`) are collected into `args` and then serialized by pickle e.g. >>> PickleBreaker(5).args () >>> PickleBreaker(5).__reduce_ex__(3) (, (), {'num': 5}) >>> # The 1st index is the `args` tuple Then, during load, the `args` tuple is used to initialize the Exception i.e. 
PickleBreaker(), which results in the `TypeError` See https://github.com/python/cpython/blob/master/Modules/_pickle.c#L6769 ---------- components: Library (Lib) files: nopickle.py messages: 345647 nosy: bquinlan priority: normal severity: normal status: open title: picke cannot dump exceptions subclasses with different super() args versions: Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48420/nopickle.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 19:36:17 2019 From: report at bugs.python.org (Paul Monson) Date: Fri, 14 Jun 2019 23:36:17 +0000 Subject: [New-bugs-announce] [issue37288] Windows arm32 buildbot build is broken Message-ID: <1560555377.03.0.131397449861.issue37288@roundup.psfhosted.org> New submission from Paul Monson : https://github.com/python/cpython/pull/14065 breaks building with --no-tkinter. Which breaks the Windows arm32 buildbot `C:\Users\buildbot\buildarea\3.x.monson-win-arm32.nondebug\build\PCbuild\python.vcxproj(158,9): error MSB4184: The expression "[System.IO.File]::ReadAllText(C:\Users\buildbot\buildarea\3.x.monson-win-arm32.nondebug\build\externals\tcltk-8.6.9.0\arm32\tcllicense.terms)" cannot be evaluated. Could not find a part of the path 'C:\Users\buildbot\buildarea\3.x.monson-win-arm32.nondebug\build\externals\tcltk-8.6.9.0\arm32\tcllicense.terms'.` ---------- components: Build, Windows messages: 345648 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows arm32 buildbot build is broken type: compile error versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 14 23:43:32 2019 From: report at bugs.python.org (Thomas Caswell) Date: Sat, 15 Jun 2019 03:43:32 +0000 Subject: [New-bugs-announce] [issue37289] regression in cython due to peephole optomization Message-ID: <1560570212.11.0.192149114574.issue37289@roundup.psfhosted.org> New submission from Thomas Caswell : The fix for https://bugs.python.org/issue37213 in 3498c642f4e83f3d8e2214654c0fa8e0d51cebe5 (https://github.com/python/cpython/pull/13969) seems to break building numpy (master) with cython (master) due to a pickle failure. The traceback (which is mostly in the cython source) is in an attachment. I bisected it back to this commit and tested that reverting this commit fixes the numpy build. ---------- files: np_fail.txt messages: 345655 nosy: pablogsal, scoder, serhiy.storchaka, tcaswell, vstinner priority: normal severity: normal status: open title: regression in cython due to peephole optomization type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file48421/np_fail.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 02:36:32 2019 From: report at bugs.python.org (p) Date: Sat, 15 Jun 2019 06:36:32 +0000 Subject: [New-bugs-announce] [issue37290] Mistranslation Message-ID: <1560580592.41.0.231348186327.issue37290@roundup.psfhosted.org> New submission from p : https://docs.python.org/ja/3.5/library/multiprocessing.html?highlight=multiprocessing#module-multiprocessing (?????????????????????????????????????????????????????????????????????????????????) -->??????????????????????????????????????????????????????????????????????? 
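Stepping back to the pickling failure in issue37287 above: the usual workaround is to pass the constructor arguments through to Exception.__init__ so that self.args matches the __init__ signature again. A minimal sketch, with a hypothetical class name:

```
import pickle


class PickleFriendly(Exception):
    def __init__(self, num):
        # BaseException.__reduce__ pickles self.args and re-calls
        # __init__(*args) on load, so keeping args in sync with the
        # signature avoids the "missing positional argument" TypeError.
        super().__init__(num)
        self.num = num


e = pickle.loads(pickle.dumps(PickleFriendly(5)))
print(e.num)  # 5
```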
---------- assignee: docs at python components: Documentation messages: 345659 nosy: docs at python, p priority: normal severity: normal status: open title: Mistranslation versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 07:30:49 2019 From: report at bugs.python.org (David Carlier) Date: Sat, 15 Jun 2019 11:30:49 +0000 Subject: [New-bugs-announce] [issue37291] AST - code cleanup Message-ID: <1560598249.35.0.0160143221148.issue37291@roundup.psfhosted.org> New submission from David Carlier : Removing little dead code part. ---------- messages: 345674 nosy: David Carlier priority: normal pull_requests: 13959 severity: normal status: open title: AST - code cleanup versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 09:06:31 2019 From: report at bugs.python.org (Crusader Ky) Date: Sat, 15 Jun 2019 13:06:31 +0000 Subject: [New-bugs-announce] [issue37292] _xxsubinterpreters: Can't unpickle objects defined in __main__ Message-ID: <1560603991.54.0.788079404137.issue37292@roundup.psfhosted.org> New submission from Crusader Ky : As of CPython 3.8.0b1: If one pickles an object that is defined in the __main__ module, sends it to a subinterpreter as bytes, and then tries unpickling it there, it fails saying that __main__ doesn't define it. import _xxsubinterpreters as interpreters import pickle class C: pass c = C() interp_id = interpreters.create() c_bytes = pickle.dumps(c) interpreters.run_string( interp_id, "import pickle; pickle.loads(c_bytes)", shared={"c_bytes": c_bytes}, ) If the above is executed directly with the python command-line, it fails. If it's imported from another module, it works. One would expected behaviour compatible with sub-processes spawned with the spawn method, where the__main__ of the parent process is visible to the subprocess too. Workarounds: 1 - define everything that must be pickled in an imported module 2 - use CloudPickle ---------- messages: 345680 nosy: Crusader Ky, eric.snow priority: normal severity: normal status: open title: _xxsubinterpreters: Can't unpickle objects defined in __main__ type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 09:15:17 2019 From: report at bugs.python.org (Crusader Ky) Date: Sat, 15 Jun 2019 13:15:17 +0000 Subject: [New-bugs-announce] [issue37293] concurrent.futures.InterpreterPoolExecutor Message-ID: <1560604517.31.0.263744065617.issue37293@roundup.psfhosted.org> New submission from Crusader Ky : As one of the logical consequences to PEP 554, it would be neat to have a concurrent.futures.InterpreterPoolExecutor. I wrote the initial code at https://github.com/crusaderky/subinterpreters_tests - currently missing unit tests and pickle5 buffers support. If everybody is happy with the design, I'll start working on a PR as soon as the GIL becomes per-interpreter (it's kinda pointless before that). 
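As a quick aside on the __main__ limitation reported in issue37292 above, workaround 1 simply means moving the class into an importable module, so pickle stores a reference like "mylib.C" instead of "__main__.C". A rough sketch (the module name mylib is hypothetical):

```
# mylib.py -- hypothetical helper module; the class now lives outside __main__
class C:
    pass
```

and in the driving script:

```
import pickle

import mylib

c_bytes = pickle.dumps(mylib.C())    # pickled by reference as "mylib.C"
print(type(pickle.loads(c_bytes)))   # loadable by any interpreter that can import mylib
```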
---------- components: Extension Modules messages: 345681 nosy: Crusader Ky, eric.snow priority: normal severity: normal status: open title: concurrent.futures.InterpreterPoolExecutor type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 13:49:58 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Sat, 15 Jun 2019 17:49:58 +0000 Subject: [New-bugs-announce] [issue37294] ProcessPoolExecutor fails with super Message-ID: <1560620998.84.0.271411330169.issue37294@roundup.psfhosted.org> New submission from G?ry : The following code hangs forever instead of printing "called" 10 times: from concurrent.futures import ProcessPoolExecutor class A: def f(self): print("called") class B(A): def f(self): executor = ProcessPoolExecutor(max_workers=2) futures = [executor.submit(super(B, self).f) for _ in range(10)] if __name__ == "__main__": B().f() The same code with `super(B, self)` replaced with `super()` raises the following error: > TypeError: super(type, obj): obj must be an instance or subtype of type However, replacing `ProcessPoolExecutor` with `ThreadPoolExecutor` works as expected, but only with `super(B, self)` (with `super()` it still raises the same error). ---------- components: Library (Lib) messages: 345709 nosy: asvetlov, bquinlan, inada.naoki, lukasz.langa, maggyero, ned.deily, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: ProcessPoolExecutor fails with super type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 15:17:40 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 15 Jun 2019 19:17:40 +0000 Subject: [New-bugs-announce] [issue37295] Possible optimizations for math.comb() Message-ID: <1560626260.31.0.393863097125.issue37295@roundup.psfhosted.org> New submission from Raymond Hettinger : The implementation of math.comb() is nice in that it limits the size of intermediate values as much as possible and it avoids unnecessary steps when min(k,n-k) is small with respect to k. There are some ways it can be made better or faster: 1) For small values of n, there is no need to use PyLong objects at every step. Plain C integers would suffice and we would no longer need to make repeated allocations for all the intermediate values. For 32-bit builds, this could be done for n<=30. For 64-bit builds, it applies to n<=62. Adding this fast path would likely give a 10x to 50x speed-up. 2) For slightly larger values of n, the computation could be sped-up by precomputing one or more rows of binomial coefficients (perhaps for n=75, n=150, and n=225). The calculation could then start from that row instead of from higher rows on Pascal's triangle. For example comb(100, 55) is currently computed as: comb(55,1) * 56 // 2 * 57 // 3 ... 100 // 45 <== 45 steps Instead, it could be computed as: comb(75,20) * 76 // 21 * 77 // 22 ... 100 / 45 <== 25 steps ^-- found by table lookup This gives a nice speed-up in exchange for a little memory in an array of constants (for n=75, we would need an array of length 75//2 after exploiting symmetry). Almost all cases would should show some benefit and in favorable cases like comb(76, 20) the speed-up would be nearly 75x. 3) When k is close to n/2, the current algorithm is slower than just computing (n!) / (k! * (n-k)!). 
However, the factorial method comes at the cost of more memory usage for large n. The factorial method consumes memory proportional to n*log2(n) while the current early-cancellation method uses memory proportional to n+log2(n). Above some threshold for memory pain, the current method should always be preferred. I'm not sure the factorial method should be used at all, but it is embarrassing that factorial calls sometimes beat the current C implementation: $ python3.8 -m timeit -r 11 -s 'from math import comb, factorial as fact' -s 'n=100_000' -s 'k = n//2' 'comb(n, k)' 1 loop, best of 11: 1.52 sec per loop $ python3.8 -m timeit -r 11 -s 'from math import comb, factorial as fact' -s 'n=100_000' -s 'k = n//2' 'fact(n) // (fact(k) * fact(n-k))' 1 loop, best of 11: 518 msec per loop 4) For values such as n=1_000_000 and k=500_000, the running time is very long and the routine doesn't respond to SIGINT. We could add checks for keyboard interrupts for large n. Also consider releasing the GIL. 5) The inner-loop current does a pure python subtraction than could in many cases be done with plain C integers. When n is smaller than maxsize, we could have a code path that replaces "PyNumber_Subtract(factor, _PyLong_One)" with something like "PyLong_FromUnsignedLongLong((unsigned long long)n - i)". ---------- components: Library (Lib) messages: 345711 nosy: mark.dickinson, pablogsal, rhettinger, serhiy.storchaka, tim.peters priority: normal severity: normal status: open title: Possible optimizations for math.comb() type: performance versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 15:27:39 2019 From: report at bugs.python.org (Rick) Date: Sat, 15 Jun 2019 19:27:39 +0000 Subject: [New-bugs-announce] [issue37296] pdb next vs __next__ Message-ID: <1560626859.25.0.0533027405715.issue37296@roundup.psfhosted.org> New submission from Rick : Don't know how to get version from pdb, Makefile says 2.7 Code runs but when debugging I get an error that an iterator has no next() function. It has got a __next__() function, and it runs, just not under pdb. If I create a spurious next() function, pdb runs, and calls the __next__() function. ---------- messages: 345712 nosy: tsingi priority: normal severity: normal status: open title: pdb next vs __next__ type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 21:40:57 2019 From: report at bugs.python.org (George Xie) Date: Sun, 16 Jun 2019 01:40:57 +0000 Subject: [New-bugs-announce] [issue37297] function changed when pickle bound method object Message-ID: <1560649257.15.0.526142047149.issue37297@roundup.psfhosted.org> New submission from George Xie : if we create a bound method object `f` with function object `A.f` and instance object `B()`, when pickling this bound method object: import pickle class A(): def f(self): pass class B(): def f(self): pass o = B() f = A.f.__get__(o) pf = pickle.loads(pickle.dumps(f)) print(f) print(pf) we get: > > the underlaying function are lost, changed from `A.f` to `B.f`. as pickle calls `__reduce__` method of `method object`, IMO [its implementation][1] simply ignored the real function, whcih is not right. 
I have tried a [wordaround][2]: import types import copyreg def my_reduce(obj): return (obj.__func__.__get__, (obj.__self__,)) copyreg.pickle(types.MethodType, my_reduce) [1]: https://github.com/python/cpython/blob/v3.7.3/Objects/classobject.c#L75-L89 [2]: https://stackoverflow.com/a/56614748/4201810 ---------- components: Library (Lib) messages: 345721 nosy: georgexsh priority: normal severity: normal status: open title: function changed when pickle bound method object type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 21:51:12 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 16 Jun 2019 01:51:12 +0000 Subject: [New-bugs-announce] [issue37298] IDLE: Revise html to tkinker converter for help.html Message-ID: <1560649872.44.0.457910225797.issue37298@roundup.psfhosted.org> New submission from Terry J. Reedy : Sphinx 2.? generates different html than 1.8 such that the display of Help ==> IDLE Help has extra blank lines. Among possibly other things, the contents of
<li>...</li> is wrapped in <p> ... </p> and blank lines appear between the bullet and text.

-<li>coded in 100% pure Python, using the tkinter GUI toolkit</li>
-<li>cross-platform: works mostly the same on Windows, Unix, and macOS</li>
-...
+<li><p>coded in 100% pure Python, using the tkinter GUI toolkit</p></li>
+<li><p>cross-platform: works mostly the same on Windows, Unix, and macOS</p></li>
+...
A similar issue afflicts the menu, with blank lines between the menu item and the explanation. The html original 3x/Doc/build/html/library/idle.html#index-0 looks normal in Firefox. The html parser class in help.py needs to ignore <p> within <li>.
It should specify which version of Sphinx it is compatible with. Do any of you have any idea what the html change might be about? Is there something wrong with idle.rst? ---------- assignee: terry.reedy components: IDLE messages: 345722 nosy: cheryl.sabella, markroseman, mdk, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: Revise html to tkinker converter for help.html type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 22:22:30 2019 From: report at bugs.python.org (YoSTEALTH) Date: Sun, 16 Jun 2019 02:22:30 +0000 Subject: [New-bugs-announce] [issue37299] RuntimeWarning is NOT raised Message-ID: <1560651750.57.0.52730011587.issue37299@roundup.psfhosted.org> New submission from YoSTEALTH :

from asyncio import run

async def true():
    return True

async def false():
    return False

async def error():
    a = false()
    b = true()
    return await (a or b)
    # Good Error
    # "RuntimeWarning: coroutine 'true' was never awaited print(await error())"

async def no_error():
    return await (false() or true())  # False
    # Bad Problem
    # `RuntimeWarning` is not raised for 'true'.
    # Note
    # The correct syntax is `return (await false()) or (await true()) as it should return `True`
    # not `False`

async def test():
    print(await no_error())  # False
    print(await error())  # RuntimeWarning(...), False

if __name__ == '__main__':
    run(test())

- Tested in Python 3.8.0b1 - Why does `error()` return `RuntimeWarning` but `no_error()` does not? ---------- components: asyncio messages: 345723 nosy: YoSTEALTH, asvetlov, yselivanov priority: normal severity: normal status: open title: RuntimeWarning is NOT raised type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 22:30:33 2019 From: report at bugs.python.org (hai shi) Date: Sun, 16 Jun 2019 02:30:33 +0000 Subject: [New-bugs-announce] [issue37300] a Py_XINCREF in classobject.c are not necessary Message-ID: <1560652233.24.0.743311547248.issue37300@roundup.psfhosted.org> Change by hai shi : ---------- components: Interpreter Core nosy: shihai1991 priority: normal severity: normal status: open title: a Py_XINCREF in classobject.c are not necessary type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 15 23:38:49 2019 From: report at bugs.python.org (vsbogd) Date: Sun, 16 Jun 2019 03:38:49 +0000 Subject: [New-bugs-announce] [issue37301] CGIHTTPServer doesn't handle long POST requests Message-ID: <1560656329.17.0.731326583269.issue37301@roundup.psfhosted.org> New submission from vsbogd : Steps to reproduce: Use POST request with "multipart/form-data" encoding to pass long (more than 64KiB) file to the CGI script. Expected result: Script receives the whole file. Actual result: Script receives only first part which size is about size of the TCP packet. Scripts to test issue are attached.
To run test execute: $ python test_cgi_server.py & $ python test_cgi_client.py $ kill %1 ---------- components: Library (Lib) files: test_cgi.zip messages: 345724 nosy: vsbogd priority: normal severity: normal status: open title: CGIHTTPServer doesn't handle long POST requests type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48423/test_cgi.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 00:46:08 2019 From: report at bugs.python.org (Jeffrey Kintscher) Date: Sun, 16 Jun 2019 04:46:08 +0000 Subject: [New-bugs-announce] [issue37302] Add an "onerror" callback parameter to the tempfile.TemporaryDirectory member functions Message-ID: <1560660368.0.0.203020777798.issue37302@roundup.psfhosted.org> New submission from Jeffrey Kintscher : Add an "onerror" callback parameter to the tempfile.TemporaryDirectory member functions so that the caller can perform special handling for directory items that it can't automatically delete. The caller created the undeletable directory entries, so it is reasonable to believe the caller may know how to unmake what they made. This enhancement is needed to provide the desired behavior described in issue #29982 and issue #36422. ---------- messages: 345726 nosy: Jeffrey.Kintscher, giampaolo.rodola, gvanrossum, josh.r, max, paul.moore, riccardomurri, serhiy.storchaka, steve.dower, tarek, tim.golden, zach.ware priority: normal severity: normal status: open title: Add an "onerror" callback parameter to the tempfile.TemporaryDirectory member functions _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 00:55:09 2019 From: report at bugs.python.org (Dong-hee Na) Date: Sun, 16 Jun 2019 04:55:09 +0000 Subject: [New-bugs-announce] [issue37303] Rename parameter name of imghdr what Message-ID: <1560660909.55.0.86099057914.issue37303@roundup.psfhosted.org> New submission from Dong-hee Na : Still https://github.com/python/cpython/blob/3a1d50e7e573efb577714146bed5c03b9c95f466/Lib/imghdr.py#L11 signature is def what(file, h=None). I 'd like to suggest to update it into def what(filename, h=None) Also, the doc of imghdr represents it as the filename. https://docs.python.org/3/library/imghdr.html note that sndhdr is already using this name. https://github.com/python/cpython/blob/3a1d50e7e573efb577714146bed5c03b9c95f466/Lib/sndhdr.py#L52 If this proposal is accepted, I'd like to send a patch for it. ---------- components: Library (Lib) messages: 345727 nosy: corona10 priority: normal severity: normal status: open title: Rename parameter name of imghdr what type: enhancement versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 05:46:23 2019 From: report at bugs.python.org (hai shi) Date: Sun, 16 Jun 2019 09:46:23 +0000 Subject: [New-bugs-announce] [issue37304] compiler need support in(de)crement operation or all of it should have syntax error. Message-ID: <1560678383.11.0.0486916958299.issue37304@roundup.psfhosted.org> New submission from hai shi : compiler need support unary operation of in(de)crement or all of it should have syntax error.I have found many user have confused about it(such as:https://stackoverflow.com/questions/2632677/python-integer-incrementing-with). 
Of course, it is a big change of grammar and it need core developer team to make decision. If core developer team support it, i will try my best to do it;) The behavior is: $ ./python Python 3.8.0a4+ (heads/master:e225beb, Jun 4 2019, 00:35:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> i = 1; >>> ++i 1 >>> i++ File "", line 1 i++ ^ SyntaxError: invalid syntax ---------- components: Interpreter Core messages: 345738 nosy: shihai1991 priority: normal severity: normal status: open title: compiler need support in(de)crement operation or all of it should have syntax error. type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 06:41:01 2019 From: report at bugs.python.org (=?utf-8?q?Filip_=C5=A0?=) Date: Sun, 16 Jun 2019 10:41:01 +0000 Subject: [New-bugs-announce] [issue37305] Add MIME type for Web App Manifest Message-ID: <1560681661.87.0.393263592015.issue37305@roundup.psfhosted.org> New submission from Filip ? : Web App Manifest ( https://w3c.github.io/manifest/ ) is "JSON-based manifest file that provides developers with a centralized place to put metadata associated with a web application". Although it is not required, it is recommended by W3C ( https://w3c.github.io/manifest/#media-type-registration ) that manifest uses `application/manifest+json` media type and `.webmanifest` extension. This should also be used by Python's mimetypes module. I can submit PR for this, but I don't know if it should be added to `types_map` or `common_types`. It is currently not registered with IANA, but it is "for community review and will be submitted to the IESG for review, approval, and registration with IANA". If it is not strictly needed for MIME type that it is registered with IANA, I will add it to `types_map`. If it is, I will add it `common_types` and later move it to `types_map` when it will be registered. What should I choose? ---------- messages: 345741 nosy: filips123 priority: normal severity: normal status: open title: Add MIME type for Web App Manifest type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 07:56:02 2019 From: report at bugs.python.org (flipchan) Date: Sun, 16 Jun 2019 11:56:02 +0000 Subject: [New-bugs-announce] [issue37306] "~/" not working with open() function in posix systems Message-ID: <1560686162.92.0.88188278628.issue37306@roundup.psfhosted.org> New submission from flipchan : "~/" being a shortcut to the current home directory on posix systems(bsd/linux). its not working with the build in open() function open: It works to get the path as a string: with open(str(Path.home())+'/.hey', wb') as minfil: minfil.write(nycklen) But not out of the box: with open('~/.hey', 'wb') as minfil: minfil.write(nycklen) However, "~/" is working with os.path modules: os.path.isfile("~/.hey") Tested with Python 3.6.8 ---------- components: Library (Lib) messages: 345743 nosy: flipchan priority: normal severity: normal status: open title: "~/" not working with open() function in posix systems type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 07:59:07 2019 From: report at bugs.python.org (Franklin? 
Lee) Date: Sun, 16 Jun 2019 11:59:07 +0000 Subject: [New-bugs-announce] [issue37307] isinstance/issubclass doc isn't clear on whether it's an AND or an OR. Message-ID: <1560686347.4.0.566273891061.issue37307@roundup.psfhosted.org> New submission from Franklin? Lee : isinstance: > If classinfo is not a class (type object), it may be a tuple of type objects, or may recursively contain other such tuples (other sequence types are not accepted). issubclass: > classinfo may be a tuple of class objects, in which case every entry in classinfo will be checked. It is unclear from the docs whether issubclass(bool, (int, float)) should return True (because bool is a subclass of int), or False (because bool is NOT a subclass of float). (It returns True.) It's likely also false that every entry will be checked, since presumably the function uses short-circuit logic. issubclass's doc also doesn't mention the recursive tuple case that isinstance's doc has. ---------- assignee: docs at python components: Documentation messages: 345744 nosy: docs at python, leewz priority: normal severity: normal status: open title: isinstance/issubclass doc isn't clear on whether it's an AND or an OR. type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 13:23:58 2019 From: report at bugs.python.org (Zackery Spytz) Date: Sun, 16 Jun 2019 17:23:58 +0000 Subject: [New-bugs-announce] [issue37308] Possible mojibake in mmap.mmap() when using the tagname parameter on Windows Message-ID: <1560705838.82.0.210132642555.issue37308@roundup.psfhosted.org> New submission from Zackery Spytz : mmap.mmap() passes a char * encoded as UTF-8 to CreateFileMappingA() when the tagname parameter is used. This was reported by Eryk Sun on PR 14114. ---------- components: Extension Modules, Windows messages: 345763 nosy: ZackerySpytz, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Possible mojibake in mmap.mmap() when using the tagname parameter on Windows type: behavior versions: Python 2.7, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 18:22:01 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 16 Jun 2019 22:22:01 +0000 Subject: [New-bugs-announce] [issue37309] idlelib/NEWS.txt for 3.9.0 and backports Message-ID: <1560723721.58.0.865881864361.issue37309@roundup.psfhosted.org> New submission from Terry J. Reedy : Master is 3.9.0a0 as of 2019 June 4. ---------- assignee: terry.reedy components: IDLE messages: 345785 nosy: terry.reedy priority: normal severity: normal stage: commit review status: open title: idlelib/NEWS.txt for 3.9.0 and backports type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 20:43:39 2019 From: report at bugs.python.org (Gordon Marler) Date: Mon, 17 Jun 2019 00:43:39 +0000 Subject: [New-bugs-announce] [issue37310] Solaris 11.3 w/ Studio 12.6 test_ctypes fail Message-ID: <1560732219.57.0.838922158225.issue37310@roundup.psfhosted.org> New submission from Gordon Marler : Building a 64-bit Python 3.7.3 on Solaris 11.3 with Studio 12.6, and these test_ctypes tests fail: python -m test -v test_ctypes ... 
====================================================================== FAIL: test_ints (ctypes.test.test_bitfields.C_Test) ---------------------------------------------------------------------- Traceback (most recent call last): File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/ctypes/test/test_bitfields.py", line 40, in test_ints self.assertEqual(getattr(b, name), func(byref(b), name.encode('ascii'))) AssertionError: -1 != 1 ====================================================================== FAIL: test_shorts (ctypes.test.test_bitfields.C_Test) ---------------------------------------------------------------------- Traceback (most recent call last): File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/ctypes/test/test_bitfields.py", line 47, in test_shorts self.assertEqual(getattr(b, name), func(byref(b), name.encode('ascii'))) AssertionError: -1 != 1 ====================================================================== FAIL: test_pass_by_value (ctypes.test.test_structures.StructureTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/ctypes/test/test_structures.py", line 416, in test_pass_by_value self.assertEqual(s.first, 0xdeadbeef) AssertionError: 195948557 != 3735928559 ---------------------------------------------------------------------- Ran 472 tests in 3.545s FAILED (failures=3, skipped=94) test test_ctypes failed test_ctypes failed ---------- components: Tests, ctypes messages: 345793 nosy: gmarler priority: normal severity: normal status: open title: Solaris 11.3 w/ Studio 12.6 test_ctypes fail type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 16 20:57:11 2019 From: report at bugs.python.org (Gordon Marler) Date: Mon, 17 Jun 2019 00:57:11 +0000 Subject: [New-bugs-announce] [issue37311] Solaris 11.3 w/ Studio 12.6 test_support fail Message-ID: <1560733031.84.0.520294809269.issue37311@roundup.psfhosted.org> New submission from Gordon Marler : This failure confuses me, as it seems to occur in a test_rmtree subtest, and there's a warning that test_support is modifying something towards the end: $ python -m test -W test_support ... 
====================================================================== ERROR: test_rmtree (test.test_support.TestSupport) ---------------------------------------------------------------------- Traceback (most recent call last): File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/test/support/__init__.py", line 309, in _force_run return func(*args) PermissionError: [Errno 13] Permission denied: '@test_11251_tmpd/subdir' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/test/test_support.py", line 73, in test_rmtree support.rmtree(dirpath) File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/test/support/__init__.py", line 431, in rmtree _rmtree(path) File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/test/support/__init__.py", line 411, in _rmtree _rmtree_inner(path) File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/test/support/__init__.py", line 410, in _rmtree_inner _force_run(path, os.unlink, fullname) File "/perfwork/gitwork/python/solaris/components/python/python373/Python-3.7.3/Lib/test/support/__init__.py", line 315, in _force_run return func(*args) PermissionError: [Errno 1] Not owner: '@test_11251_tmpd/subdir' ---------------------------------------------------------------------- Ran 42 tests in 34.112s FAILED (errors=1, skipped=1) Warning -- files was modified by test_support Before: [] After: ['@test_11251_tmpd/'] test test_support failed test_support failed in 34 sec 255 ms == Tests result: FAILURE == 1 test failed: test_support Total duration: 34 sec 322 ms Tests result: FAILURE ---------- components: Tests messages: 345794 nosy: gmarler priority: normal severity: normal status: open title: Solaris 11.3 w/ Studio 12.6 test_support fail type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 04:02:35 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Jun 2019 08:02:35 +0000 Subject: [New-bugs-announce] [issue37312] Remove deprecated _dummy_thread and dummy_threading modules Message-ID: <1560758555.75.0.0564764505885.issue37312@roundup.psfhosted.org> New submission from STINNER Victor : The _dummy_thread and dummy_threading modules are deprecated since Python 3.7 which now requires threading support. Attached PR removes these modules. 
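For context, dummy_threading existed only to support the old fallback idiom for thread-less builds, which 3.7 already dropped, so the idiom collapses to a plain import. A small before/after sketch:

```
# Before: the idiom dummy_threading was documented for
try:
    import threading
except ImportError:
    # only possible on the thread-less builds that Python 3.7 removed
    import dummy_threading as threading

# After the removal, a plain import is all that is needed:
import threading
```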
---------- components: Library (Lib) messages: 345810 nosy: vstinner priority: normal severity: normal status: open title: Remove deprecated _dummy_thread and dummy_threading modules versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 04:26:00 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Jun 2019 08:26:00 +0000 Subject: [New-bugs-announce] [issue37313] test_concurrent_futures stopped after 25 hours on AMD64 Windows7 SP1 3.7 Message-ID: <1560759960.52.0.187936052525.issue37313@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Windows7 SP1 3.7: https://buildbot.python.org/all/#/builders/130/builds/935 0:23:27 load avg: 7.92 [412/416] test_plistlib passed -- running: test_venv (1 min 26 sec), test_concurrent_futures (23 min 26 sec) 0:23:32 load avg: 7.29 [413/416] test_argparse passed -- running: test_venv (1 min 31 sec), test_concurrent_futures (23 min 31 sec) 0:23:34 load avg: 7.29 [414/416] test_venv passed (1 min 32 sec) -- running: test_concurrent_futures (23 min 32 sec) 0:23:37 load avg: 6.71 [415/416] test_largefile passed -- running: test_concurrent_futures (23 min 35 sec) running: test_concurrent_futures (24 min 6 sec) running: test_concurrent_futures (24 min 36 sec) ... running: test_concurrent_futures (25 hour 25 min) running: test_concurrent_futures (25 hour 26 min) running: test_concurrent_futures (25 hour 26 min) running: test_concurrent_futures (25 hour 27 min) running: test_concurrent_futures (25 hour 27 min) running: test_concurrent_futures (25 hour 28 min) program finished with exit code 1 elapsedTime=91968.468000 ---------- components: Tests messages: 345818 nosy: vstinner priority: normal severity: normal status: open title: test_concurrent_futures stopped after 25 hours on AMD64 Windows7 SP1 3.7 versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 04:42:38 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Jun 2019 08:42:38 +0000 Subject: [New-bugs-announce] [issue37314] Compilation failed on AMD64 Debian root 3.8: undefined reference to _PyTraceMalloc_NewReference Message-ID: <1560760958.7.0.704551196442.issue37314@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Debian root 3.8: https://buildbot.python.org/all/#builders/211/builds/47 libpython3.8d.a(bytesobject.o): In function `_Py_NewReference': /root/buildarea/3.8.angelico-debian-amd64/build/./Include/object.h:439: undefined reference to `_PyTraceMalloc_NewReference' /root/buildarea/3.8.angelico-debian-amd64/build/./Include/object.h:439: undefined reference to `_PyTraceMalloc_NewReference' /root/buildarea/3.8.angelico-debian-amd64/build/./Include/object.h:439: undefined reference to `_PyTraceMalloc_NewReference' /root/buildarea/3.8.angelico-debian-amd64/build/./Include/object.h:439: undefined reference to `_PyTraceMalloc_NewReference' ---------- components: Tests messages: 345822 nosy: vstinner priority: normal severity: normal status: open title: Compilation failed on AMD64 Debian root 3.8: undefined reference to _PyTraceMalloc_NewReference versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 04:49:55 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 17 Jun 2019 08:49:55 +0000 Subject: [New-bugs-announce] [issue37315] 
Deprecate using math.factorial() with floats Message-ID: <1560761395.48.0.0855458048325.issue37315@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently math.factorial() accepts integer-like objects (including objects with defined __index__) as well as float instances with integral value (but not arbitrary float-like objects with defined __float__). I suppose this was happen because factorial() was the first integer functions in the math module, and all other functions accepted floats at that time. See also issue7550. But now we have more pure integer functions in the math module: gcd, isqrt, comb, perm. Seems accepting floats in factorial was a mistake. Now we can fix it, and deprecate using factorial() with floats. Initial version of factorial() accepted also non-integral numbers (except float) with defined __int__. It was fixed in issue33083. ---------- components: Extension Modules messages: 345828 nosy: mark.dickinson, pablogsal, rhettinger, serhiy.storchaka, stutzbach priority: normal severity: normal status: open title: Deprecate using math.factorial() with floats type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 05:51:19 2019 From: report at bugs.python.org (Zackery Spytz) Date: Mon, 17 Jun 2019 09:51:19 +0000 Subject: [New-bugs-announce] [issue37316] mmap.mmap() passes the wrong variable to PySys_Audit() Message-ID: <1560765079.9.0.997846608931.issue37316@roundup.psfhosted.org> New submission from Zackery Spytz : new_mmap_object() should pass the fd variable to PySys_Audit(), not fileno. Also, there's a missing call to va_end() in PySys_Audit() itself. ---------- components: Extension Modules messages: 345834 nosy: ZackerySpytz priority: normal severity: normal status: open title: mmap.mmap() passes the wrong variable to PySys_Audit() type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 10:10:41 2019 From: report at bugs.python.org (Carlos Mermingas) Date: Mon, 17 Jun 2019 14:10:41 +0000 Subject: [New-bugs-announce] [issue37317] asyncio gather doesn't handle custom exceptions that inherit from BaseException Message-ID: <1560780641.84.0.00364750397427.issue37317@roundup.psfhosted.org> New submission from Carlos Mermingas : asyncio.gather doesn't handle custom exception exceptions that inherit from BaseException in the same manner that it handles those that inherit from Exception, regardless of whether return_exceptions is set to True or False. In the example below, I am using return_exceptions=True. If the custom exception inherits from Exception, a list printed, as expected. 
Conversely, if the custom exception inherits from BaseException, it is propagated: import asyncio class CustomException(BaseException): # It works if base class changed to Exception pass async def do_this(x): if x == 5: raise CustomException() await asyncio.sleep(1) print(f'THIS DONE: {x}') async def main(): print('BEGIN') tasks = [do_this(x) for x in range(1, 11)] result = await asyncio.gather(*tasks, return_exceptions=True) print(f'Result: {result}') print('END') asyncio.run(main()) ---------- components: Library (Lib) messages: 345861 nosy: cmermingas priority: normal severity: normal status: open title: asyncio gather doesn't handle custom exceptions that inherit from BaseException type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 11:17:02 2019 From: report at bugs.python.org (ChrisRands) Date: Mon, 17 Jun 2019 15:17:02 +0000 Subject: [New-bugs-announce] [issue37318] builtins.True exists but can't be accessed Message-ID: <1560784622.94.0.240402132833.issue37318@roundup.psfhosted.org> New submission from ChrisRands : On Python 3: >>> import builtins >>> 'True' in dir(builtins) True >>> builtins.True File "", line 1 builtins.True ^ SyntaxError: invalid syntax >>> So "True" is a keyword, I guess this explains the behaviour, but still seems odd to me? Relevant SO question: https://stackoverflow.com/questions/56633402/why-are-true-and-false-being-set-in-globals-by-this-code ---------- messages: 345863 nosy: ChrisRands priority: normal severity: normal status: open title: builtins.True exists but can't be accessed type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 15:52:35 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 17 Jun 2019 19:52:35 +0000 Subject: [New-bugs-announce] [issue37319] Deprecate using random.randrange() with non-integers Message-ID: <1560801155.55.0.20318571261.issue37319@roundup.psfhosted.org> New submission from Serhiy Storchaka : Unlike other range()-like functions random.randrange() accepts not only integers and integer-like objects (implementing __index__), but any numbers with integral values, like 3.0 or Decimal(3). In 3.8 using __int__ for implicit converting to C integers were deprecated, and likely will be forbidden in 3.9. __index__ is now preferable. I propose to deprecate accepting non-integer arguments in random.randrange() and use __index__ instead of __int__ for converting values. At end it will lead to simpler and faster code. Instead of istart = _int(start) if istart != start: raise ValueError("non-integer arg 1 for randrange()") we could use just start = _index(start) It will help also in converting the randrange() implementation to C. See also issue37315. 
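To make the proposed change concrete: randrange() currently accepts integral floats, while operator.index(), the __index__ hook the patch would switch to, rejects them. A small illustration of the behaviour as of 3.8 (not part of any patch):

```
import random
from operator import index

print(random.randrange(10.0))   # accepted today because 10.0 has an integral value
print(random.randrange(10))     # ints (and __index__ types) keep working

print(index(10))                # 10; ints and __index__ types pass through
try:
    index(10.0)
except TypeError as exc:
    print(exc)                  # floats are rejected, which is the point of the change
```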
---------- components: Library (Lib) messages: 345892 nosy: mark.dickinson, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Deprecate using random.randrange() with non-integers type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 15:54:46 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Jun 2019 19:54:46 +0000 Subject: [New-bugs-announce] [issue37320] aifc, sunau, wave: remove deprecated openfp() function Message-ID: <1560801286.41.0.0905607754393.issue37320@roundup.psfhosted.org> New submission from STINNER Victor : The openfp() function of aifc, sunau, wave is an alias to the open() function of each module and is deprecated since Python 3.7. Attached PR removes it. ---------- components: Library (Lib) messages: 345894 nosy: vstinner priority: normal severity: normal status: open title: aifc, sunau, wave: remove deprecated openfp() function versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 16:14:20 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Mon, 17 Jun 2019 20:14:20 +0000 Subject: [New-bugs-announce] [issue37321] IDLE: Make Subprocess Connection Errors consistent. Message-ID: <1560802460.36.0.0153945253902.issue37321@roundup.psfhosted.org> New submission from Terry J. Reedy : A Subprocess Connection Error (new consistent name) can be displayed by either the user or IDLE process (run and pyshell modules). The latter is much more common. Only the former was updated to refer to the newish doc section. Update the latter also. Otherwise, make more consistent. (May someday combine into one function.) ---------- assignee: terry.reedy components: IDLE messages: 345898 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: Make Subprocess Connection Errors consistent. versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 16:35:34 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Jun 2019 20:35:34 +0000 Subject: [New-bugs-announce] [issue37322] test_ssl: test_pha_required_nocert() emits a ResourceWarning Message-ID: <1560803734.85.0.206698450732.issue37322@roundup.psfhosted.org> New submission from STINNER Victor : vstinner at apu$ ./python -X tracemalloc=10 -m test --fail-env-changed -v test_ssl -m test_pha_required_nocert == CPython 3.9.0a0 (heads/master-dirty:00f6493084, Jun 17 2019, 21:50:32) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)] == Linux-5.1.6-300.fc30.x86_64-x86_64-with-glibc2.29 little-endian == cwd: /home/vstinner/prog/python/master/build/test_python_23407 == CPU count: 8 == encodings: locale=UTF-8, FS=utf-8 Run tests sequentially 0:00:00 load avg: 0.58 [1/1] test_ssl test_ssl: testing with 'OpenSSL 1.1.1c FIPS 28 May 2019' (1, 1, 1, 3, 15) under 'Linux-5.1.6-300.fc30.x86_64-x86_64-with-glibc2.29' HAS_SNI = True OP_ALL = 0x80000054 OP_NO_TLSv1_1 = 0x10000000 test_pha_required_nocert (test.test_ssl.TestPostHandshakeAuth) ... 
Exception in thread Thread-2: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/test_ssl.py", line 2287, in run msg = self.read() File "/home/vstinner/prog/python/master/Lib/test/test_ssl.py", line 2264, in read return self.sslconn.read() File "/home/vstinner/prog/python/master/Lib/ssl.py", line 1090, in read return self._sslobj.read(len) ssl.SSLError: [SSL: PEER_DID_NOT_RETURN_A_CERTIFICATE] peer did not return a certificate (_ssl.c:2540) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/threading.py", line 938, in _bootstrap_inner self.run() File "/home/vstinner/prog/python/master/Lib/test/test_ssl.py", line 2373, in run raise ssl.SSLError('tlsv13 alert certificate required') ssl.SSLError: ('tlsv13 alert certificate required',) /home/vstinner/prog/python/master/Lib/threading.py:938: ResourceWarning: unclosed self.run() Object allocated at (most recent call last): File "/home/vstinner/prog/python/master/Lib/threading.py", lineno 896 self._bootstrap_inner() File "/home/vstinner/prog/python/master/Lib/threading.py", lineno 938 self.run(). File "/home/vstinner/prog/python/master/Lib/test/test_ssl.py", lineno 2283 if not self.wrap_conn(): File "/home/vstinner/prog/python/master/Lib/test/test_ssl.py", lineno 2207 self.sslconn = self.server.context.wrap_socket( File "/home/vstinner/prog/python/master/Lib/ssl.py", lineno 489 return self.sslsocket_class._create( File "/home/vstinner/prog/python/master/Lib/ssl.py", lineno 992 self = cls.__new__(cls, **kwargs) ok ---------------------------------------------------------------------- Ran 1 test in 0.100s OK == Tests result: SUCCESS == 1 test OK. Total duration: 1 sec 430 ms Tests result: SUCCESS The test fails using -Werror. ---------- components: Tests messages: 345903 nosy: vstinner priority: normal severity: normal status: open title: test_ssl: test_pha_required_nocert() emits a ResourceWarning versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 16:37:03 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Jun 2019 20:37:03 +0000 Subject: [New-bugs-announce] [issue37323] test_asyncio: test_debug_mode_interop() fails using -Werror Message-ID: <1560803823.97.0.920531372035.issue37323@roundup.psfhosted.org> New submission from STINNER Victor : vstinner at apu$ PYTHONWARNINGS=error ./python -Werror -m test -v test_asyncio -m test_debug_mode_interop == CPython 3.9.0a0 (heads/master-dirty:00f6493084, Jun 17 2019, 21:50:32) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)] == Linux-5.1.6-300.fc30.x86_64-x86_64-with-glibc2.29 little-endian == cwd: /home/vstinner/prog/python/master/build/test_python_23510 == CPU count: 8 == encodings: locale=UTF-8, FS=utf-8 Run tests sequentially 0:00:00 load avg: 0.73 [1/1] test_asyncio test_debug_mode_interop (test.test_asyncio.test_tasks.CompatibilityTests) ... 
FAIL ====================================================================== FAIL: test_debug_mode_interop (test.test_asyncio.test_tasks.CompatibilityTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/test_asyncio/test_tasks.py", line 3355, in test_debug_mode_interop assert_python_ok("-c", code, PYTHONASYNCIODEBUG="1") File "/home/vstinner/prog/python/master/Lib/test/support/script_helper.py", line 157, in assert_python_ok return _assert_python(True, *args, **env_vars) File "/home/vstinner/prog/python/master/Lib/test/support/script_helper.py", line 143, in _assert_python res.fail(cmd_line) File "/home/vstinner/prog/python/master/Lib/test/support/script_helper.py", line 70, in fail raise AssertionError("Process return code is %d\n" AssertionError: Process return code is 1 command line: ['/home/vstinner/prog/python/master/python', '-X', 'faulthandler', '-c', '\nimport asyncio\n\nasync def native_coro():\n pass\n\n at asyncio.coroutine\ndef old_style_coro():\n yield from native_coro()\n\nasyncio.run(old_style_coro())\n'] stdout: --- --- stderr: --- Traceback (most recent call last): File "", line 8, in File "/home/vstinner/prog/python/master/Lib/asyncio/coroutines.py", line 111, in coroutine warnings.warn('"@coroutine" decorator is deprecated since Python 3.8, use "async def" instead', DeprecationWarning: "@coroutine" decorator is deprecated since Python 3.8, use "async def" instead --- ---------------------------------------------------------------------- Ran 1 test in 0.210s FAILED (failures=1) test test_asyncio failed test_asyncio failed == Tests result: FAILURE == 1 test failed: test_asyncio Total duration: 598 ms Tests result: FAILURE ---------- components: Tests messages: 345904 nosy: vstinner priority: normal severity: normal status: open title: test_asyncio: test_debug_mode_interop() fails using -Werror versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 16:38:32 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 17 Jun 2019 20:38:32 +0000 Subject: [New-bugs-announce] [issue37324] collections: remove deprecated aliases to ABC classes Message-ID: <1560803912.16.0.0813723259969.issue37324@roundup.psfhosted.org> New submission from STINNER Victor : Extract of collections documentation: https://docs.python.org/dev/library/collections.html "Deprecated since version 3.3, will be removed in version 3.9: Moved Collections Abstract Base Classes to the collections.abc module. For backwards compatibility, they continue to be visible in this module through Python 3.8." Attached PR removes these aliases. ---------- components: Library (Lib) messages: 345905 nosy: vstinner priority: normal severity: normal status: open title: collections: remove deprecated aliases to ABC classes versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 22:07:11 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 18 Jun 2019 02:07:11 +0000 Subject: [New-bugs-announce] [issue37325] IDLE: fix Query subclass tab focus traversal order Message-ID: <1560823632.0.0.312919780261.issue37325@roundup.psfhosted.org> New submission from Terry J. Reedy : query.Query creates a popup with entry and ok/cancel buttons, in that order. Tabbing moves from the entry in that order. 
Currently, subclasses that add widgets add them after the 3 above, so they follow Cancel in the tab order. They do this by overriding create_widgets and initially calling super.create_widgets. Added widgets should follow the entry box and precede the exit buttons. To do this, they should be created in between. Proposed solution: create_widgets calls 'extra_widgets' (pass for Query) after creating the Entry. Subclasses override to add widgets. ---------- assignee: terry.reedy components: IDLE messages: 345940 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: fix Query subclass tab focus traversal order type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 17 22:36:56 2019 From: report at bugs.python.org (Gregory Szorc) Date: Tue, 18 Jun 2019 02:36:56 +0000 Subject: [New-bugs-announce] [issue37326] Python distributions do not contain libffi license Message-ID: <1560825416.41.0.74258395972.issue37326@roundup.psfhosted.org> New submission from Gregory Szorc : The Modules/_ctypes/libffi_msvc/ffi.h and Modules/_ctypes/libffi_osx/include/ffi.h files in the CPython repository contain copies of libffi. On at least the official Windows distributions, the LICENSE.txt does not contain libffi's license text from these files. This is seemingly in violation of libffi's license agreement, which clearly says "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software." ---------- components: Build messages: 345944 nosy: indygreg priority: normal severity: normal status: open title: Python distributions do not contain libffi license type: enhancement versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 18 02:17:11 2019 From: report at bugs.python.org (aixian le) Date: Tue, 18 Jun 2019 06:17:11 +0000 Subject: [New-bugs-announce] [issue37327] python re bug Message-ID: <1560838631.6.0.225714277918.issue37327@roundup.psfhosted.org> New submission from aixian le : the code is: banner = "HTTP/1.0 404 Not Found\r\nDate: Mon, 17 Jun 2019 13:15:44 GMT\r\nServer: \r\nConnection: close\r\nContent-Type: text/html\r\n\r\n404 Not Found\r\n

    404 Not Found

    \r\nThe requested URL /PSIA/index was not found on this server.\r\n\r\n" regex = "^HTTP/1\\.0 404 Not Found\\r\\n(?:[^<]+|<(?!/head>))*?