From report at bugs.python.org Fri Feb 1 03:49:13 2019 From: report at bugs.python.org (Marc Schlaich) Date: Fri, 01 Feb 2019 08:49:13 +0000 Subject: [New-bugs-announce] [issue35872] Creating venv from venv no longer works in 3.7.2 Message-ID: <1549010953.4.0.913686801166.issue35872@roundup.psfhosted.org> New submission from Marc Schlaich : Creating a venv from the python.exe from another venv does not work since 3.7.2 on Windows. This is probably related to the change bpo-34977: venv on Windows will now use a python.exe redirector rather than copying the actual binaries from the base environment. For example running c:\users\$USER\.local\pipx\venvs\pipx-app\scripts\python.exe -m venv C:\Users\$USER\.local\pipx\venvs\tox C:\Users\$USER\.local\pipx\venvs\tox\Scripts\python.exe -m pip install --upgrade pip results in pip installing to venvs\pipx-app and not in venvs\tox. Downstream bugreport in pipx is https://github.com/pipxproject/pipx-app/issues/81. ---------- components: Library (Lib), Windows messages: 334658 nosy: paul.moore, schlamar, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Creating venv from venv no longer works in 3.7.2 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 04:03:04 2019 From: report at bugs.python.org (Marc Schlaich) Date: Fri, 01 Feb 2019 09:03:04 +0000 Subject: [New-bugs-announce] [issue35873] Controlling venv from venv no longer works in 3.7.2 Message-ID: <1549011784.82.0.296319027076.issue35873@roundup.psfhosted.org> New submission from Marc Schlaich : Controlling a venv from the python.exe from another venv does not work since 3.7.2 on Windows. This is probably related to the change bpo-34977: venv on Windows will now use a python.exe redirector rather than copying the actual binaries from the base environment. This is obviously related to bpo-35872, but this could be a different bug. When a Python script in a venv wants to control another venv by running commands like `another-venv\python.exe -m pip` with subprocess, python.exe is not referencing the other venv. It is referencing to the venv the script currently running from. This is probably because os.environ contains a '__PYVENV_LAUNCHER__' which is pointing to the venv from the script. This can be reproduced with pipx (https://github.com/pipxproject/pipx-app) by running pipx install --python "C:\Program Files (x86)\Python37-32\python.exe" --verbose tox This results in pip installing to venvs\pipx-app and not in venvs\tox. I assume a simpler reproduction might be (but I cannot check this anymore as I'm back on 3.7.1 right now): C:\Program Files (x86)\Python37-32\python.exe -m venv C:\Users\$USER\.local\pipx\venvs\tox c:\users\$USER\.local\pipx\venvs\pipx-app\scripts\python.exe -c "import subprocess; subprocess.run(['C:\Users\$USER\.local\pipx\venvs\tox\Scripts\python.exe', '-m', 'pip', 'install', 'tox'])" Downstream bugreport in pipx is https://github.com/pipxproject/pipx-app/issues/81. 
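A minimal workaround sketch, building on the report's own hypothesis that the inherited __PYVENV_LAUNCHER__ variable is what misdirects the second venv's python.exe (the path below is a placeholder and the snippet is untested on 3.7.2):

import os
import subprocess

env = os.environ.copy()
env.pop('__PYVENV_LAUNCHER__', None)  # don't let the child inherit the parent venv's redirection
subprocess.run(
    [r'C:\Users\user\.local\pipx\venvs\tox\Scripts\python.exe',
     '-m', 'pip', 'install', 'tox'],
    env=env, check=True,
)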
---------- components: Library (Lib), Windows messages: 334659 nosy: paul.moore, schlamar, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Controlling venv from venv no longer works in 3.7.2 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 04:03:29 2019 From: report at bugs.python.org (Antony Lee) Date: Fri, 01 Feb 2019 09:03:29 +0000 Subject: [New-bugs-announce] [issue35874] Clarify that the (...) convertor to PyArg_ParseTuple... accepts any sequence. Message-ID: <1549011809.83.0.066679441781.issue35874@roundup.psfhosted.org> New submission from Antony Lee : The documentation for the accepted types for each format unit in PyArg_ParseTuple (and its variants) is usually quite detailed; compare for example y* (bytes-like object) [Py_buffer] This variant on s* doesn?t accept Unicode objects, only bytes-like objects. This is the recommended way to accept binary data. and S (bytes) [PyBytesObject *] Requires that the Python object is a bytes object, without attempting any conversion. Raises TypeError if the object is not a bytes object. The C variable may also be declared as PyObject*. There, the type in parenthesis (which is explained as "the entry in (round) parentheses is the Python object type that matches the format unit") differentiates between "bytes-like object" and "bytes". However, the documentation for "(...)" is a bit more confusing: (items) (tuple) [matching-items] The object must be a Python sequence whose length is the number of format units in items. The C arguments must correspond to the individual format units in items. Format units for sequences may be nested. The type in parenthesis is "tuple" (exactly), but the paragraph says "sequence". The actual behavior appears indeed to be that any sequence (e.g., list, numpy array) is accepted, so I'd suggest changing the type in the parenthesis to "sequence". ---------- assignee: docs at python components: Documentation messages: 334660 nosy: Antony.Lee, docs at python priority: normal severity: normal status: open title: Clarify that the (...) convertor to PyArg_ParseTuple... accepts any sequence. _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 05:52:26 2019 From: report at bugs.python.org (Bangert) Date: Fri, 01 Feb 2019 10:52:26 +0000 Subject: [New-bugs-announce] [issue35875] Crash - algos.cp36-win_amd64.pyd join.cp36-win_amd64.pyd Message-ID: <1549018346.31.0.580371718902.issue35875@roundup.psfhosted.org> New submission from Bangert : Windows 7 x64 Visual Studio 2017 15.9.6 netframework 4.7 conda 3.6 x64 On project load both extensions algos and joint crash. 
---------- components: Extension Modules files: Conda-Algos-Joint-Crash.jpg messages: 334663 nosy: AxelArnoldBangert priority: normal severity: normal status: open title: Crash - algos.cp36-win_amd64.pyd join.cp36-win_amd64.pyd type: crash versions: Python 3.6 Added file: https://bugs.python.org/file48092/Conda-Algos-Joint-Crash.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 05:53:19 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 01 Feb 2019 10:53:19 +0000 Subject: [New-bugs-announce] [issue35876] test_start_new_session for posix_spawnp fails Message-ID: <1549018399.45.0.971138800303.issue35876@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Example failure: https://buildbot.python.org/all/#/builders/141/builds/1142/steps/4/logs/stdio ===================================================================== ERROR: test_start_new_session (test.test_posix.TestPosixSpawn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/test_posix.py", line 1640, in test_start_new_session child_pgid = int(output) NameError: name 'output' is not defined ====================================================================== ERROR: test_start_new_session (test.test_posix.TestPosixSpawnP) ---------------------------------------------------------------------- Traceback (most recent call last): File "/srv/buildbot/buildarea/3.x.bolen-ubuntu/build/Lib/test/test_posix.py", line 1640, in test_start_new_session child_pgid = int(output) NameError: name 'output' is not defined ---------------------------------------------------------------------- Victor made a PR to ignore the test for now: https://github.com/python/cpython/pull/11718 This WIP PR intends to fix this bug: https://github.com/python/cpython/pull/11719 ---------- components: Tests messages: 334664 nosy: pablogsal priority: normal severity: normal status: open title: test_start_new_session for posix_spawnp fails versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 05:55:55 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Fri, 01 Feb 2019 10:55:55 +0000 Subject: [New-bugs-announce] [issue35877] parenthesis is mandatory for named expressions in while statement Message-ID: <1549018555.29.0.231525655805.issue35877@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I thought to open a separate report itself in order to keep the original issue not cluttered with questions since it could be used for other docs. Sorry for my report there. Original report as per msg334341 . It seems parens are mandatory while using named expressions in while statement which makes some of the examples invalid like https://www.python.org/dev/peps/pep-0572/#sysconfig-py . From my limited knowledge while statement Grammar was not modified at https://github.com/python/cpython/pull/10497/files#diff-cb0b9d6312c0d67f6d4aa1966766ceddR73 and no tests for while statement which made me assume it's intentional. I haven't followed the full discussion about PEP 572 so feel free to correct me if it's a conscious decision and in that case the PEP 572 can be updated. # python info ? 
cpython git:(master) ./python.exe Python 3.8.0a0 (heads/bpo35113-dirty:49329a217e, Jan 25 2019, 09:57:53) [Clang 7.0.2 (clang-700.1.81)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> # Example as in PEP 572 to create a simple file that reads itself and prints lines that matches "foo" ? cpython git:(master) cat /tmp/foo.py import re with open("/tmp/foo.py") as f: while line := f.readline(): if match := re.search(r"foo", line): print(match.string.strip("\n")) ? cpython git:(master) ./python.exe /tmp/foo.py File "/tmp/foo.py", line 4 while line := f.readline(): ^ SyntaxError: invalid syntax # Wrapping named expression with parens for while makes this valid ? cpython git:(master) cat /tmp/foo.py import re with open("/tmp/foo.py") as f: while (line := f.readline()): if match := re.search(r"foo", line): print(match.string.strip("\n")) ? cpython git:(master) ./python.exe /tmp/foo.py with open("/tmp/foo.py") as f: if match := re.search(r"foo", line): As a user I think parens shouldn't be mandatory in while statement since if statement works fine. Parens can cause while statement to be superfluous in some cases and an extra case to remember while teaching. ---------- components: Interpreter Core messages: 334665 nosy: emilyemorehouse, gvanrossum, xtreak priority: normal severity: normal status: open title: parenthesis is mandatory for named expressions in while statement type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 06:22:41 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 01 Feb 2019 11:22:41 +0000 Subject: [New-bugs-announce] [issue35878] ast.c: end_col_offset may be used uninitialized in this function Message-ID: <1549020161.52.0.11493245766.issue35878@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#builders/103/builds/2023 There are many "end_col_offset may be used uninitialized in this function" warnings. Example: In file included from Python/ast.c:7:0: Python/ast.c: In function ?ast_for_funcdef_impl?: ./Include/Python-ast.h:484:66: warning: ?end_col_offset? may be used uninitialized in this function [-Wmaybe-uninitialized] #define FunctionDef(a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10) _Py_FunctionDef(a0, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10) ^~~~~~~~~~~~~~~ Python/ast.c:1738:21: note: ?end_col_offset? 
was declared here int end_lineno, end_col_offset; ^~~~~~~~~~~~~~ ---------- components: Interpreter Core messages: 334670 nosy: emilyemorehouse, levkivskyi, vstinner priority: normal severity: normal status: open title: ast.c: end_col_offset may be used uninitialized in this function type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 06:30:04 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 01 Feb 2019 11:30:04 +0000 Subject: [New-bugs-announce] [issue35879] test_type_comments leaks references Message-ID: <1549020604.51.0.835471496217.issue35879@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/1/builds/490 https://buildbot.python.org/all/#/builders/80/builds/499 test_type_comments leaked [37, 37, 37] references, sum=111 test_type_comments leaked [11, 12, 11] memory blocks, sum=34 Example: vstinner at apu$ ./python -m test -R 3:3 test_type_comments -m test.test_type_comments.TypeCommentTests.test_asyncdef Run tests sequentially 0:00:00 load avg: 0.56 [1/1] test_type_comments beginning 6 repetitions 123456 ...... test_type_comments leaked [2, 2, 2] references, sum=6 test_type_comments leaked [2, 3, 2] memory blocks, sum=7 test_type_comments failed == Tests result: FAILURE == 1 test failed: test_type_comments Total duration: 129 ms Tests result: FAILURE I bet that the regression comes from: commit dcfcd146f8e6fc5c2fc16a4c192a0c5f5ca8c53c Author: Guido van Rossum Date: Thu Jan 31 03:40:27 2019 -0800 bpo-35766: Merge typed_ast back into CPython (GH-11645) See also bpo-35878. ---------- components: Library (Lib) messages: 334671 nosy: lukasz.langa, vstinner priority: normal severity: normal status: open title: test_type_comments leaks references versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 06:47:37 2019 From: report at bugs.python.org (Jurjen N.E. Bos) Date: Fri, 01 Feb 2019 11:47:37 +0000 Subject: [New-bugs-announce] [issue35880] math.sin has no backward error; this isn't documented Message-ID: <1549021657.57.0.382503652198.issue35880@roundup.psfhosted.org> New submission from Jurjen N.E. Bos : The documentation of math.sin (and related trig functions) doesn't speak about backward error. In cPython, as far as I can see, there is no backward error at all, which is quite uncommon. This may vary between implementations; many math libraries of other languages have a backward error, resulting in large errors for large arguments. e.g. sin(1<<500) is correctly computed as 0.42925739234242827, where a backward error as small as 1e-150 can give a completely wrong result. Some text could be added (which I am happy to produce) that explains what backward error means, and under which circumstances you can expect an accurate result. 
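To illustrate why backward error matters here: a one-ulp change in a huge argument shifts it by astronomically many multiples of 2*pi, so any implementation with a non-zero backward error would return an essentially arbitrary value. A small sketch (the exact digits printed are platform-dependent):

import math

x = float(1 << 500)          # a power of two, exactly representable as a double
y = x * (1.0 + 2.0 ** -52)   # the neighbouring double, one ulp (2**448) away
print(math.sin(x))           # argument reduction must be carried to ~500 bits to get this right
print(math.sin(y))           # completely different result, even though y is only one ulp from x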
---------- assignee: docs at python components: Documentation messages: 334672 nosy: docs at python, jneb priority: normal severity: normal status: open title: math.sin has no backward error; this isn't documented versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 09:02:45 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 01 Feb 2019 14:02:45 +0000 Subject: [New-bugs-announce] [issue35881] test_type_comments leaks references and memory blocks Message-ID: <1549029765.83.0.257385728664.issue35881@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : The refleak buildbots have detected that test_type_comments is leaking references. Example failure: https://buildbot.python.org/all/#/builders/1/builds/490/steps/4/logs/stdio test_type_comments leaked [37, 37, 37] references, sum=111 test_type_comments leaked [11, 12, 11] memory blocks, sum=34 1 test failed again: test_type_comments Bisecting shows that PR11645 is the one that introduced the refleaks: https://github.com/python/cpython/pull/11645/files# After some hours of debugging, I have identified the (possible) cause of the majority of the refleaks. The type_comments created in ast.c with new_type_comment() never reaches 0 reference counts. This **seems** to be because the cleanup for the stmt_ty elements where the type_comments are included never call DECREF on these elements when destroyed unless I am missing something fundamental. ---------- components: Interpreter Core, Tests messages: 334676 nosy: gvanrossum, lukasz.langa, pablogsal priority: normal severity: normal status: open title: test_type_comments leaks references and memory blocks versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 10:24:51 2019 From: report at bugs.python.org (Jody McIntyre) Date: Fri, 01 Feb 2019 15:24:51 +0000 Subject: [New-bugs-announce] [issue35882] distutils fails with UnicodeEncodeError with strange filename in package_data Message-ID: <1549034691.01.0.592197242815.issue35882@roundup.psfhosted.org> New submission from Jody McIntyre : I encountered an error while installing savReaderWriter using pip from a Dockerfile, reported as https://bitbucket.org/fomcl/savreaderwriter/issues/73/pip-install-fails-for-non-utf-8-encoding This is actually an issue with distutils. Steps to reproduce: 1. Create the following tree. setup.py is as attached. The .sav file is empty. ./setup.py ./savReaderWriter ./savReaderWriter/test_data ./savReaderWriter/test_data/scheiß Encoding.sav 2. Run LANG=C python setup.py install --record=/tmp/install-record.txt --single-version-externally-managed Actual output: running install running build running build_py package init file 'savReaderWriter/__init__.py' not found (or not a regular file) creating build creating build/lib creating build/lib/savReaderWriter creating build/lib/savReaderWriter/test_data copying savReaderWriter/test_data/scheiß
Encoding.sav -> build/lib/savReaderWriter/test_data running install_lib running install_egg_info running egg_info creating savReaderWriter.egg-info writing savReaderWriter.egg-info/PKG-INFO writing dependency_links to savReaderWriter.egg-info/dependency_links.txt writing top-level names to savReaderWriter.egg-info/top_level.txt writing manifest file 'savReaderWriter.egg-info/SOURCES.txt' 'savReaderWriter/test_data/schei\udcc3\udc9f Encoding.sav' not utf-8 encodable -- skipping reading manifest file 'savReaderWriter.egg-info/SOURCES.txt' writing manifest file 'savReaderWriter.egg-info/SOURCES.txt' removing '/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/site-packages/savReaderWriter-1.2.3-py3.6.egg-info' (and everything under it) Copying savReaderWriter.egg-info to /home/scjody/.pyenv/versions/3.6.5/lib/python3.6/site-packages/savReaderWriter-1.2.3-py3.6.egg-info running install_scripts writing list of installed files to '/tmp/install-record.txt' Traceback (most recent call last): File "setup.py", line 9, in version='1.2.3', File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/site-packages/setuptools/__init__.py", line 129, in setup return distutils.core.setup(**attrs) File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/distutils/core.py", line 148, in setup dist.run_commands() File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run return orig.install.run(self) File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/distutils/command/install.py", line 572, in run self.record) File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/distutils/cmd.py", line 335, in execute util.execute(func, args, msg, dry_run=self.dry_run) File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/distutils/util.py", line 301, in execute func(*args) File "/home/scjody/.pyenv/versions/3.6.5/lib/python3.6/distutils/file_util.py", line 236, in write_file f.write(line + "\n") UnicodeEncodeError: 'ascii' codec can't encode characters in position 94-95: ordinal not in range(128) Expected results: The package is installed and install-record.txt is created. Related: https://bugs.python.org/issue9561 ---------- components: Distutils files: setup.py messages: 334687 nosy: dstufft, eric.araujo, scjody priority: normal severity: normal status: open title: distutils fails with UnicodeEncodeError with strange filename in package_data type: crash versions: Python 3.6 Added file: https://bugs.python.org/file48095/setup.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 11:55:38 2019 From: report at bugs.python.org (Neui) Date: Fri, 01 Feb 2019 16:55:38 +0000 Subject: [New-bugs-announce] [issue35883] Change invalid unicode characters to replacement characters in argv Message-ID: <1549040138.25.0.646545491344.issue35883@roundup.psfhosted.org> New submission from Neui : When an invalid unicode character is given to argv (cli arguments), then python abort()s with an fatal error about an character not in range (ValueError: character U+7fffbeba is not in range [U+0000; U+10ffff]). 
I am wondering if this behaviour should change to replace those with U+FFFD REPLACEMENT CHARACTER (like .decode(..., 'replace')) or even with something similar/better (see https://docs.python.org/3/library/codecs.html#error-handlers ) The reason for this is that other applications can use the invalid character since it is just some data (like GDB for use as an argument to the program to be debugged), where in python this becomes an limitation, since the script (if specified) never runs. The main motivation for me is that there is an command-not-found debian package that gets the wrongly-typed command as a command argument. If that then contains an invalid unicode character, it then just fails rather saying it couldn't find the/a similar command. If this doesn't get changed, it either then has to accept that this is a limitation, use an other way of passing the command or re-write it in not python. # Requires bash 4.2+ # Specifying a script omits the first two lines $ python3.6 $'\U7fffbeba' Failed checking if argv[0] is an import path entry ValueError: character U+7fffbeba is not in range [U+0000; U+10ffff] Fatal Python error: no mem for sys.argv ValueError: character U+7fffbeba is not in range [U+0000; U+10ffff] Current thread 0x00007fd212eaf740 (most recent call first): Aborted (core dumped) $ python3.6 --version Python 3.6.7 $ uname -a Linux nopea 4.15.0-39-generic #42-Ubuntu SMP Tue Oct 23 15:48:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.1 LTS Release: 18.04 Codename: bionic GDB backtrace just before throwing the error: (note that it's argc=2 since first argument is a script) #0 find_maxchar_surrogates (begin=begin at entry=0xa847a0 L'\x7fffbeba' , end=end at entry=0xa847d0 L"", maxchar=maxchar at entry=0x7fffffffde94, num_surrogates=num_surrogates at entry=0x7fffffffde98) at ../Objects/unicodeobject.c:1626 #1 0x00000000004cee4b in PyUnicode_FromUnicode (u=u at entry=0xa847a0 L'\x7fffbeba' , size=12) at ../Objects/unicodeobject.c:2017 #2 0x00000000004db856 in PyUnicode_FromWideChar (w=0xa847a0 L'\x7fffbeba' , size=, size at entry=-1) at ../Objects/unicodeobject.c:2502 #3 0x000000000043253d in makeargvobject (argc=argc at entry=2, argv=argv at entry=0xa82268) at ../Python/sysmodule.c:2145 #4 0x0000000000433228 in PySys_SetArgvEx (argc=2, argv=0xa82268, updatepath=1) at ../Python/sysmodule.c:2264 #5 0x00000000004332c1 in PySys_SetArgv (argc=, argv=) at ../Python/sysmodule.c:2277 #6 0x000000000043a5bd in Py_Main (argc=argc at entry=3, argv=argv at entry=0xa82260) at ../Modules/main.c:733 #7 0x0000000000421149 in main (argc=3, argv=0x7fffffffe178) at ../Programs/python.c:69 Similar issues: https://bugs.python.org/issue25631 "Segmentation fault with invalid Unicode command-line arguments in embedded Python" (actually 'fixed' since it now abort()s) https://bugs.python.org/issue2128 "sys.argv is wrong for unicode strings" ---------- components: Interpreter Core messages: 334703 nosy: Neui priority: normal severity: normal status: open title: Change invalid unicode characters to replacement characters in argv type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 15:19:40 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Fri, 01 Feb 2019 20:19:40 +0000 Subject: [New-bugs-announce] [issue35884] Add variable access benchmark to Tools/Scripts Message-ID: 
<1549052380.12.0.716539002961.issue35884@roundup.psfhosted.org> New submission from Raymond Hettinger : Adding a short script that I've found useful many times over the past decade. It has allowed my to quickly notice performance regressions and has helped identify places in need of optimization work. It is also useful for building a empirical mental model of the relative cost of various kinds of variable access -- this is useful in writing high performance code. ---------- components: Demos and Tools messages: 334716 nosy: rhettinger priority: normal severity: normal status: open title: Add variable access benchmark to Tools/Scripts type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 17:59:14 2019 From: report at bugs.python.org (mrs.red) Date: Fri, 01 Feb 2019 22:59:14 +0000 Subject: [New-bugs-announce] [issue35885] configparser: indentation Message-ID: <1549061954.1.0.734541407213.issue35885@roundup.psfhosted.org> New submission from mrs.red : The configparser module does not have an option for indentation. I would like to indent the keys by tabs. Maybe we could implement an option for that? I already have some example code for it. ---------- components: Library (Lib) messages: 334729 nosy: mrs.red priority: normal severity: normal status: open title: configparser: indentation type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 20:07:08 2019 From: report at bugs.python.org (Eric Snow) Date: Sat, 02 Feb 2019 01:07:08 +0000 Subject: [New-bugs-announce] [issue35886] Move PyInterpreterState into Include/internal/pycore_pystate.h Message-ID: <1549069628.87.0.454232469964.issue35886@roundup.psfhosted.org> New submission from Eric Snow : In November Victor created the Include/cpython directory and moved a decent amount of public (but not limited) API there. This included the PyInterpreterState struct. I'd like to move it into the "internal" headers since it is somewhat coupled to the internal runtime implementation. The alternative is extra complexity. I ran into this while working on another issue. Note that the docs indicate that all of the struct's fields are private (and I am not aware of any macros leaking fields into the stable ABI). So moving it should not break anything (yeah, right!), and certainly not the stable ABI (Victor's move would have done that already). ---------- assignee: eric.snow messages: 334733 nosy: eric.snow, vstinner priority: normal severity: normal stage: needs patch status: open title: Move PyInterpreterState into Include/internal/pycore_pystate.h versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 1 20:42:16 2019 From: report at bugs.python.org (Nina Zakharenko) Date: Sat, 02 Feb 2019 01:42:16 +0000 Subject: [New-bugs-announce] [issue35887] Doc string for updating the frozen version of importlib in _bootstrap.py incorrect Message-ID: <1549071736.85.0.965218389814.issue35887@roundup.psfhosted.org> New submission from Nina Zakharenko : In the process of creating a fix for issue #35321, I noticed what I believe to be a documentation omission. 
In Lib/importlib/_bootstrap.py the top level comment states that: # IMPORTANT: Whenever making changes to this module, be sure to run # a top-level make in order to get the frozen version of the module # updated. From my testing, it appears that the header file will only be updated when running `make regen-importlib` To repro: - make a code change in Lib/importlib/_bootstrap_external.py - run a top-level `make` - see that Python/importlib.h does not change - run `make regen-importlib` - see that Python/importlib.h has now been updated The documentation in Lib/importlib/_bootstrap_external.py does in fact mention this. I propose amending the documentation to include the correct instructions. If this is deemed necessary, I will write the patch over the weekend. ---------- assignee: nnja components: Library (Lib) messages: 334735 nosy: barry, nnja priority: low severity: normal status: open title: Doc string for updating the frozen version of importlib in _bootstrap.py incorrect versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 2 01:09:55 2019 From: report at bugs.python.org (Lee Eric) Date: Sat, 02 Feb 2019 06:09:55 +0000 Subject: [New-bugs-announce] [issue35888] ssl module - could not get the server certificate w/o completed handshake Message-ID: <1549087795.05.0.177157894622.issue35888@roundup.psfhosted.org> New submission from Lee Eric : Hi, I'm not sure if this is the right place to ask after exhausting several other communication channels. I'm trying to use the standard ssl module to get the server certificate details. If I understand correctly, I can only get the certificate once the TLS/SSL handshake is done. This means that if the server uses mTLS to authenticate the client and I use the ssl module to fetch the peer certificate without a client certificate, I get no result because the handshake never completes. I would like to know whether there is any method to get the certificate even when the handshake is not complete. After all, at the very beginning of the handshake the server has already sent out its certificate along with the Server Hello. If the standard ssl module is designed to behave this way, is there any other module I can use to obtain the server certificate without a completed handshake? Thanks. Eric
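For context, a sketch of the behaviour being described (host and port are placeholders): the stdlib only exposes the peer certificate via getpeercert() after the handshake has completed, so if the server aborts the handshake for lack of a client certificate there is no way to inspect the certificate it already sent.

import socket
import ssl

HOST, PORT = 'server.example', 8443  # placeholders

ctx = ssl.create_default_context()
with socket.create_connection((HOST, PORT)) as sock:
    try:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(tls.getpeercert())  # only reachable after a completed handshake
    except ssl.SSLError as exc:
        # If the server requires a client certificate, the handshake fails here
        # and the ssl module offers no API to look at the server certificate.
        print('handshake failed:', exc)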
---------- components: Library (Lib) messages: 334746 nosy: vlad priority: normal severity: normal status: open title: sqlite3.Row doesn't have useful repr type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 2 17:08:35 2019 From: report at bugs.python.org (Minmin Gong) Date: Sat, 02 Feb 2019 22:08:35 +0000 Subject: [New-bugs-announce] [issue35890] Cleanup some non-consistent API callings Message-ID: <1549145315.91.0.561945747798.issue35890@roundup.psfhosted.org> New submission from Minmin Gong : 1. Unicode version of Windows APIs are used in places, but not for GetVersionEx in Python/sysmodule.c 2. The wcstok_s is called on Windows in Modules/main.c and PC/launcher.c, but not in Python/pathconfig.c ---------- components: Windows messages: 334771 nosy: Minmin.Gong, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Cleanup some non-consistent API callings type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 3 10:11:00 2019 From: report at bugs.python.org (Jason R. Coombs) Date: Sun, 03 Feb 2019 15:11:00 +0000 Subject: [New-bugs-announce] [issue35891] urllib.parse.splituser has no suitable replacement Message-ID: <1549206660.03.0.379337447594.issue35891@roundup.psfhosted.org> New submission from Jason R. Coombs : The removal of splituser (issue27485) has the undesirable effect of leaving the programmer without a suitable alternative. The deprecation warning states to use `urlparse` instead, but `urlparse` doesn't provide the access to the `credential` or `address` components of a URL. Consider for example: >>> import urllib.parse >>> url = 'https://user:password at host:port/path' >>> parsed = urllib.parse.urlparse(url) >>> urllib.parse.splituser(parsed.netloc) ('user:password', 'host:port') It's not readily obvious how one might get those two values, the credential and the address, from `parsed`. Sure, you can get `username` and `password`. You can get `hostname` and `port`. But if what you want is to remove the credential and keep the address, or extract the credential and pass it unchanged as a single string to something like an `_encode_auth` handler, that's no longer possible without some careful handling--because of possible None values, re-assembling a username/password into a colon-separated string is more complicated than simply doing a ':'.join. This recommendation and limitation led to issues in production code and ultimately the inline adoption of the deprecated function, [summarized here](https://github.com/pypa/setuptools/pull/1670). I believe if splituser is to be deprecated, the netloc should provide a suitable alternative - namely that a `urlparse` result should supply `address` and `userinfo`. Such functionality would make it easier to transition code that currently relies on splituser for more than to parse out the username and password. Even better would be for the urlparse result to support `_replace` operations on these attributes... 
so that one wouldn't have to construct a netloc just to construct a URL that replaces only some portion of the netloc, so one could do something like: >>> parsed = urllib.parse.urlparse(url) >>> without_userinfo = parsed._replace(userinfo=None).geturl() >>> alt_port = parsed._replace(port=443).geturl() I realize that because of the nesting of abstractions (namedtuple for the main parts), that maybe this technique doesn't extend nicely, so maybe the netloc itself should provide this extensibility for a usage something like this: >>> parsed = urllib.parse.urlparse(url) >>> without_userinfo = parsed._replace(netloc=parsed.netloc._replace(userinfo=None)).geturl() >>> alt_port = parsed._replace(netloc=parsed.netloc._replace(port=443)).geturl() It's not as elegant, but likely simpler to implement, with netloc being extended with a _replace method to support replacing segments of itself (and still immutable)... and is dramatically less error-prone than the status quo without splituser. In any case, I don't think it's suitable to leave it to the programmer to have to muddle around with their own URL parsing logic. urllib.parse should provide some help here. ---------- components: Library (Lib) messages: 334793 nosy: jason.coombs priority: normal severity: normal status: open title: urllib.parse.splituser has no suitable replacement type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 3 13:51:25 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sun, 03 Feb 2019 18:51:25 +0000 Subject: [New-bugs-announce] [issue35892] Fix awkwardness of statistics.mode() for multimodal datasets Message-ID: <1549219885.57.0.86866159357.issue35892@roundup.psfhosted.org> New submission from Raymond Hettinger : The current code for mode() does a good deal of extra work to support its two error outcomes (empty input and multimodal input). That latter case is informative but doesn't provide any reasonable way to find just one of those modes, where any of the most popular would suffice. This arises in nearest neighbor algorithms for example. I suggest adding an option to the API: def mode(seq, *, first_tie=False): if tie_goes_to_first: # CHOOSE FIRST x ? S | ? y ? S : x ? y ? count(y) > count(x) return return Counter(seq).most_common(1)[0][0] ... Use it like this: >>> data = 'ABBAC' >>> assert mode(data, first_tie=True) == 'A' With the current API, there is no reasonable way to get to 'A' from 'ABBAC'. Also, the new code path is much faster than the existing code path because it extracts only the 1 most common using min() rather than the n most common which has to sort the whole items() list. New path: O(n). Existing path: O(n log n). Note, the current API is somewhat awkward to use. In general, a user can't know in advance that the data only contains a single mode. Accordingly, every call to mode() has to be wrapped in a try-except. And if the user just wants one of those modal values, there is no way to get to it. See https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mode.html for comparison. There may be better names for the flag. 
"tie_goes_to_first_encountered" seemed a bit long though ;-) ---------- assignee: steven.daprano components: Library (Lib) messages: 334796 nosy: rhettinger, steven.daprano priority: normal severity: normal status: open title: Fix awkwardness of statistics.mode() for multimodal datasets type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 3 16:10:11 2019 From: report at bugs.python.org (Ronald Oussoren) Date: Sun, 03 Feb 2019 21:10:11 +0000 Subject: [New-bugs-announce] [issue35893] distutils fails to build extension on windows when it is a package.__init__ Message-ID: <1549228211.63.0.529262209714.issue35893@roundup.psfhosted.org> New submission from Ronald Oussoren : Python supports having a C extension for the the __init__ of a package (instead of having __init__.py). This works fine on Linux, but on Windows distutils fails to build the C extension because it assumes the entry point is named PyInit___init__ while importlib expects PyInit_*package* (for a package named *package*). When building the extension I get the following error: LINK : error LNK2001: unresolved external symbol PyInit___init__ build\temp.win32-3.7\Release\__init__.cp37-win32.lib : fatal error LNK1120: 1 unresolved externals The code below can be used to reproduce the issue. Setup.py (extracted from a larger setup.py, but should work...): from setuptools import setup, Extension extension3 = Extension("ext_package.__init__", sources=["init.c"]) setup( ext_modules=[extension3], ) Source code for the module (init.c): #include "Python.h" static PyModuleDef mod_def = { PyModuleDef_HEAD_INIT, "ext_package.__init__", NULL, 0, NULL, NULL, NULL, NULL, NULL }; PyObject* PyInit_ext_package(void) { return PyModule_Create(&mod_def); } P.S. I cannot easily debug this, I ran into this when testing one of my projects on AppVeyor and don't have a local Windows machine. ---------- components: Distutils, Windows messages: 334800 nosy: dstufft, eric.araujo, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: distutils fails to build extension on windows when it is a package.__init__ type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 4 00:05:47 2019 From: report at bugs.python.org (Nathaniel Smith) Date: Mon, 04 Feb 2019 05:05:47 +0000 Subject: [New-bugs-announce] [issue35894] Apparent regression in 3.8-dev: 'TypeError: required field "type_ignores" missing from Module' Message-ID: <1549256747.48.0.424452103214.issue35894@roundup.psfhosted.org> New submission from Nathaniel Smith : Travis provides a "3.8-dev" python, which is updated regularly to track cpython master. 
When running our tests on this Python, specifically version python: 3.8.0a0 (heads/master:f75d59e, Feb 3 2019, 07:27:24) we just started getting tracebacks: TypeError Traceback (most recent call last) /opt/python/3.8-dev/lib/python3.8/codeop.py in __call__(self, source, filename, symbol) 131 132 def __call__(self, source, filename, symbol): --> 133 codeob = compile(source, filename, symbol, self.flags, 1) 134 for feature in _features: 135 if codeob.co_flags & feature.compiler_flag: TypeError: required field "type_ignores" missing from Module (Full log: https://travis-ci.org/python-trio/trio/jobs/488312057) Grepping through git diffs for 'type_ignores' suggests that this is probably related to bpo-35766. I haven't dug into this in detail, but it seems to be happening on tests using IPython. The lack of further traceback suggests to me that the exception is happening inside IPython's guts (it has some hacks to try to figure out which parts of the traceback are in user-defined code versus its own internal code, and tries to hide the latter when printing tracebacks). The crash is in codeop.Compile.__call__, and IPython does create ast.Module objects and pass them to codeop.Compile.__call__: https://github.com/ipython/ipython/blob/512d47340c09d184e20811ca46aaa2f862bcbafe/IPython/core/interactiveshell.py#L3199-L3200 Maybe ast.Module needs to default-initialize the new type_ignores field, or compile() needs to be tolerant of it being missing? ---------- messages: 334807 nosy: benjamin.peterson, brett.cannon, gvanrossum, njs, yselivanov priority: normal severity: normal status: open title: Apparent regression in 3.8-dev: 'TypeError: required field "type_ignores" missing from Module' type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 4 09:22:27 2019 From: report at bugs.python.org (=?utf-8?q?St=C3=A9phane_Wirtel?=) Date: Mon, 04 Feb 2019 14:22:27 +0000 Subject: [New-bugs-announce] [issue35895] the test suite of pytest failed with 3.8.0a1 Message-ID: <1549290147.76.0.914127082694.issue35895@roundup.psfhosted.org> New submission from St?phane Wirtel : I have execute the tests of pytest with 3.8.0a1 and I get some issues, it's not the case with 3.7.x (see the travis logs of pytest, https://travis-ci.org/pytest-dev/pytest/branches) I am going to create the same issue for pytest. ---------- files: tox-pytest-38-stdout.log messages: 334823 nosy: matrixise priority: normal severity: normal status: open title: the test suite of pytest failed with 3.8.0a1 versions: Python 3.8 Added file: https://bugs.python.org/file48096/tox-pytest-38-stdout.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 4 16:33:45 2019 From: report at bugs.python.org (jcrmatos) Date: Mon, 04 Feb 2019 21:33:45 +0000 Subject: [New-bugs-announce] [issue35896] sysconfig.get_platform returns wrong value when Python 32b is running under Windows 64b Message-ID: <1549316025.61.0.403140167175.issue35896@roundup.psfhosted.org> New submission from jcrmatos : sysconfig.get_platform returns wrong value when Python 32b is running under Windows 64b. It should return win-amd64 and returns win32. 
---------- messages: 334841 nosy: jcrmatos priority: normal severity: normal status: open title: sysconfig.get_platform returns wrong value when Python 32b is running under Windows 64b versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 03:36:18 2019 From: report at bugs.python.org (Fred .Flintstone) Date: Tue, 05 Feb 2019 08:36:18 +0000 Subject: [New-bugs-announce] [issue35897] Support list as argument to .startswith() Message-ID: <1549355778.94.0.606894132692.issue35897@roundup.psfhosted.org> New submission from Fred .Flintstone : The "".startswith() method accepts a string or a tuple as a parameter. Consider adding support for list as parameter. Example: "foo".startswith(["food", "for", "fast"]) ---------- components: Interpreter Core messages: 334856 nosy: Fred .Flintstone priority: normal severity: normal status: open title: Support list as argument to .startswith() type: enhancement versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 07:19:07 2019 From: report at bugs.python.org (Thomas Trummer) Date: Tue, 05 Feb 2019 12:19:07 +0000 Subject: [New-bugs-announce] [issue35898] The TARGETDIR variable must be provided when invoking this installer Message-ID: <1549369147.85.0.114900737559.issue35898@roundup.psfhosted.org> New submission from Thomas Trummer : The installer for Python 3.8.0a1 doesn't work under Windows 7. It shows the aforementioned error message. ---------- components: Installation files: Python 3.8.0a1 (64-bit)_20190205130936.log messages: 334865 nosy: Thomas Trummer priority: normal severity: normal status: open title: The TARGETDIR variable must be provided when invoking this installer type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file48097/Python 3.8.0a1 (64-bit)_20190205130936.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 09:27:39 2019 From: report at bugs.python.org (Maxwell) Date: Tue, 05 Feb 2019 14:27:39 +0000 Subject: [New-bugs-announce] [issue35899] '_is_sunder' function in 'enum' module fails on empty string Message-ID: <1549376859.01.0.998806898699.issue35899@roundup.psfhosted.org> New submission from Maxwell : This is a really minor bug. In enum.py the function _is_sunder(name) fails on empty string with an IndexError. As a result, attempting to create an Enum with an empty string fails. >>> from enum import Enum >>> Yay = Enum('Yay', ('', 'B', 'C')) Traceback (most recent call last): File "", line 1, in File "C:\Program Files\Python37\lib\enum.py", line 311, in __call__ return cls._create_(value, names, module=module, qualname=qualname, type=type, start=start) File "C:\Program Files\Python37\lib\enum.py", line 422, in _create_ classdict[member_name] = member_value File "C:\Program Files\Python37\lib\enum.py", line 78, in __setitem__ if _is_sunder(key): File "C:\Program Files\Python37\lib\enum.py", line 36, in _is_sunder return (name[0] == name[-1] == '_' and IndexError: string index out of range >>> Expected behavior is for it to not fail, as Enum accepts wierd strings. 
Example: >>> from enum import Enum >>> Yay = Enum('Yay', ('!', 'B', 'C')) >>> getattr(Yay, '!') >>> Transcript of lines 26 to 39 of enum.py: def _is_dunder(name): """Returns True if a __dunder__ name, False otherwise.""" return (name[:2] == name[-2:] == '__' and name[2:3] != '_' and name[-3:-2] != '_' and len(name) > 4) def _is_sunder(name): """Returns True if a _sunder_ name, False otherwise.""" return (name[0] == name[-1] == '_' and name[1:2] != '_' and name[-2:-1] != '_' and len(name) > 2) Solution 1: Replace with: def _is_dunder(name): """Returns True if a __dunder__ name, False otherwise.""" return (len(name) > 4 and name[:2] == name[-2:] == '__' and name[2] != '_' and name[-3] != '_') def _is_sunder(name): """Returns True if a _sunder_ name, False otherwise.""" return (len(name) > 2 and name[0] == name[-1] == '_' and name[1:2] != '_' and name[-2:-1] != '_') In this solution, function '_is_dunder' was also altered for consistency. Altering '_is_dunder' is not necessary, though. Solution 2: Replace with: def _is_dunder(name): """Returns True if a __dunder__ name, False otherwise.""" return (name[:2] == name[-2:] == '__' and name[2:3] != '_' and name[-3:-2] != '_' and len(name) > 4) def _is_sunder(name): """Returns True if a _sunder_ name, False otherwise.""" return (name[:0] == name[-1:] == '_' and name[1:2] != '_' and name[-2:-1] != '_' and len(name) > 2) In this solution, function '_is_sunder' was altered to follow the style used in function '_is_dunder'. ---------- components: Library (Lib) messages: 334866 nosy: Maxpxt priority: normal severity: normal status: open title: '_is_sunder' function in 'enum' module fails on empty string type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 09:40:22 2019 From: report at bugs.python.org (Pierre Glaser) Date: Tue, 05 Feb 2019 14:40:22 +0000 Subject: [New-bugs-announce] [issue35900] Add pickler hoor for the user to customize the serialization of user defined functions and types. Message-ID: <1549377622.96.0.737929222958.issue35900@roundup.psfhosted.org> New submission from Pierre Glaser : Pickler objects provide a dispatch_table attribute, where the user can specify custom saving functions depending on the object-to-be-saved type. However, for performance purposes, this table is predated (in the C implementation only) by a hardcoded switch that will take care of the saving for many built-in types, without a lookup in the dispatch_table. Especially, it is not possible to define custom saving methods for functions and classes, although the current default (save_global, that saves an object using its module attribute path) is likely to fail at pickling or unpickling time in many cases. The aforementioned failures exist on purpose in the standard library (as a way to allow for the serialization of functions accessible from non-dynamic (*) modules only). However, there exist cases where serializing functions from dynamic modules matter. These cases are currently handled thanks the cloudpickle module (https://github.com/cloudpipe/cloudpickle), that is used by many distributed data-science frameworks such as pyspark, ray and dask. For the reasons explained above, cloudpickle's Pickler subclass derives from the python Pickler class instead of its C class, which severely harms its performance. 
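For reference, a minimal sketch of the dispatch_table mechanism described above (Point and reduce_point are made-up names); as explained, the C Pickler never consults this table for functions and classes, which is what motivates the hook proposed next:

import copyreg
import io
import pickle

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def reduce_point(p):
    # standard reduce protocol: (callable, args) used to rebuild the object
    return Point, (p.x, p.y)

pickler = pickle.Pickler(io.BytesIO())
pickler.dispatch_table = copyreg.dispatch_table.copy()
pickler.dispatch_table[Point] = reduce_point
pickler.dump(Point(1, 2))  # Point instances are now routed through reduce_point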
While prototyping with Antoine Pitrou, we came to the conclusion that a hook could be added to the C Pickler class, in which an optional user-defined callback would be invoked (if defined) when saving functions and classes instead of the traditional save_global. Here is a patch so that we can have something concrete of which to discuss. (*) dynamic module are modules that cannot be imported by name as traditional python file backed module. Examples include the __main__ module that can be populated dynamically by running a script or by a, user writing code in a python shell / jupyter notebook. ---------- files: test_hook.py messages: 334870 nosy: alexandre.vassalotti, pierreglaser, pitrou priority: normal severity: normal status: open title: Add pickler hoor for the user to customize the serialization of user defined functions and types. type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48101/test_hook.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 11:50:38 2019 From: report at bugs.python.org (MultiSosnooley) Date: Tue, 05 Feb 2019 16:50:38 +0000 Subject: [New-bugs-announce] [issue35901] json.dumps infinite recurssion Message-ID: <1549385438.15.0.743086795547.issue35901@roundup.psfhosted.org> New submission from MultiSosnooley : ``` __import__('json').dumps(object(), default=lambda o: repr(o).encode()) ``` Produce infinite recursion on `default` function. Here is more informative example: ``` >>> def f(o): ... input(f"{o!r} {type(o)}") ... return repr(o).encode() ... >>> import json >>> json.dumps(object(), default=f) b'' b"b''" b'b"b\'\'"' b'b\'b"b\\\'\\\'"\'' b'b\'b\\\'b"b\\\\\\\'\\\\\\\'"\\\'\'' ``` ---------- components: Library (Lib) messages: 334877 nosy: MultiSosnooley priority: normal severity: normal status: open title: json.dumps infinite recurssion type: crash versions: Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 17:17:51 2019 From: report at bugs.python.org (Vadim Tsozik) Date: Tue, 05 Feb 2019 22:17:51 +0000 Subject: [New-bugs-announce] [issue35902] Forking from background thread Message-ID: <1549405071.04.0.889820238084.issue35902@roundup.psfhosted.org> New submission from Vadim Tsozik : Attached is code sample that forks child process either from main or from background thread. Child starts and joins all of its threads except a sleeping daemon. If parent forks child from main thread program exits immediately after child threads are joined and waitpid is unblocked by SIGCHLD. However if parent process happens to fork from main thread everything works correctly and process exits immediately without waiting for daemon to sleep for 3600 seconds. I'm wondering what is the difference between main and background thread in parent. Only one thread survives forking in child and becomes main thread in the child, so there should be no differences in the behavior. 
Thank you in advance for your help, ---------- components: Build files: threadforkmodel.py messages: 334886 nosy: vtsozik priority: normal severity: normal status: open title: Forking from background thread type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48103/threadforkmodel.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 17:50:22 2019 From: report at bugs.python.org (Neil Schemenauer) Date: Tue, 05 Feb 2019 22:50:22 +0000 Subject: [New-bugs-announce] [issue35903] Build of posixshmem.c should probe for required OS functions Message-ID: <1549407022.09.0.0256128012498.issue35903@roundup.psfhosted.org> New submission from Neil Schemenauer : The logic in setup.py that determines if _multiprocessing/posixshmem.c should get built is not very robust. I think it is better to use autoconfig to probe for the required functions and libraries. My autoconfig brain cells are a bit fuzzy but I think my patch is correct. I look for shm_open and shm_unlink. I also check if librt is required for these functions. ---------- assignee: davin components: Build keywords: patch messages: 334888 nosy: davin, nascheme priority: normal severity: normal stage: patch review status: open title: Build of posixshmem.c should probe for required OS functions type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 18:59:21 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Tue, 05 Feb 2019 23:59:21 +0000 Subject: [New-bugs-announce] [issue35904] Add statistics.fmean(seq) Message-ID: <1549411161.36.0.783593325223.issue35904@roundup.psfhosted.org> New submission from Raymond Hettinger : The current mean() function makes heroic efforts to achieve last bit accuracy and when possible to retain the data type of the input. What is needed is an alternative that has a simpler signature, that is much faster, that is highly accurate without demanding perfection, and that is usually what people expect mean() is going to do, the same as their calculators or numpy.mean(): def fmean(seq: Sequence[float]) -> float: return math.fsum(seq) / len(seq) On my current 3.8 build, this code given an approx 500x speed-up (almost three orders of magnitude). Note that having a fast fmean() function is important in resampling statistics where the mean() is typically called many times: http://statistics.about.com/od/Applications/a/Example-Of-Bootstrapping.htm $ ./python.exe -m timeit -r 11 -s 'from random import random' -s 'from statistics import mean' -s 'seq = [random() for i in range(10_000)]' 'mean(seq)' 50 loops, best of 11: 6.8 msec per loop $ ./python.exe -m timeit -r 11 -s 'from random import random' -s 'from math import fsum' -s 'mean=lambda seq: fsum(seq)/len(seq)' -s 'seq = [random() for i in range(10_000)]' 'mean(seq)' 2000 loops, best of 11: 155 usec per loop ---------- assignee: steven.daprano components: Library (Lib) messages: 334894 nosy: rhettinger, steven.daprano, tim.peters priority: normal severity: normal status: open title: Add statistics.fmean(seq) type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 19:12:22 2019 From: report at bugs.python.org (Jason R. 
Coombs) Date: Wed, 06 Feb 2019 00:12:22 +0000 Subject: [New-bugs-announce] [issue35905] macOS build docs need refresh (2019) Message-ID: <1549411942.7.0.0722764247373.issue35905@roundup.psfhosted.org> New submission from Jason R. Coombs : In https://github.com/python/devguide/issues/453#issuecomment-460848565, I understand that Ned wishes to update the macOS build docs prior to linking to them from the dev guide. ---------- assignee: docs at python components: Documentation messages: 334895 nosy: docs at python, jason.coombs, ned.deily priority: normal severity: normal status: open title: macOS build docs need refresh (2019) versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 5 19:32:11 2019 From: report at bugs.python.org (Sihoon Lee) Date: Wed, 06 Feb 2019 00:32:11 +0000 Subject: [New-bugs-announce] [issue35906] Header Injection in urllib Message-ID: <1549413131.7.0.216595978501.issue35906@roundup.psfhosted.org> New submission from Sihoon Lee : The patch for CVE-2016-5699 (http://www.cvedetails.com/cve/CVE-2016-5699/, https://bugs.python.org/issue30458) can also be broken via the path and query string: it is still possible to inject HTTP headers, and this is more critical because the illegal-header check is bypassed. # Vulnerability PoC >>> import urllib.request >>> urllib.request.urlopen('http://127.0.0.1:1234/?q=HTTP/1.1\r\nHeader: Value\r\nHeader2: \r\n') or >>> urllib.request.urlopen('http://127.0.0.1:1234/HTTP/1.1\r\nHeader: Value\r\nHeader2: \r\n') > nc -lv 1234 GET /?q=HTTP/1.1 Header: Value Header2: HTTP/1.1 Accept-Encoding: identity Host: 127.0.0.1:1234 User-Agent: Python-urllib/3.8 Connection: close We can inject headers completely. ## Redis Redis is also affected: SSRF protections that check the "host:" header can be bypassed with this injection. >>> urllib2.urlopen('http://127.0.0.1:6379/?q=HTTP/1.1\r\nSET VULN POC\r\nHeader2:\r\n').read() '$-1\r\n+OK\r\n-ERR unknown command `Header2:`, with args beginning with: `HTTP/1.1`, \r\n-ERR unknown command `Accept-Encoding:`, with args beginning with: `identity`, \r\n' $ redis-cli 127.0.0.1:6379> GET VULN "POC" # Root Cause https://github.com/python/cpython/commit/cc54c1c0d2d05fe7404ba64c53df4b1352ed2262 - _hostprog = re.compile('^//([^/?]*)(.*)$') + _hostprog = re.compile('//([^/#?]*)(.*)', re.DOTALL) The host can still be parsed because of re.DOTALL; re.DOTALL is what opens the door to the injection. The commit first shipped in 3.4.7+, so versions 3.4.7+ through 3.8-dev are affected (I tested this). Python 2.7.15 is also affected; I don't know exactly which Python 2 versions are affected because I have not tested them. Presumably all versions after that commit can trigger this vulnerability. # Conclusion The patch introduced a more critical vulnerability: the illegal-header check can be bypassed, and HTTP headers can be injected completely through urlopen().
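Until the URL handling itself rejects such input, callers can guard against this along the following lines (a sketch only; strict_urlopen is a hypothetical helper, not part of the standard library):

import urllib.request

def strict_urlopen(url, **kwargs):
    # hypothetical caller-side guard: refuse URLs containing CR, LF or other
    # ASCII control characters before they ever reach urlopen()
    if any(ord(ch) < 0x20 or ord(ch) == 0x7f for ch in url):
        raise ValueError('control character in URL: %r' % url)
    return urllib.request.urlopen(url, **kwargs)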
(Although this vulnerability dates back to 12 Jul 2017, I don't know why no one has reported it until now XDD) ---------- components: Library (Lib) messages: 334896 nosy: push0ebp priority: normal severity: normal status: open title: Header Injection in urllib type: security versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 03:19:51 2019 From: report at bugs.python.org (Sihoon Lee) Date: Wed, 06 Feb 2019 08:19:51 +0000 Subject: [New-bugs-announce] [issue35907] Unnecessary URL scheme exists to allow file:// reading file in urllib Message-ID: <1549441191.29.0.148559977828.issue35907@roundup.psfhosted.org> New submission from Sihoon Lee : An unnecessary URL scheme exists in urllib's urlopen(). When people want to prevent urlopen() from reading the file system, they often filter URLs like this to protect against SSRF. # Vulnerability PoC import urllib print urllib.urlopen('local_file:///etc/passwd').read()[:30] the result is ## # User Database # # Note t But if we use a scheme like this, urlparse() cannot parse the scheme; this is the parsed result: ParseResult(scheme='', netloc='', path='local_file:/etc/passwd', params='', query='', fragment='') def request(url): from urllib import urlopen from urlparse import urlparse result = urlparse(url) scheme = result.scheme if not scheme: return False #raise Exception("Required scheme") if scheme == 'file': return False #raise Exception("Don't open file") res = urlopen(url) content = res.read() print url, content[:30] return True assert request('file:///etc/passwd') == False assert request(' file:///etc/passwd') == False assert request('File:///etc/passwd') == False assert request('http://www.google.com') != False If they filter only file://, this mitigation can be bypassed for SSRF in the following way: assert request('local-file:/etc/passwd') == True ParseResult(scheme='local-file', netloc='', path='/etc/passwd', params='', query='', fragment='') URL parsing is also passed. # Attack scenario This is the unnecessary URL scheme ("local_file"). Even with filtering in place, an attacker can bypass it and read arbitrary files. # Root Cause URLopener::open in urllib.py, from line 203: name = 'open_' + urltype self.type = urltype name = name.replace('-', '_') #it also allows local-file if not hasattr(self, name): #passed here hasattr(URLopener, 'open_local_file') if proxy: return self.open_unknown_proxy(proxy, fullurl, data) else: return self.open_unknown(fullurl, data) try: if data is None: return getattr(self, name)(url) else: return getattr(self, name)(url, data) #return URLopener::open_local_file This may seem like only a minor trick, because people usually use a whitelist (allowing only http or https).
Even if but anyone may use blacklist like filtering file://, they will be affected with triggering SSRF ---------- components: Library (Lib) messages: 334905 nosy: push0ebp priority: normal severity: normal status: open title: Unnecessary URL scheme exists to allow file:// reading file in urllib type: security versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 03:58:48 2019 From: report at bugs.python.org (Matthias Klose) Date: Wed, 06 Feb 2019 08:58:48 +0000 Subject: [New-bugs-announce] [issue35908] build with building extension modules as builtins is broken in 3.8 Message-ID: <1549443528.5.0.674313815796.issue35908@roundup.psfhosted.org> New submission from Matthias Klose : the build with building extension modules as builtins is broken in 3.8. I assume that is some fallout from the header re-organization. It shows in the link step by having undefined references. Undefined symbols are listed below. Still looking where the behavior change was introduced, but help would be appreciated. ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(longobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Objects/longobject.c:1742: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Objects/longobject.c:2769: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(longobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Objects/longobject.c:3309: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Objects/longobject.c:3271: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: 
libpython3.8m.a(longobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Objects/unicodeobject.c:3766: undefined reference to `PyOS_FSPath' ./build-static/../Objects/unicodeobject.c:3808: undefined reference to `PyOS_FSPath' ./build-static/../Python/pylifecycle.c:2019: undefined reference to `_PyFaulthandler_Fini' /usr/bin/ld: ./build-static/../Python/pylifecycle.c:2019: undefined reference to `_PyFaulthandler_Fini' ./build-static/../Python/pylifecycle.c:1091: undefined reference to `PyOS_FiniInterrupts' /usr/bin/ld: ./build-static/../Python/pylifecycle.c:1143: undefined reference to `_PyTraceMalloc_Fini' /usr/bin/ld: ./build-static/../Python/pylifecycle.c:1152: undefined reference to `_PyFaulthandler_Fini' ./build-static/../Python/pylifecycle.c:860: undefined reference to `_PyFaulthandler_Init' ./build-static/../Python/pylifecycle.c:2150: undefined reference to `PyOS_InitInterrupts' ./build-static/../Python/pylifecycle.c:877: undefined reference to `_PyTraceMalloc_Init' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(bytesobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Python/ceval.c:383: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/errors.c:497: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Python/errors.c:497: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Python/errors.c:497: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Python/errors.c:497: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(object.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Objects/object.c:486: undefined reference to `PyErr_CheckSignals' ./build-static/../Objects/object.c:533: undefined reference to `PyErr_CheckSignals' ./build-static/../Objects/object.c:342: undefined reference to `PyErr_CheckSignals' ./build-static/../Objects/object.c:2132: undefined reference to `_PyMem_DumpTraceback' ./build-static/../Objects/obmalloc.c:2392: undefined reference to `_PyMem_DumpTraceback' /usr/bin/ld: ./build-static/../Objects/obmalloc.c:2392: undefined reference to `_PyMem_DumpTraceback' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' 
./build-static/../Python/marshal.c:308: undefined reference to `_Py_hashtable_get_entry' /usr/bin/ld: ./build-static/../Python/marshal.c:326: undefined reference to `_Py_hashtable_set' /usr/bin/ld: ./build-static/../Python/marshal.c:308: undefined reference to `_Py_hashtable_get_entry' ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_compare_direct' /usr/bin/ld: ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_hash_ptr' /usr/bin/ld: ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_new' ./build-static/../Python/marshal.c:597: undefined reference to `_Py_hashtable_foreach' /usr/bin/ld: ./build-static/../Python/marshal.c:598: undefined reference to `_Py_hashtable_destroy' ./build-static/../Python/marshal.c:326: undefined reference to `_Py_hashtable_set' ./build-static/../Python/marshal.c:597: undefined reference to `_Py_hashtable_foreach' /usr/bin/ld: ./build-static/../Python/marshal.c:598: undefined reference to `_Py_hashtable_destroy' ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_compare_direct' /usr/bin/ld: ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_hash_ptr' /usr/bin/ld: ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_new' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Python/bootstrap_hash.c:175: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/traceback.c:572: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/fileutils.c:1270: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/fileutils.c:1442: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/fileutils.c:1506: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: libpython3.8m.a(fileutils.o):./build-static/../Python/fileutils.c:1560: more undefined references to `PyErr_CheckSignals' follow ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Modules/_posixsubprocess.c:687: undefined reference to `PyOS_BeforeFork' /usr/bin/ld: ./build-static/../Modules/_posixsubprocess.c:725: undefined reference to `PyOS_AfterFork_Parent' /usr/bin/ld: ./build-static/../Modules/_posixsubprocess.c:705: undefined reference to `PyOS_AfterFork_Child' ./build-static/../Modules/fcntlmodule.c:422: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/fcntlmodule.c:296: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/fcntlmodule.c:83: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Modules/fcntlmodule.c:104: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/grpmodule.c:75: undefined reference to `_PyLong_FromGid' ./build-static/../Modules/grpmodule.c:107: undefined reference to `_Py_Gid_Converter' /usr/bin/ld: ./build-static/../Modules/grpmodule.c:120: undefined reference to 
`_Py_Gid_Converter' /usr/bin/ld: ./build-static/../Modules/grpmodule.c:169: undefined reference to `_PyLong_FromGid' ./build-static/../Modules/selectmodule.c:638: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/selectmodule.c:1563: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/selectmodule.c:332: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Modules/selectmodule.c:332: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/socketmodule.c:903: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: libpython3.8m.a(socketmodule.o):./build-static/../Modules/socketmodule.c:856: more undefined references to `PyErr_CheckSignals' follow ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(complexobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Python/bltinmodule.c:2071: undefined reference to `PyErr_CheckSignals' ./build-static/../Parser/myreadline.c:80: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Parser/myreadline.c:88: undefined reference to `PyOS_InterruptOccurred' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(bytesobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Objects/longobject.c:1742: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Objects/longobject.c:2769: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(longobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Objects/longobject.c:3309: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Objects/longobject.c:3271: undefined reference to `PyErr_CheckSignals' 
./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(longobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Objects/unicodeobject.c:3766: undefined reference to `PyOS_FSPath' ./build-static/../Objects/unicodeobject.c:3808: undefined reference to `PyOS_FSPath' ./build-static/../Python/pylifecycle.c:2019: undefined reference to `_PyFaulthandler_Fini' /usr/bin/ld: ./build-static/../Python/pylifecycle.c:2019: undefined reference to `_PyFaulthandler_Fini' ./build-static/../Python/pylifecycle.c:1091: undefined reference to `PyOS_FiniInterrupts' /usr/bin/ld: ./build-static/../Python/pylifecycle.c:1143: undefined reference to `_PyTraceMalloc_Fini' /usr/bin/ld: ./build-static/../Python/pylifecycle.c:1152: undefined reference to `_PyFaulthandler_Fini' ./build-static/../Python/pylifecycle.c:860: undefined reference to `_PyFaulthandler_Init' ./build-static/../Python/pylifecycle.c:2150: undefined reference to `PyOS_InitInterrupts' ./build-static/../Python/pylifecycle.c:877: undefined reference to `_PyTraceMalloc_Init' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Python/ceval.c:383: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/errors.c:497: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Python/errors.c:497: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Python/errors.c:497: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Python/errors.c:497: undefined reference to `PyErr_CheckSignals' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(floatobject.o):./build-static/../Objects/floatobject.c:124: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Objects/object.c:486: undefined reference to `PyErr_CheckSignals' ./build-static/../Objects/object.c:533: undefined reference to `PyErr_CheckSignals' ./build-static/../Objects/object.c:342: undefined reference to `PyErr_CheckSignals' ./build-static/../Objects/object.c:2132: undefined reference to `_PyMem_DumpTraceback' ./build-static/../Objects/obmalloc.c:2392: undefined reference to `_PyMem_DumpTraceback' /usr/bin/ld: 
./build-static/../Objects/obmalloc.c:2392: undefined reference to `_PyMem_DumpTraceback' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Python/marshal.c:308: undefined reference to `_Py_hashtable_get_entry' /usr/bin/ld: ./build-static/../Python/marshal.c:326: undefined reference to `_Py_hashtable_set' /usr/bin/ld: ./build-static/../Python/marshal.c:308: undefined reference to `_Py_hashtable_get_entry' ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_compare_direct' /usr/bin/ld: ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_hash_ptr' /usr/bin/ld: ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_new' ./build-static/../Python/marshal.c:597: undefined reference to `_Py_hashtable_foreach' /usr/bin/ld: ./build-static/../Python/marshal.c:598: undefined reference to `_Py_hashtable_destroy' ./build-static/../Python/marshal.c:326: undefined reference to `_Py_hashtable_set' ./build-static/../Python/marshal.c:597: undefined reference to `_Py_hashtable_foreach' /usr/bin/ld: ./build-static/../Python/marshal.c:598: undefined reference to `_Py_hashtable_destroy' ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_compare_direct' /usr/bin/ld: ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_hash_ptr' /usr/bin/ld: ./build-static/../Python/marshal.c:571: undefined reference to `_Py_hashtable_new' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Python/bootstrap_hash.c:175: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/traceback.c:572: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/fileutils.c:1270: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/fileutils.c:1442: undefined reference to `PyErr_CheckSignals' ./build-static/../Python/fileutils.c:1506: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: libpython3.8m.a(fileutils.o):./build-static/../Python/fileutils.c:1560: more undefined references to `PyErr_CheckSignals' follow ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Modules/_posixsubprocess.c:687: undefined reference to `PyOS_BeforeFork' /usr/bin/ld: ./build-static/../Modules/_posixsubprocess.c:725: undefined reference to `PyOS_AfterFork_Parent' /usr/bin/ld: ./build-static/../Modules/_posixsubprocess.c:705: undefined reference to `PyOS_AfterFork_Child' ./build-static/../Modules/fcntlmodule.c:422: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/fcntlmodule.c:296: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/fcntlmodule.c:83: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Modules/fcntlmodule.c:104: undefined reference to 
`PyErr_CheckSignals' ./build-static/../Modules/grpmodule.c:75: undefined reference to `_PyLong_FromGid' ./build-static/../Modules/grpmodule.c:107: undefined reference to `_Py_Gid_Converter' /usr/bin/ld: ./build-static/../Modules/grpmodule.c:120: undefined reference to `_Py_Gid_Converter' /usr/bin/ld: ./build-static/../Modules/grpmodule.c:169: undefined reference to `_PyLong_FromGid' ./build-static/../Modules/selectmodule.c:638: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/selectmodule.c:1563: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/selectmodule.c:332: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Modules/selectmodule.c:332: undefined reference to `PyErr_CheckSignals' ./build-static/../Modules/socketmodule.c:903: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: libpython3.8m.a(socketmodule.o):./build-static/../Modules/socketmodule.c:856: more undefined references to `PyErr_CheckSignals' follow ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: ./build-static/../Include/object.h:434: undefined reference to `_PyTraceMalloc_NewReference' /usr/bin/ld: libpython3.8m.a(complexobject.o):./build-static/../Include/object.h:434: more undefined references to `_PyTraceMalloc_NewReference' follow ./build-static/../Python/bltinmodule.c:2071: undefined reference to `PyErr_CheckSignals' ./build-static/../Parser/myreadline.c:80: undefined reference to `PyErr_CheckSignals' /usr/bin/ld: ./build-static/../Parser/myreadline.c:88: undefined reference to `PyOS_InterruptOccurred' ---------- components: Build messages: 334908 nosy: doko, vstinner priority: normal severity: normal status: open title: build with building extension modules as builtins is broken in 3.8 type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 04:37:04 2019 From: report at bugs.python.org (uhei3nn9) Date: Wed, 06 Feb 2019 09:37:04 +0000 Subject: [New-bugs-announce] [issue35909] Zip Slip Vulnerability Message-ID: <1549445824.49.0.751818540829.issue35909@roundup.psfhosted.org> New submission from uhei3nn9 : As has been discovered in 06.2018 the python library is affected by the zip slip vulbnerability (meaning code execution) The affected section https://github.com/python/cpython/blob/3.7/Lib/tarfile.py has not been patched since then. Therefore it seems python has not yet fixed this vulnerability. 
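For illustration, the attack relies on archive members with names like "../../etc/cron.d/evil"; a common caller-side mitigation looks roughly like this (a sketch only; safe_extract is a hypothetical helper, not part of the tarfile module, and "untrusted.tar" is a placeholder):

import os
import tarfile

def safe_extract(tar, path="."):
    # refuse members whose resolved target would land outside the destination directory
    base = os.path.realpath(path)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(path, member.name))
        if target != base and not target.startswith(base + os.sep):
            raise ValueError("blocked path traversal in member: %r" % member.name)
    tar.extractall(path)

with tarfile.open("untrusted.tar") as tar:
    safe_extract(tar, "dest")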
Source: https://github.com/snyk/zip-slip-vulnerability ---------- components: Library (Lib) messages: 334910 nosy: uhei3nn9 priority: normal severity: normal status: open title: Zip Slip Vulnerability type: security versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 05:03:18 2019 From: report at bugs.python.org (Phil Dream) Date: Wed, 06 Feb 2019 10:03:18 +0000 Subject: [New-bugs-announce] [issue35910] Curious problem with my choice of variables Message-ID: <1549447398.9.0.00215680394136.issue35910@roundup.psfhosted.org> New submission from Phil Dream : First of all, I am not a software expert, just a hobby user, so please be indulgent. I use a Raspberry Pi 3B+ with Raspbian Lite and Python 3.5.3. In my script, I need 2 nested "while" loops, so I chose two variables, 'i' and 'j', to increment them. The script didn't work, and it took me some time to understand why (I did not think Python could play a trick on me). I needed to reset 'j' to go through the inner loop a few times, and I realized that when I initialized 'j', 'i' was also initialized !?!? Very curious, isn't it? I replaced 'i' with 'y' and the problem went away; my script works very well now. ---------- files: sirext.py messages: 334913 nosy: Phil Dream priority: normal severity: normal status: open title: Curious problem with my choice of variables type: behavior versions: Python 3.5 Added file: https://bugs.python.org/file48104/sirext.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 05:55:47 2019 From: report at bugs.python.org (Pierre Glaser) Date: Wed, 06 Feb 2019 10:55:47 +0000 Subject: [New-bugs-announce] [issue35911] add a cell constructor, and expose the cell type in Lib/types.py Message-ID: <1549450547.33.0.317079200708.issue35911@roundup.psfhosted.org> New submission from Pierre Glaser : cell objects are containers for the free variables of functions defined in a local scope. They are located in a function's __closure__ attribute (when it is not None). A cell is a very simple object, with a single (optional, e.g. the cell can be empty) attribute: cell_contents. The C/Python API provides a constructor to create such objects (PyCell_New). However, no cell.__new__ method is exposed to the pure Python user. Workarounds exist, but are hacky, and involve the creation of intermediate, unused functions (see the sketch below). Why would cell creation be useful? Because creating cells happens in pickle extension modules designed to save user-defined functions and classes (https://github.com/cloudpipe/cloudpickle) (*). These modules are dependencies of many widely-used data science frameworks (pyspark, ray, dask). Exposing a cell constructor will simplify these extensions' code bases and reduce their maintenance cost. I propose to add and expose a simple cell constructor that accepts 0 (empty cell) or 1 argument.
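The hacky workaround referred to above looks roughly like this (an illustrative sketch; make_cell is a made-up name):

def make_cell(value):
    # build a throwaway closure just to obtain a cell object holding `value`
    def inner():
        return value
    return inner.__closure__[0]

c = make_cell(42)
print(type(c))           # <class 'cell'>
print(c.cell_contents)   # 42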
I also propose to expose the cell type in Lib/types.py (as types.CellType). (*) see related issues: https://bugs.python.org/issue35900 ---------- files: cell.patch keywords: patch messages: 334924 nosy: pierreglaser, pitrou, yselivanov priority: normal severity: normal status: open title: add a cell constructor, and expose the cell type in Lib/types.py type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48105/cell.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 09:45:32 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Wed, 06 Feb 2019 14:45:32 +0000 Subject: [New-bugs-announce] [issue35912] _testembed.c fails to compile when using --with-cxx-main in the configure step Message-ID: <1549464332.95.0.70522769189.issue35912@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Programs/_testembed.c is compiled with $(MAINCC) in Makefile.pre.in, and that will use the C++ compiler if --with-cxx-main is used in the configuration step. The problem is that if the C++ compiler used is one of the more uncommon ones (like the Solaris compiler), some parts of _testembed.c and its includes fail to compile because they assume C99 features or compiler extensions. There are mainly two problems: 1) _testembed.c has gotos that cross variable definitions. The C++ standard says: If transfer of control enters the scope of any automatic variables (e.g. by jumping forward over a declaration statement), the program is ill-formed (cannot be compiled), unless all variables whose scope is entered have 1) scalar types declared without initializers 2) class types with trivial default constructors and trivial destructors declared without initializers 3) cv-qualified versions of one of the above 4) arrays of one of the above So the gotos that can be found in `dump_config_impl` in `_testembed.c` violate this rule. 2) `_testembed.c` pulls in `pystate.h`, and that header file defines a macro (_PyCoreConfig_INIT) that uses designated initializers, which are not available as an extension in all C++ compilers. The solutions that I can immediately think of are: 1) Compile _testembed.c with $(CC) instead of $(CCMAIN). 2) Change these files to make them compliant with the standard (by initializing all variables at the beginning and by using just compound literals). ---------- components: Interpreter Core messages: 334942 nosy: pablogsal priority: normal severity: normal status: open title: _testembed.c fails to compile when using --with-cxx-main in the configure step versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 10:51:04 2019 From: report at bugs.python.org (Isaac Boukris) Date: Wed, 06 Feb 2019 15:51:04 +0000 Subject: [New-bugs-announce] [issue35913] asyncore: allow handling of half closed connections Message-ID: <1549468264.52.0.429414559636.issue35913@roundup.psfhosted.org> New submission from Isaac Boukris : When recv() returns 0 we may still have data to send. Add a handler for this case, which may happen with some protocols, notably HTTP/1.0. Also, do not call recv with a buffer size of zero, to avoid an ambiguous return value (see the recv man page).
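To make the scenario concrete, this is the half-close pattern in question, shown with plain sockets rather than asyncore (an illustrative sketch):

import socket

client, server = socket.socketpair()             # stand-ins for the two ends of a connection
client.sendall(b'GET / HTTP/1.0\r\n\r\n')
client.shutdown(socket.SHUT_WR)                  # client half-closes: no more request data

print(server.recv(1024))                         # the request
print(server.recv(1024))                         # b'' -- EOF on the read side...
server.sendall(b'HTTP/1.0 200 OK\r\n\r\nhello')  # ...but the server still has data to send
print(client.recv(1024))                         # and the client still receives it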
---------- components: Library (Lib) messages: 334944 nosy: Isaac Boukris priority: normal severity: normal status: open title: asyncore: allow handling of half closed connections type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 10:54:42 2019 From: report at bugs.python.org (Xiang Gao) Date: Wed, 06 Feb 2019 15:54:42 +0000 Subject: [New-bugs-announce] [issue35914] [2.7] PyStructSequence objects not behaving like namedtuple Message-ID: <1549468482.77.0.451102979772.issue35914@roundup.psfhosted.org> New submission from Xiang Gao : Related: https://bugs.python.org/issue1820 On issue 1820, a bunch of improvements were made to PyStructSequence to make it behave like a namedtuple. These improvements were not ported to Python 2, which makes it troublesome to write Python 2/3 compatible code. See also: https://github.com/pytorch/pytorch/pull/15429#discussion_r253205020 ---------- components: Interpreter Core messages: 334946 nosy: Xiang Gao priority: normal severity: normal status: open title: [2.7] PyStructSequence objects not behaving like namedtuple type: behavior versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 11:17:37 2019 From: report at bugs.python.org (Ben Spiller) Date: Wed, 06 Feb 2019 16:17:37 +0000 Subject: [New-bugs-announce] [issue35915] re.search livelock/hang, searching for patterns starting .* in a large string Message-ID: <1549469857.93.0.911634458861.issue35915@roundup.psfhosted.org> New submission from Ben Spiller : These work fine and return instantly: python -c "import re; re.compile('.*x').match('y'*(1000*100))" python -c "import re; re.compile('x').search('y'*(1000*100))" python -c "import re; re.compile('.*x').search('y'*(1000*10))" This hangs / freezes / livelocks indefinitely, with lots of CPU usage: python -c "import re; re.compile('.*x').search('y'*(1000*100))" Admittedly, performing a search() with a pattern starting with .* isn't useful, however it's worth fixing as: - it's easily done by inexperienced developers, or users interacting with code that's far removed from the actual regex call - the failure mode of hanging forever (with the GIL held, of course) is quite severe (it took us a lot of debugging with gdb before we figured out where our complex multi-threaded python program was hanging!), and - the fact that the behaviour is different based on the length of the string being matched suggests there is some kind of underlying bug in how the buffer is handled which might also affect other, more reasonable regex use cases ---------- components: Regular Expressions messages: 334949 nosy: benspiller, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: re.search livelock/hang, searching for patterns starting .* in a large string type: crash versions: Python 2.7, Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 11:18:37 2019 From: report at bugs.python.org (DMITRY KOSHELEV) Date: Wed, 06 Feb 2019 16:18:37 +0000 Subject: [New-bugs-announce] [issue35916] 3.6.5 try/except/else/finally block executes code with typos, no errors Message-ID: <1549469917.09.0.661318828546.issue35916@roundup.psfhosted.org> New submission from DMITRY KOSHELEV : Hello dear developer!
I was playing with a try/except/else/finally block and have found a bug: inside of "else" and/or "except" I can do this: 1 + print('Why do you print me?') + 1 This prints "Why do you print me?". If I have a "finally" block with a "return" statement, no error is raised; if I don't have the "finally", nothing is printed. def foo(var): try: print("Hello") # 1 + print("Hello") except: 1 + print('Why do you print me?') + 1 else: 1 + print('Why do you print me?') + 1 finally: print("finally block") return ---------- files: bug_in_try_exceptions.py messages: 334950 nosy: dmitry_koshelev priority: normal severity: normal status: open title: 3.6.5 try/except/else/finally block executes code with typos, no errors type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48107/bug_in_try_exceptions.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 11:35:19 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Wed, 06 Feb 2019 16:35:19 +0000 Subject: [New-bugs-announce] [issue35917] multiprocessing: provide unit-tests for manager classes and shareable types Message-ID: <1549470919.94.0.961552344771.issue35917@roundup.psfhosted.org> New submission from Giampaolo Rodola' : This is a follow-up of BPO-35813 and PR-11664 and it provides unit tests for the SyncManager and SharedMemoryManager classes + all the shareable types which are supposed to be supported by them. Also, see the relevant python-dev discussion at: https://mail.python.org/pipermail/python-dev/2019-February/156235.html. In doing so I discovered a couple of issues which I will treat in a separate BPO ticket (multiprocessing.managers' dict.has_key() and Pool() appear to be broken). ---------- components: Tests messages: 334952 nosy: giampaolo.rodola priority: normal severity: normal stage: patch review status: open title: multiprocessing: provide unit-tests for manager classes and shareable types type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 12:22:15 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Wed, 06 Feb 2019 17:22:15 +0000 Subject: [New-bugs-announce] [issue35918] multiprocessing's SyncManager.dict.has_key() method is broken Message-ID: <1549473735.16.0.515862791139.issue35918@roundup.psfhosted.org> New submission from Giampaolo Rodola' : Related to BPO-35917: $ ./python Python 3.8.0a1+ (heads/master:cd90f6a369, Feb 6 2019, 17:16:10) [GCC 7.3.0] on linux >>> import multiprocessing.managers >>> m = multiprocessing.managers.SyncManager() >>> m.start() >>> d = m.dict() >>> 'has_key' in dir(d) True >>> d.has_key(1) Traceback (most recent call last): File "/home/giampaolo/cpython/Lib/multiprocessing/managers.py", line 271, in serve_client fallback_func = self.fallback_mapping[methodname] KeyError: 'has_key' ---------- messages: 334959 nosy: davin, giampaolo.rodola, jnoller, pitrou, sbt priority: normal severity: normal stage: needs patch status: open title: multiprocessing's SyncManager.dict.has_key() method is broken versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 12:36:20 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Wed, 06 Feb 2019 17:36:20 +0000 Subject: [New-bugs-announce] [issue35919] multiprocessing: shared manager Pool fails with AttributeError
Message-ID: <1549474580.7.0.974775956703.issue35919@roundup.psfhosted.org> New submission from Giampaolo Rodola' : import multiprocessing import multiprocessing.managers def f(n): return n * n def worker(pool): with pool: pool.apply_async(f, (10, )) manager = multiprocessing.managers.SyncManager() manager.start() pool = manager.Pool(processes=4) proc = multiprocessing.Process(target=worker, args=(pool, )) proc.start() proc.join() This is related to BPO-35917 and it fails with: Process Process-2: Traceback (most recent call last): File "/home/giampaolo/cpython/Lib/multiprocessing/process.py", line 302, in _bootstrap self.run() File "/home/giampaolo/cpython/Lib/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "foo.py", line 54, in worker pool.apply_async(f, (10, )) File "", line 2, in apply_async File "/home/giampaolo/cpython/Lib/multiprocessing/managers.py", line 802, in _callmethod proxytype = self._manager._registry[token.typeid][-1] AttributeError: 'NoneType' object has no attribute '_registry' ---------- messages: 334962 nosy: davin, giampaolo.rodola, pitrou priority: normal severity: normal status: open title: multiprocessing: shared manager Pool fails with AttributeError _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 14:12:57 2019 From: report at bugs.python.org (Steve Dower) Date: Wed, 06 Feb 2019 19:12:57 +0000 Subject: [New-bugs-announce] [issue35920] Windows 10 ARM32 platform support Message-ID: <1549480377.04.0.927646454693.issue35920@roundup.psfhosted.org> New submission from Steve Dower : As posted at https://mail.python.org/pipermail/python-dev/2019-February/156229.html, add support for the Windows ARM32 platform. This is related to issue33125, but we are doing ARM32 first before considering ARM64. Paul Monson (Paul.Monson at microsoft.com) is implementing the support and will be the primary contact for Windows ARM32 support. ---------- components: Windows messages: 334972 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Windows 10 ARM32 platform support type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 14:25:25 2019 From: report at bugs.python.org (Antoine Pitrou) Date: Wed, 06 Feb 2019 19:25:25 +0000 Subject: [New-bugs-announce] [issue35921] Use ccache by default Message-ID: <1549481125.53.0.646421596529.issue35921@roundup.psfhosted.org> New submission from Antoine Pitrou : While compiling CPython isn't very slow, enabling ccache if found would produce faster builds when developing. 
---------- components: Build messages: 334973 nosy: pitrou priority: normal severity: normal status: open title: Use ccache by default type: resource usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 15:33:03 2019 From: report at bugs.python.org (Joseph Myers) Date: Wed, 06 Feb 2019 20:33:03 +0000 Subject: [New-bugs-announce] [issue35922] robotparser crawl_delay and request_rate do not work with no matching entry Message-ID: <1549485183.59.0.634497522614.issue35922@roundup.psfhosted.org> New submission from Joseph Myers : RobotFileParser.crawl_delay and RobotFileParser.request_rate raise AttributeError for a robots.txt with no matching entry for the given user-agent, including no default entry, rather than returning None, which would be correct according to the documentation. E.g.: >>> from urllib.robotparser import RobotFileParser >>> parser = RobotFileParser() >>> parser.parse([]) >>> parser.crawl_delay('example') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python3.6/urllib/robotparser.py", line 182, in crawl_delay return self.default_entry.delay AttributeError: 'NoneType' object has no attribute 'delay' ---------- components: Library (Lib) messages: 334982 nosy: joseph_myers priority: normal severity: normal status: open title: robotparser crawl_delay and request_rate do not work with no matching entry type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 17:00:01 2019 From: report at bugs.python.org (Nina Zakharenko) Date: Wed, 06 Feb 2019 22:00:01 +0000 Subject: [New-bugs-announce] [issue35923] Update the BuiltinImporter in importlib to use loader._ORIGIN instead of a hardcoded value Message-ID: <1549490401.35.0.0798819591421.issue35923@roundup.psfhosted.org> New submission from Nina Zakharenko : Update the BuiltinImporter in importlib to set the origin from the shared `loader._ORIGIN` attribute instead of using the hard-coded value of 'built-in', in order to match the functionality of FrozenImporter. The FrozenImporter was updated to use this attribute in PR GH-11732 (https://github.com/python/cpython/pull/11732) ---------- assignee: nnja components: Interpreter Core messages: 334989 nosy: barry, nnja priority: low severity: normal status: open title: Update the BuiltinImporter in importlib to use loader._ORIGIN instead of a hardcoded value type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 19:02:21 2019 From: report at bugs.python.org (Josiah Ulfers) Date: Thu, 07 Feb 2019 00:02:21 +0000 Subject: [New-bugs-announce] [issue35924] curses segfault resizing window Message-ID: <1549497741.38.0.310308715457.issue35924@roundup.psfhosted.org> New submission from Josiah Ulfers : To provoke a segmentation fault, run the attached script, then grab the top or bottom edge of the window. Move it down or up until it overlaps the box. You might need to wiggle the edge a little, but it's reliably reproducible.
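The attached cursesfault.py is not included in this archive; a minimal script along these lines (an approximation reconstructed from the traceback below, not the original attachment) has the same shape:

import curses

box = '+' + '-' * 20 + '+\n' + ('|' + ' ' * 20 + '|\n') * 5 + '+' + '-' * 20 + '+\n'

def main(w):
    while True:
        w.erase()
        w.addstr(0, 0, box)   # raises _curses.error once the window is too small
        w.refresh()
        w.getch()             # returns curses.KEY_RESIZE when the terminal is resized

curses.wrapper(main)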
Expected error, which is what happens when dragging the left or right edge instead of the top or bottom: Traceback (most recent call last): File "cursesfault.py", line 12, in curses.wrapper(main) File "/usr/lib64/python3.6/curses/__init__.py", line 94, in wrapper return func(stdscr, *args, **kwds) File "cursesfault.py", line 9, in main w.addstr(0, 0, box) _curses.error: addwstr() returned ERR Actual error message varies a little. It's either: *** Error in `python3': corrupted size vs. prev_size: 0x000055b3055ba820 *** Aborted (core dumped) Or: *** Error in `python3': double free or corruption (!prev): 0x000055b61e1ffbb0 *** Aborted (core dumped) Or: *** Error in `python': malloc(): memory corruption: 0x0000564907a5a4f0 *** Aborted (core dumped) Possibly relates to issue15581 --- Python 2.7.14 and 3.6.5 OpenSUSE 15.0 KDE Plasma 5.12.6 uname -a Linux ... 4.12.14-lp150.12.45-default #1 SMP Mon Jan 14 20:29:59 UTC 2019 (7a62739) x86_64 x86_64 x86_64 GNU/Linux ---------- components: Extension Modules files: cursesfault.py messages: 334991 nosy: Josiah Ulfers priority: normal severity: normal status: open title: curses segfault resizing window type: crash versions: Python 2.7, Python 3.6 Added file: https://bugs.python.org/file48108/cursesfault.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 20:45:26 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Thu, 07 Feb 2019 01:45:26 +0000 Subject: [New-bugs-announce] [issue35925] test_httplib test_nntplib test_ssl fail on ARMv7 Ubuntu 3.7 and ARMv7 Ubuntu 3.x buildbots Message-ID: <1549503926.32.0.191493141556.issue35925@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Example failures https://buildbot.python.org/all/#/builders/117 https://buildbot.python.org/all/#/builders/106 ====================================================================== ERROR: test_networked_good_cert (test.test_httplib.HTTPSTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/test/test_httplib.py", line 1629, in test_networked_good_cert h.request('GET', '/') File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1229, in request self._send_request(method, url, body, headers, encode_chunked) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1275, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1224, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1016, in _send_output self.send(msg) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 956, in send self.connect() File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1391, in connect self.sock = self._context.wrap_socket(self.sock, File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 405, in wrap_socket return self.sslsocket_class._create( File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 853, in _create self.do_handshake() File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 1117, in do_handshake 
self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: EE certificate key too weak (_ssl.c:1055) ---------------------------------------------------------------------- Ran 105 tests in 2.477s Got an error: [SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:1055) Got an error: [SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:1055) Got an error: [SSL: SSLV3_ALERT_BAD_CERTIFICATE] sslv3 alert bad certificate (_ssl.c:1055) test_local_bad_hostname (test.test_httplib.HTTPSTest) ... server (('127.0.0.1', 41921):41921 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)): [06/Feb/2019 06:22:07] code 404, message File not found server (('127.0.0.1', 41921):41921 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)): [06/Feb/2019 06:22:07] "GET /nonexistent HTTP/1.1" 404 - server (('127.0.0.1', 41921):41921 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)): [06/Feb/2019 06:22:07] code 404, message File not found server (('127.0.0.1', 41921):41921 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)): [06/Feb/2019 06:22:07] "GET /nonexistent HTTP/1.1" 404 - stopping HTTPS server joining HTTPS thread ok test_local_good_hostname (test.test_httplib.HTTPSTest) ... server (('127.0.0.1', 38877):38877 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)): [06/Feb/2019 06:22:07] code 404, message File not found server (('127.0.0.1', 38877):38877 ('TLS_AES_256_GCM_SHA384', 'TLSv1.3', 256)): [06/Feb/2019 06:22:07] "GET /nonexistent HTTP/1.1" 404 - stopping HTTPS server joining HTTPS thread ok Got an error: [SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca (_ssl.c:1055) test_local_unknown_cert (test.test_httplib.HTTPSTest) ... stopping HTTPS server joining HTTPS thread ok Multiple SSL failures, also old commits that previously succeeded fail now. This seems something in the buildbot itself. Gregory, do you know if something SLL related was upgraded/modify in the gps-ubuntu-exynos5-armv7l worker? 
---------- components: Tests messages: 334996 nosy: gregory.p.smith, pablogsal priority: normal severity: normal status: open title: test_httplib test_nntplib test_ssl fail on ARMv7 Ubuntu 3.7 and ARMv7 Ubuntu 3.x buildbots versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 21:53:23 2019 From: report at bugs.python.org (Paul Monson) Date: Thu, 07 Feb 2019 02:53:23 +0000 Subject: [New-bugs-announce] [issue35926] Need openssl 1.1.1 support on Windows for ARM and ARM64 Message-ID: <1549508003.09.0.398179114845.issue35926@roundup.psfhosted.org> New submission from Paul Monson : Need code and test changes to match https://bugs.python.org/issue35740 ---------- assignee: christian.heimes components: SSL, Windows messages: 334998 nosy: Paul Monson, christian.heimes, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Need openssl 1.1.1 support on Windows for ARM and ARM64 type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 6 23:38:30 2019 From: report at bugs.python.org (ADataGman) Date: Thu, 07 Feb 2019 04:38:30 +0000 Subject: [New-bugs-announce] [issue35927] Intra-package References Documentation Incomplete Message-ID: <1549514310.42.0.442993756904.issue35927@roundup.psfhosted.org> New submission from ADataGman : Attempting to follow https://docs.python.org/3.6/tutorial/modules.html#intra-package-references I was unable to recreate the intra-package reference as described. "For example, if the module sound.filters.vocoder needs to use the echo module in the sound.effects package, it can use from sound.effects import echo." Creating the file structure described in https://docs.python.org/3.6/tutorial/modules.html#packages, with empty __init__.py files at all levels, or with __all__ defined as containing relevant file names, results in "No module named 'sound'". If I try to run this using "from ..effects import echo" then it results in "attempted relative import beyond top-level package". At least one other user has run into this issue with this stack overflow post: https://stackoverflow.com/questions/53109627/python-intra-package-reference-doesnt-work-at-all ---------- assignee: docs at python components: Documentation files: sound.zip messages: 335002 nosy: ADataGman, docs at python priority: normal severity: normal status: open title: Intra-package References Documentation Incomplete type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48109/sound.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 07:30:32 2019 From: report at bugs.python.org (Palle Ravn) Date: Thu, 07 Feb 2019 12:30:32 +0000 Subject: [New-bugs-announce] [issue35928] socket makefile read-write discards received data Message-ID: <1549542632.16.0.548779418887.issue35928@roundup.psfhosted.org> New submission from Palle Ravn : Using socket.makefile in read-write mode had a bug introduced between version 3.6.6 and 3.6.7. The same bug is present in version 3.7.x. The below code example will behave very differently between 3.6.6 and 3.6.7. It's based on the echo-server example from the docs. 
import socket HOST = '127.0.0.1' PORT = 0 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind((HOST, PORT)) print(f'Waiting for connection on port {s.getsockname()[1]}') s.listen(1) conn, addr = s.accept() print(f'Connected by {addr}') with conn: f = conn.makefile(mode='rw') while True: m = f.readline() print(f'msg: {m!r}') if not m: exit(0) f.write(m) f.flush() Python 3.6.7: Sending the string "Hello\nYou\n" will only print "Hello\n" and also only return "Hello\n" to the client. Removing the lines with f.write(m) and f.flush() and both "Hello\n" and "You\n" will be returned to the client. It's like the call to f.write() somehow empties the read buffer. Python 3.6.6: Sending "Hello\nYou\n" will return "Hello\n" and "You\n" to the client without any modifications to the above code. ---------- components: IO messages: 335017 nosy: pravn priority: normal severity: normal status: open title: socket makefile read-write discards received data type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 08:57:07 2019 From: report at bugs.python.org (Rohit travels and tours) Date: Thu, 07 Feb 2019 13:57:07 +0000 Subject: [New-bugs-announce] [issue35929] rtat.net Message-ID: <1549547827.35.0.165428804268.issue35929@roundup.psfhosted.org> Change by Rohit travels and tours : ---------- assignee: docs at python components: 2to3 (2.x to 3.x conversion tool), Build, Demos and Tools, Distutils, Documentation, Extension Modules, FreeBSD, Library (Lib), SSL, email, macOS nosy: barry, docs at python, dstufft, eric.araujo, koobs, ned.deily, r.david.murray, ronaldoussoren, roufique7 priority: normal severity: normal status: open title: rtat.net type: resource usage versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 09:25:30 2019 From: report at bugs.python.org (=?utf-8?b?SmVzw7pzIENlYSBBdmnDs24=?=) Date: Thu, 07 Feb 2019 14:25:30 +0000 Subject: [New-bugs-announce] [issue35930] Raising an exception raised in a "future" instance will create reference cycles Message-ID: <1549549530.21.0.398741964575.issue35930@roundup.psfhosted.org> New submission from Jes?s Cea Avi?n : Try this in a terminal: """ import gc import concurrent.futures executor = concurrent.futures.ThreadPoolExecutor(999) def a(): 1/0 future=executor.submit(a) future.result() # An exception is raised here. That is normal gc.set_debug(gc.DEBUG_SAVEALL) gc.collect() gc.garbage """ You will see (python 3.7) that 23 objects are collected when cleaning the cycle. The problem is the attribute "future._exception". If the exception provided by the "future" is raised somewhere else, we will have reference cycles because we have the same exception/traceback in two different places in the traceback framestack. I commonly do this in my code: """ try: future.result() # This will raise an exception if the future did it except Exception: ... some clean up ... raise # Propagate the "future" exception """ This approach will create reference cycles. They will eventually cleaned up, but I noticed this issue because the cycle clean up phase was touching big objects with many references but unused for a long time, so they were living in the SWAP. 
The cycle collection was hugely slow because of this and the interpreter is completely stopped until done. Not sure about what to do about this. I am currently doing something like: """ try: future.result() # This will raise an exception if the future did it except Exception: if future.done(): del future._exception raise # Propagate the exception """ I am breaking the cycle manually. I do not use "future.set_exception(None) because side effects like notifying waiters. I think this is a bug to be solved. Not sure how to do it cleanly. What do you think? Ideas?. ---------- messages: 335022 nosy: jcea priority: normal severity: normal status: open title: Raising an exception raised in a "future" instance will create reference cycles versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 11:02:51 2019 From: report at bugs.python.org (daniel hahler) Date: Thu, 07 Feb 2019 16:02:51 +0000 Subject: [New-bugs-announce] [issue35931] pdb: "debug print(" crashes with SyntaxError Message-ID: <1549555371.45.0.73616044825.issue35931@roundup.psfhosted.org> New submission from daniel hahler : `debug print(` will make pdb crash with a SyntaxError: % python -c '__import__("pdb").set_trace()' --Return-- > (1)()->None (Pdb) print( *** SyntaxError: unexpected EOF while parsing (Pdb) debug print( ENTERING RECURSIVE DEBUGGER Traceback (most recent call last): File "", line 1, in File "/usr/lib64/python3.7/bdb.py", line 92, in trace_dispatch return self.dispatch_return(frame, arg) File "/usr/lib64/python3.7/bdb.py", line 151, in dispatch_return self.user_return(frame, arg) File "/usr/lib64/python3.7/pdb.py", line 293, in user_return self.interaction(frame, None) File "/usr/lib64/python3.7/pdb.py", line 352, in interaction self._cmdloop() File "/usr/lib64/python3.7/pdb.py", line 321, in _cmdloop self.cmdloop() File "/usr/lib64/python3.7/cmd.py", line 138, in cmdloop stop = self.onecmd(line) File "/usr/lib64/python3.7/pdb.py", line 418, in onecmd return cmd.Cmd.onecmd(self, line) File "/usr/lib64/python3.7/cmd.py", line 217, in onecmd return func(arg) File "/usr/lib64/python3.7/pdb.py", line 1099, in do_debug sys.call_tracing(p.run, (arg, globals, locals)) File "/usr/lib64/python3.7/bdb.py", line 582, in run cmd = compile(cmd, "", "exec") File "", line 1 print( ^ SyntaxError: unexpected EOF while parsing ---------- components: Library (Lib) messages: 335025 nosy: blueyed priority: normal severity: normal status: open title: pdb: "debug print(" crashes with SyntaxError versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 12:33:27 2019 From: report at bugs.python.org (Sateesh Kumar) Date: Thu, 07 Feb 2019 17:33:27 +0000 Subject: [New-bugs-announce] [issue35932] Interpreter gets stuck while applying a compiled regex pattern Message-ID: <1549560807.38.0.703809508219.issue35932@roundup.psfhosted.org> New submission from Sateesh Kumar : The python interpreter gets stuck while applying a compiled regex pattern against a given string. The regex matching doesn't get stuck if uncompiled regex pattern is used, or if the flag "re.IGNORECASE" is not used for regex match. Below code snippets gives more details. 
* Variant-1 # Python process gets stuck ~/workbench$ cat match.py import re pattern = "^[_a-z0-9-]+([\.'_a-z0-9-]+)*@[a-z0-9]+([\.a-z0-9-]+)*(\.[a-z]{2,4})$" compiled_pattern = re.compile(pattern, re.IGNORECASE) val = "Z230-B900_X-Suite_Migration_Shared_Volume" re.match(compiled_pattern, val) ~/workbench$ python match.py ^^^ The interpreter gets stuck. * Variant-2 (Using uncompiled pattern) # python interpreter doesn't get stuck ~/workbench$ cat match.py import re pattern = "^[_a-z0-9-]+([\.'_a-z0-9-]+)*@[a-z0-9]+([\.a-z0-9-]+)*(\.[a-z]{2,4})$" compiled_pattern = re.compile(pattern, re.IGNORECASE) val = "Z230-B900_X-Suite_Migration_Shared_Volume" re.match(pattern, val) * Variant-3 (Using compiled pattern, but without flag re.IGNORECASE) # Python interpreter doesn't get stuck ~/workbench$ cat match.py import re pattern = "^[_a-z0-9-]+([\.'_a-z0-9-]+)*@[a-z0-9]+([\.a-z0-9-]+)*(\.[a-z]{2,4})$" compiled_pattern = re.compile(pattern) val = "Z230-B900_X-Suite_Migration_Shared_Volume" re.match(compiled_pattern, val) # Platform details ~/workbench$ python -V Python 2.7.12 ~/workbench$ uname -a Linux ubuntu16-template 4.4.0-59-generic #80-Ubuntu SMP Fri Jan 6 17:47:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Though above details are from Python/2.7 I see similar beahviour with Python/3.5.2 too. ---------- files: gdb_trace messages: 335029 nosy: Sateesh Kumar priority: normal severity: normal status: open title: Interpreter gets stuck while applying a compiled regex pattern type: crash versions: Python 2.7 Added file: https://bugs.python.org/file48110/gdb_trace _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 12:55:22 2019 From: report at bugs.python.org (Pierre Glaser) Date: Thu, 07 Feb 2019 17:55:22 +0000 Subject: [New-bugs-announce] [issue35933] python doc does not say that the state kwarg in Pickler.save_reduce can be a tuple (and not only a dict) Message-ID: <1549562122.06.0.315178283617.issue35933@roundup.psfhosted.org> New submission from Pierre Glaser : Hello all, This 16-year old commit (*) allows an object's state to be updated using its slots instead of its __dict__ at unpickling time. To use this functionality, the state keyword-argument of Pickler.save_reduce (which maps to the third item of the tuple returned by __reduce__) should be a length-2 tuple. As far as I can tell, this is not mentioned in the documentation (**). I suggest having the docs updated. What do you think? 
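A minimal sketch of the two-item state in question (the class is invented purely for illustration): for a class with both __slots__ and a __dict__, the default reduction already produces a (dict_state, slots_state) pair, and pickle restores it without any __setstate__:

import pickle

class C:
    __slots__ = ("a", "__dict__")   # one slot plus a regular __dict__
    def __init__(self):
        self.a = 1                  # stored in the slot
        self.b = 2                  # stored in __dict__

state = C().__reduce_ex__(2)[2]
print(state)                        # ({'b': 2}, {'a': 1}) -- the length-2 tuple

restored = pickle.loads(pickle.dumps(C()))
print(restored.a, restored.b)       # 1 2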
(*) https://github.com/python/cpython/commit/ac5b5d2e8b849c499d323b0263ace22e56b4f0d9 (**) https://docs.python.org/3.8/library/pickle.html#object.__reduce__ ---------- assignee: docs at python components: Documentation messages: 335031 nosy: alexandre.vassalotti, docs at python, pierreglaser, pitrou, serhiy.storchaka priority: normal severity: normal status: open title: python doc does not say that the state kwarg in Pickler.save_reduce can be a tuple (and not only a dict) versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 14:15:02 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Thu, 07 Feb 2019 19:15:02 +0000 Subject: [New-bugs-announce] [issue35934] Add socket.bind_socket() utility function Message-ID: <1549566902.79.0.677687282333.issue35934@roundup.psfhosted.org> New submission from Giampaolo Rodola' : The main point of this patch is to automatize all the necessary tasks which are usually involved when creating a server socket, amongst which: * determining the right family based on address, similarly to socket.create_connection() * whether to use SO_REUSEADDR depending on the platform * set AI_PASSIVE flag for getaddrinfo() * set a more optimal default backlog This is somewhat related to issue17561 which I prefer to leave pending for now (need to think about it more carefully). issue17561 is complementary to this one so it appears it can be integrated later (or never) without altering the base functionality implemented in here. ---------- components: Library (Lib) messages: 335034 nosy: asvetlov, cheryl.sabella, giampaolo.rodola, jaraco, josiah.carlson, loewis, neologix, pitrou priority: normal severity: normal stage: patch review status: open title: Add socket.bind_socket() utility function versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 20:12:57 2019 From: report at bugs.python.org (Chris Billington) Date: Fri, 08 Feb 2019 01:12:57 +0000 Subject: [New-bugs-announce] [issue35935] threading.Event().wait() not interruptable with Ctrl-C on Windows Message-ID: <1549588377.49.0.765820535798.issue35935@roundup.psfhosted.org> New submission from Chris Billington : I'm experiencing that the following short program: import threading event = threading.Event() event.wait() Cannot be interrupted with Ctrl-C on Python 2.7.15 or 3.7.1 on Windows 10 (using the Anaconda Python distribution). However, if the wait is given a timeout: import threading event = threading.Event() while True: if event.wait(10000): break then this is interruptable on Python 2.7.15, but is still uninterruptible on Python 3.7.1. ---------- components: Windows messages: 335049 nosy: Chris Billington, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: threading.Event().wait() not interruptable with Ctrl-C on Windows type: behavior versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 7 22:13:18 2019 From: report at bugs.python.org (Brandt Bucher) Date: Fri, 08 Feb 2019 03:13:18 +0000 Subject: [New-bugs-announce] [issue35936] Give modulefinder some much-needed updates. 
Message-ID: <1549595598.95.0.389205049029.issue35936@roundup.psfhosted.org> Change by Brandt Bucher : ---------- components: Library (Lib) nosy: brandtbucher priority: normal severity: normal status: open title: Give modulefinder some much-needed updates. type: enhancement versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 00:36:55 2019 From: report at bugs.python.org (Dan Snider) Date: Fri, 08 Feb 2019 05:36:55 +0000 Subject: [New-bugs-announce] [issue35937] Add instancemethod to types.py Message-ID: <1549604215.56.0.840022461634.issue35937@roundup.psfhosted.org> New submission from Dan Snider : Having the ability to add binding behavior to any callable is an incredibly useful feature if one knows how to take advantage of it, but unfortunately, nothing that currently (publicly) exists comes close to filling the gap `instancemethod`s secrecy leaves. There really isn't anything else to say about it because it's simply amazing at what it does, and publicly exposing it has zero drawback that I could possibly imagine. Here's a benchmark I just ran 3.6 (can't check 3.7 but shouldn't be significantly different): if 1: from functools import _lru_cache_wrapper, partial method = type((lambda:0).__get__(0)) for cls in object.__subclasses__(): if cls.__name__ == 'instancemethod': instancemethod = cls break del cls class Object: lrx = _lru_cache_wrapper(id, 0, 0, tuple) imx = instancemethod(id) smx = property(partial(method, id)) fnx = lambda self, f=id: f(self) >>> ob = Object() >>> stex.Compare.load_meth_call('ob', '', None, 'lrx', 'imx', 'smx', 'fnx') ------------------------------------------------------------------------ [*] Trial duration: 0.5 seconds [*] Num trials: 20 [*] Verbose: True [*] Imports: ('ob',) +-------+-----------------------+-------+-------+ +-INDEX +-INSTRUCTION + OPARG-+ STACK-+ +-------------------------------+-------+-------+ |[ 22] FOR_ITER ------------> ( 12) 1| |[ 24] STORE_FAST ----------> ( 2) 0| |[ 26] LOAD_FAST -----------> ( 0) 1| |[ 28] LOAD_ATTR -----------> ( 2) 1| |[ 30] CALL_FUNCTION -------> ( 0) 1| |[ 32] POP_TOP ----- 0| |[ 34] JUMP_ABSOLUTE -------> ( 22) 0| +-------+-------+-----------+---+-------+-------+---+ +-STATEMENT + NANOSEC-+ NUM LOOPS-+ EACH LOOP-+ +---------------+-----------+-----------+-----------+ | "ob.lrx()" -> 10.000(b) 55_902(k) 179(ns) | "ob.imx()" -> 10.000(b) 80_053(k) 125(ns) | "ob.smx()" -> 10.000(b) 48_040(k) 208(ns) | "ob.fnx()" -> 10.000(b) 42_759(k) 234(ns) ---------- components: Library (Lib) messages: 335058 nosy: bup priority: normal severity: normal status: open title: Add instancemethod to types.py type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 02:59:17 2019 From: report at bugs.python.org (=?utf-8?b?RmXImXRpbMSDIEdlb3JnZSBDxIN0xINsaW4=?=) Date: Fri, 08 Feb 2019 07:59:17 +0000 Subject: [New-bugs-announce] [issue35938] crash of METADATA file cannot be fixed by reinstall of python Message-ID: <1549612757.81.0.536418249681.issue35938@roundup.psfhosted.org> New submission from Fe?til? George C?t?lin : The pip install module crash with this error: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: 'c:\\python364\\lib\\site-packages\\traits-4.6.0.dist-info\\METADATA ' You are using pip version 18.1, however version 19.0.1 is available. 
You should consider upgrading via the 'python -m pip install --upgrade pip' command. I tried to fix it by reinstalling Python 3.6.4, but that did not work. I think the METADATA file was deleted by the antivirus. The version I used is 3.6.4, but the crash can happen on any Python version. ---------- messages: 335060 nosy: catafest priority: normal severity: normal status: open title: crash of METADATA file cannot be fixed by reinstall of python type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 03:14:43 2019 From: report at bugs.python.org (Dong-hee Na) Date: Fri, 08 Feb 2019 08:14:43 +0000 Subject: [New-bugs-announce] [issue35939] Remove urllib.parse._splittype from mimetypes.guess_type Message-ID: <1549613683.87.0.509050815243.issue35939@roundup.psfhosted.org> New submission from Dong-hee Na : Since urllib.parse.splittype is deprecated in 3.8, urllib.parse._splittype should also be removed in the future. I found that mimetypes.guess_type uses urllib.parse._splittype; it can be replaced, and doing so also looks like it solves the issue mentioned on https://github.com/python/cpython/blame/e42b705188271da108de42b55d9344642170aa2b/Lib/test/test_urllib2.py#L749 I checked that all unit tests pass with the following change: --- a/Lib/mimetypes.py +++ b/Lib/mimetypes.py @@ -114,7 +114,8 @@ class MimeTypes: but non-standard types. """ url = os.fspath(url) - scheme, url = urllib.parse._splittype(url) + p = urllib.parse.urlparse(url) + scheme, url = p.scheme, p.path if scheme == 'data': # syntax of data URLs: # dataurl := "data:" [ mediatype ] [ ";base64" ] "," data diff --git a/Lib/test/test_urllib2.py b/Lib/test/test_urllib2.py index 876fcd4199..0677390c2b 100644 --- a/Lib/test/test_urllib2.py +++ b/Lib/test/test_urllib2.py @@ -746,7 +746,7 @@ class HandlerTests(unittest.TestCase): ["foo", "bar"], "", None), ("ftp://localhost/baz.gif;type=a", "localhost", ftplib.FTP_PORT, "", "", "A", - [], "baz.gif", None), # XXX really this should guess image/gif + [], "baz.gif", "image/gif"), ]: req = Request(url) req.timeout = None I'd like to work on this issue if this proposal is accepted! Thanks as always. ---------- components: Library (Lib) messages: 335061 nosy: corona10 priority: normal severity: normal status: open title: Remove urllib.parse._splittype from mimetypes.guess_type type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 09:06:48 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 08 Feb 2019 14:06:48 +0000 Subject: [New-bugs-announce] [issue35940] multiprocessing manager tests fail in the Refleaks buildbots Message-ID: <1549634808.3.0.556124260005.issue35940@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : After PR11772, some buildbots with refleak checks fail because the tests modify the environment (leaking processes or threads or files ...). Some of the failures: https://buildbot.python.org/all/#builders/80/builds/506 https://buildbot.python.org/all/#builders/114/builds/375 https://buildbot.python.org/all/#builders/1/builds/497 == Tests result: ENV CHANGED == 409 tests OK.
10 slowest tests: - test_multiprocessing_spawn: 32 min 19 sec - test_concurrent_futures: 25 min 37 sec - test_asyncio: 24 min 56 sec - test_multiprocessing_forkserver: 13 min 57 sec - test_multiprocessing_fork: 11 min 2 sec - test_zipfile: 10 min 2 sec - test_decimal: 8 min 30 sec - test_gdb: 8 min 301 ms - test_lib2to3: 7 min 46 sec - test_buffer: 7 min 43 sec 3 tests altered the execution environment: test_multiprocessing_fork test_multiprocessing_forkserver test_multiprocessing_spawn 8 tests skipped: test_devpoll test_kqueue test_msilib test_startfile test_winconsoleio test_winreg test_winsound test_zipfile64 ---------- components: Tests messages: 335083 nosy: giampaolo.rodola, pablogsal, pitrou priority: normal severity: normal status: open title: multiprocessing manager tests fail in the Refleaks buildbots versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 09:43:29 2019 From: report at bugs.python.org (Michael Schlenker) Date: Fri, 08 Feb 2019 14:43:29 +0000 Subject: [New-bugs-announce] [issue35941] ssl.enum_certificates() regression Message-ID: <1549637009.73.0.764067041861.issue35941@roundup.psfhosted.org> New submission from Michael Schlenker : The introduction of the ReadOnly flag in the ssl.enum_certificates() function implementation has introduced a regression. The old version returned certificates for both the current user and the local system, the new function only enumerates system wide certificates and ignores the current user. The old function before Patch from https://bugs.python.org/issue25939 used a different function to open the certificate store (CertOpenStore vs. CertOpenSystemStore). Probably some of the param flags are not identical, the new code explictly lists only local system. Testing: 1. Import a self signed CA only into the 'current user' trustworthy certificates. 2. Use IE to Connect to a https:// website using that trust root. Works. 3. Try to open the website with old python and new python. Old one works, new one fails. Or just enum certificates: 1. Import a self signed CA into the current_user trusted store. 2. Compare outputs of: import ssl len(ssl.enum_certificates('ROOT')) ---------- assignee: christian.heimes components: SSL, Windows messages: 335084 nosy: christian.heimes, paul.moore, schlenk, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ssl.enum_certificates() regression type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 11:34:39 2019 From: report at bugs.python.org (=?utf-8?q?=C5=81ukasz_Langa?=) Date: Fri, 08 Feb 2019 16:34:39 +0000 Subject: [New-bugs-announce] [issue35942] posixmodule.c:path_converter() returns an invalid exception message for broken PathLike objects Message-ID: <1549643679.09.0.475002054663.issue35942@roundup.psfhosted.org> New submission from ?ukasz Langa : >>> class K: ... def __fspath__(self): ... return 1 ... 
>>> import os >>> os.stat(K()) Traceback (most recent call last): File "", line 1, in TypeError: stat: path should be string, bytes, os.PathLike or integer, not int This error message is internally inconsistent: - it suggests that the error is about the path argument whereas it's in fact about the value returned from `__fspath__()` - it hilariously states "should be integer, not int" - it claims os.PathLike is fine as a return value from `__fspath__()` whereas it's not I would advise removing the custom `__fspath__()` handling from `path_converter` and just directly using PyOS_FSPath which returns a valid error in this case (example from pypy3): >>>> class K: .... def __fspath__(self): .... return 1 .... >>>> import os >>>> os.open(K(), os.O_RDONLY) Traceback (most recent call last): File "", line 1, in TypeError: expected K.__fspath__() to return str or bytes, not int ---------- messages: 335094 nosy: lukasz.langa priority: normal severity: normal stage: needs patch status: open title: posixmodule.c:path_converter() returns an invalid exception message for broken PathLike objects type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 12:13:37 2019 From: report at bugs.python.org (Antoine Pitrou) Date: Fri, 08 Feb 2019 17:13:37 +0000 Subject: [New-bugs-announce] [issue35943] PyImport_GetModule() can return partially-initialized module Message-ID: <1549646017.22.0.395154635441.issue35943@roundup.psfhosted.org> New submission from Antoine Pitrou : PyImport_GetModule() returns whatever is in sys.modules, even if the module is still importing and therefore only partially initialized. One possibility is to reuse the optimization already done in PyImport_ImportModuleLevelObject(): /* Optimization: only call _bootstrap._lock_unlock_module() if __spec__._initializing is true. NOTE: because of this, initializing must be set *before* stuffing the new module in sys.modules. */ spec = _PyObject_GetAttrId(mod, &PyId___spec__); if (_PyModuleSpec_IsInitializing(spec)) { PyObject *value = _PyObject_CallMethodIdObjArgs(interp->importlib, &PyId__lock_unlock_module, abs_name, NULL); if (value == NULL) { Py_DECREF(spec); goto error; } Py_DECREF(value); } Py_XDECREF(spec); Issue originally mentioned in issue34572. ---------- components: Interpreter Core messages: 335097 nosy: brett.cannon, eric.snow, ncoghlan, pitrou priority: normal severity: normal stage: needs patch status: open title: PyImport_GetModule() can return partially-initialized module type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 16:03:17 2019 From: report at bugs.python.org (princebaridi) Date: Fri, 08 Feb 2019 21:03:17 +0000 Subject: [New-bugs-announce] [issue35944] Python 3.7 install error Message-ID: <1549659797.94.0.56697514118.issue35944@roundup.psfhosted.org> New submission from princebaridi : win10 x64 bit machine. 
looks like msi installer error/ ---------- components: Installation files: Python 3.7.2 (64-bit)_20190208145004_000_core_JustForMe.log messages: 335107 nosy: lasonjack priority: normal severity: normal status: open title: Python 3.7 install error type: crash versions: Python 3.7 Added file: https://bugs.python.org/file48128/Python 3.7.2 (64-bit)_20190208145004_000_core_JustForMe.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 16:16:16 2019 From: report at bugs.python.org (Nic Watson) Date: Fri, 08 Feb 2019 21:16:16 +0000 Subject: [New-bugs-announce] [issue35945] Cannot distinguish between subtask cancellation and running task cancellation Message-ID: <1549660576.62.0.347711337086.issue35945@roundup.psfhosted.org> New submission from Nic Watson : Goal: to distinguish inside a CancelledError exception handler whether the current running task is being cancelled, or another task that the current task is pending on was cancelled. Example: import asyncio async def task2func(): await asyncio.sleep(2) async def task1func(task2): try: await task2 except asyncio.CancelledError: print("I don't know if I got cancelled") async def main(): loop = asyncio.get_event_loop() task2 = loop.create_task(task2func()) task1 = loop.create_task(task1func(task2)) await asyncio.sleep(0.5) print('Cancelling first task') task1.cancel() await task1 await asyncio.sleep(0.5) task2 = loop.create_task(task2func()) task1 = loop.create_task(task1func(task2)) await asyncio.sleep(0.5) print('Cancelling second task') task2.cancel() await task1 asyncio.run(main()) If I have a task1 waiting on a task2, there's no public method (that I can tell) available in the task1func exception handler to distinguish between whether task1 or task2 were cancelled. There is an internal field task_must_cancel in the C task struct that could be used to interrogate one's own current task to see if it is being cancelled. It is not available from Python (without ctypes hacking). The Python implementation's equivalent is called _must_cancel. (I'm not sure it is a good idea to export this from an API design perspective, but it does mean a mechanism exists.) A workaround is that one can explicitly add attributes to the Python task object to communicate that a task is *starting to* be cancelled. This field would have the same purpose as the task_must_cancel field. ---------- components: asyncio messages: 335108 nosy: asvetlov, jnwatson, yselivanov priority: normal severity: normal status: open title: Cannot distinguish between subtask cancellation and running task cancellation versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 17:15:08 2019 From: report at bugs.python.org (Mark Forrer) Date: Fri, 08 Feb 2019 22:15:08 +0000 Subject: [New-bugs-announce] [issue35946] Ambiguous documentation for assert_called_with() Message-ID: <1549664108.48.0.268470561842.issue35946@roundup.psfhosted.org> New submission from Mark Forrer : The documentation for assert_called_with() is ambiguous with regard to the fact that the method only checks the most recent call. This behavior for assert_called_with() is only documented under assert_any_call(), which users are unlikely to read when making quick reference to the documentation. 
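A tiny self-contained illustration of the described behaviour, using a throwaway Mock (not code taken from the report):

from unittest import mock

m = mock.Mock()
m(1)
m(2)

m.assert_called_with(2)   # passes: only the most recent call is checked
m.assert_any_call(1)      # passes: any recorded call may match
m.assert_called_with(1)   # raises AssertionError, even though m(1) did happen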
---------- assignee: docs at python components: Documentation messages: 335112 nosy: chimaerase, docs at python priority: normal severity: normal status: open title: Ambiguous documentation for assert_called_with() versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 17:40:46 2019 From: report at bugs.python.org (Paul Monson) Date: Fri, 08 Feb 2019 22:40:46 +0000 Subject: [New-bugs-announce] [issue35947] Update libffi_msvc to current version of libffi Message-ID: <1549665646.45.0.12391127948.issue35947@roundup.psfhosted.org> New submission from Paul Monson : libffi needs to be updated to the current version for Windows builds to make it easier to add ARM support ---------- components: Windows, ctypes messages: 335115 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Update libffi_msvc to current version of libffi type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 17:42:57 2019 From: report at bugs.python.org (Paul Monson) Date: Fri, 08 Feb 2019 22:42:57 +0000 Subject: [New-bugs-announce] [issue35948] update version of libffi in cpython-sources-dep Message-ID: <1549665777.79.0.514593871654.issue35948@roundup.psfhosted.org> New submission from Paul Monson : libffi needs to be updated to the current version for Windows builds to make it easier to add ARM support ---------- components: Windows, ctypes messages: 335116 nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: update version of libffi in cpython-sources-dep type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 8 21:00:19 2019 From: report at bugs.python.org (Eric Snow) Date: Sat, 09 Feb 2019 02:00:19 +0000 Subject: [New-bugs-announce] [issue35949] Move PyThreadState into Include/internal/pycore_pystate.h Message-ID: <1549677619.84.0.322696563838.issue35949@roundup.psfhosted.org> New submission from Eric Snow : (also see issue #35886) In November Victor created the Include/cpython directory and moved a decent amount of public (but not limited) API there. This included the PyThreadState struct. I'd like to move it into the "internal" headers since it is somewhat coupled to the internal runtime implementation. The alternative is extra complexity. I ran into this while working on another issue. There are 2 problems, however, with PyThreadState which make a move potentially challenging. In fact, they could be (but probably aren't) cause for moving it back to Include/pystate.h. 1. the docs say: "The only public data member is PyInterpreterState *interp, which points to this thread?s interpreter state." 2. the struct has 6 fields that are exposed (likely unintentionally) in "stable" header files via the following macros: # Include/object.h Py_TRASHCAN_SAFE_BEGIN Py_TRASHCAN_SAFE_END Include.ceval.h Py_EnterRecursiveCall Py_LeaveRecursiveCall _Py_MakeRecCheck Py_ALLOW_RECURSION Py_END_ALLOW_RECURSION I'm not sure how that factors into the stable ABI (PyThreadState wasn't ever guarded by Py_LIMITED_API). However, keep in mind that Victor already moved PyThreadState out of the "stable" headers in November. 
I'm fine with sorting out the situation with the macros if that's okay to do. Likewise the promised field ("interp") should be solvable one way or another (e.g. remove it or use a private struct). ...or we could do nothing or (if the change in November is a problem) move it back to Include/pystate.h. ---------- components: Interpreter Core messages: 335122 nosy: eric.snow, ncoghlan, vstinner priority: normal severity: normal status: open title: Move PyThreadState into Include/internal/pycore_pystate.h type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 9 08:36:22 2019 From: report at bugs.python.org (Steve Palmer) Date: Sat, 09 Feb 2019 13:36:22 +0000 Subject: [New-bugs-announce] [issue35950] io.BufferedReader.writable is False, but io.BufferedReader.truncate does not raise OSError Message-ID: <1549719382.2.0.923720230511.issue35950@roundup.psfhosted.org> New submission from Steve Palmer : An io.BufferedReader object has an (inherited) writable method that returns False. io.IOBase states in the description of the writable method that "If False, write() and truncate() will raise OSError." However, if the BufferedReader object is constructed from a writable io.RawIOBase object, then truncate does not raise the exception. >>> import io >>> import tempfile >>> rf = tempfile.TemporaryFile(buffering=0) >>> rf <_io.FileIO name=3 mode='rb+' closefd=True> >>> bf = io.BufferedReader(rf) >>> bf.writable() False >>> bf.truncate(0) 0 Looking at the _pyio.py file, the truncate method in the _BufferedIOMixin wrapper class delegates the truncation to its raw attribute. If the raw element permits the truncation, then it will proceed regardless of the writable state of the wrapper class. I'd suggest that modifying the truncate method in _BufferedIOMixin to raise OSError (or io.UnsupportedOperation) if not self.writable() could fix this problem. ---------- components: IO messages: 335132 nosy: steverpalmer priority: normal severity: normal status: open title: io.BufferedReader.writable is False, but io.BufferedReader.truncate does not raise OSError versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 9 12:10:42 2019 From: report at bugs.python.org (Chris Jerdonek) Date: Sat, 09 Feb 2019 17:10:42 +0000 Subject: [New-bugs-announce] [issue35951] os.renames() creates directories if original name doesn't exist Message-ID: <1549732242.93.0.821054504106.issue35951@roundup.psfhosted.org> New submission from Chris Jerdonek : os.renames() creates and leaves behind the intermediate directories if the original (source) path doesn't exist.
>>> import os >>> os.mkdir('temp') >>> os.mkdir('temp/test') >>> os.renames('temp/not-exists', 'temp/test2/test3/test4') Traceback (most recent call last): File "", line 1, in File "/.../3.6.5/lib/python3.6/os.py", line 267, in renames rename(old, new) FileNotFoundError: [Errno 2] No such file or directory: 'temp/not-exists' -> 'temp/test2/test3/test4' >>> os.listdir('temp/test2') ['test3'] >>> os.listdir('temp/test2/test3') [] ---------- components: Library (Lib) messages: 335134 nosy: chris.jerdonek priority: normal severity: normal status: open title: os.renames() creates directories if original name doesn't exist type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 9 16:06:05 2019 From: report at bugs.python.org (Xavier de Gaye) Date: Sat, 09 Feb 2019 21:06:05 +0000 Subject: [New-bugs-announce] [issue35952] test.pythoninfo prints a stack trace and exits with 1 when the compiler does not exist Message-ID: <1549746365.16.0.119204181787.issue35952@roundup.psfhosted.org> New submission from Xavier de Gaye : The call to subprocess.Popen() in collect_cc() of Lib/test/pythoninfo.py raises an exception when the compiler that has been used to build python is not present on the host running python. In that case pythoninfo prints a stack trace and exits with an error. On the other side if the execution of the compiler to get its version fails, the error is silently ignored. It seems the exception should be ignored as well. ---------- components: Library (Lib) messages: 335139 nosy: xdegaye priority: normal severity: normal status: open title: test.pythoninfo prints a stack trace and exits with 1 when the compiler does not exist type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 06:48:46 2019 From: report at bugs.python.org (muhzi) Date: Sun, 10 Feb 2019 11:48:46 +0000 Subject: [New-bugs-announce] [issue35953] crosscompilation fails with clang on android Message-ID: <1549799326.05.0.946506554759.issue35953@roundup.psfhosted.org> Change by muhzi : ---------- components: Cross-Build nosy: Alex.Willmer, muhzi priority: normal severity: normal status: open title: crosscompilation fails with clang on android type: compile error versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 07:28:34 2019 From: report at bugs.python.org (Adeokkuw) Date: Sun, 10 Feb 2019 12:28:34 +0000 Subject: [New-bugs-announce] [issue35954] Incoherent type conversion in configparser Message-ID: <1549801714.53.0.77041525212.issue35954@roundup.psfhosted.org> New submission from Adeokkuw : configparser interface implicitly converts all objects to str (read_dict [sic] on line 724 of "3.7/Lib/configparser.py") while saving but not while lookup (__getitem__ on line 956). MWE: ``` config = configparser.ConfigParser() config[123] = {} print(config[123]) ``` ~> KeyError: 123 ---------- components: Library (Lib) messages: 335150 nosy: Adeokkuw priority: normal severity: normal status: open title: Incoherent type conversion in configparser type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 09:11:58 2019 From: report at bugs.python.org (Jason R. 
Coombs) Date: Sun, 10 Feb 2019 14:11:58 +0000 Subject: [New-bugs-announce] [issue35955] unittest assertEqual reports incorrect location of mismatch Message-ID: <1549807918.24.0.168325596231.issue35955@roundup.psfhosted.org> New submission from Jason R. Coombs : In [this job](https://travis-ci.org/jaraco/cmdix/jobs/491246158), a project is using assertEqual to compare two directory listings that don't match in the group. But the hint markers pointing to the mismatch are pointing at positions that match: E AssertionError: '--w-[50 chars]drwxrwxr-x 2 2000 2000 4096 2019-02-10 14:[58 chars]oo\n' != '--w-[50 chars]drwxr-xr-x 2 2000 2000 4096 2019-02-10 14:[58 chars]oo\n' E --w-r---wx 1 2000 2000 999999 2019-02-10 14:02 bar E - drwxrwxr-x 2 2000 2000 4096 2019-02-10 14:02 biz E ? --- E + drwxr-xr-x 2 2000 2000 4096 2019-02-10 14:02 biz E ? +++ E - -rw-rw-r-- 1 2000 2000 100 2019-02-10 14:02 foo E ? --- E + -rw-r--r-- 1 2000 2000 100 2019-02-10 14:02 foo E ? +++ As you can see, it's the 'group' section of the flags that differ between the left and right comparison, but the hints point at the 'user' section for the left side and the 'world' section for the right side, even though they match. I observed this on Python 3.7.1. I haven't delved deeper to see if the issue exists on 3.7.2 or 3.8. ---------- components: Library (Lib) messages: 335154 nosy: jaraco priority: normal severity: normal status: open title: unittest assertEqual reports incorrect location of mismatch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 11:33:39 2019 From: report at bugs.python.org (fabrice salvaire) Date: Sun, 10 Feb 2019 16:33:39 +0000 Subject: [New-bugs-announce] [issue35956] Sort documentation could be improved for complex sorting Message-ID: <1549816419.03.0.97544553945.issue35956@roundup.psfhosted.org> New submission from fabrice salvaire : I just implemented Graham Scan algorithm in Python 3 and have to read carefully the sort documentation. Notice this is a good algorithm for a large audience language like Python. Since Python 3, the old order function cmp is depicted as an old way to proceed. But some sorting procedure require complex order like this def sort_by_y(p0, p1): return p0.x - p1.x if (p0.y == p1.y) else p0.y - p1.y sorted(points, key=cmp_to_key(sort_by_y)) which is less natural to implement than def sort_by_y(p0, p1): return p0.x < p1.x if (p0.y == p1.y) else p0.y < p1.y sorted(points, cmp=sort_by_y) Since Python 3 we should do this points.sort(key=attrgetter('x')) points.sort(key=attrgetter('y')) But we must take care to the chaining order !!! Here we must sort first on x then on y. I think the documentation could explain much better how to perform complex sort and the performance of the Python sort algorithm. Is the old way faster than the new one ??? What about short and large array ??? What happen when we sort a zillion of short array ??? 
---------- assignee: docs at python components: Documentation messages: 335163 nosy: FabriceSalvaire, docs at python priority: normal severity: normal status: open title: Sort documentation could be improved for complex sorting type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 12:26:57 2019 From: report at bugs.python.org (=?utf-8?q?J=C3=A9r=C3=B4me_LAURENS?=) Date: Sun, 10 Feb 2019 17:26:57 +0000 Subject: [New-bugs-announce] [issue35957] Indentation explanation is unclear Message-ID: <1549819617.52.0.344509790006.issue35957@roundup.psfhosted.org> New submission from J?r?me LAURENS : https://docs.python.org/3/reference/lexical_analysis.html#indentation reads Point 1: "Tabs are replaced (from left to right) by one to eight spaces such that the total number of characters up to and including the replacement is a multiple of eight" and in the next paragraph Point 2: "Indentation is rejected as inconsistent if a source file mixes tabs and spaces in a way that makes the meaning dependent on the worth of a tab in spaces" In point 1, each tab has definitely a unique space counterpart, in point 2, tabs may have different space counterpart, which one is reliable ? The documentation should state that Point 1 concerns cPython, or at least indicate that the 8 may depend on the implementation, which then gives sense to point 2. ---------- assignee: docs at python components: Documentation messages: 335165 nosy: J?r?me LAURENS, docs at python priority: normal severity: normal status: open title: Indentation explanation is unclear type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 12:37:17 2019 From: report at bugs.python.org (Jon McMahon) Date: Sun, 10 Feb 2019 17:37:17 +0000 Subject: [New-bugs-announce] [issue35958] io.IOBase subclasses don't play nice with abc.abstractmethod Message-ID: <1549820237.63.0.597110746163.issue35958@roundup.psfhosted.org> New submission from Jon McMahon : Subclasses of io.IOBase can be instantiated with abstractmethod()s, even though ABCs are supposed to prevent this from happening. I'm guessing this has to do with io using the _io C module because the alternative pure-python implementation _pyio doesn't seem to have this issue. I'm using Python 3.6.7 >>> import _pyio >>> import io >>> import abc >>> class TestPurePython(_pyio.IOBase): ... @abc.abstractmethod ... def foo(self): ... print('Pure python implementation') ... >>> class TestCExtension(io.IOBase): ... @abc.abstractmethod ... def bar(self): ... print('C extension implementation') ... 
>>> x=TestPurePython() Traceback (most recent call last): File "", line 1, in TypeError: Can't instantiate abstract class TestPurePython with abstract methods foo >>> y=TestCExtension() >>> y.bar() C extension implementation >>> ---------- components: IO messages: 335166 nosy: Jon McMahon priority: normal severity: normal status: open title: io.IOBase subclasses don't play nice with abc.abstractmethod versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 14:08:53 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sun, 10 Feb 2019 19:08:53 +0000 Subject: [New-bugs-announce] [issue35959] math.prod(range(10)) caues segfault Message-ID: <1549825733.5.0.0745476001214.issue35959@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : math.prod introduced with issue35606 seems to segfault when zero is present on some cases like start or middle of the iterable. I couldn't find the exact cause of this. This also occurs in optimized builds. # Python information built with ./configure && make ? cpython git:(master) ./python.exe Python 3.8.0a1+ (heads/master:8a03ff2ff4, Feb 11 2019, 00:13:49) [Clang 7.0.2 (clang-700.1.81)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> # Segfaults with range(10), [0, 1, 2, 3] and [1, 0, 2, 3] ? cpython git:(master) ./python.exe -X faulthandler -c 'import math; print(math.prod(range(10)))' Fatal Python error: Floating point exception Current thread 0x00007fff7939f300 (most recent call first): File "", line 1 in [1] 40465 floating point exception ./python.exe -X faulthandler -c 'import math; print(math.prod(range(10)))' ? cpython git:(master) ./python.exe -X faulthandler -c 'import math; print(math.prod([0, 1, 2, 3]))' Fatal Python error: Floating point exception Current thread 0x00007fff7939f300 (most recent call first): File "", line 1 in [1] 40414 floating point exception ./python.exe -X faulthandler -c 'import math; print(math.prod([0, 1, 2, 3]))' ? cpython git:(master) ./python.exe -X faulthandler -c 'import math; print(math.prod([1, 0, 2, 3]))' Fatal Python error: Floating point exception Current thread 0x00007fff7939f300 (most recent call first): File "", line 1 in [1] 40425 floating point exception ./python.exe -X faulthandler -c 'import math; print(math.prod([1, 0, 2, 3]))' # No segfault when zero is at the end and floats seem to work fine. ? cpython git:(master) ./python.exe -X faulthandler -c 'import math; print(math.prod([1, 2, 3, 0]))' 0 ? cpython git:(master) ./python.exe -c 'import math; print(math.prod(map(float, range(10))))' 0.0 ---------- components: Library (Lib) messages: 335168 nosy: pablogsal, rhettinger, xtreak priority: normal severity: normal status: open title: math.prod(range(10)) caues segfault type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 20:06:54 2019 From: report at bugs.python.org (Christopher Hunt) Date: Mon, 11 Feb 2019 01:06:54 +0000 Subject: [New-bugs-announce] [issue35960] dataclasses.field does not preserve empty metadata object Message-ID: <1549847214.14.0.863197964236.issue35960@roundup.psfhosted.org> New submission from Christopher Hunt : The metadata argument to dataclasses.field is not preserved in the resulting Field.metadata attribute if the argument is a mapping with length 0. 
The docs for dataclasses.field state: > metadata: This can be a mapping or None. None is treated as an empty dict. This value is wrapped in MappingProxyType() to make it read-only, and exposed on the Field object. The docs for MappingProxyType() state: > Read-only proxy of a mapping. It provides a dynamic view on the mapping?s entries, which means that when the mapping changes, the view reflects these changes. I assumed that the mapping provided could be updated after class initialization and the changes would reflect in the field's metadata attribute. Indeed this is the case when the mapping is non-empty, but not when the mapping is initially empty. For example: $ python Python 3.8.0a1+ (heads/master:9db56fb8fa, Feb 10 2019, 19:54:10) [GCC 7.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from dataclasses import field >>> d = {} >>> v = field(metadata=d) >>> d['i'] = 1 >>> v.metadata mappingproxy({}) >>> v = field(metadata=d) >>> v.metadata mappingproxy({'i': 1}) >>> d['j'] = 2 >>> v.metadata mappingproxy({'i': 1, 'j': 2}) In my case I have a LazyDict into which I was trying to save partial(callback, field). I could not have the field before it was initialized so I tried: d = {} member: T = field(metadata=d) d['key'] = partial(callback, field) and it failed same as above. As a workaround, one can set a dummy value in the mapping prior to calling dataclasses.field and then remove/overwrite it afterwards. ---------- components: Library (Lib) messages: 335184 nosy: chrahunt priority: normal severity: normal status: open title: dataclasses.field does not preserve empty metadata object type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 10 20:25:10 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 11 Feb 2019 01:25:10 +0000 Subject: [New-bugs-announce] [issue35961] gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small Message-ID: <1549848310.82.0.602608735214.issue35961@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : I am seeing some failures in travis and some buildbots: 0:02:24 load avg: 3.30 [147/420/1] test_slice crashed (Exit code -6) -- running: test_multiprocessing_spawn (32 sec 523 ms), test_asyncio (45 sec 433 ms), test_multiprocessing_forkserver (47 sec 869 ms) Modules/gcmodule.c:110: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small Enable tracemalloc to get the memory block allocation traceback object : .BadCmp object at 0x2ab2051faef0> type : BadCmp refcount: 1 address : 0x2ab2051faef0 Fatal Python error: _PyObject_AssertFailed Current thread 0x00002ab1fe0519c0 (most recent call first): File "/home/travis/build/python/cpython/Lib/test/test_slice.py", line 107 in File "/home/travis/build/python/cpython/Lib/unittest/case.py", line 197 in handle File "/home/travis/build/python/cpython/Lib/unittest/case.py", line 782 in assertRaises File "/home/travis/build/python/cpython/Lib/test/test_slice.py", line 107 in test_cmp File "/home/travis/build/python/cpython/Lib/unittest/case.py", line 642 in run File "/home/travis/build/python/cpython/Lib/unittest/case.py", line 702 in __call__ File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 122 in run File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 84 in __call__ File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 122 in run File 
"/home/travis/build/python/cpython/Lib/unittest/suite.py", line 84 in __call__ File "/home/travis/build/python/cpython/Lib/unittest/suite.py", line 122 in run File "/home/travis/build/python/cpython/Lib/unittest/runner.py", line 176 in run File "/home/travis/build/python/cpython/Lib/test/support/__init__.py", line 1935 in _run_suite File "/home/travis/build/python/cpython/Lib/test/support/__init__.py", line 2031 in run_unittest File "/home/travis/build/python/cpython/Lib/test/libregrtest/runtest.py", line 178 in test_runner File "/home/travis/build/python/cpython/Lib/test/libregrtest/runtest.py", line 182 in runtest_inner File "/home/travis/build/python/cpython/Lib/test/libregrtest/runtest.py", line 127 in runtest File "/home/travis/build/python/cpython/Lib/test/libregrtest/runtest_mp.py", line 68 in run_tests_worker File "/home/travis/build/python/cpython/Lib/test/libregrtest/main.py", line 600 in _main File "/home/travis/build/python/cpython/Lib/test/libregrtest/main.py", line 586 in main File "/home/travis/build/python/cpython/Lib/test/libregrtest/main.py", line 640 in main File "/home/travis/build/python/cpython/Lib/test/regrtest.py", line 46 in _main File "/home/travis/build/python/cpython/Lib/test/regrtest.py", line 50 in File "/home/travis/build/python/cpython/Lib/runpy.py", line 85 in _run_code Usually, this happens with test_slice but when the test runner runs test_slice in isolation, it succeeds. I am afraid that this will be a weird combination of tests. ---------- components: Tests messages: 335185 nosy: pablogsal priority: normal severity: normal status: open title: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 05:12:39 2019 From: report at bugs.python.org (Magnien Sebastien) Date: Mon, 11 Feb 2019 10:12:39 +0000 Subject: [New-bugs-announce] [issue35962] Slight error in words in [ 2.4.1. String and Bytes literals ] Message-ID: <1549879959.7.0.265215879488.issue35962@roundup.psfhosted.org> New submission from Magnien Sebastien : The documentation reads : " The backslash (\) character is used to escape characters that otherwise have a special meaning, such as newline, backslash itself, or the quote character. " However, 'n' does not "otherwise have a special meaning", nor does it represent a new line. The backslash character does in fact do two different things : 1) It removes special meanings from characters that have one (\\). 2) It assigns a special meaning to normal characters (\n). A better description would therefore be : " The backslash (\) character is used to either escape characters that have a special meaning, such as backslash itself, or the quote character - or give special meaning to characters that do not have one, such as 'n', whose escapment '\n' means 'newline'. " ---------- assignee: docs at python components: Documentation messages: 335205 nosy: Magnien Sebastien, docs at python priority: normal severity: normal status: open title: Slight error in words in [ 2.4.1. 
String and Bytes literals ] type: enhancement versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 06:31:24 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 11 Feb 2019 11:31:24 +0000 Subject: [New-bugs-announce] [issue35963] Python/symtable.c: warning: enumeration value ‘FunctionType_kind’ not handled in switch [-Wswitch]" Message-ID: <1549884684.1.0.958621485817.issue35963@roundup.psfhosted.org> New submission from STINNER Victor : On the x86 Gentoo Installed with X 3.x buildbot, there is a compiler warning: "Python/symtable.c:289:5: warning: enumeration value ‘FunctionType_kind’ not handled in switch [-Wswitch]" https://buildbot.python.org/all/#/builders/103/builds/2067 It might be related to the implementation of PEP 572 (bpo-35224), but I'm not sure. See also bpo-35878. ---------- components: Interpreter Core messages: 335207 nosy: emilyemorehouse, levkivskyi, vstinner priority: normal severity: normal status: open title: Python/symtable.c: warning: enumeration value ‘FunctionType_kind’ not handled in switch [-Wswitch]" versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 07:40:46 2019 From: report at bugs.python.org (HFM) Date: Mon, 11 Feb 2019 12:40:46 +0000 Subject: [New-bugs-announce] [issue35964] shutil.make_archive (xxx, tar, root_dir) is adding './' entry to archive which is wrong Message-ID: <1549888846.57.0.721579437713.issue35964@roundup.psfhosted.org> New submission from HFM : Running shutil.make_archive('a', 'tar', 'subdir') creates a wrong and not really needed "./" entry, which is visible in tarfile.TarFile.list(): ['./', 'foo/', 'hello.txt', 'foo/bar.txt'] I have tested and found this issue in the following versions of python: 2.7.15rc1 FOUND 3.6.7 FOUND 3.7.1 FOUND I've attached a simple script which illustrates the problem. Tested on Ubuntu Linux with the mentioned Python versions. A similar issue has been fixed for 'zip' (https://bugs.python.org/issue28488) but it still exists for 'tar' archives. ---------- components: Library (Lib) files: tarfile_test.tar messages: 335211 nosy: HFM, alanmcintyre, bialix, python-dev, serhiy.storchaka, tarek, twouters priority: normal severity: normal status: open title: shutil.make_archive (xxx, tar, root_dir) is adding './' entry to archive which is wrong versions: Python 2.7, Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48132/tarfile_test.tar _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 08:04:40 2019 From: report at bugs.python.org (SylvainDe) Date: Mon, 11 Feb 2019 13:04:40 +0000 Subject: [New-bugs-announce] [issue35965] Behavior for unittest.assertRaisesRegex differs depending on whether it is used as a context manager Message-ID: <1549890280.86.0.630330095869.issue35965@roundup.psfhosted.org> New submission from SylvainDe : On some Python versions, the following pieces of code have a different behavior which is not something I'd expect: ``` DESCRIPT_REQUIRES_TYPE_RE = r"descriptor '\w+' requires a 'set' object but received a 'int'" ...
def test_assertRaisesRegex(self):
    self.assertRaisesRegex(TypeError, DESCRIPT_REQUIRES_TYPE_RE, set.add, 0)

def test_assertRaisesRegex_contextman(self):
    with self.assertRaisesRegex(TypeError, DESCRIPT_REQUIRES_TYPE_RE):
        set.add(0)
```

On impacted Python versions, only test_assertRaisesRegex_contextman fails while test_assertRaisesRegex works fine. Logs for the failure:

```
======================================================================
FAIL: test_assertRaisesRegex_contextman (didyoumean_sugg_tests.SetAddIntRegexpTests)
----------------------------------------------------------------------
TypeError: descriptor 'add' for 'set' objects doesn't apply to 'int' object

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/travis/build/.../didyoumean/didyoumean_sugg_tests.py", line 23, in test_assertRaisesRegex_contextman
    set.add(0)
AssertionError: "descriptor '\w+' requires a 'set' object but received a 'int'" does not match "descriptor 'add' for 'set' objects doesn't apply to 'int' object"
```

Either I am missing something or it looks like a bug to me. If needed, more details/context can be found on the StackOverflow question I opened: https://stackoverflow.com/questions/54612348/different-error-message-when-unittest-assertraisesregex-is-called-as-a-context-m . I can provide the details directly here if it is relevant.
---------- components: Tests messages: 335212 nosy: SylvainDe priority: normal severity: normal status: open title: Behavior for unittest.assertRaisesRegex differs depending on whether it is used as a context manager type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Feb 11 08:27:49 2019 From: report at bugs.python.org (sheiun) Date: Mon, 11 Feb 2019 13:27:49 +0000 Subject: [New-bugs-announce] [issue35966] Didn't raise "StopIteration" Error when I use "yield" in the function Message-ID: <1549891669.28.0.260534836361.issue35966@roundup.psfhosted.org>

New submission from sheiun :

Python 3.6.7 |Anaconda custom (64-bit)| (default, Oct 28 2018, 19:44:12) [MSC v.1915 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> def generate_loop():
...     for i in range(10):
...         print(i)
...         # should raise StopIteration when i > 5
...         k = next(j for j in range(5) if j == i)
...         print(k)
...         yield k
...
>>>
>>> def generate():
...     # should raise StopIteration
...     k = next(j for j in range(5) if j == 6)
...     yield k
...
>>>
>>> print(list(generate_loop()))
0
0
1
1
2
2
3
3
4
4
5
[0, 1, 2, 3, 4]
>>>
>>> print(list(generate()))
[]
>>>
>>> k = next(j for j in range(5) if j == 6)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
>>>
---------- components: Library (Lib) files: test.py messages: 335215 nosy: sheiun priority: normal severity: normal status: open title: Didn't raise "StopIteration" Error when I use "yield" in the function versions: Python 3.6 Added file: https://bugs.python.org/file48133/test.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Feb 11 09:29:23 2019 From: report at bugs.python.org (Jason R. Coombs) Date: Mon, 11 Feb 2019 14:29:23 +0000 Subject: [New-bugs-announce] [issue35967] Better platform.processor support Message-ID: <1549895363.53.0.252912531241.issue35967@roundup.psfhosted.org>

New submission from Jason R.
Coombs : or: Unable to implement 'uname' on Python due to recursive call or: platform.uname() should avoid calling `uname` in a subprocess In [this issue](https://github.com/jaraco/cmdix/issues/1), I stumbled across a strange and somewhat unintuitive behavior. This project attempts to supply a `uname` executable implemented in Python, so that such functionality could be exposed on any platform including Windows. What I found, however, was that because the stdlib `platform` module actually invokes `sh -c uname -p` during most any call of the module (https://github.com/python/cpython/blob/9db56fb8faaa3cd66e7fe82740a4ae4d786bb27f/Lib/platform.py#L836), attempting to use that functionality on a system where `uname` is implemented by Python (and on the path), will probably fail after a long delay due to infinite recursion. Moreover, the _only_ call that's currently invoking `uname` in a subprocess is the `processor` resolution, which I suspect is rarely used, in part because the results from it are inconsistent and not particularly useful. For example, on Windows, you get a detailed description from the hardware: 'Intel64 Family 6 Model 142 Stepping 9, GenuineIntel' On macOS, you get just 'i386'. And on Linux, I see 'x86_64' or sometimes just '' (in docker). To make matters even worse, this `uname -p` call happens unconditionally on non-Windows systems for nearly any call in platform. As a result, it's impossible to suppress the invocation of `uname`, especially when functionality like `pkg_resources` and its environment markers is invoked early. I suggest instead the platform module should (a) resolve processor information in a more uniform manner and (b) not ever call uname, maybe [with something like this](https://github.com/jaraco/cmdix/blob/d61e6d3b40032a25feff0a9fb2a79afaa7dcd4e0/cmdix/command/uname.py#L53-L77). At the very least, the `uname` call should be late-bound so that it's not invoked unconditionally for rarely-used information. After some period for comment, I'll draft an implementation. 
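Not a patch, just a rough sketch of the kind of late-bound, cached lookup suggested above (assumptions: a POSIX-style `uname` binary may or may not be present, and an empty string is an acceptable fallback):

```
import functools
import shutil
import subprocess

@functools.lru_cache(maxsize=None)
def _uname_processor():
    # Late-bound: nothing runs until the value is first requested, and the
    # subprocess is invoked directly rather than through "sh -c".
    uname = shutil.which("uname")
    if uname is None:
        return ""
    try:
        return subprocess.check_output([uname, "-p"], text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return ""

print(_uname_processor())
```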
---------- messages: 335220 nosy: jaraco priority: normal severity: normal status: open title: Better platform.processor support _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 11:39:57 2019 From: report at bugs.python.org (Dylan Lloyd) Date: Mon, 11 Feb 2019 16:39:57 +0000 Subject: [New-bugs-announce] [issue35968] lib2to3 cannot parse rf'' Message-ID: <1549903197.05.0.60525639701.issue35968@roundup.psfhosted.org> New submission from Dylan Lloyd : ``` #!/usr/bin/env python # -*- coding: utf-8 -*- regex_format = rf'' ``` ``` yapf poc.py ``` ``` Traceback (most recent call last): File "lib/python3.6/site-packages/yapf/yapflib/pytree_utils.py", line 115, in ParseCodeToTree tree = parser_driver.parse_string(code, debug=False) File "lib/python3.6/lib2to3/pgen2/driver.py", line 107, in parse_string return self.parse_tokens(tokens, debug) File "lib/python3.6/lib2to3/pgen2/driver.py", line 72, in parse_tokens if p.addtoken(type, value, (prefix, start)): File "lib/python3.6/lib2to3/pgen2/parse.py", line 159, in addtoken raise ParseError("bad input", type, value, context) lib2to3.pgen2.parse.ParseError: bad input: type=3, value="''", context=('', (4, 17)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "bin/yapf", line 10, in sys.exit(run_main()) File "lib/python3.6/site-packages/yapf/__init__.py", line 326, in run_main sys.exit(main(sys.argv)) File "lib/python3.6/site-packages/yapf/__init__.py", line 213, in main verbose=args.verbose) File "lib/python3.6/site-packages/yapf/__init__.py", line 263, in FormatFiles in_place, print_diff, verify, verbose) File "lib/python3.6/site-packages/yapf/__init__.py", line 289, in _FormatFile logger=logging.warning) File "lib/python3.6/site-packages/yapf/yapflib/yapf_api.py", line 91, in FormatFile verify=verify) File "lib/python3.6/site-packages/yapf/yapflib/yapf_api.py", line 129, in FormatCode tree = pytree_utils.ParseCodeToTree(unformatted_source) File "lib/python3.6/site-packages/yapf/yapflib/pytree_utils.py", line 121, in ParseCodeToTree tree = parser_driver.parse_string(code, debug=False) File "lib/python3.6/lib2to3/pgen2/driver.py", line 107, in parse_string return self.parse_tokens(tokens, debug) File "lib/python3.6/lib2to3/pgen2/driver.py", line 72, in parse_tokens if p.addtoken(type, value, (prefix, start)): File "lib/python3.6/lib2to3/pgen2/parse.py", line 159, in addtoken raise ParseError("bad input", type, value, context) lib2to3.pgen2.parse.ParseError: bad input: type=3, value="''", context=('', (4, 17)) ``` ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 335238 nosy: majuscule priority: normal severity: normal status: open title: lib2to3 cannot parse rf'' type: crash versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 13:15:34 2019 From: report at bugs.python.org (Paul Ganssle) Date: Mon, 11 Feb 2019 18:15:34 +0000 Subject: [New-bugs-announce] [issue35969] Interpreter crashes with "can't initialize init_sys_streams" when abc fails to import Message-ID: <1549908934.51.0.669039845286.issue35969@roundup.psfhosted.org> New submission from Paul Ganssle : Just noticed this (tested on Python 3.7 and 3.8): mkdir /tmp/demo cd /tmp/demo cat << EOF > abc.py raise Exception("Hi") EOF PYTHONPATH=: python -c "" This will crash the interpreter with: Fatal Python error: init_sys_streams: can't 
initialize sys standard streams

Traceback (most recent call last):
  File ".../lib/python3.7/io.py", line 52, in <module>
  File "/tmp/demo/abc.py", line 1, in <module>
Exception: Hi
Aborted (core dumped)

It seems that the problem is that the io module imports the abc module, which raises an exception, so io fails to load. Evidently the io module is necessary to load the interpreter, so the interpreter crashes. Here's the backtrace for 3.7.2 on Arch Linux:

(gdb) bt
#0  0x00007f234b3f0d7f in raise () from /usr/lib/libc.so.6
#1  0x00007f234b3db672 in abort () from /usr/lib/libc.so.6
#2  0x00007f234b7db490 in fatal_error (prefix=prefix@entry=0x7f234b9d5fe0 <__func__.16645> "init_sys_streams", msg=msg@entry=0x7f234ba01f60 "can't initialize sys standard streams", status=status@entry=-1) at Python/pylifecycle.c:2179
#3  0x00007f234b8460cb in _Py_FatalInitError (err=...) at Python/pylifecycle.c:2198
#4  0x00007f234b8495a9 in pymain_init (pymain=0x7fff971cca70) at Modules/main.c:3019
#5  0x0000555dfa560050 in ?? ()
#6  0x00007fff971ccbc0 in ?? ()
#7  0x0000000000000000 in ?? ()

I'm not sure if anything can or should be done about this. It's very fair for the interpreter to fail to start, though I would guess that it should do that without dumping a core.
---------- messages: 335244 nosy: p-ganssle priority: normal severity: normal status: open title: Interpreter crashes with "can't initialize init_sys_streams" when abc fails to import type: crash versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Feb 11 13:57:40 2019 From: report at bugs.python.org (Robert Kuska) Date: Mon, 11 Feb 2019 18:57:40 +0000 Subject: [New-bugs-announce] [issue35970] no help flag in base64 util Message-ID: <1549911460.74.0.450147054737.issue35970@roundup.psfhosted.org>

New submission from Robert Kuska :

I failed today to print the help message for the base64 utility without an error:

$ python3 -m base64 -help
option -h not recognized
usage: /usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/base64.py [-d|-e|-u|-t] [file|-]
        -d, -u: decode
        -e: encode (default)
        -t: encode and decode string 'Aladdin:open sesame'

So I felt like this is a ripple in the time-space continuum worth adjusting. I already opened a PR to address this (probably not so worthy) issue (link attached).
---------- messages: 335255 nosy: rkuska priority: normal pull_requests: 11851 severity: normal status: open title: no help flag in base64 util _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Feb 11 16:46:00 2019 From: report at bugs.python.org (Gabriel Corona) Date: Mon, 11 Feb 2019 21:46:00 +0000 Subject: [New-bugs-announce] [issue35971] Documentation should warn about code injection from current working directory Message-ID: <1549921560.53.0.488831977782.issue35971@roundup.psfhosted.org>

New submission from Gabriel Corona :

The CLI tools shipped in the Debian python-rdflib-tools package can load modules from the current directory [1]:

$ echo 'print("Something")' > cgi.py
$ rdf2dot
INFO:rdflib:RDFLib Version: 4.2.2
Something
Reading from stdin as None...

This could be a security issue because an attacker could possibly exploit this behavior to execute arbitrary code.
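A small sketch of the difference being described (assuming a CPython 3 interpreter; the -I isolated mode mentioned further below drops the implicit current-directory entry):

```
import subprocess
import sys

for flags in ([], ["-I"]):
    cmd = [sys.executable, *flags, "-c", "import sys; print(repr(sys.path[0]))"]
    # Without -I, sys.path[0] is '' (the current working directory) for -c commands;
    # with -I it is a standard library location instead.
    print(flags or "(default)", subprocess.check_output(cmd, text=True).strip())
```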
This happens because these CLI tools are implemented as: #!/bin/sh exec /usr/bin/python -m rdflib.tools.rdfpipe $* "python -m $module", "python -c $code" and "$command | python" prepend the current working directory in the Python path. The Python documentation [2] should probably warn about this. In Python 3, "-I" could be suggested to prevent the script/current directory to be added to the Python path. However, this flag has other effects. The Python documentation suggests "python -m" commands at some places [3-5]: some form of warning at those places might be nice as well. See the related behavior of Perl. Perl used to include "." in @INC but this was removed for security reasons [6]. [1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=921751. [2] https://docs.python.org/3/using/cmdline.html [3] https://docs.python.org/3.1/library/json.html [4] https://docs.python.org/3/library/http.server.html [5] https://docs.python.org/3/library/zipapp.html [6] https://metacpan.org/pod/release/XSAWYERX/perl-5.26.0/pod/perldelta.pod#Removal-of-the-current-directory-%28%22.%22%29-from- at INC ---------- messages: 335271 nosy: Gabriel Corona priority: normal severity: normal status: open title: Documentation should warn about code injection from current working directory type: security versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 16:52:48 2019 From: report at bugs.python.org (Alexey Izbyshev) Date: Mon, 11 Feb 2019 21:52:48 +0000 Subject: [New-bugs-announce] [issue35972] _xxsubinterpreters: channel_send() may truncate ints on 32-bit platforms Message-ID: <1549921968.67.0.462504390355.issue35972@roundup.psfhosted.org> New submission from Alexey Izbyshev : Victor Stinner pointed out that on x86 Gentoo Installed with X 3.x buildbot, there is a compiler warning: Python/pystate.c:1483:18: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast] (https://buildbot.python.org/all/#/builders/103/builds/2067) This warning reveals a bug: static int _long_shared(PyObject *obj, _PyCrossInterpreterData *data) { int64_t value = PyLong_AsLongLong(obj); if (value == -1 && PyErr_Occurred()) { if (PyErr_ExceptionMatches(PyExc_OverflowError)) { PyErr_SetString(PyExc_OverflowError, "try sending as bytes"); } return -1; } data->data = (void *)value; A 64-bit value is converted to void *, which is 32-bit on 32-bit platforms. I've implemented a PR that uses Py_ssize_t instead to match the pointer size and to preserve the ability to work with negative numbers. ---------- assignee: izbyshev components: Extension Modules messages: 335272 nosy: eric.snow, izbyshev, vstinner priority: normal severity: normal status: open title: _xxsubinterpreters: channel_send() may truncate ints on 32-bit platforms type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 18:05:56 2019 From: report at bugs.python.org (Brennan Vincent) Date: Mon, 11 Feb 2019 23:05:56 +0000 Subject: [New-bugs-announce] [issue35973] `growable_int_array type_ignores` in parsetok.c is not always freed. 
Message-ID: <1549926356.5.0.0624072683347.issue35973@roundup.psfhosted.org>

New submission from Brennan Vincent :

To reproduce:
(1) build python: `../configure --prefix=$HOME/prefix --with-pydebug --without-pymalloc && make install`
(2) run with valgrind: `valgrind --leak-check=full ~/prefix/bin/python3`
(3) exit immediately from the interpreter by pressing ^D
(4) Note the following output from Valgrind:

```
==3810071== 40 bytes in 1 blocks are definitely lost in loss record 3 of 527
==3810071==    at 0x4C28B5F: malloc (vg_replace_malloc.c:299)
==3810071==    by 0x59ED58: growable_int_array_init (parsetok.c:27)
==3810071==    by 0x59EE14: parsetok (parsetok.c:235)
==3810071==    by 0x59F697: PyParser_ParseFileObject (parsetok.c:176)
==3810071==    by 0x522E85: PyParser_ASTFromFileObject (pythonrun.c:1224)
==3810071==    by 0x5231E9: PyRun_InteractiveOneObjectEx (pythonrun.c:238)
==3810071==    by 0x5234D0: PyRun_InteractiveLoopFlags (pythonrun.c:120)
==3810071==    by 0x523BF2: PyRun_AnyFileExFlags (pythonrun.c:78)
==3810071==    by 0x4204FE: pymain_run_stdin (main.c:1185)
==3810071==    by 0x42126B: pymain_run_python (main.c:1675)
==3810071==    by 0x422EE0: pymain_main (main.c:1820)
==3810071==    by 0x422F75: _Py_UnixMain (main.c:1857)
```

Reproduced on git commit hash 522346d792d9013140a3f4ad3534ac10f38d9085.
---------- components: Interpreter Core messages: 335274 nosy: umanwizard priority: normal severity: normal status: open title: `growable_int_array type_ignores` in parsetok.c is not always freed. type: resource usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Feb 11 18:11:22 2019 From: report at bugs.python.org (Marat Sharafutdinov) Date: Mon, 11 Feb 2019 23:11:22 +0000 Subject: [New-bugs-announce] [issue35974] os.DirEntry.inode() returns invalid value within Docker container Message-ID: <1549926682.07.0.790537148313.issue35974@roundup.psfhosted.org>

New submission from Marat Sharafutdinov :

I'm trying to build Python 3.7.2 within official CentOS 7.6.1810 image (https://hub.docker.com/_/centos) and getting the following error during testing:

======================================================================
FAIL: test_attributes (test.test_os.TestScandir)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/src/python/Lib/test/test_os.py", line 3367, in test_attributes
    self.check_entry(entry, 'dir', True, False, False)
  File "/usr/src/python/Lib/test/test_os.py", line 3319, in check_entry
    os.stat(entry.path, follow_symlinks=False).st_ino)
AssertionError: 28093768 != 85098458

I guess this bug applies to Docker containers in general. For instance it's reproduced with the official Python 3.7.2-stretch image based on the Debian Stretch (https://hub.docker.com/_/python):

$ docker run --rm -it python:3.7.2-stretch
Python 3.7.2 (default, Feb 6 2019, 12:04:03)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.mkdir('/test_dir')
>>> for entry in os.scandir('/'):
...     if entry.name == 'test_dir':
...         break
...
>>> print(entry, entry.inode(), os.stat(entry.path, follow_symlinks=False).st_ino)
<DirEntry 'test_dir'> 23898155 85118011
>>> assert entry.inode() == os.stat(entry.path, follow_symlinks=False).st_ino
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError
>>>

In case of using a host volume when running the container it works ok - the problem occurs when using the default Docker volume:

$ docker run --rm -it -v /home/decaz/workspace:/host_dir python:3.7.2-stretch
Python 3.7.2 (default, Feb 6 2019, 12:04:03)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.mkdir('/host_dir/test_dir')
>>> for entry in os.scandir('/host_dir'):
...     if entry.name == 'test_dir':
...         break
...
>>> print(entry, entry.inode(), os.stat(entry.path, follow_symlinks=False).st_ino)
<DirEntry 'test_dir'> 12873222 12873222
>>> assert entry.inode() == os.stat(entry.path, follow_symlinks=False).st_ino
>>>

Similar issue - https://bugs.python.org/issue32811.
---------- components: Build messages: 335275 nosy: decaz priority: normal severity: normal status: open title: os.DirEntry.inode() returns invalid value within Docker container type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Mon Feb 11 19:23:38 2019 From: report at bugs.python.org (Guido van Rossum) Date: Tue, 12 Feb 2019 00:23:38 +0000 Subject: [New-bugs-announce] [issue35975] Put back the ability to parse files where async/await aren't keywords Message-ID: <1549931018.54.0.209945896308.issue35975@roundup.psfhosted.org>

New submission from Guido van Rossum :

Now that the ast module can be used to parse type comments, there's one more feature I'd like to lobby for -- a flag that modifies the grammar slightly so it resembles an older version of Python (going back to 3.4). This is used in mypy to decouple the Python version you're running from the Python version for which you're checking compatibility (useful when checking code that will be deployed on a system with a different Python version installed). I imagine this would be useful to other linters as well, and the implementation is mostly manipulating whether `async` and `await` are keywords. But if there's pushback to this part I can live without it -- the rest of the work is still useful.

The implementation in typed_ast takes the form of a feature_version flag which is set to the minor Python version to be parsed. Setting it to 4 or lower would make async/await never keywords, setting it to 5 or 6 would make them conditional per PEP 492, and setting it to 7 or higher would make these unconditional keywords. A few minor uses of the same flag also reject f-strings, annotated assignments (PEP 526), and matrix multiply for versions where those don't exist. But I don't need all of those -- I really just need to be able to select the 3.5/3.6 rules for conditional async/await keywords, since all the other features are purely backwards compatible, whereas with async/await there is legal (and plentiful!) code that uses these as variable or function names that should be supported in 3.5/3.6 mode. Of course having it be a (minor) version number would still be more future-proof -- if say in 3.10 we add a match expression using some kind of conditional keyword hack, we might have to dust it off.

Note that much of what I'm asking for would effectively roll back https://bugs.python.org/issue30406 -- sorry Jelle!
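For illustration, the typed_ast flag described above can be exercised roughly like this (a sketch, assuming typed_ast is installed and exposes the feature_version keyword exactly as described):

```
import typed_ast.ast3 as ast3

# With 3.6 rules, async/await are only conditionally keywords, so this parses.
ast3.parse("async = 1", feature_version=6)

# With 3.7 rules they are unconditional keywords and the same source is rejected.
try:
    ast3.parse("async = 1", feature_version=7)
except SyntaxError as exc:
    print("rejected under 3.7 rules:", exc)
```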
(Though there's more to it -- Serhiy's introduction of Grammar/Token followed, and I would still need to thread some kind of flag all the way from ast.parse() to tokenizer.c. ---------- messages: 335277 nosy: gvanrossum priority: normal severity: normal status: open title: Put back the ability to parse files where async/await aren't keywords _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 11 23:02:51 2019 From: report at bugs.python.org (Paul Monson) Date: Tue, 12 Feb 2019 04:02:51 +0000 Subject: [New-bugs-announce] [issue35976] PCBuild file changes for arm32 should be separated from code changes for review Message-ID: <1549944171.25.0.954799038115.issue35976@roundup.psfhosted.org> Change by Paul Monson : ---------- components: Build, Windows nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: PCBuild file changes for arm32 should be separated from code changes for review type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 12 11:27:40 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 12 Feb 2019 16:27:40 +0000 Subject: [New-bugs-announce] [issue35977] test_slice crashed on s390x Debian 3.x: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small Message-ID: <1549988860.34.0.384038147585.issue35977@roundup.psfhosted.org> New submission from STINNER Victor : s390x Debian 3.x: https://buildbot.python.org/all/#/builders/13/builds/2344 0:06:26 load avg: 0.92 [291/420/1] test_slice crashed (Exit code -6) Modules/gcmodule.c:110: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small Enable tracemalloc to get the memory block allocation traceback object : .BadCmp object at 0x3ff93967e90> type : BadCmp refcount: 1 address : 0x3ff93967e90 Fatal Python error: _PyObject_AssertFailed Current thread 0x000003ff95272700 (most recent call first): File "/home/dje/cpython-buildarea/3.x.edelsohn-debian-z/build/Lib/test/test_slice.py", line 107 in File "/home/dje/cpython-buildarea/3.x.edelsohn-debian-z/build/Lib/unittest/case.py", line 197 in handle File "/home/dje/cpython-buildarea/3.x.edelsohn-debian-z/build/Lib/unittest/case.py", line 782 in assertRaises File "/home/dje/cpython-buildarea/3.x.edelsohn-debian-z/build/Lib/test/test_slice.py", line 107 in test_cmp File "/home/dje/cpython-buildarea/3.x.edelsohn-debian-z/build/Lib/unittest/case.py", line 642 in run File "/home/dje/cpython-buildarea/3.x.edelsohn-debian-z/build/Lib/unittest/case.py", line 702 in __call__ ... ---------- components: Interpreter Core messages: 335320 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: test_slice crashed on s390x Debian 3.x: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 12 12:37:52 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Tue, 12 Feb 2019 17:37:52 +0000 Subject: [New-bugs-announce] [issue35978] test_venv fails in Travis with GCC Message-ID: <1549993072.6.0.794740252515.issue35978@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I noticed this while checking issue35961. test_venv is always failing on GCC which is marked as optional in Travis. 
Log: https://travis-ci.org/python/cpython/jobs/492123436#L1909

0:39:35 load avg: 1.00 [390/416] test_venv
test test_venv failed -- Traceback (most recent call last):
  File "/home/travis/build/python/cpython/Lib/test/test_venv.py", line 309, in test_multiprocessing
    out, err = check_output([envpy, '-c',
  File "/home/travis/build/python/cpython/Lib/test/test_venv.py", line 37, in check_output
    raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command '['/tmp/tmpg8ubeyfn/bin/python', '-c', 'from multiprocessing import Pool; print(Pool(1).apply_async("Python".lower).get(3))']' died with .

Also, the GCC tests have been timing out for at least the past 4 months: https://python.zulipchat.com/#narrow/stream/116742-core.2Fhelp/topic/GCC.20build.20in.20Travis.20always.20times.20out
---------- messages: 335338 nosy: pablogsal, vstinner, xtreak priority: normal severity: normal status: open title: test_venv fails in Travis with GCC type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Feb 12 19:42:47 2019 From: report at bugs.python.org (Dan Snider) Date: Wed, 13 Feb 2019 00:42:47 +0000 Subject: [New-bugs-announce] [issue35979] Incorrect __text_signature__ for the __get__ slot wrapper Message-ID: <1550018567.03.0.305357698132.issue35979@roundup.psfhosted.org>

New submission from Dan Snider :

The current signature:

    "__get__($self, instance, owner, /)\n--\n\nReturn an attribute of instance, which is of type owner."

doesn't match how wrap_descr_get actually parses the arguments to __get__ with:

    PyArg_UnpackTuple(args, "", 1, 2, &obj, &type)
---------- messages: 335378 nosy: bup priority: normal severity: normal status: open title: Incorrect __text_signature__ for the __get__ slot wrapper _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Feb 12 20:08:44 2019 From: report at bugs.python.org (shawnberry) Date: Wed, 13 Feb 2019 01:08:44 +0000 Subject: [New-bugs-announce] [issue35980] Py3 BIF random.choices() is O(N**2) but I've written O(N) code for the same task Message-ID: <1550020124.01.0.607459314938.issue35980@roundup.psfhosted.org>

New submission from shawnberry :

Please see my GitHub page https://github.com/shawnberry/Improved_random.choices/blob/master/Improved_Py3_BIF_random_dot_choices.py for code that reduces Py3 BIF random.choices() from O(N**2) to O(N). This is my first suggestion to improve Python code.
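For reference (independent of the linked file, which is not reproduced here): linear-time preprocessing for weighted selection is commonly done with cumulative weights plus bisection, roughly as sketched below.

```
import bisect
import itertools
import random

def weighted_choice(population, weights):
    # O(n) to build the cumulative weights, O(log n) per draw.
    cumulative = list(itertools.accumulate(weights))
    x = random.random() * cumulative[-1]
    return population[bisect.bisect_right(cumulative, x)]

print(weighted_choice("abc", [1, 2, 7]))
```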
Thanks, shawnberry121 at gmail.com ---------- components: Library (Lib) files: Improved_Py3_BIF_random_dot_choices.py hgrepos: 380 messages: 335379 nosy: shawn_berry priority: normal severity: normal status: open title: Py3 BIF random.choices() is O(N**2) but I've written O(N) code for the same task type: performance versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48135/Improved_Py3_BIF_random_dot_choices.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 13 02:15:24 2019 From: report at bugs.python.org (maokk) Date: Wed, 13 Feb 2019 07:15:24 +0000 Subject: [New-bugs-announce] [issue35981] shutil make_archive create wrong file when base name contains dots at end Message-ID: <1550042124.47.0.589800343777.issue35981@roundup.psfhosted.org> New submission from maokk : shutil.make_archive("foo...bar..", "zip", os.path.abspath("c:/test")) create zipfile called "foo...bar.zip" not "foo...bar...zip" ---------- components: Distutils messages: 335388 nosy: dstufft, eric.araujo, highwind priority: normal severity: normal status: open title: shutil make_archive create wrong file when base name contains dots at end type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 13 04:41:27 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Wed, 13 Feb 2019 09:41:27 +0000 Subject: [New-bugs-announce] [issue35982] Create unit-tests for os.renames() Message-ID: <1550050887.9.0.145116201828.issue35982@roundup.psfhosted.org> New submission from Joannah Nanjekye : In a discussion to fix https://bugs.python.org/issue35951, @giampaolo.rodola pointed out that there are no tests for os.renames() I have opened this issue to track this. ---------- messages: 335395 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Create unit-tests for os.renames() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 13 06:02:31 2019 From: report at bugs.python.org (Jeroen Demeyer) Date: Wed, 13 Feb 2019 11:02:31 +0000 Subject: [New-bugs-announce] [issue35983] tp_dealloc trashcan shouldn't be called for subclasses Message-ID: <1550055751.42.0.717332216151.issue35983@roundup.psfhosted.org> New submission from Jeroen Demeyer : When designing an extension type subclassing an existing type, it makes sense to call the tp_dealloc of the base class from the tp_dealloc of the subclass. Now suppose that I'm subclassing "list" which uses the trashcan mechanism. Then it can happen that the tp_dealloc of list puts this object in the trashcan, even though the tp_dealloc of the subclass has already been called. Then the tp_dealloc of the subclass may be called multiple times, which is unsafe (tp_dealloc is expected to be called exactly once). To solve this, the trashcan mechanism should be disabled when tp_dealloc is called from a subclass. Interestingly, subtype_dealloc also solves this in a much more involved way (see the "Q. 
Why the bizarre (net-zero) manipulation of _PyRuntime.trash_delete_nesting around the trashcan macros?") in Objects/typeobject.c
---------- components: Interpreter Core messages: 335405 nosy: jdemeyer, pitrou, scoder priority: normal severity: normal status: open title: tp_dealloc trashcan shouldn't be called for subclasses versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Feb 13 06:10:03 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Wed, 13 Feb 2019 11:10:03 +0000 Subject: [New-bugs-announce] [issue35984] test__xxsubinterpreters leaked [3, 4, 3] memory blocks, sum=1 Message-ID: <1550056203.45.0.644918745906.issue35984@roundup.psfhosted.org>

New submission from Pablo Galindo Salgado :

The refleak buildbots have detected memory block leaks in test__xxsubinterpreters:

Ran 112 tests in 4.721s

OK (skipped=5)
.
test__xxsubinterpreters leaked [3, 4, 3] memory blocks, sum=10
1 test failed again: test__xxsubinterpreters

== Tests result: FAILURE then FAILURE ==

https://buildbot.python.org/all/#/builders/1/builds/502
https://buildbot.python.org/all/#/builders/80/builds/511

Bisecting shows 16f842da3c862d76a1177bd8ef9579703c24fa5a is the first bad commit. This was introduced in PR11822.
---------- components: Tests messages: 335408 nosy: eric.snow, pablogsal, vstinner priority: normal severity: normal status: open title: test__xxsubinterpreters leaked [3, 4, 3] memory blocks, sum=1 versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Feb 13 07:08:03 2019 From: report at bugs.python.org (Lukas J) Date: Wed, 13 Feb 2019 12:08:03 +0000 Subject: [New-bugs-announce] [issue35985] Folding tries to slice from 0 to float("+inf") when maxlength is 0 Message-ID: <1550059683.58.0.369389520684.issue35985@roundup.psfhosted.org>

New submission from Lukas J :

When converting an email.message.Message with the policy set to email.policy.EmailPolicy with all default settings, I eventually end up with this exception:

  File "/usr/lib/python3.7/email/_header_value_parser.py", line 2727, in _fold_as_ew
    first_part = to_encode[:text_space]
TypeError: slice indices must be integers or None or have an __index__ method

which is caused because text_space is a float of value +inf. This is set on line 2594 of the same file:

    maxlen = policy.max_line_length or float("+inf")

For some reason policy.max_line_length is set to zero, even though the default should be 78 after a glance into the source. So there are maybe even two issues:
1.) The fallback for maxlen shouldn't be float("+inf"), as that is not an integer and thus can't be sliced by. I think a big integer would suffice instead, for example 100000000.
2.) policy.max_line_length seems to lose its value in the default settings somewhere along the way if it isn't explicitly set.
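A minimal sketch of pinning the policy explicitly while this is investigated (this is the workaround spelled out just below; 78 mirrors the documented default, and the addresses are placeholders):

```
from email.message import EmailMessage
from email.policy import EmailPolicy

policy = EmailPolicy(max_line_length=78)  # avoid any max_line_length == 0 path
msg = EmailMessage(policy=policy)
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "A subject long enough that the header has to be folded " * 2
msg.set_content("body")
print(msg.as_bytes().decode("ascii", "replace"))
```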
Current workaround: Set max_line_length of the policy to a value (78 is the default).
---------- components: email messages: 335424 nosy: Lukas J, barry, r.david.murray priority: normal severity: normal status: open title: Folding tries to slice from 0 to float("+inf") when maxlength is 0 versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Feb 13 07:26:53 2019 From: report at bugs.python.org (李笑来) Date: Wed, 13 Feb 2019 12:26:53 +0000 Subject: [New-bugs-announce] [issue35986] print() documentation typo? Message-ID: <1550060813.07.0.324151562751.issue35986@roundup.psfhosted.org>

New submission from 李笑来 :

print()'s default value for sep should be ' ' (one space), rather than an empty str ''.
---------- assignee: docs at python components: Documentation messages: 335428 nosy: docs at python, 李笑来 priority: normal severity: normal status: open title: print() documentation typo? type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Feb 13 08:43:38 2019 From: report at bugs.python.org (Samuel GIFFARD) Date: Wed, 13 Feb 2019 13:43:38 +0000 Subject: [New-bugs-announce] [issue35987] Mypy and Asyncio import cannot be skipped Message-ID: <1550065418.2.0.752410201909.issue35987@roundup.psfhosted.org>

New submission from Samuel GIFFARD :

Some modules cannot be found with mypy, and cannot be silenced/skipped by mypy. Given the following 3 files:

##############################
# example1.py
from asyncio import BaseEventLoop # Module 'asyncio' has no attribute 'BaseEventLoop'
from asyncio import Semaphore # OK
import aiostream # gets skipped correctly
import asyncio # OK
asyncio.windows_events.WindowsProactorEventLoopPolicy() # Module has no attribute 'windows_events'
asyncio.WindowsProactorEventLoopPolicy() # Module has no attribute 'WindowsProactorEventLoopPolicy'
##############################
# mypy.ini
[mypy]
python_version = 3.7

[mypy-asyncio,aiostream]
follow_imports = skip
ignore_missing_imports = True
##############################
# Pipfile
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[dev-packages]

[packages]
mypy = "==0.670"
aiostream = "==0.3.1"

[requires]
python_version = "3.7"
##############################

$> pipenv install
#...
$> pipenv run python -m mypy --config-file mypy.ini example1.py
example1.py:1: error: Module 'asyncio' has no attribute 'BaseEventLoop'
example1.py:5: error: Module has no attribute "windows_events"
example1.py:6: error: Module has no attribute "WindowsProactorEventLoopPolicy"

Why Semaphore (and others) works and BaseEventLoop (among others) doesn't work... no idea. But it completely breaks mypy that it somehow cannot be skipped nor silenced.
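Not a fix for the skip/silence problem itself, but for completeness: the attribute does exist at runtime, and a plain per-line ignore is the usual stop-gap sketch (bracketed error codes may not be available in the pinned mypy version):

```
# example1_workaround.py -- hypothetical name; silences only the type checker
import asyncio

loop_cls = asyncio.BaseEventLoop  # type: ignore
print(loop_cls)
```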
---------- components: asyncio messages: 335439 nosy: Samuel GIFFARD, asvetlov, yselivanov priority: normal severity: normal status: open title: Mypy and Asyncio import cannot be skipped type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 13 09:46:09 2019 From: report at bugs.python.org (Hinko Kocevar) Date: Wed, 13 Feb 2019 14:46:09 +0000 Subject: [New-bugs-announce] [issue35988] Python interpreter segfault Message-ID: <1550069169.56.0.128755825216.issue35988@roundup.psfhosted.org> New submission from Hinko Kocevar : I'm running a tornado server with websockets client. Every now and then the python3.5 crashes, seg faults. I added code tracking (https://stackoverflow.com/questions/2663841/python-tracing-a-segmentation-fault) and this is what I see: line, /usr/lib64/python3.5/asyncio/events.py:675 call, /usr/lib64/python3.5/asyncio/events.py:621 line, /usr/lib64/python3.5/asyncio/events.py:627 line, /usr/lib64/python3.5/asyncio/events.py:628 line, /usr/lib64/python3.5/asyncio/events.py:629 return, /usr/lib64/python3.5/asyncio/events.py:629 line, /usr/lib64/python3.5/asyncio/events.py:676 line, /usr/lib64/python3.5/asyncio/events.py:677 return, /usr/lib64/python3.5/asyncio/events.py:677 line, /usr/lib64/python3.5/asyncio/futures.py:172 line, /usr/lib64/python3.5/asyncio/futures.py:173 call, /usr/lib64/python3.5/asyncio/base_events.py:1461 line, /usr/lib64/python3.5/asyncio/base_events.py:1462 return, /usr/lib64/python3.5/asyncio/base_events.py:1462 return, /usr/lib64/python3.5/asyncio/futures.py:173 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:584 call, /usr/lib64/python3.5/asyncio/futures.py:315 line, /usr/lib64/python3.5/asyncio/futures.py:322 line, /usr/lib64/python3.5/asyncio/futures.py:325 return, /usr/lib64/python3.5/asyncio/futures.py:325 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:585 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:586 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:587 call, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1045 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1046 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1047 call, /usr/lib64/python3.5/site-packages/tornado/iostream.py:134 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:135 return, /usr/lib64/python3.5/site-packages/tornado/iostream.py:135 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1048 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1050 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1051 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1052 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1060 call, /usr/lib64/python3.5/site-packages/tornado/iostream.py:163 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:168 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:169 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:170 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:174 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:175 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:176 return, /usr/lib64/python3.5/site-packages/tornado/iostream.py:176 call, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1247 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1248 line, /usr/lib64/python3.5/site-packages/tornado/iostream.py:1249 
Program received signal SIGPIPE, Broken pipe. 0x00007ffff76f7bfb in __libc_send (fd=14, buf=0x118a0b0, n=2332, flags=0) at ../sysdeps/unix/sysv/linux/x86_64/send.c:31 31 ssize_t result = INLINE_SYSCALL (sendto, 6, fd, buf, n, flags, NULL, Missing separate debuginfos, use: debuginfo-install python35u-3.5.6-1.ius.centos7.x86_64 (gdb) (gdb) (gdb) (gdb) bt #0 0x00007ffff76f7bfb in __libc_send (fd=14, buf=0x118a0b0, n=2332, flags=0) at ../sysdeps/unix/sysv/linux/x86_64/send.c:31 #1 0x00007fffed7c6c66 in sock_send_impl () from /usr/lib64/python3.5/lib-dynload/_socket.cpython-35m-x86_64-linux-gnu.so #2 0x00007fffed7c9e06 in sock_call_ex () from /usr/lib64/python3.5/lib-dynload/_socket.cpython-35m-x86_64-linux-gnu.so #3 0x00007fffed7ca79f in sock_send () from /usr/lib64/python3.5/lib-dynload/_socket.cpython-35m-x86_64-linux-gnu.so #4 0x00007ffff79b60d9 in PyCFunction_Call () from /lib64/libpython3.5m.so.1.0 #5 0x00007ffff7a2f646 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #6 0x00007ffff7a2ea88 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #7 0x00007ffff7a2ea88 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #8 0x00007ffff7a30ebc in _PyEval_EvalCodeWithName () from /lib64/libpython3.5m.so.1.0 #9 0x00007ffff7a2e22f in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #10 0x00007ffff7a30ebc in _PyEval_EvalCodeWithName () from /lib64/libpython3.5m.so.1.0 #11 0x00007ffff7a2e22f in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #12 0x00007ffff7a30ebc in _PyEval_EvalCodeWithName () from /lib64/libpython3.5m.so.1.0 #13 0x00007ffff7a2e22f in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #14 0x00007ffff7a30ebc in _PyEval_EvalCodeWithName () from /lib64/libpython3.5m.so.1.0 #15 0x00007ffff7a2e22f in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #16 0x00007ffff79924c0 in gen_send_ex () from /lib64/libpython3.5m.so.1.0 #17 0x00007ffff7a2ec28 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #18 0x00007ffff7a30ebc in _PyEval_EvalCodeWithName () from /lib64/libpython3.5m.so.1.0 #19 0x00007ffff7a30fc8 in PyEval_EvalCodeEx () from /lib64/libpython3.5m.so.1.0 #20 0x00007ffff799958e in function_call () from /lib64/libpython3.5m.so.1.0 #21 0x00007ffff796e2aa in PyObject_Call () from /lib64/libpython3.5m.so.1.0 #22 0x00007ffff7a2be87 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #23 0x00007ffff7a2ea88 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #24 0x00007ffff7a2ea88 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #25 0x00007ffff7a2ea88 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #26 0x00007ffff7a2ea88 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #27 0x00007ffff7a2ea88 in PyEval_EvalFrameEx () from /lib64/libpython3.5m.so.1.0 #28 0x00007ffff7a30ebc in _PyEval_EvalCodeWithName () from /lib64/libpython3.5m.so.1.0 #29 0x00007ffff7a30fc8 in PyEval_EvalCodeEx () from /lib64/libpython3.5m.so.1.0 #30 0x00007ffff7a3100b in PyEval_EvalCode () from /lib64/libpython3.5m.so.1.0 #31 0x00007ffff7a50434 in run_mod () from /lib64/libpython3.5m.so.1.0 #32 0x00007ffff7a528bd in PyRun_FileExFlags () from /lib64/libpython3.5m.so.1.0 #33 0x00007ffff7a52a27 in PyRun_SimpleFileExFlags () from /lib64/libpython3.5m.so.1.0 #34 0x00007ffff7a68bec in Py_Main () from /lib64/libpython3.5m.so.1.0 #35 0x0000000000400a29 in main () (gdb) ---------- components: asyncio messages: 335447 nosy: asvetlov, hinxx, yselivanov priority: normal severity: normal status: open title: Python interpreter segfault type: crash 
versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Feb 13 11:13:55 2019 From: report at bugs.python.org (John Florian) Date: Wed, 13 Feb 2019 16:13:55 +0000 Subject: [New-bugs-announce] [issue35989] ipaddress.IPv4Network allows prefix > 32 Message-ID: <1550074435.28.0.00523777369606.issue35989@roundup.psfhosted.org>

New submission from John Florian :

I wanted a simple is_valid_ipv4_network() function, so I wrote one and a bunch of unit tests, where I discovered that I can legally do:

>>> n = IPv4Network(('192.168.123.234', 12345678))
>>> n
IPv4Network('192.168.123.234/12345678')
>>> n.prefixlen
12345678
>>> n.max_prefixlen
32

I assume this is a bug.
---------- messages: 335460 nosy: John Florian priority: normal severity: normal status: open title: ipaddress.IPv4Network allows prefix > 32 versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Feb 13 14:28:48 2019 From: report at bugs.python.org (John Florian) Date: Wed, 13 Feb 2019 19:28:48 +0000 Subject: [New-bugs-announce] [issue35990] ipaddress.IPv4Interface won't accept 2-tuple (address, mask) Message-ID: <1550086128.37.0.0760252061885.issue35990@roundup.psfhosted.org>

New submission from John Florian :

The docs say """The meaning of address is as in the constructor of IPv4Network, except that arbitrary host addresses are always accepted.""" However, that doesn't seem to be entirely true:

>>> tup1 = ('192.168.123.234', 24)
>>> tup2 = ('192.168.123.234', '255.255.255.0')
>>> IPv4Network(tup1, strict=False)
IPv4Network('192.168.123.0/24')
>>> IPv4Network(tup2, strict=False)
IPv4Network('192.168.123.0/24')
>>> IPv4Interface(tup1)
IPv4Interface('192.168.123.234/24')
>>> IPv4Interface(tup2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python3.7/ipaddress.py", line 1391, in __init__
    self._prefixlen = int(address[1])
ValueError: invalid literal for int() with base 10: '255.255.255.0'
---------- messages: 335474 nosy: John Florian priority: normal severity: normal status: open title: ipaddress.IPv4Interface won't accept 2-tuple (address, mask) versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Feb 14 01:09:55 2019 From: report at bugs.python.org (wangjiangqiang) Date: Thu, 14 Feb 2019 06:09:55 +0000 Subject: [New-bugs-announce] [issue35991] potential double free in Modules/_randommodule.c line 295 and line 317 Message-ID: <1550124595.27.0.553680989056.issue35991@roundup.psfhosted.org>

Change by wangjiangqiang <767563655 at qq.com>: ---------- nosy: wjq-security priority: normal severity: normal status: open title: potential double free in Modules/_randommodule.c line 295 and line 317 type: security _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Feb 14 01:14:12 2019 From: report at bugs.python.org (Caleb Donovick) Date: Thu, 14 Feb 2019 06:14:12 +0000 Subject: [New-bugs-announce] [issue35992] Metaclasses interfere with __class_getitem__ Message-ID: <1550124852.78.0.300044847884.issue35992@roundup.psfhosted.org>

New submission from Caleb Donovick :

OS: Debian testing
python3 -VV: Python 3.7.2+ (default, Feb 2 2019, 14:31:48) [gcc 8.2.0]

The following:

```
class Meta(type):
    pass

class X(metaclass=Meta):
    def __class_getitem__(cls, key):
        return key

X[10]
```

Results in:

```
TypeError: 'Meta' object does not support indexing
```

However, PEP 560 specifically states that __class_getitem__ should be used as a fallback when a metaclass does not implement __getitem__.
---------- assignee: docs at python components: Documentation, Interpreter Core messages: 335497 nosy: Donovick, docs at python priority: normal severity: normal status: open title: Metaclasses interfere with __class_getitem__ type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Feb 14 01:21:26 2019 From: report at bugs.python.org (wangjiangqiang) Date: Thu, 14 Feb 2019 06:21:26 +0000 Subject: [New-bugs-announce] [issue35993] incorrect use of released memory in Python/pystate.c line 284 Message-ID: <1550125286.46.0.185634050351.issue35993@roundup.psfhosted.org>

Change by wangjiangqiang <767563655 at qq.com>: ---------- nosy: wjq-security priority: normal severity: normal status: open title: incorrect use of released memory in Python/pystate.c line 284 type: security _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Feb 14 02:32:35 2019 From: report at bugs.python.org (Peixing Xin) Date: Thu, 14 Feb 2019 07:32:35 +0000 Subject: [New-bugs-announce] [issue35994] In WalkTests of test_os.py, sub2_tree missed the dir "SUB21" if symlink can't be supported. Message-ID: <1550129555.65.0.532598908006.issue35994@roundup.psfhosted.org>

New submission from Peixing Xin :

Looking into the setUp method of the WalkTests class in test_os.py, sub2_tree is missing "SUB21" in its directory list if support.can_symlink() returns False.
---------- components: Tests messages: 335505 nosy: pxinwr priority: normal severity: normal status: open title: In WalkTests of test_os.py, sub2_tree missed the dir "SUB21" if symlink can't be supported.
type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 14 03:45:46 2019 From: report at bugs.python.org (lidayan) Date: Thu, 14 Feb 2019 08:45:46 +0000 Subject: [New-bugs-announce] [issue35995] logging.handlers.SMTPHandler Message-ID: <1550133946.82.0.233218933413.issue35995@roundup.psfhosted.org> New submission from lidayan <840286247 at qq.com>: SSL encrypted socket on SMTPHandler error Traceback (most recent call last): File "/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/handlers.py", line 1008, in emit smtp = smtplib.SMTP(self.mailhost, port, timeout=self.timeout) File "/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/smtplib.py", line 251, in __init__ (code, msg) = self.connect(host, port) File "/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/smtplib.py", line 338, in connect (code, msg) = self.getreply() File "/usr/local/Cellar/python/3.7.2_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/smtplib.py", line 391, in getreply + str(e)) smtplib.SMTPServerDisconnected: Connection unexpectedly closed: timed out ---------- components: Library (Lib) messages: 335511 nosy: lidayan priority: normal severity: normal status: open title: logging.handlers.SMTPHandler versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 14 14:34:36 2019 From: report at bugs.python.org (Berry Schoenmakers) Date: Thu, 14 Feb 2019 19:34:36 +0000 Subject: [New-bugs-announce] [issue35996] Optional modulus argument for new math.prod() function Message-ID: <1550172876.35.0.867721234262.issue35996@roundup.psfhosted.org> New submission from Berry Schoenmakers : It's nice to see the arrival of the prod() function, see PR11359. Just as for the built-in pow(x, y[, z]) function it would be very useful to have an optional argument z for computing integer products modulo z. Typical use case in cryptography would be: prod((pow(x, y, z) for x, y in zip(g, s)), z) to compute the product of all (potentially many) g[i]**s[i]'s modulo z. And, just as with the use of pow(), the intermediate values for prod() may in general grow quickly, hence modular reduction is essential to limit time and space usage. Maybe an interesting option to add at this stage? ---------- components: Library (Lib) messages: 335557 nosy: lschoe, rhettinger priority: normal severity: normal status: open title: Optional modulus argument for new math.prod() function type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 14 15:36:32 2019 From: report at bugs.python.org (muhzi) Date: Thu, 14 Feb 2019 20:36:32 +0000 Subject: [New-bugs-announce] [issue35997] ImportError: dlopen failed: cannot locate symbol "PyBool_Type" Message-ID: <1550176592.88.0.291433310272.issue35997@roundup.psfhosted.org> New submission from muhzi : I cross compiled python for android x86_64, and the interpreter works fine, no problems. But when I compiled some other extension and try to import it. I get an import error as such the imported shared library fails to locate the symbol "PyBool_Type". ImportError: dlopen failed: cannot locate symbol "PyBool_Type" referenced by .... 
The extension was compiled with -I && -L flags pointing to the Python installation include and lib folders.
---------- components: Cross-Build, Regular Expressions messages: 335560 nosy: Alex.Willmer, ezio.melotti, mrabarnett, muhzi, xdegaye priority: normal severity: normal status: open title: ImportError: dlopen failed: cannot locate symbol "PyBool_Type" type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Feb 15 05:12:18 2019 From: report at bugs.python.org (Stéphane Wirtel) Date: Fri, 15 Feb 2019 10:12:18 +0000 Subject: [New-bugs-announce] [issue35998] ./python -m test test_asyncio fails Message-ID: <1550225538.04.0.738886867119.issue35998@roundup.psfhosted.org>

New submission from Stéphane Wirtel :

======================================================================
ERROR: test_start_tls_server_1 (test.test_asyncio.test_sslproto.SelectorStartTLSTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/stephane/src/github.com/python/cpython-original/Lib/test/test_asyncio/test_sslproto.py", line 510, in test_start_tls_server_1
    self.loop.run_until_complete(run_main())
  File "/home/stephane/src/github.com/python/cpython-original/Lib/asyncio/base_events.py", line 589, in run_until_complete
    return future.result()
  File "/home/stephane/src/github.com/python/cpython-original/Lib/test/test_asyncio/test_sslproto.py", line 503, in run_main
    await asyncio.wait_for(
  File "/home/stephane/src/github.com/python/cpython-original/Lib/asyncio/tasks.py", line 461, in wait_for
    raise exceptions.TimeoutError()
asyncio.exceptions.TimeoutError

my current revision: 3e028b2d40370dc986b6f3146a7ae927bc119f97
system: fedora 29
compiled with gcc: gcc (GCC) 8.2.1 20181215 (Red Hat 8.2.1-6)

No issue on Travis, but this test fails on my computer, and I cleaned my repo with git clean -dfqx.
---------- files: stdout.txt messages: 335596 nosy: matrixise priority: normal severity: normal status: open title: ./python -m test test_asyncio fails versions: Python 3.8 Added file: https://bugs.python.org/file48143/stdout.txt _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Feb 15 06:55:24 2019 From: report at bugs.python.org (Defert) Date: Fri, 15 Feb 2019 11:55:24 +0000 Subject: [New-bugs-announce] [issue35999] multpirocessing.Process alive after SIGTERM on parent Message-ID: <1550231724.21.0.67060075553.issue35999@roundup.psfhosted.org>

New submission from Defert :

Hello, using the multiprocessing.Process class on Python 3.5 (untested with other versions), child processes are not killed when the main process is killed. The doc mentions a "daemon" flag (https://python.readthedocs.io/en/latest/library/multiprocessing.html#multiprocessing.Process.daemon), which says "When a process exits, it attempts to terminate all of its daemonic child processes." However, this does not seem to be the case: when the parent process is killed, all children remain alive whatever the value of the daemon flag is.
Test code:

from multiprocessing import Process
from time import sleep
from os import getpid

def log(daemon_mode):
    while True:
        print('worker %i %s' % (getpid(), daemon_mode))
        sleep(3)

print('parent pid %i' % getpid())
a = Process(target=log, args=(0,), daemon=False)
a.start()
b = Process(target=log, args=(1,), daemon=True)
b.start()
while True:
    sleep(60)

######

To be run with:

user@host:~# python3 test.py &
[1] 14749
parent pid 14749
worker 14751 1
worker 14750 0
user@host:~#
user@host:~# kill 14749
[1]+ Terminated python3 test.py
user@host:~#
worker 14751 1
worker 14750 0
---------- components: Library (Lib) messages: 335601 nosy: lids priority: normal severity: normal status: open title: multpirocessing.Process alive after SIGTERM on parent versions: Python 3.5 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Feb 15 07:22:08 2019 From: report at bugs.python.org (Dan Snider) Date: Fri, 15 Feb 2019 12:22:08 +0000 Subject: [New-bugs-announce] [issue36000] __debug__ is a keyword but not a keyword Message-ID: <1550233328.67.0.392552434895.issue36000@roundup.psfhosted.org>

New submission from Dan Snider :

keyword.py is used by stuff like the IDLE colorizer to help determine if an identifier is considered a keyword, but it doesn't identify __debug__ despite the fact that the parser treats it exactly the same as None, True, and False. I could not find a more recent issue to bring this back up than #34464, and there it was suggested an issue be made, so here it is.

As mentioned on that previous issue, currently keyword.py builds the list automatically by scanning "Python/graminit.c", but since there is no "__debug__" token to be found in that file it doesn't get added to kwlist. There is a file that groups the keywords True, False, None, and __debug__: ast.c. But there's no reason for it to be that complicated when nothing would break by, for example, adding on line 54 of keyword.py the statement "kwlist += ['__debug__']".

Actually, I'm interested in knowing why __debug__ is a keyword in the first place. I'm terrible at searching apparently so there might be more, but from what I can tell, the only thing the docs have to say about __debug__ really is the following tautology: "The value for the built-in variable [__debug__] is determined when the interpreter starts."
---------- components: Library (Lib) messages: 335605 nosy: bup priority: normal severity: normal status: open title: __debug__ is a keyword but not a keyword type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Fri Feb 15 07:44:26 2019 From: report at bugs.python.org (neil pop) Date: Fri, 15 Feb 2019 12:44:26 +0000 Subject: [New-bugs-announce] [issue36001] LIBFFI_INCLUDEDIR is not detected when set into a profile nor in ./configure LIBFFI_INCLUDEDIR="path/to/libffi/include" Message-ID: <1550234666.59.0.236804774599.issue36001@roundup.psfhosted.org>

New submission from neil pop :

Hello, I tried making Python 3.7.2 on Linux Mint without libffi set into my LIBFFI_INCLUDEDIR (usr/local/include) but set into my bashrc, which is basically a profile, with export LIBFFI_INCLUDEDIR="-Ipath/to/libff/include". However, when I try making Python after configuring it (I also tried passing that LIBFFI_INCLUDEDIR as an argument for the configure part), I get "cannot build ctypes" as there is no ffi.h header detected.
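For reference, whether the variable was recorded at build time can be checked from the interpreter itself (a minimal check; it ties into the setup.py line quoted next):

```
import sysconfig

# Prints None (or an empty string) when LIBFFI_INCLUDEDIR was not passed to configure.
print(repr(sysconfig.get_config_var("LIBFFI_INCLUDEDIR")))
```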
So what i did is that i modified the python setup at line 1989: ffi_inc = [sysconfig.get_config_var("LIBFFI_INCLUDEDIR")] to ffi_inc = ["path/to/libffi/include"] and ran the configure then the make and voil?, ctypes are now compiled. So i was wondering is there a way to setup LIBFFI_INCLUDEDIR so it get returned by sysconfig.get_config_var("LIBFFI_INCLUDEDIR") (since it clearly doesn't) ? Cheers, Elisa ---------- components: ctypes messages: 335607 nosy: neil pop priority: normal severity: normal status: open title: LIBFFI_INCLUDEDIR is not detected when set into a profile nor in ./configure LIBFFI_INCLUDEDIR="path/to/libffi/include" type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 15 08:29:15 2019 From: report at bugs.python.org (Martijn Pieters) Date: Fri, 15 Feb 2019 13:29:15 +0000 Subject: [New-bugs-announce] [issue36002] configure --enable-optimizations with clang fails to detect llvm-profdata Message-ID: <1550237355.31.0.735770966924.issue36002@roundup.psfhosted.org> New submission from Martijn Pieters : This is probably a automake bug. When running CC=clang CXX=clang++ ./configure --enable-optimizations, configure tests for a non-existing -llvm-profdata binary: checking for --enable-optimizations... yes checking for --with-lto... no checking for -llvm-profdata... no configure: error: llvm-profdata is required for a --enable-optimizations build but could not be found. The generated configure script looks for "$target_alias-llvm-profdata", and $target_alias is an empty string. This problem is not visible on Macs, where additional checks for "/usr/bin/xcrun -find llvm-profdata" locate the binary. The work-around would be to specify a target when configuring. 
---------- components: Build messages: 335610 nosy: mjpieters priority: normal severity: normal status: open title: configure --enable-optimizations with clang fails to detect llvm-profdata versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 15 08:44:39 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Fri, 15 Feb 2019 13:44:39 +0000 Subject: [New-bugs-announce] [issue36003] set better defaults for TCPServer options Message-ID: <1550238279.45.0.385319342978.issue36003@roundup.psfhosted.org> New submission from Giampaolo Rodola' : socketserver.TCPServer provides the following defaults: allow_reuse_address = False request_queue_size = 5 Proposal is to: * have "allow_reuse_address = True" on POSIX in order to immediately reuse previous sockets which were bound on the same address and remained in TIME_WAIT state * have "request_queue_size = None" so that it's up to socket.listen() to choose a default reasonable value (usually 128) ---------- components: Library (Lib) keywords: easy messages: 335612 nosy: asvetlov, giampaolo.rodola, neologix, yselivanov priority: normal severity: normal status: open title: set better defaults for TCPServer options type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 15 09:28:36 2019 From: report at bugs.python.org (Paul Ganssle) Date: Fri, 15 Feb 2019 14:28:36 +0000 Subject: [New-bugs-announce] [issue36004] Add datetime.fromisocalendar Message-ID: <1550240916.35.0.451510247621.issue36004@roundup.psfhosted.org> New submission from Paul Ganssle : Datetime has many methods that "serializes" an instance to some other format - toordinal, timestamp, isoformat, etc. Most methods that "serialize" a datetime have a corresponding method that "deserializes" that method, or another way to reverse the operation: - strftime / strptime - timestamp / fromtimestamp - isoformat / fromisoformat - toordinal / fromordinal - timetuple / datetime(*thetuple[0:6]) However, as I found out when implementing `dateutil.parser.isoparse`, there is no simple way to invert `isocalendar()`. I have an implementation as part of dateutil.parser.isoparse: https://github.com/dateutil/dateutil/blob/master/dateutil/parser/isoparser.py#L297 If there are no objections, I'd like to add an implementation in CPython itself. Thinking the name should be `fromisocalendar`. ---------- components: Library (Lib) messages: 335616 nosy: belopolsky, lemburg, p-ganssle priority: low severity: normal status: open title: Add datetime.fromisocalendar type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 15 11:29:43 2019 From: report at bugs.python.org (STINNER Victor) Date: Fri, 15 Feb 2019 16:29:43 +0000 Subject: [New-bugs-announce] [issue36005] [2.7] test_ssl failures on ARMv7 Ubuntu 2.7 with OpenSSL 1.1.1a Message-ID: <1550248183.17.0.361168114847.issue36005@roundup.psfhosted.org> New submission from STINNER Victor : Extract of pythoninfo: ssl.HAS_SNI: True ssl.OPENSSL_VERSION: OpenSSL 1.1.1a 20 Nov 2018 ssl.OPENSSL_VERSION_INFO: (1, 1, 1, 1, 15) ssl.OP_ALL: -0x7fffffac ssl.OP_NO_TLSv1_1: 0x10000000 https://buildbot.python.org/all/#/builders/92/builds/325 Many tests with TLS errors. 
A few examples: ERROR: test_connect (test.test_ssl.NetworkedTests) ... SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727) ERROR: test_protocol_sslv23 (test.test_ssl.ThreadedTests) Connecting to an SSLv23 server with various client options ---------------------------------------------------------------------- ... self._sslobj.do_handshake() SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:727) ERROR: test_networked_good_cert (test.test_httplib.HTTPSTest) ... SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727) ERROR: test_context_argument (test.test_urllibnet.urlopen_HttpsTests) ... IOError: [Errno socket error] [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727) -- There are other failures which may be unrelated: ERROR: test_fileno (test.test_urllib2net.OtherNetworkTests) ... HTTPError: HTTP Error 404: Not Found This buildbot build contains 5 changes: [2.7] bpo-33570: TLS 1.3 ciphers for OpenSSL 1.1.1 (GH-6976) (GH-8760) (GH-10607)(3 hours ago) bpo-35746: Credit Colin Read and Nicolas Edet (GH-11866)(5 hours ago) Doc sidebar: 3.6 has moved to security-fix mode. (GH-11810)(5 days ago) [2.7] Fix url to core-mentorship mailing list (GH-11775). (GH-11778)(9 days ago) bpo-25592: Improve documentation of distutils data_files (GH-9767) (GH-11734)(13 days ago) I bet that it's a regression caused by: https://github.com/python/cpython/commit/c49f63c1761ce03df7850b9e0b31a18c432dac64 ---------- components: Tests messages: 335619 nosy: vstinner priority: normal severity: normal status: open title: [2.7] test_ssl failures on ARMv7 Ubuntu 2.7 with OpenSSL 1.1.1a type: security versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 15 13:08:32 2019 From: report at bugs.python.org (Cheryl Sabella) Date: Fri, 15 Feb 2019 18:08:32 +0000 Subject: [New-bugs-announce] [issue36006] [good first issue] Align version changed for truncate in io module Message-ID: <1550254112.89.0.985997369649.issue36006@roundup.psfhosted.org> New submission from Cheryl Sabella : In the documentation page for the io module, under the truncate method, there is a version changed directive which is not properly aligned. 
https://docs.python.org/3/library/io.html#io.IOBase.truncate ---------- assignee: docs at python components: Documentation keywords: easy messages: 335633 nosy: cheryl.sabella, docs at python priority: normal severity: normal stage: needs patch status: open title: [good first issue] Align version changed for truncate in io module type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 15 17:48:04 2019 From: report at bugs.python.org (Anthony Sottile) Date: Fri, 15 Feb 2019 22:48:04 +0000 Subject: [New-bugs-announce] [issue36007] python3.8 a1 - docs build requires sphinx 1.7 but uses a 1.8 feature Message-ID: <1550270884.67.0.140749638005.issue36007@roundup.psfhosted.org> New submission from Anthony Sottile : doctest `:skipif:` was added in https://github.com/sphinx-doc/sphinx/issues/5273 Used here: https://github.com/python/cpython/blame/36433221f06d649dbd7e13f5fec948be8ffb90af/Doc/library/turtle.rst#L252-L253 Sphinx minimum version configured here: https://github.com/python/cpython/blob/36433221f06d649dbd7e13f5fec948be8ffb90af/Doc/conf.py#L44-L45 Should this be upped to 1.8 or should the :skipif: be removed? My guess is the former; I'd be happy to contribute a patch. I hit this while packaging 3.8.0a1 for deadsnakes. ---------- assignee: docs at python components: Build, Documentation messages: 335654 nosy: Anthony Sottile, docs at python priority: normal severity: normal status: open title: python3.8 a1 - docs build requires sphinx 1.7 but uses a 1.8 feature versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 15 19:53:55 2019 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 16 Feb 2019 00:53:55 +0000 Subject: [New-bugs-announce] [issue36008] [good first issue] Update documentation for 3.8 Message-ID: <1550278435.56.0.94881175235.issue36008@roundup.psfhosted.org> New submission from Cheryl Sabella : Please save this issue for a new contributor. If you have previously made a pull request to CPython, consider leaving this issue for someone else. Thanks! Similar to what PR 9501 did for updating references in the docs from 3.6 to 3.7, an update needs to be done for 3.8. ---------- assignee: docs at python components: Documentation keywords: easy messages: 335663 nosy: cheryl.sabella, docs at python priority: normal severity: normal stage: needs patch status: open title: [good first issue] Update documentation for 3.8 type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 16 03:04:16 2019 From: report at bugs.python.org (Alexander Mohr) Date: Sat, 16 Feb 2019 08:04:16 +0000 Subject: [New-bugs-announce] [issue36009] weakref.ReferenceType is not a valid typing type Message-ID: <1550304256.68.0.492420733825.issue36009@roundup.psfhosted.org> New submission from Alexander Mohr : For valid types which encapsulate other types, like typing.List or typing.Tuple, you can declare what's contained in them like so:

foo: typing.List[int] = []

Unfortunately weakref.ReferenceType does not allow you to do this. You get the error:

>>> foo: weakref.ReferenceType[typing.Any] = None
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'type' object is not subscriptable

so either ReferenceType should be fixed or a new type made available.
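Until such a type exists, a workaround sketch (not from the report; assumes a static checker such as mypy that understands string annotations) is to quote the annotation so the subscripting is never evaluated at runtime:

    # Workaround sketch: the quoted annotation is stored as a plain string in
    # __annotations__ and never evaluated, so no "not subscriptable" error occurs.
    import typing
    import weakref

    class Cache:
        ref: "weakref.ReferenceType[typing.Any]"

        def __init__(self, obj: typing.Any) -> None:
            self.ref = weakref.ref(obj)

On 3.7+, `from __future__ import annotations` (PEP 563) has the same effect for every annotation in a module.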
---------- components: Library (Lib) messages: 335674 nosy: thehesiod priority: normal severity: normal status: open title: weakref.ReferenceType is not a valid typing type type: behavior versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 16 09:20:20 2019 From: report at bugs.python.org (jt) Date: Sat, 16 Feb 2019 14:20:20 +0000 Subject: [New-bugs-announce] [issue36010] Please provide a .zip Windows release of Python that is not crippled/for embedding only Message-ID: <1550326820.19.0.863740875159.issue36010@roundup.psfhosted.org> New submission from jt : It would be really useful if you could provide a .zip Windows release of Python that is not crippled/for embedding only. The reason is simply is that right now, I am having constant pain & trouble with it writing an automated build script for Windows (.bat) that works on a Python-free machine. Specifically the installer has: 1.) no usable functionality in an automated build environment (%PATH% is very unnecessary or even undesirable to set up for a Python install only temporarily downloaded for a build) 2.) possibly conflicts with existing installs (making an automated build break/affect other Python installs/leaving other side effects, which is extremely undesirable and ugly) 3.) can break into a completely unusable state when interrupted (where the install doesn't complete, but running it again also doesn't, needing complicated special logic to nuke it) NONE of these would be an issue if there was just a .zip. And there is, of course, in form of the embedded install - but that one is useless to me for an automated build because I need pip, and it doesn't support running pip. ---------- components: Installation, Windows messages: 335687 nosy: jt, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Please provide a .zip Windows release of Python that is not crippled/for embedding only versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 16 13:36:12 2019 From: report at bugs.python.org (Christian Korneck) Date: Sat, 16 Feb 2019 18:36:12 +0000 Subject: [New-bugs-announce] [issue36011] ssl - tls verify on Windows 10 fails Message-ID: <1550342172.09.0.390948819907.issue36011@roundup.psfhosted.org> New submission from Christian Korneck : Hello, I have the impression that there's a general issue with how the Python stdlib module `ssl` uses the Windows certificate store to read the "bundle" of trusted Root CA certificates. At a first look, I couldn't find this issue documented elsewhere, so I'm trying to describe it below (apologies if its a duplicate). This issue leads to that on a standard Windows 10 installation with a standard Python 2.x or 3.x installation TLS verification for many webservers fails out of the box, including for common domains/webservers with a highly correct TLS setup like https://google.de or https://www.verisign.com/ . In short: On a vanilla Win 10 with a vanilla Python 2/3 installation, HTTPS connections to "commonly trusted" domain names fail out of the box. Example with Python 2.7.15: >>> import urllib2 >>> response = urllib2.urlopen("https://google.de") [...] 
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726) Expected Behavior: TLS verify succeeds Actual Behavior: TLS verify fails Affected Python version/environment: I believe every Python version that uses the Windows certificate store is affected (since 3.4 / 2.7.9). However, I've only tested 2.7.11, 2.7.15, 3.7.2 (all 64 bit). I did test on Windows 10 1803, 1809, Windows Server 2019 1809 (all Pro x64 with latest patchlevel, i.e. the Jan 2019 cumulative update). All tested Python versions on all tested Windows 10 versions show the same behavior. -------- Details: 1.) Background - Factor1: Python's "ssl" std lib Since Python 3.4 / 2.7.9 the ssl lib uses the Windows certificate store to get a "bundle" of the trusted root CA certificates. (Some Python libraries like requests bring their own ca bundle though, usually through certifi. These libs are not affected). However, the ssl lib is not using the Windows SCHANNEL library but instead bundles its own copy of openssl. - Factor2: Windows 10 behavior Windows provides a certificate store, a vendor managed and updated "bundle" of Trusted Root CA certificates and a library for TLS operations called SCHANNEL (the native Windows openssl equivalent). On Windows 10, the list of pre-installed Trusted Root CA certificates is very minimal. On Windows 10 1809 only 12 Root CAs are known by the certificate store. In comparison certifi (Mozilla cabundle) currently lists 134 trusted RootCAs. Many widely trusted RootCAs are missing out of the box in the Windows certstore. Instead there's an online download mechanism used by the SCHANNEL library to download additional trusted root CA certificates from a Microsoft server when they are needed for the first time. Example: The certificate currently used for https://google.de was signed by an IntermediateCA which was signed by the RootCA "GlobalSign Root CA - R2". The cert for this RootCA is not out of the box present in the Windows certstore and therefore not trusted. When I make a HTTPS connection to this domain with a client that uses the SCHANNEL library (i.e. Microsoft Edge or Internet Explorer browser), the connection succeeds and is shown as "trusted". Afterwards the previously missing RootCA certificate appears in the windows certstore. (The Windows certstores can get inspected with the GUIs certml.msc (Machine store) and certmgr.msc (User store)). 2.) Behavior - install a vanilla Windows 10 1809 with default settings - install a vanilla Python 2.7.15 and/or 3.7.2 In Python: c:\python27\python.exe Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:30:26) [MSC v.1500 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import socket, ssl >>> context = ssl.SSLContext(ssl.PROTOCOL_TLS) >>> context.verify_mode = ssl.CERT_REQUIRED >>> context.check_hostname = True # by default there are no cacerts in the context >>> len(context.get_ca_certs()) 0 >>> context.load_default_certs() >>> len(context.get_ca_certs()) # after loading the cacerts from the Windows cert store "ROOT", we are seeing some - but it's only 12 root cacerts in a vanilla Windows 10 (compared to 134 in the certifi / mozilla cabundle!) 
12 >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) >>> ssl_sock = context.wrap_socket(s, server_hostname='www.google.de') >>> ssl_sock.connect(('www.google.de', 443)) Traceback (most recent call last): File "", line 1, in File "c:\python27\lib\ssl.py", line 882, in connect self._real_connect(addr, False) File "c:\python27\lib\ssl.py", line 873, in _real_connect self.do_handshake() File "c:\python27\lib\ssl.py", line 846, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726) >>> ssl_sock.close() This first attempt to make a HTTPS connection to https://google.de failed because the required RootCA for this domain is not part of the very minimal Windows out-of-the-box ca bundle. Now let's make a https request against this domain with an application that uses the Windows SCHANNEL library (for example by typing https://google.de/ into the address bar in Internet Explorer / Edge browser). I will use the experimental pySchannelSSL here: $ git clone https://github.com/lsowen/pySchannelSSL.git $ "c:\Program Files\Python37\python.exe" Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 23:09:28) [MSC v.1916 64 bit (AMD64)] on win32 >>> import pySchannelSSL.httpshandler >>> h = pySchannelSSL.httpshandler.SSLConnection("google.de", port=443) >>> h.connect() >>> h.close() As part of processing the above request, the Windows SCHANNEL library has magically fetched the missing trusted RootCA certificate from a Microsoft server and has stored it permanently in the Windows "Trusted Root CAs" certstore. We can verify this with: >>> import socket, ssl >>> context = ssl.SSLContext(ssl.PROTOCOL_TLS) >>> context.load_default_certs() >>> len(context.get_ca_certs()) 15 Note that in our first attempt there were only 12 root cacerts in the Windows certstore. Now it's 15. And the only difference is that in between we've made an SCHANNEL-based https connection to the google.de domain. (You can also see the additional root certificates via the certificates mmc consoles certlm.msc and certmgr.msc). >From now on all non-SCHANNEL based HTTPS connections via the Python 2/3 ssl standard lib work, as SCHANNEL has permanently placed the RootCA cert in the windows certstore: c:\python27\python.exe Python 2.7.15 (v2.7.15:ca079a3ea3, Apr 30 2018, 16:30:26) [MSC v.1500 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import socket, ssl >>> context = ssl.SSLContext(ssl.PROTOCOL_TLS) >>> context.verify_mode = ssl.CERT_REQUIRED >>> context.check_hostname = True >>> context.load_default_certs() >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) >>> ssl_sock = context.wrap_socket(s, server_hostname='www.google.de') # got a certificate verify failed error here in the first try, this time the verify is successfull >>> ssl_sock.connect(('www.google.de', 443)) >>> ssl_sock.close() 3.) Conclusion I believe the way how the Python "ssl" stdlib uses the Windows Certificate Store is not ideal. Windows seems to expects all TLS connections to be made through the SCHANNEL library. The Trusted Root CA store in Windows 10 only seems to function as some sort of cache for SCHANNEL but is not as a complete source of truth. Maybe letting the "ssl" stdlib make a minimal SCHANNEL call before handing over to openssl could provide a minimal invasive fix? 
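To double-check the counting above without a browser, a small inspection sketch (not from the report; Windows-only, requires ssl.enum_certificates(), which ships with the same releases that added the cert-store support) reads the ROOT store directly:

    # Inspection sketch: compare the Windows ROOT store with what
    # load_default_certs() actually puts into an SSLContext.
    import ssl

    root_store = ssl.enum_certificates("ROOT")   # (cert_bytes, encoding, trust) tuples
    print("certs in Windows ROOT store:", len(root_store))

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
    ctx.load_default_certs()
    print("CA certs loaded into the context:", len(ctx.get_ca_certs()))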
(Side note: I would still advocate for not bypassing the Windows certstore, as having a certstore per application is a security issue and big pain for deploying/updating own "Intranet" RootCA certificates). -------- Best, Chris ---------- assignee: christian.heimes components: SSL, Windows messages: 335708 nosy: chris-k, christian.heimes, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ssl - tls verify on Windows 10 fails versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 16 15:15:18 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 16 Feb 2019 20:15:18 +0000 Subject: [New-bugs-announce] [issue36012] Investigate slow writes to class variables Message-ID: <1550348118.4.0.228746523417.issue36012@roundup.psfhosted.org> New submission from Raymond Hettinger : Benchmark show what writes to class variables are anomalously slow. class A(object): pass A.x = 1 # This write is 3 to 5 times slower than other writes. FWIW, the same operation for old-style classes in Python 2.7 was several times faster. We should investigate to understand why the writes are so slow. There might be a good reason or there might be an opportunity for optimization. ------------------------------------------------- $ python3.8 Tools/scripts/var_access_benchmark.py Variable and attribute read access: 4.3 ns read_local 4.6 ns read_nonlocal 14.5 ns read_global 19.0 ns read_builtin 18.4 ns read_classvar_from_class 16.2 ns read_classvar_from_instance 24.7 ns read_instancevar 19.7 ns read_instancevar_slots 19.5 ns read_namedtuple 26.4 ns read_boundmethod Variable and attribute write access: 4.4 ns write_local 5.1 ns write_nonlocal 18.2 ns write_global 103.9 ns write_classvar <== Outlier 35.4 ns write_instancevar 25.6 ns write_instancevar_slots ---------- components: Interpreter Core messages: 335714 nosy: nascheme, pablogsal, rhettinger, vstinner priority: low severity: normal status: open title: Investigate slow writes to class variables type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 16 18:07:40 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 16 Feb 2019 23:07:40 +0000 Subject: [New-bugs-announce] [issue36013] test_signal fails in AMD64 Debian PGO 3.x Message-ID: <1550358460.35.0.479267905916.issue36013@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : test_signal fails in AMD64 Debian PGO 3.x: == Tests result: FAILURE then FAILURE == 406 tests OK. 10 slowest tests: - test_concurrent_futures: 2 min 37 sec - test_multiprocessing_spawn: 2 min 3 sec - test_multiprocessing_forkserver: 1 min 28 sec - test_multiprocessing_fork: 1 min 17 sec - test_gdb: 46 sec 874 ms - test_asyncio: 46 sec 166 ms - test_tools: 37 sec 582 ms - test_io: 35 sec 518 ms - test_tokenize: 34 sec 631 ms - test_subprocess: 34 sec 302 ms 1 test failed: test_signal 13 tests skipped: test_devpoll test_ioctl test_kqueue test_msilib test_ossaudiodev test_startfile test_tix test_tk test_ttk_guionly test_winconsoleio test_winreg test_winsound test_zipfile64 https://buildbot.python.org/all/#/builders/47/builds/2251/steps/4/logs/stdio test_valid_signals (test.test_signal.WindowsSignalTests) ... 
skipped 'Windows specific' ====================================================================== FAIL: test_keyboard_interrupt_communicated_to_shell (test.test_signal.PosixTests) KeyboardInterrupt exits such that shells detect a ^C. ---------------------------------------------------------------------- Traceback (most recent call last): File "/var/lib/buildbot/slaves/enable-optimizations-bot/3.x.gps-debian-profile-opt.nondebug/build/Lib/test/test_signal.py", line 121, in test_keyboard_interrupt_communicated_to_shell self.assertNotIn(b"TESTFAIL", process.stdout) AssertionError: b'TESTFAIL' unexpectedly found in b'TESTFAIL using bash 4.4.12(1)-release\n' ---------------------------------------------------------------------- This may be related to PR11862 ---------- components: Tests messages: 335729 nosy: gregory.p.smith, pablogsal priority: normal severity: normal status: open title: test_signal fails in AMD64 Debian PGO 3.x versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 16 23:47:32 2019 From: report at bugs.python.org (wabba) Date: Sun, 17 Feb 2019 04:47:32 +0000 Subject: [New-bugs-announce] [issue36014] test_help_with_metavar broken Message-ID: <1550378852.92.0.619862991077.issue36014@roundup.psfhosted.org> New submission from wabba : The test_help_with_metavar test appears to be broken. It fails with an assertion error: AssertionError: 'usag[55 chars]_name [-h] [--proxy ]\[113 chars]4>\n' != 'usag[55 chars]_name\n [-h] [--proxy \n' If I change "this_is_spammy_prog_with_a_long_name_sorry_about_the_name [-h] [--proxy ]" to "this_is_spammy_prog_with_a_long_name_sorry_about_the_name [-h] [--proxy ]" i.e. move the part starting with '[-h]' onto the same line, the test passes. What is the correct output expected here? I would think the [-h] should be on the same line, but maybe this is intended behavior and the problem is with my system. No other unit test is failing for me though. ---------- components: Tests messages: 335748 nosy: wabba priority: normal severity: normal status: open title: test_help_with_metavar broken versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 17 14:20:24 2019 From: report at bugs.python.org (Riccardo Magliocchetti) Date: Sun, 17 Feb 2019 19:20:24 +0000 Subject: [New-bugs-announce] [issue36015] streamhandler canont represent streams with an integer as name Message-ID: <1550431224.14.0.504793638163.issue36015@roundup.psfhosted.org> New submission from Riccardo Magliocchetti : When debugging uwsgi logging issues with python3.7 i got this on python 3.7.2: Traceback (most recent call last): File "/usr/lib/python3.7/logging/__init__.py", line 269, in _after_at_fork_weak_calls _at_fork_weak_calls('release') File "/usr/lib/python3.7/logging/__init__.py", line 261, in _at_fork_weak_calls method_name, "method:", err, file=sys.stderr) File "/usr/lib/python3.7/logging/__init__.py", line 1066, in __repr__ name = name + ' ' TypeError: unsupported operand type(s) for +: 'int' and 'str' AFAICS uwsgi creates sys.stderr as an unbuffered file with PyFile_FromFd() and sets it to sys dict. 
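A short reproduction sketch (not from the report) that appears to hit the same StreamHandler.__repr__ code path without uwsgi: any stream whose .name attribute is an int will do, and a file object opened straight from a file descriptor has exactly that.

    # Repro sketch: stream.name is the int 2 here, so on 3.7.2 repr() of the
    # handler fails with "TypeError: unsupported operand type(s) for +: 'int' and 'str'".
    import logging
    import sys

    stream = open(sys.stderr.fileno(), "w", closefd=False)
    handler = logging.StreamHandler(stream)
    print(repr(handler))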
---------- components: Library (Lib) messages: 335784 nosy: Riccardo Magliocchetti priority: normal severity: normal status: open title: streamhandler canont represent streams with an integer as name type: crash versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 17 15:20:35 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sun, 17 Feb 2019 20:20:35 +0000 Subject: [New-bugs-announce] [issue36016] Allow gc.getobjects to return the objects tracked by a specific generation Message-ID: <1550434835.07.0.108252572436.issue36016@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : gc.get_objects() return all the objects tracked by the garbage collector. This is useful, but right now there is no way of knowing in which generation each object is currently on. This information can be beneficial to understand better the state of the garbage collector in a particular moment in time. This will allow knowing what are the oldest object tracked by the collector, gathering more fine-grained statistics about how generations are filled, better debugging and better insights about the internal structure of the generations. To allow this, I propose a new optional parameter to gc.get_objects, allowing the user to specify the generation to get the objects from. ---------- components: Interpreter Core messages: 335787 nosy: pablogsal priority: normal severity: normal status: open title: Allow gc.getobjects to return the objects tracked by a specific generation versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 17 16:15:28 2019 From: report at bugs.python.org (Mathijs Brands) Date: Sun, 17 Feb 2019 21:15:28 +0000 Subject: [New-bugs-announce] [issue36017] test_grp Message-ID: <1550438128.14.0.0187482999284.issue36017@roundup.psfhosted.org> Change by Mathijs Brands : ---------- components: Tests nosy: mjbrands priority: normal severity: normal status: open title: test_grp type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 17 19:00:59 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Mon, 18 Feb 2019 00:00:59 +0000 Subject: [New-bugs-announce] [issue36018] Add a Normal Distribution class to the statistics module Message-ID: <1550448059.37.0.914986932061.issue36018@roundup.psfhosted.org> New submission from Raymond Hettinger : Attached is a class that I've found useful for doing practical statistics work with normal distributions. It provides a nice, high-level API that makes short-work of everyday statistical problems. ------ Examples -------- # Simple scaling and translation temperature_february = NormalDist(5, 2.5) # Celsius print(temperature_february * (9/5) + 32) # Fahrenheit # Classic probability problems # https://blog.prepscholar.com/sat-standard-deviation # The mean score on a SAT exam is 1060 with a standard deviation of 195 # What percentage of students score between 1100 and 1200? 
sat = NormalDist(1060, 195) fraction = sat.cdf(1200) - sat.cdf(1100) print(f'{fraction * 100 :.1f}% score between 1100 and 1200') # Combination of normal distributions by summing variances birth_weights = NormalDist.from_samples([2.5, 3.1, 2.1, 2.4, 2.7, 3.5]) drug_effects = NormalDist(0.4, 0.15) print(birth_weights + drug_effects) # Statistical calculation estimates using simulations # Estimate the distribution of X * Y / Z n = 100_000 X = NormalDist(350, 15).examples(n) Y = NormalDist(47, 17).examples(n) Z = NormalDist(62, 6).examples(n) print(NormalDist.from_samples(x * y / z for x, y, z in zip(X, Y, Z))) # Naive Bayesian Classifier # https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Sex_classification height_male = NormalDist.from_samples([6, 5.92, 5.58, 5.92]) height_female = NormalDist.from_samples([5, 5.5, 5.42, 5.75]) weight_male = NormalDist.from_samples([180, 190, 170, 165]) weight_female = NormalDist.from_samples([100, 150, 130, 150]) foot_size_male = NormalDist.from_samples([12, 11, 12, 10]) foot_size_female = NormalDist.from_samples([6, 8, 7, 9]) prior_male = 0.5 prior_female = 0.5 posterior_male = prior_male * height_male.pdf(6) * weight_male.pdf(130) * foot_size_male.pdf(8) posterior_female = prior_female * height_female.pdf(6) * weight_female.pdf(130) * foot_size_female.pdf(8) print('Predict', 'male' if posterior_male > posterior_female else 'female') ---------- assignee: steven.daprano components: Library (Lib) files: gauss.py messages: 335792 nosy: rhettinger, steven.daprano priority: normal severity: normal status: open title: Add a Normal Distribution class to the statistics module type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file48147/gauss.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 17 19:41:35 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 18 Feb 2019 00:41:35 +0000 Subject: [New-bugs-announce] [issue36019] test_urllib fail in s390x buildbots Message-ID: <1550450495.08.0.90804425705.issue36019@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : test_urllib fail in s390x buildbots. It does not seem like a temporary failure as they keep failing consistently. 
Some failed builds: https://buildbot.python.org/all/#builders/126/builds/1010 https://buildbot.python.org/all/#builders/122/builds/1026 https://buildbot.python.org/all/#builders/119/builds/1060 https://buildbot.python.org/all/#builders/21/builds/2332 ====================================================================== ERROR: test_close (test.test_urllib2net.CloseSocketTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 89, in test_close response = _urlopen_with_retry(url) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 27, in wrapped return _retry_thrice(func, exc, *args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 23, in _retry_thrice raise last_exc File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 19, in _retry_thrice return func(*args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 531, in open response = meth(req, response) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 641, in http_response 'http', request, response, code, msg, hdrs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 569, in error return self._call_chain(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden ====================================================================== ERROR: test_custom_headers (test.test_urllib2net.OtherNetworkTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 181, in test_custom_headers opener.open(request) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 531, in open response = meth(req, response) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 641, in http_response 'http', request, response, code, msg, hdrs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 569, in error return self._call_chain(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden ====================================================================== ERROR: test_http_basic (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 264, in test_http_basic u = _urlopen_with_retry(url) File 
"/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 27, in wrapped return _retry_thrice(func, exc, *args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 23, in _retry_thrice raise last_exc File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 19, in _retry_thrice return func(*args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 531, in open response = meth(req, response) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 641, in http_response 'http', request, response, code, msg, hdrs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 569, in error return self._call_chain(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden ====================================================================== ERROR: test_http_default_timeout (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 274, in test_http_default_timeout u = _urlopen_with_retry(url) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 27, in wrapped return _retry_thrice(func, exc, *args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 23, in _retry_thrice raise last_exc File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 19, in _retry_thrice return func(*args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 531, in open response = meth(req, response) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 641, in http_response 'http', request, response, code, msg, hdrs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 569, in error return self._call_chain(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden ====================================================================== ERROR: test_http_no_timeout (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 286, in test_http_no_timeout u = _urlopen_with_retry(url, timeout=None) File 
"/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 27, in wrapped return _retry_thrice(func, exc, *args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 23, in _retry_thrice raise last_exc File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 19, in _retry_thrice return func(*args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 531, in open response = meth(req, response) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 641, in http_response 'http', request, response, code, msg, hdrs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 569, in error return self._call_chain(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden /home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/support/__init__.py:1539: ResourceWarning: unclosed gc.collect() ResourceWarning: Enable tracemalloc to get the object allocation traceback /home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/support/__init__.py:1539: ResourceWarning: unclosed gc.collect() ResourceWarning: Enable tracemalloc to get the object allocation traceback /home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/support/__init__.py:1539: ResourceWarning: unclosed gc.collect() ResourceWarning: Enable tracemalloc to get the object allocation traceback /home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/support/__init__.py:1539: ResourceWarning: unclosed gc.collect() ResourceWarning: Enable tracemalloc to get the object allocation traceback test test_urllib2net failed ====================================================================== ERROR: test_http_timeout (test.test_urllib2net.TimeoutTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 295, in test_http_timeout u = _urlopen_with_retry(url, timeout=120) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 27, in wrapped return _retry_thrice(func, exc, *args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 23, in _retry_thrice raise last_exc File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/test/test_urllib2net.py", line 19, in _retry_thrice return func(*args, **kwargs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 531, in open response = meth(req, response) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 641, in http_response 'http', request, response, code, msg, hdrs) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 569, in error 
return self._call_chain(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 503, in _call_chain result = func(*args) File "/home/dje/cpython-buildarea/3.7.edelsohn-rhel-z/build/Lib/urllib/request.py", line 649, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden ---------------------------------------------------------------------- Ran 15 tests in 1.006s FAILED (errors=6, skipped=1) 3 tests failed again: test_urllib2 test_urllib2net test_urllibnet ---------- components: Tests messages: 335793 nosy: David.Edelsohn, pablogsal priority: normal severity: normal status: open title: test_urllib fail in s390x buildbots versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 05:18:26 2019 From: report at bugs.python.org (palotasb-conti) Date: Mon, 18 Feb 2019 10:18:26 +0000 Subject: [New-bugs-announce] [issue36020] HAVE_SNPRINTF and MSVC std::snprintf support Message-ID: <1550485106.18.0.506219958256.issue36020@roundup.psfhosted.org> New submission from palotasb-conti : Abstract: pyerrors.h defines snprintf as a macro on MSVC even on versions of MSVC where this workaround causes bugs. snprintf is defined as _snprintf in pyerrors.h, see: https://github.com/python/cpython/blob/ac28147e78c45a6217d348ce90ca5281d91f676f/Include/pyerrors.h#L326-L330 The conditions for this should exclude _MSC_VER >= 1900 where (std::)snprintf is correctly defined. Since this is not the case, subsequent user code that tries to use std::snprintf will fail with an err (_snprintf is not a member of namespace std). ---------- components: Windows messages: 335803 nosy: palotasb-conti, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: HAVE_SNPRINTF and MSVC std::snprintf support type: compile error versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 06:00:53 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 18 Feb 2019 11:00:53 +0000 Subject: [New-bugs-announce] [issue36021] [Security][Windows] webbrowser: WindowsDefault uses os.startfile() and so can be abused to run arbitrary commands Message-ID: <1550487653.15.0.496398912417.issue36021@roundup.psfhosted.org> New submission from STINNER Victor : The webbrowser module uses WindowsDefault which calls os.startfile() and so can be abused to run arbitrary commands. WindowsDefault should do log a warning or raise an error if the URL is unsafe. I'm not sure how to build a list of safe URL schemes. At least, we can explicitly exclude "C:\WINDOWS\system32\calc.exe" which doesn't contain "://". The union of all "uses_*" constants of urllib.parser give me this sorted list of URL schemes: ['', 'file', 'ftp', 'git', 'git+ssh', 'gopher', 'hdl', 'http', 'https', 'imap', 'mailto', 'mms', 'news', 'nfs', 'nntp', 'prospero', 'rsync', 'rtsp', 'rtspu', 'sftp', 'shttp', 'sip', 'sips', 'snews', 'svn', 'svn+ssh', 'tel', 'telnet', 'wais', 'ws', 'wss'] Would it make sense to ensure that urllib.parser can parse an email to check if the URL looks valid? 
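One possible shape for such a check, sketched with urllib.parse (the module the message calls "urllib.parser"); the allow-list below is an illustrative assumption, not a proposed policy:

    # Validation sketch: refuse anything whose scheme is not known-good before it
    # reaches os.startfile(). A bare path like C:\WINDOWS\system32\calc.exe parses
    # with scheme "c" (the drive letter), so it is rejected as well.
    from urllib.parse import urlsplit

    _SAFE_SCHEMES = {"http", "https", "ftp", "file", "mailto"}

    def looks_like_safe_url(url):
        return urlsplit(url).scheme.lower() in _SAFE_SCHEMES

    print(looks_like_safe_url("https://www.python.org/"))        # True
    print(looks_like_safe_url(r"C:\WINDOWS\system32\calc.exe"))   # False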
---------- components: Library (Lib) messages: 335805 nosy: vstinner priority: normal severity: normal status: open title: [Security][Windows] webbrowser: WindowsDefault uses os.startfile() and so can be abused to run arbitrary commands type: security versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 06:29:28 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 18 Feb 2019 11:29:28 +0000 Subject: [New-bugs-announce] [issue36022] logging.config should not use eval() Message-ID: <1550489368.54.0.864409036165.issue36022@roundup.psfhosted.org> New submission from STINNER Victor : For logging "handlers", _install_handlers() of logging.config uses eval(): def _install_handlers(cp, formatters): """Install and return handlers""" hlist = cp["handlers"]["keys"] ... for hand in hlist: ... klass = section["class"] try: klass = eval(klass, vars(logging)) except (AttributeError, NameError): klass = _resolve(klass) args = section.get("args", '()') args = eval(args, vars(logging)) kwargs = section.get("kwargs", '{}') kwargs = eval(kwargs, vars(logging)) h = klass(*args, **kwargs) ... ... return handlers eval() is considered harmful regarding security: it executes arbitrary Python code. Would it be possible to rewrite this function without eval? I'm not sure of the format of the handler "class". Is it something like "module.submod.attr"? If yes, maybe a regex to validate the class would help? Maybe a loop using getattr() would be safer? Maybe ast.literal_eval() would be enough? At least for args and kwargs? $ python3 Python 3.7.2 (default, Jan 16 2019, 19:49:22) >>> import ast # Legit positional and keyword arguments are accepted >>> ast.literal_eval("(1, 2)") (1, 2) >>> ast.literal_eval("{'x': 1, 'y': 2}") {'x': 1, 'y': 2} # eval() executes arbitrary Python code >>> eval('__import__("os").system("echo hello")') hello 0 # literal_eval() doesn't execute system() >>> ast.literal_eval('__import__("os").system("echo hello")') Traceback (most recent call last): File "", line 1, in File "/usr/lib64/python3.7/ast.py", line 91, in literal_eval return _convert(node_or_string) File "/usr/lib64/python3.7/ast.py", line 90, in _convert return _convert_signed_num(node) File "/usr/lib64/python3.7/ast.py", line 63, in _convert_signed_num return _convert_num(node) File "/usr/lib64/python3.7/ast.py", line 55, in _convert_num raise ValueError('malformed node or string: ' + repr(node)) ValueError: malformed node or string: <_ast.Call object at 0x7f60a400c780> ---------- messages: 335820 nosy: vinay.sajip, vstinner priority: normal severity: normal status: open title: logging.config should not use eval() type: security versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 08:13:28 2019 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Mon, 18 Feb 2019 13:13:28 +0000 Subject: [New-bugs-announce] [issue36023] Import configparser.ConfigParser repr Message-ID: <1550495608.73.0.630456699978.issue36023@roundup.psfhosted.org> New submission from R?mi Lapeyre : This is the current repr of the configparser.ConfigParser instances: >>> import configparser >>> config = configparser.ConfigParser() >>> config['sec'] = {} >>> config I think this could be improved to read: ---------- components: Library (Lib) messages: 335831 nosy: remi.lapeyre priority: normal severity: normal 
status: open title: Import configparser.ConfigParser repr type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 11:25:21 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 18 Feb 2019 16:25:21 +0000 Subject: [New-bugs-announce] [issue36024] ctypes: test_ctypes test_callbacks() crash on AArch64 Message-ID: <1550507121.83.0.763079263217.issue36024@roundup.psfhosted.org> New submission from STINNER Victor : Attached bug.py does crash *randomly* on AArch64. The code is extract from ctypes.test.test_as_parameter.AsParamPropertyWrapperTestCase.test_callbacks test. Example with Python 2.7.15 and Python 3.6.8 on RHEL8: # python2 bug.py Illegal instruction (core dumped) [root at cav-thunderx2s-cn88xx-01 ~]# python3 bug.py ... OK [root at cav-thunderx2s-cn88xx-01 ~]# python3 bug.py Illegal instruction (core dumped) I can reproduce the crash on Python 2.7.16rc compiled manually: ./configure --enable-unicode=ucs4 --with-system-ffi && make RHEL8 currently uses libffi-3.1-18.el8.aarch64. (I tried optimization levels -O0, -O1, -O2, -O3: I am always able to *randomly* trigger the crash.) Original bug report, Python 2 crash on RHEL8: https://bugzilla.redhat.com/show_bug.cgi?id=1652930 -- I don't know if it's related but I also saw the following error which has been reported in bpo-30991. FAIL: test_pass_by_value (ctypes.test.test_structures.StructureTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "/root/src/python-3.6.2/Lib/ctypes/test/test_structures.py", line 416, in test_pass_by_value self.assertEqual(s.first, 0xdeadbeef) AssertionError: 195948557 != 3735928559 ---------- components: Library (Lib) files: bug.py messages: 335847 nosy: vstinner priority: normal severity: normal status: open title: ctypes: test_ctypes test_callbacks() crash on AArch64 versions: Python 2.7, Python 3.6 Added file: https://bugs.python.org/file48149/bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 12:22:05 2019 From: report at bugs.python.org (Paul Ganssle) Date: Mon, 18 Feb 2019 17:22:05 +0000 Subject: [New-bugs-announce] [issue36025] Breaking change in PyDate_FromTimeStamp API Message-ID: <1550510525.68.0.55104762284.issue36025@roundup.psfhosted.org> New submission from Paul Ganssle : The PyO3 test suite has been breaking since the alpha release of Python 3.8 because PyDateTimeAPI->Date_FromTimeStamp has had a breaking change in its API: https://github.com/PyO3/pyo3/issues/352 I believe this happened when `datetime.date.fromtimestamp` and `datetime.datetime.fromtimestamp` were converted over to using the argument clinic. The function `date_from_local_object` was renamed to `date_fromtimestamp`, without a corresponding change to the PyDateTimeCAPI struct. 
---------- assignee: p-ganssle components: Library (Lib) messages: 335854 nosy: belopolsky, p-ganssle, petr.viktorin, serhiy.storchaka priority: normal severity: normal status: open title: Breaking change in PyDate_FromTimeStamp API versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 13:06:04 2019 From: report at bugs.python.org (SylvainDe) Date: Mon, 18 Feb 2019 18:06:04 +0000 Subject: [New-bugs-announce] [issue36026] Different error message when sys.settrace is used (regressions) Message-ID: <1550513164.21.0.815390267137.issue36026@roundup.psfhosted.org> New submission from SylvainDe : Context: -------- Follow-up from https://bugs.python.org/issue35965 which was closed when we assumed the issue was on the coverage side. After nedbat investigation, it seems like the issue comes from call to "sys.settrace". Issue: ------ A test relying on "self.assertRaisesRegex" with raising code "set.add(0)" started to lead to a different error message recently on the CI jobs. Indeed, instead of "descriptor '\w+' requires a 'set' object but received a 'int'" the raised exception was: "descriptor 'add' for 'set' objects doesn't apply to 'int' object" More surprisingly, the behavior was different: - depending on whether "self.assertRaisesRegex" was used as a context manager - on the Python version - when coverage was used. Nedbat was able to pinpoint the root cause for this: the usage of "sys.settrace". This can be hilighted using simple unit-tests: ====================================================================================================== import unittest import sys def trace(frame, event, arg): return trace DESCRIPT_REQUIRES_TYPE_RE = r"descriptor '\w+' requires a 'set' object but received a 'int'" DO_SET_TRACE = True class SetAddIntRegexpTests(unittest.TestCase): def test_assertRaisesRegex(self): if DO_SET_TRACE: sys.settrace(trace) self.assertRaisesRegex(TypeError, DESCRIPT_REQUIRES_TYPE_RE, set.add, 0) def test_assertRaisesRegex_contextman(self): if DO_SET_TRACE: sys.settrace(trace) with self.assertRaisesRegex(TypeError, DESCRIPT_REQUIRES_TYPE_RE): set.add(0) if __name__ == '__main__': unittest.main() ====================================================================================================== Here are the results from the original bug: On some versions, both tests pass: Python 3.6 and before Python 3.7.0a4+ (heads/master:4666ec5, Jan 26 2018, 04:14:24) - [GCC 4.8.4] - ('CPython', 'heads/master', '4666ec5')) On some versions, only test_assertRaisesRegex_contextman fails which was the confusing part: Python 3.7.1 (default, Dec 5 2018, 18:09:53) [GCC 5.4.0 20160609] - ('CPython', '', '') Python 3.7.2+ (heads/3.7:3fcfef3, Feb 9 2019, 07:30:09) [GCC 5.4.0 20160609] - ('CPython', 'heads/3.7', '3fcfef3') On some versions, both tests fail: Python 3.8.0a1+ (heads/master:8a03ff2, Feb 9 2019, 07:30:26) [GCC 5.4.0 20160609] - ('CPython', 'heads/master', '8a03ff2') First analysis: --------------- Using some git bisect magic, I was able to pinpoint the commits leading to each behavior change. - test_assertRaisesRegex_contextman starts to fail from: https://bugs.python.org/issue34126 https://github.com/python/cpython/commit/56868f940e0cc0b35d33c0070107ff3bed2d8766 - test_assertRaisesRegex starts to fail as well from: https://bugs.python.org/issue34125 https://github.com/python/cpython/commit/e89de7398718f6e68848b6340830aeb90b7d582c >From my point of view, it looks like we have 2 regressions. 
Please let me know if this needs to be split into 2 independant issues. ---------- files: issue36_git_bisect_script_and_results.txt messages: 335857 nosy: SylvainDe priority: normal severity: normal status: open title: Different error message when sys.settrace is used (regressions) Added file: https://bugs.python.org/file48151/issue36_git_bisect_script_and_results.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 15:10:59 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Mon, 18 Feb 2019 20:10:59 +0000 Subject: [New-bugs-announce] [issue36027] Consider adding modular multiplicative inverse to the math module Message-ID: <1550520659.43.0.126744170658.issue36027@roundup.psfhosted.org> New submission from Raymond Hettinger : Having gcd() in the math module has been nice. Here is another number theory basic that I've needed every now and then: def multinv(modulus, value): '''Multiplicative inverse in a given modulus >>> multinv(191, 138) 18 >>> 18 * 138 % 191 1 >>> multinv(191, 38) 186 >>> 186 * 38 % 191 1 >>> multinv(120, 23) 47 >>> 47 * 23 % 120 1 ''' # https://en.wikipedia.org/wiki/Modular_multiplicative_inverse # http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm x, lastx = 0, 1 a, b = modulus, value while b: a, q, b = b, a // b, a % b x, lastx = lastx - q * x, x result = (1 - lastx * modulus) // value if result < 0: result += modulus assert 0 <= result < modulus and value * result % modulus == 1 return result ---------- components: Library (Lib) messages: 335862 nosy: mark.dickinson, pablogsal, rhettinger, skrah, tim.peters priority: low severity: normal status: open title: Consider adding modular multiplicative inverse to the math module type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 15:15:14 2019 From: report at bugs.python.org (Au Vo) Date: Mon, 18 Feb 2019 20:15:14 +0000 Subject: [New-bugs-announce] [issue36028] Integer Division discrepancy with decimal Message-ID: <1550520914.92.0.84005909816.issue36028@roundup.psfhosted.org> New submission from Au Vo : In Python3, there is a discrepancy of integer division with decimal. Considering these two examples: 4/ .4 ==10.0. But 4//.4== 9.0 Furthermore: 2//.2 == 9.0 3//.3 ==10.0 5//.5 ==10.0 6//.6 ==10.0 All answers should be 10.0? The problem is in Python3 and not in Python2. Python2 produces 10.0 as the answers for all of the above calculation. Is it a rounding out issue? ---------- messages: 335863 nosy: Au Vo priority: normal severity: normal status: open title: Integer Division discrepancy with decimal type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 18 17:33:30 2019 From: report at bugs.python.org (=?utf-8?b?R8Opcnk=?=) Date: Mon, 18 Feb 2019 22:33:30 +0000 Subject: [New-bugs-announce] [issue36029] Use consistent case for HTTP header fields Message-ID: <1550529210.49.0.526576386821.issue36029@roundup.psfhosted.org> New submission from G?ry : Use consistent case for HTTP header fields, according to [RFC 7231](https://tools.ietf.org/html/rfc7231). 
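As a rough illustration of what "consistent case" could mean in practice (the helper below is hypothetical, not an existing stdlib function): field names are case-insensitive on the wire, so the choice is purely about picking one spelling and using it everywhere.
```
def canonical_field_name(name):
    # "content-type" -> "Content-Type"; this simple rule does not cover special
    # spellings such as "WWW-Authenticate" or "ETag".
    return "-".join(part.capitalize() for part in name.split("-"))

print(canonical_field_name("content-type"))  # Content-Type
```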
---------- components: Library (Lib) messages: 335870 nosy: brett.cannon, eric.araujo, ezio.melotti, maggyero, mdk, ncoghlan, willingc priority: normal severity: normal status: open title: Use consistent case for HTTP header fields type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 03:02:44 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Tue, 19 Feb 2019 08:02:44 +0000 Subject: [New-bugs-announce] [issue36030] add internal API function to create tuple without items array initialization Message-ID: <1550563364.86.0.164811838679.issue36030@roundup.psfhosted.org> New submission from Sergey Fedoseev : PyTuple_New() fills items array with NULLs to make usage of Py_DECREF() safe even when array is not fully filled with real items. There are multiple cases when this initialization step can be avoided to improve performance. For example it gives such speed-up for PyList_AsTuple(): Before: $ python -m perf timeit -s "l = [None] * 10**6" "tuple(l)" ..................... Mean +- std dev: 4.43 ms +- 0.01 ms After: $ python -m perf timeit -s "l = [None] * 10**6" "tuple(l)" ..................... Mean +- std dev: 4.11 ms +- 0.03 ms ---------- messages: 335897 nosy: sir-sigurd priority: normal severity: normal status: open title: add internal API function to create tuple without items array initialization type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 03:35:20 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Tue, 19 Feb 2019 08:35:20 +0000 Subject: [New-bugs-announce] [issue36031] add internal API function to effectively convert just created list to tuple Message-ID: <1550565320.39.0.769773111251.issue36031@roundup.psfhosted.org> New submission from Sergey Fedoseev : There are several cases in CPython sources when PyList_AsTuple() is used with just created list and immediately after that this list is Py_DECREFed. This operation can be performed more effectively since refcount of items is not changed. For example it gives such speed-up for BUILD_TUPLE_UNPACK: Before: $ python -m perf timeit -s "l = [None]*10**6" "(*l,)" ..................... Mean +- std dev: 8.75 ms +- 0.10 ms After: $ python -m perf timeit -s "l = [None]*10**6" "(*l,)" ..................... Mean +- std dev: 5.41 ms +- 0.07 ms ---------- messages: 335901 nosy: sir-sigurd priority: normal severity: normal status: open title: add internal API function to effectively convert just created list to tuple type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 03:45:34 2019 From: report at bugs.python.org (Amit Amely) Date: Tue, 19 Feb 2019 08:45:34 +0000 Subject: [New-bugs-announce] [issue36032] Wrong output in tutorial (3.1.2. Strings) Message-ID: <1550565934.5.0.314572203435.issue36032@roundup.psfhosted.org> New submission from Amit Amely : This is the tutorial text: >>> prefix = 'Py' >>> prefix 'thon' # can't concatenate a variable and a string literal ... SyntaxError: invalid syntax >>> ('un' * 3) 'ium' ... 
SyntaxError: invalid syntax This is the actual result: >>> prefix = 'Py' >>> prefix 'thon' # can't concatenate a variable and a string literal File "", line 1 prefix 'thon' # can't concatenate a variable and a string literal ^ SyntaxError: invalid syntax ---------- assignee: docs at python components: Documentation messages: 335905 nosy: Amit Amely, docs at python priority: normal severity: normal status: open title: Wrong output in tutorial (3.1.2. Strings) versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 05:46:00 2019 From: report at bugs.python.org (Rodolfo Alonso) Date: Tue, 19 Feb 2019 10:46:00 +0000 Subject: [New-bugs-announce] [issue36033] logging.makeLogRecord should update "rv" using a dict defined with bytes instead of strings Message-ID: <1550573160.5.0.804221686367.issue36033@roundup.psfhosted.org> New submission from Rodolfo Alonso : When using Pycharm to execute unit/functional tests (OpenStack development), oslo_privsep.daemon._ClientChannel.out_of_band retrieves the record from the input arguments. Pycharm arguments are stored in a dictionary using bytes instead of strings. When logging.makeLogRecord tries to update "rv", the value in "dict" parameter (using bytes) doesn't update "rv" values. ---------- messages: 335925 nosy: ralonsoh priority: normal severity: normal status: open title: logging.makeLogRecord should update "rv" using a dict defined with bytes instead of strings type: enhancement versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 06:56:01 2019 From: report at bugs.python.org (Michael Felt) Date: Tue, 19 Feb 2019 11:56:01 +0000 Subject: [New-bugs-announce] [issue36034] Suprise halt caused by -Werror=implicit-function-declaration in ./Modules/posixmodule.c Message-ID: <1550577361.99.0.890870320024.issue36034@roundup.psfhosted.org> New submission from Michael Felt : On a system using an older version of gcc (v5.7.4) I get an error: (also AIX 6.1) gcc -pthread -Wno-unused-result -Wsign-compare -g -O0 -Wall -O -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Werror=implicit-function-declaration -I./Include/internal -I. -I./Include -DPy_BUILD_CORE_BUILTIN -DPy_BUILD_CORE -I./Include/internal -c ./Modules/posixmodule.c -o Modules/posixmodule.o ./Modules/posixmodule.c: In function 'os_preadv_impl': ./Modules/posixmodule.c:8765:9: error: implicit declaration of function 'preadv' [-Werror=implicit-function-declaration] ./Modules/posixmodule.c: In function 'os_pwritev_impl': ./Modules/posixmodule.c:9336:9: error: implicit declaration of function 'pwritev' [-Werror=implicit-function-declaration] cc1: some warnings being treated as errors On another system - same code base (commit e7a4bb554edb72fc6619d23241d59162d06f249a) no "error" status. (AIX 7.2) Not knowing gcc I have no idea which is correct - maybe they both are because the second system (where I am a guest) has an explicit declaration of these routines. Is there a flag I can add to gcc to get the "in-line" pre-processing output so I can look for differences with regard to these two functions? Thanks for hint. 
---------- messages: 335936 nosy: Michael.Felt priority: normal severity: normal status: open title: Suprise halt caused by -Werror=implicit-function-declaration in ./Modules/posixmodule.c type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 07:09:57 2019 From: report at bugs.python.org (=?utf-8?q?J=C3=B6rg_Stucke?=) Date: Tue, 19 Feb 2019 12:09:57 +0000 Subject: [New-bugs-announce] [issue36035] pathlib.Path().rglob() breaks with broken symlinks Message-ID: <1550578197.02.0.381973172531.issue36035@roundup.psfhosted.org> New submission from Jörg Stucke : When using rglob() to iterate over the files of a directory containing a broken symlink (a link pointing to itself), rglob() breaks with "[Errno 40] Too many levels of symbolic links" (OS: Linux). Steps to reproduce:
mkdir tmp
touch foo
ln -s foo tmp/foo
cd tmp
file foo
foo: broken symbolic link to foo
python3
>>> from pathlib import Path
>>> for f in Path().rglob("*"): print(f)
foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/pathlib.py", line 1105, in rglob
    for p in selector.select_from(self):
  File "/usr/local/lib/python3.8/pathlib.py", line 552, in _select_from
    for starting_point in self._iterate_directories(parent_path, is_dir, scandir):
  File "/usr/local/lib/python3.8/pathlib.py", line 536, in _iterate_directories
    entry_is_dir = entry.is_dir()
OSError: [Errno 40] Too many levels of symbolic links: './foo'
What is more, stat(), is_dir(), is_file() and exists() also do not like those broken links:
>>> Path("foo").is_file()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/pathlib.py", line 1361, in is_file
    return S_ISREG(self.stat().st_mode)
  File "/usr/local/lib/python3.8/pathlib.py", line 1151, in stat
    return self._accessor.stat(self)
OSError: [Errno 40] Too many levels of symbolic links: 'foo'
Is this intended behaviour or is this a bug? I guess it's not intended, since it makes it impossible to iterate over such a directory with rglob(). I could not find anything similar in the bug tracker, but https://bugs.python.org/issue26012 seems to be related. Tested with Python 3.8.0a1, 3.6.7 and 3.5.2 (OS: Linux Mint 19) ---------- components: Library (Lib) messages: 335937 nosy: Jörg Stucke priority: normal severity: normal status: open title: pathlib.Path().rglob() breaks with broken symlinks type: behavior versions: Python 3.5, Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 08:06:20 2019 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Tue, 19 Feb 2019 13:06:20 +0000 Subject: [New-bugs-announce] [issue36036] Add method to get user defined command line arguments in unittest.main Message-ID: <1550581580.85.0.535660142284.issue36036@roundup.psfhosted.org> New submission from Rémi Lapeyre : Hi, I'm working on issue 18765 (running pdb.post_mortem() on failing test cases), where I need to control the behavior of a TestCase based on some outside input. For this issue, I want to add a new --pdb option to turn this feature on. As of today, it seems to me that the command line arguments given to the TestProgram are not accessible from the TestCase. Would adding a new parameter to TestCase be breaking backward compatibility? If so, what would be a better way to access command line arguments from TestCase?
In addition to this, I have the need for another project to change the behavior of TestCases based on a custom command line argument. I propose to add a new method `getCustomArguments` that will take the parser as input so the user can define it's own arguments by subclassing TestProgram What do you think about this? ---------- files: 0001-WIP.patch keywords: patch messages: 335944 nosy: remi.lapeyre priority: normal severity: normal status: open title: Add method to get user defined command line arguments in unittest.main Added file: https://bugs.python.org/file48154/0001-WIP.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 09:19:42 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 19 Feb 2019 14:19:42 +0000 Subject: [New-bugs-announce] [issue36037] test_ssl fails on RHEL8 strict OpenSSL configuration Message-ID: <1550585982.05.0.862846003108.issue36037@roundup.psfhosted.org> New submission from STINNER Victor : RHEL8 uses a strict crypto policy by default. For example, SSLContext uses TLS 1.2 as the minimum version by default. Attached PR fix test_ssl for RHEL8. The PR is not specific to RHEL8. It should also fix test_ssl on Debian: see bpo-35925 and bpo-36005. test_ssl failures on RHEL8: ====================================================================== ERROR: test_PROTOCOL_TLS (test.test_ssl.ThreadedTests) Connecting to an SSLv23 server with various client options ---------------------------------------------------------------------- Traceback (most recent call last): File "/root/cpython-master/Lib/test/test_ssl.py", line 3079, in test_PROTOCOL_TLS try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1') File "/root/cpython-master/Lib/test/test_ssl.py", line 2623, in try_protocol_combo stats = server_params_test(client_context, server_context, File "/root/cpython-master/Lib/test/test_ssl.py", line 2549, in server_params_test s.connect((HOST, server.port)) File "/root/cpython-master/Lib/ssl.py", line 1150, in connect self._real_connect(addr, False) File "/root/cpython-master/Lib/ssl.py", line 1141, in _real_connect self.do_handshake() File "/root/cpython-master/Lib/ssl.py", line 1117, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:1055) ====================================================================== ERROR: test_protocol_tlsv1_1 (test.test_ssl.ThreadedTests) Connecting to a TLSv1.1 server with various client options. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/root/cpython-master/Lib/test/test_ssl.py", line 3150, in test_protocol_tlsv1_1 try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') File "/root/cpython-master/Lib/test/test_ssl.py", line 2623, in try_protocol_combo stats = server_params_test(client_context, server_context, File "/root/cpython-master/Lib/test/test_ssl.py", line 2549, in server_params_test s.connect((HOST, server.port)) File "/root/cpython-master/Lib/ssl.py", line 1150, in connect self._real_connect(addr, False) File "/root/cpython-master/Lib/ssl.py", line 1141, in _real_connect self.do_handshake() File "/root/cpython-master/Lib/ssl.py", line 1117, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:1055) ====================================================================== FAIL: test_min_max_version (test.test_ssl.ContextTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/root/cpython-master/Lib/test/test_ssl.py", line 1093, in test_min_max_version self.assertIn( AssertionError: not found in {, } ---------------------------------------------------------------------- Ran 150 tests in 3.318s FAILED (failures=1, errors=2, skipped=9) ---------- assignee: christian.heimes components: SSL, Tests messages: 335950 nosy: christian.heimes, vstinner priority: normal severity: normal status: open title: test_ssl fails on RHEL8 strict OpenSSL configuration versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 09:41:06 2019 From: report at bugs.python.org (Louis Michael) Date: Tue, 19 Feb 2019 14:41:06 +0000 Subject: [New-bugs-announce] [issue36038] ^ used in inaccurate example in regex-howto Message-ID: <1550587266.8.0.644762310509.issue36038@roundup.psfhosted.org> New submission from Louis Michael : at https://docs.python.org/3/howto/regex.html#regex-howto and https://docs.python.org/3.8/howto/regex.html#regex-howto https://docs.python.org/3.7/howto/regex.html#regex-howto https://docs.python.org/3.6/howto/regex.html#regex-howto https://docs.python.org/3.5/howto/regex.html#regex-howto https://docs.python.org/3.4/howto/regex.html#regex-howto https://docs.python.org/2.7/howto/regex.html#regex-howto The following paragraph seems to have a small issue: " You can match the characters not listed within the class by complementing the set. This is indicated by including a '^' as the first character of the class; '^' outside a character class will simply match the '^' character. For example, [^5] will match any character except '5'. " ^ does not simply match ^ outside a character class since is a special character that represents the start of the string. I think the paragraph should read: You can match the characters not listed within the class by complementing the set. This is indicated by including a '^' as the first character of the class; '^' will act differently outside a character class as explained later. For example, [^5] will match any character except '5'. 
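A short interpreter-style example makes the distinction concrete (this is only an illustration of the behaviour, not text proposed for the HOWTO):
```
import re

# '^' inside a character class complements the set; outside a class it anchors
# the pattern to the start of the string.
print(re.findall(r"[^5]", "1523"))  # ['1', '2', '3'] -- any character except '5'
print(re.search(r"^5", "1523"))     # None -- '5' is not at the start
print(re.search(r"^1", "1523"))     # matches at position 0
```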
---------- assignee: docs at python components: Documentation messages: 335953 nosy: docs at python, eric.araujo, ezio.melotti, louism, mdk, willingc priority: normal severity: normal status: open title: ^ used in inaccurate example in regex-howto type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 11:05:09 2019 From: report at bugs.python.org (Lukas Geiger) Date: Tue, 19 Feb 2019 16:05:09 +0000 Subject: [New-bugs-announce] [issue36039] Replace append loops with list comprehensions Message-ID: <1550592309.68.0.760031675356.issue36039@roundup.psfhosted.org> New submission from Lukas Geiger : Lib uses loops to append to a new list in quite a few places. I think it would be better to replace those with list comprehensions. Benefits of this change:
- List comprehensions are generally more readable than appending to a newly created list
- List comprehensions are also a lot faster.
Toy example:
In [1]: %%timeit
   ...: l = []
   ...: for i in range(5000):
   ...:     l.append(i)
375 µs ± 1.73 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [2]: %%timeit
   ...: l = [i for i in range(5000)]
168 µs ± 1.08 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Possible drawbacks:
- Refactoring can always introduce bugs and makes it harder to get meaningful output from git blame. In this case I think the diff is very manageable, making the changes easy to review.
Personally, I think the codebase would benefit from this change both in terms of some small performance gains and maintainability. I'd be happy to make a PR to fix this. ---------- components: Library (Lib) messages: 335961 nosy: lgeiger priority: normal severity: normal status: open title: Replace append loops with list comprehensions type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 11:41:14 2019 From: report at bugs.python.org (STINNER Victor) Date: Tue, 19 Feb 2019 16:41:14 +0000 Subject: [New-bugs-announce] [issue36040] Python\ast.c(3875): warning C4244: 'initializing': conversion from 'Py_ssize_t' to 'int' Message-ID: <1550594474.58.0.925484616574.issue36040@roundup.psfhosted.org> New submission from STINNER Victor : Example on AMD64 Windows8.1 Non-Debug 3.x buildbot: https://buildbot.python.org/all/#/builders/12/builds/2024
2>d:\buildarea\3.x.ware-win81-release\build\python\ast.c(3875): warning C4244: 'initializing': conversion from 'Py_ssize_t' to 'int', possible loss of data [D:\buildarea\3.x.ware-win81-release\build\PCbuild\pythoncore.vcxproj]
---------- components: Windows messages: 335974 nosy: paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: Python\ast.c(3875): warning C4244: 'initializing': conversion from 'Py_ssize_t' to 'int' type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 11:45:39 2019 From: report at bugs.python.org (Aaryn Tonita) Date: Tue, 19 Feb 2019 16:45:39 +0000 Subject: [New-bugs-announce] [issue36041] email: folding of quoted string in display_name violates RFC Message-ID: <1550594739.09.0.990893515193.issue36041@roundup.psfhosted.org> New submission from Aaryn Tonita : When using a policy for an EmailMessage that
triggers folding (during serialization) of a fairly long display_name in an address field, the folding process removes the quotes from the display name breaking the semantics of the field. In particular, for a From address's display name like r'anything at anything.com ' + 'a' * MAX_LINE_LEN the folding puts anything at anything.com unquoted immediately after the From: header. For applications that do sender verification inside and then send it to an internal SMTP server that does not perform its own sender verification this could be considered a security issue since it enables sender spoofing. Receiving mail servers might be able to detect the broken header, but experiments show that the mail gets delivered. Simple demonstration (reproduced in attachment) of issue: SMTP_POLICY = email.policy.default.clone(linesep='\r\n', max_line_length=72) address = Address(display_name=r'anything at anything.com ' + 'a' * 72, addr_spec='dev at local.startmail.org') message = EmailMessage(policy=SMTP_POLICY) message['From'] = Address(display_name=display_name, addr_spec=addr_spec) # Trigger folding (via as_string()), then parse it back in. msg_string = message.as_string() msg_bytes = msg_string.encode('utf-8') msg_deserialized = BytesParser(policy=SMTP_POLICY).parsebytes(msg_bytes) # Verify badness from_hdr = msg_deserialized['From'] assert from_hdr != str(address) # But they should be equal... ---------- components: email files: address_folding_bug.py messages: 335975 nosy: aaryn.startmail, barry, r.david.murray priority: normal severity: normal status: open title: email: folding of quoted string in display_name violates RFC type: behavior versions: Python 3.5, Python 3.6, Python 3.7 Added file: https://bugs.python.org/file48155/address_folding_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 12:19:55 2019 From: report at bugs.python.org (BTaskaya) Date: Tue, 19 Feb 2019 17:19:55 +0000 Subject: [New-bugs-announce] [issue36042] Setting __init_subclass__ and __class_getitem__ methods are in runtime doesnt make them class method. Message-ID: <1550596795.62.0.0325313661294.issue36042@roundup.psfhosted.org> New submission from BTaskaya : CPython only makes these methods class method when a class created. If you set __class_getitem__ method after the creation it doesn't work unless you use classmethod decorator manually. >>> class B: ... pass ... >>> def y(*a, **k): ... return a, k ... >>> B.__class_getitem__ = y >>> B[int] ((,), {}) >>> B.__class_getitem__ = classmethod(y) >>> B[int] ((, ), {}) ---------- messages: 335985 nosy: BTaskaya priority: normal severity: normal status: open title: Setting __init_subclass__ and __class_getitem__ methods are in runtime doesnt make them class method. type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 13:26:02 2019 From: report at bugs.python.org (Alexander Kapshuna) Date: Tue, 19 Feb 2019 18:26:02 +0000 Subject: [New-bugs-announce] [issue36043] FileCookieJar constructor don't accept PathLike Message-ID: <1550600762.69.0.583891908165.issue36043@roundup.psfhosted.org> New submission from Alexander Kapshuna : FileCookieJar and it's subclasses don't accept Paths and DirEntrys. 
Minimal code to reproduce: === import pathlib from http.cookiejar import FileCookieJar saved_cookies = pathlib.Path('my_cookies.txt') jar = FileCookieJar(saved_cookies) === Results in: "ValueError: filename must be string-like". Workaround is to convert Path explicitly or call load() which doesn't check for type, but it would be nice to see all APIs in standard library consistent. I also did quick and dirty patch which silently converts filename argument using os.fspath(). This way it won't break any existing code relying on FileCookieJar.filename being string. ---------- components: Library (Lib) files: cookiejar_straightforward_convert.patch keywords: patch messages: 335993 nosy: Alexander Kapshuna priority: normal severity: normal status: open title: FileCookieJar constructor don't accept PathLike type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file48156/cookiejar_straightforward_convert.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 15:39:50 2019 From: report at bugs.python.org (Neil Schemenauer) Date: Tue, 19 Feb 2019 20:39:50 +0000 Subject: [New-bugs-announce] [issue36044] PROFILE_TASK for PGO build is not a good workload Message-ID: <1550608790.89.0.306917505619.issue36044@roundup.psfhosted.org> New submission from Neil Schemenauer : I was doing some 'perf' profiling runs of Python. I decided to try running PROFILE_TASK to see what the profile looks like. I was surprised that gc_collect dominated runtime: Children Self Symbol + 93.93% 6.00% [.] _PyEval_EvalFrameDefault + 76.19% 0.12% [.] function_code_fastcall + 70.65% 0.31% [.] _PyMethodDef_RawFastCallKeywords + 63.24% 0.13% [.] _PyCFunction_FastCallKeywords + 58.67% 0.36% [.] _PyEval_EvalCodeWithName + 57.45% 23.84% [.] collect + 52.89% 0.00% [.] gc_collect + 52.10% 0.08% [.] _PyFunction_FastCallDict + 41.99% 0.02% [.] _PyObject_Call_Prepend + 36.37% 0.18% [.] _PyFunction_FastCallKeywords + 20.94% 0.07% [.] _PyObject_FastCallDict + 19.64% 0.00% [.] PyObject_Call + 17.74% 0.11% [.] _PyObject_FastCallKeywords + 12.45% 0.00% [.] slot_tp_call + 12.27% 4.05% [.] dict_traverse + 11.45% 11.04% [.] visit_reachable + 11.18% 10.76% [.] visit_decref + 9.65% 0.11% [.] type_call + 8.80% 0.83% [.] func_traverse + 7.78% 0.08% [.] _PyMethodDescr_FastCallKeywords Part of the problem is that we run full cyclic GC for every test. I.e. cleanup_test_droppings() calls gc.collect(). Maybe we could make these calls conditional on the --pgo flag of regtest. Or, maybe we need to re-evaluate if running the unit test suite is the best way to generate PGO trace data. Based on a tiny bit of further investigation, it looks like gc_collect() is getting called quite a lot of times, in addition to cleanup_test_droppings(). Maybe there is some low-hanging fruit here for optimization. Full GC is pretty expensive. 
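A rough way to quantify how much collection work the training run triggers, assuming some run_workload() stand-in for the real PGO task (the stand-in below is made up), is to compare gc.get_stats() before and after:
```
import gc

def run_workload():
    # hypothetical placeholder for the "python -m test --pgo" style training run
    for _ in range(100):
        junk = [list(range(1000)) for _ in range(10)]
        gc.collect()

before = [g["collections"] for g in gc.get_stats()]
run_workload()
after = [g["collections"] for g in gc.get_stats()]
print("collections per generation:", [b - a for a, b in zip(before, after)])
```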
---------- messages: 336018 nosy: nascheme priority: normal severity: normal status: open title: PROFILE_TASK for PGO build is not a good workload type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 18:36:27 2019 From: report at bugs.python.org (Dan Rose) Date: Tue, 19 Feb 2019 23:36:27 +0000 Subject: [New-bugs-announce] [issue36045] Help function is not much help with async functions Message-ID: <1550619387.23.0.907293710235.issue36045@roundup.psfhosted.org> New submission from Dan Rose : It is very important when using a callable to know whether it is async or not. It is expected that `builtins.help` function should provide the information needed to properly use the function, but it omits whether it is an async function or not. ``` import asyncio def the_answer(): return 42 async def the_answer2(): await asyncio.sleep(100) return the_answer() ``` ``` >>> help(the_answer) Help on function the_answer in module __main__: the_answer() >>> help(the_answer2) Help on function the_answer2 in module __main__: the_answer2() ``` Note there is no way to tell whether the result of the function needs to be awaited. In the similar case of generator functions versus regular functions, one obvious solution is to add a type annotation. That doesn't work here, since PEP-0484 indicates that the correct annotation for a coroutine function is the type that is awaited, not the coroutine object created when the function is called. ``` import typing def non_answers() -> typing.Iterable[int]: yield from range(42) >>> help(non_answers) Help on function non_answers in module __main__: non_answers() -> Iterable[int] ``` One awkward workaround is to wrap the coroutine function in a regular function: ``` def the_answer3() -> typing.Awaitable[int]: return the_answer2() >>> help(the_answer3) Help on function the_answer3 in module __main__: the_answer3() -> Awaitable[int] ``` ---------- components: asyncio messages: 336025 nosy: Dan Rose, asvetlov, yselivanov priority: normal severity: normal status: open title: Help function is not much help with async functions versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 20:24:18 2019 From: report at bugs.python.org (Patrick McLean) Date: Wed, 20 Feb 2019 01:24:18 +0000 Subject: [New-bugs-announce] [issue36046] support dropping privileges when running subprocesses Message-ID: <1550625858.55.0.53498127684.issue36046@roundup.psfhosted.org> New submission from Patrick McLean : Currently when using python to automate system administration tasks, it is useful to drop privileges sometimes. Currently the only way to do this is via a preexec_fn, which has well-documented problems. It would be useful to be able to pass a user and groups arguments to subprocess.popen. 
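For context, the preexec_fn workaround being referred to looks roughly like this (Unix-only; the uid/gid values are placeholders and the parent needs enough privileges for the calls to succeed):
```
import os
import subprocess

def demote(uid, gid):
    # Runs in the child between fork() and exec(); this is exactly the kind of
    # callback that is fragile in threaded programs, hence the feature request.
    def set_ids():
        os.setgroups([])  # drop supplementary groups
        os.setgid(gid)    # change group before giving up the privileged uid
        os.setuid(uid)
    return set_ids

subprocess.run(["id"], preexec_fn=demote(1000, 1000))
```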
---------- components: Library (Lib) messages: 336033 nosy: patrick.mclean priority: normal severity: normal status: open title: support dropping privileges when running subprocesses type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 19 21:32:25 2019 From: report at bugs.python.org (wang xuancong) Date: Wed, 20 Feb 2019 02:32:25 +0000 Subject: [New-bugs-announce] [issue36047] socket file handle does not support stream write Message-ID: <1550629945.15.0.290470169505.issue36047@roundup.psfhosted.org> New submission from wang xuancong : Python3 programmers have forgotten to convert/implement the socket file descriptor for IO stream operation. Would you please add it? Thanks! import socket s = socket.socket() s.connect('localhost', 5432) S = s.makefile() # on Python2, the following works print >>S, 'hello world' S.flush() # on Python3, the same thing does not work print('hello world', file=S, flush=True) It gives the following error: Traceback (most recent call last): File "", line 1, in io.UnsupportedOperation: not writable Luckily, the stream read operation works, S.readline() ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 336035 nosy: xuancong84 priority: normal severity: normal status: open title: socket file handle does not support stream write type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 01:43:05 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 20 Feb 2019 06:43:05 +0000 Subject: [New-bugs-announce] [issue36048] Deprecate implicit truncating when convert Python numbers to C integers Message-ID: <1550644985.32.0.26417547244.issue36048@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently, C API functions that convert a Python number to a C integer like PyLong_AsLong() and argument parsing functions like PyArg_ParseTuple() with integer converting format units like 'i' use the __int__() special method for converting objects which are not instances of int or int subclasses. This leads to dropping the fractional part if the object is not integral number (e.g. float, Decimal or Fraction). In some cases, there is a special check for float, but it does not prevent truncation for Decimal, Fraction and other numeric types. For example: >>> chr(65.5) Traceback (most recent call last): File "", line 1, in TypeError: integer argument expected, got float >>> chr(Decimal('65.5')) 'A' The proposed PR makes all these functions using __index__() instead of __int__() if available and emit a deprecation warning when __int__() is used for implicit conversion to a C integer. >>> chr(Decimal('65.5')) :1: DeprecationWarning: an integer is required (got type decimal.Decimal) 'A' In future versions only __index__() will be used for the implicit conversion, and __int__() will be used only in the int constructor. 
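The split can already be seen with operator.index(), which only accepts objects implementing __index__() (the two toy classes below are just for illustration):
```
import operator

class Exact:
    def __index__(self):  # lossless integer conversion
        return 65

class Lossy:
    def __int__(self):    # truncating conversion, like float or Decimal
        return 65

print(operator.index(Exact()))  # 65
print(int(Lossy()))             # 65
try:
    operator.index(Lossy())     # rejected: Lossy does not implement __index__()
except TypeError as exc:
    print("TypeError:", exc)
```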
---------- components: Interpreter Core messages: 336041 nosy: mark.dickinson, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Deprecate implicit truncating when convert Python numbers to C integers type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 05:14:35 2019 From: report at bugs.python.org (Zahash Z) Date: Wed, 20 Feb 2019 10:14:35 +0000 Subject: [New-bugs-announce] [issue36049] No __repr__() for queue.PriorityQueue and queue.LifoQueue Message-ID: <1550657675.07.0.575320352135.issue36049@roundup.psfhosted.org> New submission from Zahash Z : There is no __repr__() method for the PriorityQueue class and LifoQueue class in the queue.py file This makes it difficult to check the elements of the queue. ---------- messages: 336053 nosy: Zahash Z priority: normal severity: normal status: open title: No __repr__() for queue.PriorityQueue and queue.LifoQueue type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 07:55:43 2019 From: report at bugs.python.org (Bruce Merry) Date: Wed, 20 Feb 2019 12:55:43 +0000 Subject: [New-bugs-announce] [issue36050] Why does http.client.HTTPResponse._safe_read use MAXAMOUNT Message-ID: <1550667343.87.0.589252473583.issue36050@roundup.psfhosted.org> New submission from Bruce Merry : While investigating poor HTTP read performance I discovered that reading all the data from a response with a content-length goes via _safe_read, which in turn reads in chunks of at most MAXAMOUNT (1MB) before stitching them together with b"".join. This can really hurt performance for responses larger than MAXAMOUNT, because (a) the data has to be copied an additional time; and (b) the join operation doesn't drop the GIL, so this limits multi-threaded scaling. I'm struggling to see any advantage in doing this chunking - it's not saving memory either (in fact it is wasting it). To give an idea of the performance impact, changing MAXAMOUNT to a very large value made a multithreaded test of mine go from 800MB/s to 2.5GB/s (which is limited by the network speed). ---------- components: Library (Lib) messages: 336081 nosy: bmerry priority: normal severity: normal status: open title: Why does http.client.HTTPResponse._safe_read use MAXAMOUNT versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 08:00:54 2019 From: report at bugs.python.org (Bruce Merry) Date: Wed, 20 Feb 2019 13:00:54 +0000 Subject: [New-bugs-announce] [issue36051] (Performance) Drop the GIL during large bytes.join operations? Message-ID: <1550667654.79.0.460473844528.issue36051@roundup.psfhosted.org> New submission from Bruce Merry : A common pattern in libraries doing I/O is to receive data in chunks, put them in a list, then join them all together using b"".join(chunks). For example, see http.client.HTTPResponse._safe_read. When the output is large, the memory copies can block the interpreter for a non-trivial amount of time, and prevent multi-threaded scaling. If the GIL could be dropped during the memcpys it could improve parallel I/O performance in some high-bandwidth scenarios (36050 mentions a case where I've run into this serialisation bottleneck in practice). Obviously it could hurt performance to drop the GIL for small cases. 
As far as I know, numpy uses thresholds to decide when it's worth dropping the GIL, and it seems to work fairly well. ---------- components: Interpreter Core messages: 336082 nosy: bmerry priority: normal severity: normal status: open title: (Performance) Drop the GIL during large bytes.join operations? versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 10:23:36 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 20 Feb 2019 15:23:36 +0000 Subject: [New-bugs-announce] [issue36052] Assignment operator allows to assign to __debug__ Message-ID: <1550676216.29.0.487544148429.issue36052@roundup.psfhosted.org> New submission from Serhiy Storchaka : All ways of assigning to __debug__ are forbidden:
>>> __debug__ = 1
  File "<stdin>", line 1
SyntaxError: cannot assign to __debug__
>>> for __debug__ in []: pass
...
  File "<stdin>", line 1
SyntaxError: cannot assign to __debug__
>>> with cm() as __debug__: pass
...
  File "<stdin>", line 1
SyntaxError: cannot assign to __debug__
>>> class __debug__: pass
...
  File "<stdin>", line 1
SyntaxError: cannot assign to __debug__
>>> def __debug__(): pass
...
  File "<stdin>", line 1
SyntaxError: cannot assign to __debug__
>>> def foo(__debug__): pass
...
  File "<stdin>", line 1
SyntaxError: cannot assign to __debug__
>>> import __debug__
  File "<stdin>", line 1
SyntaxError: cannot assign to __debug__
The only exception is the assignment operator.
>>> (__debug__ := 'spam')
'spam'
>>> globals()['__debug__']
'spam'
This looks like a bug. ---------- components: Interpreter Core messages: 336100 nosy: emilyemorehouse, gvanrossum, serhiy.storchaka priority: normal severity: normal status: open title: Assignment operator allows to assign to __debug__ type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 11:37:07 2019 From: report at bugs.python.org (Piotr Karkut) Date: Wed, 20 Feb 2019 16:37:07 +0000 Subject: [New-bugs-announce] [issue36053] pkgutil.walk_packages jumps out from given path if there is package with the same name in sys.path Message-ID: <1550680627.64.0.854320242724.issue36053@roundup.psfhosted.org> New submission from Piotr Karkut : When walk_packages encounters a package with a name that is available in sys.path, it will abandon the current package and start walking the package from sys.path. Consider this file layout:
```
PYTHONPATH/
├──package1/
| ├──core
| | ├──some_package/
| | | ├──__init__.py
| | | └──mod.py
| | └──__init__.py
| └──__init__.py
├──some_package/
| ├──__init__.py
| └──another_mod.py
└──__init__.py
```
The result of walking package1 will be:
```
>> pkgutil.walk_packages('PYTHONPATH/package1')
ModuleInfo(module_finder=FileFinder('PYTHONPATH/package1/core'), name='some_package', ispkg=True)
ModuleInfo(module_finder=FileFinder('PYTHONPATH/some_package'), name='another_mod', ispkg=False)
```
I'm not sure if it is a security issue, but it definitely should not jump off the given path.
---------- components: Library (Lib) messages: 336111 nosy: karkucik priority: normal severity: normal status: open title: pkgutil.walk_packages jumps out from given path if there is package with the same name in sys.pah type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 11:56:22 2019 From: report at bugs.python.org (Keir Lawson) Date: Wed, 20 Feb 2019 16:56:22 +0000 Subject: [New-bugs-announce] [issue36054] Way to detect CPU count inside docker container Message-ID: <1550681782.39.0.211832814896.issue36054@roundup.psfhosted.org> New submission from Keir Lawson : There appears to be no way to detect the number of CPUs allotted to a Python program within a docker container. With the following script: import os print("os.cpu_count(): " + str(os.cpu_count())) print("len(os.sched_getaffinity(0)): " + str(len(os.sched_getaffinity(0)))) when run in a container (from an Ubuntu 18.04 host) I get: docker run -v "$PWD":/src/ -w /src/ --cpus=1 python:3.7 python detect_cpus.py os.cpu_count(): 4 len(os.sched_getaffinity(0)): 4 Recent vesions of Java are able to correctly detect the CPU allocation: docker run -it --cpus 1 openjdk:10-jdk Feb 20, 2019 4:20:29 PM java.util.prefs.FileSystemPreferences$1 run INFO: Created user preferences directory. | Welcome to JShell -- Version 10.0.2 | For an introduction type: /help intro jshell> Runtime.getRuntime().availableProcessors() $1 ==> 1 ---------- components: Library (Lib) messages: 336117 nosy: keirlawson priority: normal severity: normal status: open title: Way to detect CPU count inside docker container type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 13:04:41 2019 From: report at bugs.python.org (Marcelo Marotta) Date: Wed, 20 Feb 2019 18:04:41 +0000 Subject: [New-bugs-announce] [issue36055] Division using math.pow and math.log approximation fails Message-ID: <1550685881.68.0.81295097869.issue36055@roundup.psfhosted.org> New submission from Marcelo Marotta : Steps to reproduce the error >>> import math >>> 1/math.log(math.pow(30,0.5),2) == 2/math.log(30,2) True >>> 1/math.log(math.pow(9,0.5),2) == 2/math.log(9,2) True >>> 1/math.log(math.pow(15,0.5),2) == 2/math.log(15,2) True >>> 1/math.log(math.pow(8,0.5),2) == 2/math.log(8,2) False >>> 2/math.log(8,2) 0.6666666666666666 >>> 1/math.log(math.pow(8,0.5),2) 0.6666666666666665 I reproduced the error in Python : Python 3.5.3 and Python 2.7.13 ---------- components: Library (Lib) messages: 336132 nosy: Marcelo Marotta priority: normal severity: normal status: open title: Division using math.pow and math.log approximation fails type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 14:02:21 2019 From: report at bugs.python.org (Dylan Lloyd) Date: Wed, 20 Feb 2019 19:02:21 +0000 Subject: [New-bugs-announce] [issue36056] importlib does not support pathlib Message-ID: <1550689341.22.0.769543209454.issue36056@roundup.psfhosted.org> New submission from Dylan Lloyd : ``` ? 
python -c 'import pathlib; import importlib; importlib.import_module(pathlib.Path("poc.py"))' Traceback (most recent call last): File "", line 1, in File "~/.conda/envs/py3.6/lib/python3.6/importlib/__init__.py", line 117, in import_module if name.startswith('.'): AttributeError: 'PosixPath' object has no attribute 'startswith' ``` ---------- components: Library (Lib) messages: 336136 nosy: majuscule priority: normal severity: normal status: open title: importlib does not support pathlib versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 19:10:05 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 21 Feb 2019 00:10:05 +0000 Subject: [New-bugs-announce] [issue36057] Add docs and tests for ordering in Counter. [no behavior change] Message-ID: <1550707805.37.0.515933023482.issue36057@roundup.psfhosted.org> New submission from Raymond Hettinger : When dicts became ordered, counters became ordered as well. Update the docs and tests to reflect that fact. ---------- assignee: rhettinger components: Tests messages: 336156 nosy: rhettinger priority: normal severity: normal status: open title: Add docs and tests for ordering in Counter. [no behavior change] versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 19:17:42 2019 From: report at bugs.python.org (Terry J. Reedy) Date: Thu, 21 Feb 2019 00:17:42 +0000 Subject: [New-bugs-announce] [issue36058] Improve file decoding before re.search Message-ID: <1550708262.6.0.970683874988.issue36058@roundup.psfhosted.org> New submission from Terry J. Reedy : Spin-off from #14929, which fixed crash. From msg161754: "The default extension is .py. The default encoding for .py files is utf-8. I think that is the default for what Idle writes. So I think this should be the default encoding (explicitly given) at least for .py files. >From msg161755: (but not sure about this) "Also, perhaps dialog box could have encodings field. People should be able to grep python code for any legal identifier, and this means proper decoding according to the encoding they actually use." ---------- assignee: terry.reedy components: IDLE messages: 336158 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: Improve file decoding before re.search type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 19:26:29 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 21 Feb 2019 00:26:29 +0000 Subject: [New-bugs-announce] [issue36059] Update docs for OrderedDict to reflect that regular dicts are ordered Message-ID: <1550708789.68.0.789605857229.issue36059@roundup.psfhosted.org> New submission from Raymond Hettinger : Am working on a PR for this now. The goals are to clarify the distinction between OrderedDict and regular dict, to remove outdated examples that no longer are the best way to solve a problem, and to move, and to add examples where OrderedDict will continue to be relevant. 
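A sketch of the kind of example that stays relevant after the rewrite (assuming the docs want to highlight the reordering methods that regular dicts lack):
```
from collections import OrderedDict

d = OrderedDict.fromkeys("abcde")
d.move_to_end("b")                # regular dicts have no move_to_end()
print("".join(d))                 # acdeb
d.move_to_end("b", last=False)
print("".join(d))                 # bacde
print(d.popitem(last=False))      # ('b', None) -- FIFO pop, unlike dict.popitem()
```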
---------- assignee: rhettinger components: Documentation messages: 336163 nosy: rhettinger priority: normal severity: normal status: open title: Update docs for OrderedDict to reflect that regular dicts are ordered versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 19:30:45 2019 From: report at bugs.python.org (Raymond Hettinger) Date: Thu, 21 Feb 2019 00:30:45 +0000 Subject: [New-bugs-announce] [issue36060] Document how collections.ChainMap() determines iteration order Message-ID: <1550709045.44.0.467024509195.issue36060@roundup.psfhosted.org> New submission from Raymond Hettinger : Am working on a PR for this now. Prior to regular dicts becoming ordered, the was no expectation for ChainMap() to have any particular order. Now that such an expectation might exist, I need to document what ChainMap() does and perhaps why it does it. ---------- assignee: rhettinger components: Documentation messages: 336165 nosy: rhettinger priority: normal severity: normal status: open title: Document how collections.ChainMap() determines iteration order versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 20 20:59:30 2019 From: report at bugs.python.org (Shane Lee) Date: Thu, 21 Feb 2019 01:59:30 +0000 Subject: [New-bugs-announce] [issue36061] zipfile does not handle arcnames with non-ascii characters on Windows Message-ID: <1550714370.81.0.682374426913.issue36061@roundup.psfhosted.org> New submission from Shane Lee : Python 2.7.15 (probably affects newer versions as well) Given an archive with any number of files inside that have non-ascii characters in their filename `zipfile` will crash when extracting them to the file system. ``` Traceback (most recent call last): File "c:\dev\salt\salt\modules\archive.py", line 1081, in unzip zfile.extract(target, dest, password) File "c:\python27\lib\zipfile.py", line 1028, in extract return self._extract_member(member, path, pwd) File "c:\python27\lib\zipfile.py", line 1069, in _extract_member targetpath = os.path.join(targetpath, arcname) File "c:\python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0x82 in position 3: ordinal not in range(128) ``` ---------- components: Windows files: test.zip messages: 336172 nosy: Shane Lee, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: zipfile does not handle arcnames with non-ascii characters on Windows type: behavior versions: Python 2.7 Added file: https://bugs.python.org/file48159/test.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 01:30:09 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Thu, 21 Feb 2019 06:30:09 +0000 Subject: [New-bugs-announce] [issue36062] move index normalization from list_slice() to PyList_GetSlice() Message-ID: <1550730609.16.0.637453891919.issue36062@roundup.psfhosted.org> New submission from Sergey Fedoseev : list_slice() is used by PyList_GetSlice(), list_subscript() and for list copying. In list_subscript() slice indices are already normalized with PySlice_AdjustIndices(), so slice normalization currently performed in list_slice() is only needed for PyList_GetSlice(). 
Moving this normalization from list_slice() to PyList_GetSlice() provides minor speed-up for list copying and slicing: $ python -m perf timeit -s "copy = [].copy" "copy()" --duplicate=1000 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 26.5 ns +- 0.5 ns /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 25.7 ns +- 0.5 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 26.5 ns +- 0.5 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 25.7 ns +- 0.5 ns: 1.03x faster (-3%) $ python -m perf timeit -s "l = [1]" "l[:]" --duplicate=1000 --compare-to=../cpython-master/venv/bin/python /home/sergey/tmp/cpython-master/venv/bin/python: ..................... 71.5 ns +- 1.4 ns /home/sergey/tmp/cpython-dev/venv/bin/python: ..................... 70.2 ns +- 0.9 ns Mean +- std dev: [/home/sergey/tmp/cpython-master/venv/bin/python] 71.5 ns +- 1.4 ns -> [/home/sergey/tmp/cpython-dev/venv/bin/python] 70.2 ns +- 0.9 ns: 1.02x faster (-2%) ---------- messages: 336184 nosy: sir-sigurd priority: normal severity: normal status: open title: move index normalization from list_slice() to PyList_GetSlice() type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 04:41:35 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Thu, 21 Feb 2019 09:41:35 +0000 Subject: [New-bugs-announce] [issue36063] replace PyTuple_SetItem() with PyTuple_SET_ITEM() in long_divmod() Message-ID: <1550742095.78.0.510833916467.issue36063@roundup.psfhosted.org> New submission from Sergey Fedoseev : This change produces minor speed-up: $ python-other -m perf timeit -s "divmod_ = divmod" "divmod_(1, 1)" --duplicate=1000 --compare-to=../cpython-master/venv/bin/python python: ..................... 64.6 ns +- 4.8 ns python-other: ..................... 59.4 ns +- 3.2 ns Mean +- std dev: [python] 64.6 ns +- 4.8 ns -> [python-other] 59.4 ns +- 3.2 ns: 1.09x faster (-8%) ---------- messages: 336194 nosy: sir-sigurd priority: normal severity: normal status: open title: replace PyTuple_SetItem() with PyTuple_SET_ITEM() in long_divmod() type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 05:23:41 2019 From: report at bugs.python.org (Lye) Date: Thu, 21 Feb 2019 10:23:41 +0000 Subject: [New-bugs-announce] [issue36064] docs: urllib.request.Request not accepting iterables data type Message-ID: <1550744621.39.0.0634252256257.issue36064@roundup.psfhosted.org> New submission from Lye : I found out in the docs 3.6, in the class urllib.request.Request, for the input of 'data' data types, it says : "The supported object types include bytes, file-like objects, and iterables." But after testing it with data type dict for the 'data' input, I got error of: "can't concat str to bytes" It seems the docs should't say the 'data' data types support iterables. There more detail discussion is at : https://stackoverflow.com/questions/54802272/some-fundamental-concept-used-in-python-docs Hope this helps, thanks ! 
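For reference, the usual way to send a dict today is to encode it to bytes first (the URL and field names below are made up):
```
from urllib.parse import urlencode
from urllib.request import Request

payload = {"q": "python", "page": 2}       # a plain dict is not accepted directly
data = urlencode(payload).encode("ascii")  # b'q=python&page=2'
req = Request("https://example.com/search", data=data)
print(req.data)
```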
---------- assignee: docs at python components: Documentation messages: 336198 nosy: docs at python, sylye priority: normal severity: normal status: open title: docs: urllib.request.Request not accepting iterables data type type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 09:20:08 2019 From: report at bugs.python.org (Ori Avtalion) Date: Thu, 21 Feb 2019 14:20:08 +0000 Subject: [New-bugs-announce] [issue36065] Add unified C API for accessing bytes and bytearray Message-ID: <1550758808.61.0.471799213148.issue36065@roundup.psfhosted.org> New submission from Ori Avtalion : It would be useful to have a shared API for consuming bytes and bytearrays. At present, I need to write very similar code twice. Some existing codebases only support bytes (perhaps forgetting bytearrays exist). Adding support for bytearray would be trivial if there was a shared API. These are the functions/macros that can have "BytesOrByteArray" equivalents: * PyBytes_Check * PyBytes_CheckExact * PyBytes_Size * PyBytes_GET_SIZE * PyBytes_AsString * PyBytes_AS_STRING * PyBytes_AsStringAndSize Here are some example implementations for the macros: #define PyBytesOrByteArray_Check(ob) (PyBytes_Check(ob) || PyByteArray_Check(ob)) #define PyBytesOrByteArray_AS_STRING(ob) (PyBytes_Check(ob) ? PyBytes_AS_STRING(ob) : PyByteArray_AS_STRING(ob)) #define PyBytesOrByteArray_GET_SIZE(ob) #define PyByteArray_GET_SIZE(self) (assert(PyBytesOrByteArray_Check(self)), Py_SIZE(self)) ---------- components: Interpreter Core messages: 336218 nosy: salty-horse priority: normal severity: normal status: open title: Add unified C API for accessing bytes and bytearray type: enhancement versions: Python 2.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 09:29:45 2019 From: report at bugs.python.org (WloHu) Date: Thu, 21 Feb 2019 14:29:45 +0000 Subject: [New-bugs-announce] [issue36066] Add `empty` block to `for` and `while` loops. Message-ID: <1550759385.86.0.911138177234.issue36066@roundup.psfhosted.org> New submission from WloHu : ### Description Adding `empty` block to loops will extend them to form for-empty-else and while-empty-else. The idea is that `empty` block will execute when loop iteration wasn't performed because iterated element was empty. The idea is taken from Django framework' `{% empty %}` block (https://docs.djangoproject.com/en/2.1/ref/templates/builtins/#for-empty). ### Details There are combinations how this loop should work together with `else` block (`for` loop taken as example): 1. for-empty - `empty` block runs when iteration wasn't performed, i.e. ended naturally because of empty iterator; 2. for-else - `else` block runs when iteration ended naturally either because iterator was empty or exhausted, behavior the same as currently implemented; 3. for-empty-else - in this form there is split depending on the way in which loop ended naturally: -- empty iterator - only `empty` block is executed, -- non-empty iterator - only `else` block is executed. In 3rd case `else` block is not executed together with `empty` block because this can be done by using for-else form. The only reason to make this case work differently is code duplication in case when regardless of the way, in which loop ended naturally, there is common code we want to execute. E.g.: ``` for: ... 
empty: statement1 statement2 else: statement1 statement3 ``` However implementing the "common-avoid-duplication" case will be inconsisted with `try-except` which executes only 1st matching `except` block. ### Current alternative solutions In case when iterable object works well with "empty test" (e.g.: `list`, `set`) the most simple solution is: ``` if iterable: print("Empty") else: for item in iterable: print(item) else: print("Ended naturally - non-empty.") ``` Which looks good and is simple enough to avoid extending the language. However in general this would fail if `iterable` object is a generator which is always truthy and fails the expectations of "empty test". In such case special handling should be made to make it work in general. So far I see 3 options: - use helper variable `x = list(iterable)` and do "empty test" as shown above - this isn't an option for unbound `iterable` like stream or asynchronous message queue; - test generator for emptiness a.k.a. peek next element: ``` try: first = next(iterable) except StopIteration: print("Empty") else: for item in itertools.chain([first], iterable): print(item) else: print("Ended naturally - non-empty.") ``` - add `empty` flag inside loop: ``` empty = True for item in iterable: empty = False # Sadly executed for each `item`. print(item) else: if empty: print("Empty") else print("Ended naturally - non-empty.") ``` The two latter options aren't really idiomatic compared to proposed: ``` for item in iterable: print(item) empty: print("Empty") else: print("Ended naturally - non-empty.") ``` ### Enchancement pros and cons Pros: - more idiomatic solution to handle natural loop exhaustion for empty iterator, - shorter horizontal indentation compared to current alternatives, - quite consistent flow control splitting compared to `try-except`, - not so exotic as it's already implemented in Django (`{% empty %}`) and Jinja2 (`{% else %}`). Cons: - new keyword/token, - applies to even smaller number of usecases than for-else which is still considered exotic. ### Actual (my) usecase (shortened): ``` empty = True for message in messages: empty = False try: decoded = message.decode() except ...: ... ... # Handle different exception types. else: log.info("Success") break else: if empty: error_message = "No messages." else: error_message = "Failed to decode available messages." log.error(error_message) ``` ### One more thing to convince readers Considering that Python "went exotic" with for-else and while-else to solve `if not_found: print('Not found.')` case, adding `empty` seems like next inductive step in controling flow of loops. ### Alternative solution Enhance generators to work in "empty test" which peeks for next element behind the scenes. This will additionally solve annoying issue for testing empty generators, which currently must be handled as special case of iterable object. Moreover this solution doesn't require new language keywords. ---------- components: Interpreter Core messages: 336221 nosy: wlohu priority: normal severity: normal status: open title: Add `empty` block to `for` and `while` loops. 
type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 10:38:02 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Thu, 21 Feb 2019 15:38:02 +0000 Subject: [New-bugs-announce] [issue36067] subprocess terminate() "invalid handle" error when process is gone Message-ID: <1550763482.04.0.374893682062.issue36067@roundup.psfhosted.org> New submission from Giampaolo Rodola' : Happened in psutil: https://ci.appveyor.com/project/giampaolo/psutil/builds/22546914/job/rlp112gffyf2o30i ====================================================================== ERROR: psutil.tests.test_process.TestProcess.test_halfway_terminated_process ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\projects\psutil\psutil\tests\test_process.py", line 85, in tearDown reap_children() File "c:\projects\psutil\psutil\tests\__init__.py", line 493, in reap_children subp.terminate() File "C:\Python35-x64\lib\subprocess.py", line 1092, in terminate _winapi.TerminateProcess(self._handle, 1) OSError: [WinError 6] The handle is invalid During the test case, the process was already gone (no PID). ---------- components: Library (Lib) messages: 336231 nosy: giampaolo.rodola priority: normal severity: normal stage: needs patch status: open title: subprocess terminate() "invalid handle" error when process is gone type: behavior versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 13:44:13 2019 From: report at bugs.python.org (Joe Jevnik) Date: Thu, 21 Feb 2019 18:44:13 +0000 Subject: [New-bugs-announce] [issue36068] Make _tuplegetter objects serializable Message-ID: <1550774653.56.0.094424899165.issue36068@roundup.psfhosted.org> New submission from Joe Jevnik : The new _tuplegetter objects for accessing fields of a namedtuple are no longer serializable with pickle. Cloudpickle, a library which provides extensions to pickle to facilitate distributed computing in Python, depended on being able to pickle the members of namedtuple classes. While property isn't serializable, cloudpickle has support for properties allowing us to serialize the old property(itemgetter) members. The attached PR adds a __reduce__ method to _tuplegetter objects which will allow serialization without special support. Another option would be to expose `index` as a read-only attribute, allowing cloudpickle or other libraries to provide the pickle implementation as a third-party library. ---------- components: Library (Lib) messages: 336251 nosy: llllllllll priority: normal severity: normal status: open title: Make _tuplegetter objects serializable type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 14:13:47 2019 From: report at bugs.python.org (=?utf-8?q?Leonardo_M=C3=B6rlein?=) Date: Thu, 21 Feb 2019 19:13:47 +0000 Subject: [New-bugs-announce] [issue36069] asyncio: create_connection cannot handle IPv6 link-local addresses anymore (linux) Message-ID: <1550776427.72.0.170763876114.issue36069@roundup.psfhosted.org> New submission from Leonardo M?rlein : The tuple (host, port) is ("fe80::5054:01ff:fe04:3402%node4_client", 22) in https://github.com/python/cpython/blob/master/Lib/asyncio/base_events.py#L918. 
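(Illustrative aside, Linux-only and assuming an interface named "lo" exists: the scope id in question is the fourth field of the sockaddr tuple that getaddrinfo() returns for a link-local address, and dropping that tuple back to a plain (host, port) pair is what loses the interface.)

```
import socket

# Sketch: the 4-tuple sockaddr for an IPv6 link-local name carries the
# scope id (interface index) as its last, non-zero field.
infos = socket.getaddrinfo("fe80::1%lo", 22, socket.AF_INET6, socket.SOCK_STREAM)
print(infos[0][4])
```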
The substring "node4_client" identifies the interface, which is needed for link local connections. The function self._ensure_resolved() is called and resolves to infos[0][4] = ("fe80::5054:01ff:fe04:3402", 22, something, 93), where 93 is the resolved scope id (see sin6_scope_id from struct sockaddr_in6 from man ipv6). Afterwards the self.sock_connect() is called with address = infos[0][4]. In self.sock_connect() the function self._ensure_resolved() is called again. In https://github.com/python/cpython/blob/master/Lib/asyncio/base_events.py#L1282 the scope id is stripped from the tuple. The tuple (host, port) is now only ("fe80::5054:01ff:fe04:3402", 22) and therefore the scope id is lost. I wrote this quick fix, which is not really suitable as a real solution for the problem: lemoer at orange ~> diff /usr/lib/python3.7/asyncio/base_events.py{.bak,} --- /usr/lib/python3.7/asyncio/base_events.py.bak 2019-02-21 18:42:17.060122277 +0100 +++ /usr/lib/python3.7/asyncio/base_events.py 2019-02-21 18:49:36.886866750 +0100 @@ -942,8 +942,8 @@ sock = None continue if self._debug: - logger.debug("connect %r to %r", sock, address) - await self.sock_connect(sock, address) + logger.debug("connect %r to %r", sock, (host, port)) + await self.sock_connect(sock, (host, port)) except OSError as exc: if sock is not None: sock.close() ---------- components: asyncio messages: 336253 nosy: Leonardo M?rlein, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio: create_connection cannot handle IPv6 link-local addresses anymore (linux) versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 18:25:38 2019 From: report at bugs.python.org (Nathan Woods) Date: Thu, 21 Feb 2019 23:25:38 +0000 Subject: [New-bugs-announce] [issue36070] Enclosing scope not visible from within list comprehension Message-ID: <1550791538.43.0.767407053529.issue36070@roundup.psfhosted.org> New submission from Nathan Woods : The following code works in an interactive shell or in a batch file, but not when executed as part of a unittest suite or pdb: from random import random out = [random() for ind in range(3)] It can be made to work using pdb interact, but this doesn't help with unittest. 
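An illustrative sketch, assuming the failure is the documented interaction between comprehension scopes and exec() with separate globals/locals dicts (which is what bdb.run() does under pdb, and what some test helpers do as well). On 3.7 this reproduces the NameError without unittest or pdb involved:

```
# The import binds 'random' in the locals dict, but the comprehension runs
# in its own scope whose name lookup goes to the globals dict, so the name
# is not found there.
code = "from random import random\nout = [random() for ind in range(3)]"
exec(code, {}, {})   # NameError: name 'random' is not defined
```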
Tested in Python 3.7.2 ---------- messages: 336270 nosy: woodscn priority: normal severity: normal status: open title: Enclosing scope not visible from within list comprehension type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 21 22:34:27 2019 From: report at bugs.python.org (Paul Monson) Date: Fri, 22 Feb 2019 03:34:27 +0000 Subject: [New-bugs-announce] [issue36071] Add support for Windows ARM32 in ctypes/libffi Message-ID: <1550806467.14.0.829617126279.issue36071@roundup.psfhosted.org> Change by Paul Monson : ---------- components: Windows, ctypes nosy: Paul Monson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Add support for Windows ARM32 in ctypes/libffi versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 00:34:29 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Fri, 22 Feb 2019 05:34:29 +0000 Subject: [New-bugs-announce] [issue36072] str.translate() behave differently for ASCII-only and other strings Message-ID: <1550813669.37.0.821443537849.issue36072@roundup.psfhosted.org> New submission from Sergey Fedoseev : In [186]: from itertools import cycle In [187]: class ContainerLike: ...: def __init__(self): ...: self.chars = cycle('12') ...: def __getitem__(self, key): ...: return next(self.chars) ...: In [188]: 'aaaaaa'.translate(ContainerLike()) Out[188]: '111111' In [189]: '??????'.translate(ContainerLike()) Out[189]: '121212 It seems that behavior was changed in https://github.com/python/cpython/commit/89a76abf20889551ec1ed64dee1a4161a435db5b. At least it should be documented. ---------- messages: 336279 nosy: sir-sigurd priority: normal severity: normal status: open title: str.translate() behave differently for ASCII-only and other strings _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 02:44:47 2019 From: report at bugs.python.org (Sergey Fedoseev) Date: Fri, 22 Feb 2019 07:44:47 +0000 Subject: [New-bugs-announce] [issue36073] sqlite crashes with converters mutating cursor Message-ID: <1550821487.87.0.741979609564.issue36073@roundup.psfhosted.org> New submission from Sergey Fedoseev : It's somewhat similar to bpo-10811, but for converter function: In [197]: import sqlite3 as sqlite ...: con = sqlite.connect(':memory:', detect_types=sqlite.PARSE_COLNAMES) ...: cur = con.cursor() ...: sqlite.converters['CURSOR_INIT'] = lambda x: cur.__init__(con) ...: ...: cur.execute('create table test(x foo)') ...: cur.execute('insert into test(x) values (?)', ('foo',)) ...: cur.execute('select x as "x [CURSOR_INIT]", x from test') ...: [1] 25718 segmentation fault python manage.py shell Similar to bpo-10811, proposed patch raises ProgrammingError instead of crashing. 
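For contrast with the crashing reproducer above, a sketch of what a conventional converter looks like: it only transforms the stored bytes into a Python value and never touches the connection or cursor it runs under (the "point" type name and payload here are made up for illustration):

```
import sqlite3

def convert_point(raw):                    # receives the stored bytes
    x, y = raw.split(b";")
    return float(x), float(y)

sqlite3.register_converter("point", convert_point)

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test(p point)")
cur.execute("insert into test(p) values (?)", ("1.0;2.0",))
print(cur.execute("select p from test").fetchone())   # ((1.0, 2.0),)
```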
---------- components: Extension Modules messages: 336283 nosy: sir-sigurd priority: normal severity: normal status: open title: sqlite crashes with converters mutating cursor type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 04:14:20 2019 From: report at bugs.python.org (Kevin Mai-Hsuan Chia) Date: Fri, 22 Feb 2019 09:14:20 +0000 Subject: [New-bugs-announce] [issue36074] Result of `asyncio.Server.sockets` after `Server.close()` is not clear Message-ID: <1550826860.7.0.550984466305.issue36074@roundup.psfhosted.org> New submission from Kevin Mai-Hsuan Chia : It seems the result of `asyncio.Server.sockets` after `asyncio.Server.close()` is performed becomes `[]` instead of `None` since python 3.7. However, in the [document 3.7 and 3.8](https://docs.python.org/3.8/library/asyncio-eventloop.html#asyncio.Server.sockets), it states ``` List of socket.socket objects the server is listening on, or None if the server is closed. Changed in version 3.7: Prior to Python 3.7 Server.sockets used to return an internal list of server sockets directly. In 3.7 a copy of that list is returned. ``` For me, I think the comment `Changed in version 3.7: ...` only emphasizes the "copied list" is returned. IMO it will be more clear if the change from `None` to `[]` is mentioned in the comment as well. Sorry if this issue is not appropriate. Thanks! ---------- assignee: docs at python components: Documentation messages: 336287 nosy: docs at python, mhchia priority: normal severity: normal status: open title: Result of `asyncio.Server.sockets` after `Server.close()` is not clear type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 04:38:39 2019 From: report at bugs.python.org (Saba Kauser) Date: Fri, 22 Feb 2019 09:38:39 +0000 Subject: [New-bugs-announce] [issue36075] python 2to3 conversion tool is generating file with extra line for every input line Message-ID: <1550828319.37.0.854628356141.issue36075@roundup.psfhosted.org> New submission from Saba Kauser : Hi, I am building my python ibm_db driver on python 3.7 using the setup.py under https://github.com/SabaKauser/python-ibmdb/blob/master/IBM_DB/ibm_db/setup.py with 2to3 compatibility as: python setup.py build and installing as: python setup.py install I have a python script that after installing has new empty line added for every line. e.g: my source: if database is None: raise InterfaceError("createdb expects a not None database name value") if (not isinstance(database, basestring)) | \ (not isinstance(codeset, basestring)) | \ (not isinstance(mode, basestring)): raise InterfaceError("Arguments sould be string or unicode") The generated file under C:\Users\skauser\AppData\Local\Programs\Python\Python37\Lib\site-packages\ibm_db-2.0.9-py3.7-win-amd64.egg\ibm_db_dbi.py if database is None: raise InterfaceError("createdb expects a not None database name value") if (not isinstance(database, str)) | \ (not isinstance(codeset, str)) | \ (not isinstance(mode, str)): raise InterfaceError("Arguments sould be string or unicode") As you can see, there is this new line that is throwing runtime error. File "c:\users\skauser\appdata\local\programs\python\python37\lib\site-packages\ibm_db-2.0.9-py3.7-win-amd64.egg\ibm_db_dbi.py", line 846 ^ SyntaxError: invalid syntax Could you please let me know how can I get rid of this behavior? 
When I install the package using pip, I don't see this behavior. Thanks! Saba. ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 336288 nosy: sabakauser priority: normal severity: normal status: open title: python 2to3 conversion tool is generating file with extra line for every input line type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 05:50:37 2019 From: report at bugs.python.org (Maciej Grela) Date: Fri, 22 Feb 2019 10:50:37 +0000 Subject: [New-bugs-announce] [issue36076] ssl.get_server_certificate should use SNI Message-ID: <1550832637.63.0.810742849841.issue36076@roundup.psfhosted.org> New submission from Maciej Grela : The ssl.get_server_certificate function doesn't send SNI information causing an wrong certificate to be sent back by the server (or connection close in some cases). This can be seen when trying to use get_server_certificate against a site behind cloudflare. An example is provided below: $ python3 -V Python 3.7.2 $ python3 -c "import ssl; print(ssl.get_server_certificate(('www.mx.com',443)))" | openssl x509 -text Certificate: Data: Version: 3 (0x2) Serial Number: 89:2a:bc:df:8a:f3:d6:f6:ae:c5:18:5a:78:ec:39:6e Signature Algorithm: ecdsa-with-SHA256 Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO ECC Domain Validation Secure Server CA 2 Validity Not Before: Dec 19 00:00:00 2018 GMT Not After : Jun 27 23:59:59 2019 GMT Subject: OU=Domain Control Validated, OU=PositiveSSL Multi-Domain, CN=ssl803013.cloudflaressl.com Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (256 bit) pub: 04:ff:c1:c3:f1:c0:8a:08:84:ad:e4:25:f6:c3:03: 1f:26:0a:b4:85:e0:65:0e:f5:8b:13:1e:21:b2:54: 94:8c:f3:ce:98:eb:cf:ff:ff:1d:3a:03:22:b1:7c: 5f:13:e5:09:1f:77:b0:e8:ac:bf:e6:6c:ea:cb:57: df:e1:c8:14:da ASN1 OID: prime256v1 NIST CURVE: P-256 X509v3 extensions: X509v3 Authority Key Identifier: keyid:40:09:61:67:F0:BC:83:71:4F:DE:12:08:2C:6F:D4:D4:2B:76:3D:96 X509v3 Subject Key Identifier: 4B:F4:77:CD:FB:04:DC:0D:B2:A5:99:B8:6F:17:CC:80:DF:AE:59:DF X509v3 Key Usage: critical Digital Signature X509v3 Basic Constraints: critical CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Certificate Policies: Policy: 1.3.6.1.4.1.6449.1.2.2.7 CPS: https://secure.comodo.com/CPS Policy: 2.23.140.1.2.1 X509v3 CRL Distribution Points: Full Name: URI:http://crl.comodoca4.com/COMODOECCDomainValidationSecureServerCA2.crl Authority Information Access: CA Issuers - URI:http://crt.comodoca4.com/COMODOECCDomainValidationSecureServerCA2.crt OCSP - URI:http://ocsp.comodoca4.com X509v3 Subject Alternative Name: DNS:ssl803013.cloudflaressl.com, DNS:*.hscoscdn00.net, DNS:hscoscdn00.net 1.3.6.1.4.1.11129.2.4.2: ......u.......q...#...{G8W. .R....d6.......g..P......F0D. ...0....J|..2I..}%.Q.P...Z....g.. ......e....j...Y^.Ti^..........].w.t~..1.3..!..%OBp...^B ..75y..{.V...g..Pv.....H0F.!..1#I..\.....#2...$...X.... ......!...].{o..ud..6OV Q.x...J_(....[!. 
Signature Algorithm: ecdsa-with-SHA256 30:45:02:20:0c:8c:b6:ea:68:e4:d6:d6:18:95:50:8f:77:41: 63:51:81:59:3b:1b:e6:38:47:88:f3:47:d5:b0:0b:03:c5:ba: 02:21:00:d2:19:3f:71:e2:64:36:79:d1:4c:c9:98:fd:74:d7: 32:53:f6:b4:de:09:65:d8:a0:60:85:eb:f1:1f:75:35:75 -----BEGIN CERTIFICATE----- MIIFBzCCBK2gAwIBAgIRAIkqvN+K89b2rsUYWnjsOW4wCgYIKoZIzj0EAwIwgZIx CzAJBgNVBAYTAkdCMRswGQYDVQQIExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNV BAcTB1NhbGZvcmQxGjAYBgNVBAoTEUNPTU9ETyBDQSBMaW1pdGVkMTgwNgYDVQQD Ey9DT01PRE8gRUNDIERvbWFpbiBWYWxpZGF0aW9uIFNlY3VyZSBTZXJ2ZXIgQ0Eg MjAeFw0xODEyMTkwMDAwMDBaFw0xOTA2MjcyMzU5NTlaMGwxITAfBgNVBAsTGERv bWFpbiBDb250cm9sIFZhbGlkYXRlZDEhMB8GA1UECxMYUG9zaXRpdmVTU0wgTXVs dGktRG9tYWluMSQwIgYDVQQDExtzc2w4MDMwMTMuY2xvdWRmbGFyZXNzbC5jb20w WTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAT/wcPxwIoIhK3kJfbDAx8mCrSF4GUO 9YsTHiGyVJSM886Y68///x06AyKxfF8T5Qkfd7DorL/mbOrLV9/hyBTao4IDBzCC AwMwHwYDVR0jBBgwFoAUQAlhZ/C8g3FP3hIILG/U1Ct2PZYwHQYDVR0OBBYEFEv0 d837BNwNsqWZuG8XzIDfrlnfMA4GA1UdDwEB/wQEAwIHgDAMBgNVHRMBAf8EAjAA MB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjBPBgNVHSAESDBGMDoGCysG AQQBsjEBAgIHMCswKQYIKwYBBQUHAgEWHWh0dHBzOi8vc2VjdXJlLmNvbW9kby5j b20vQ1BTMAgGBmeBDAECATBWBgNVHR8ETzBNMEugSaBHhkVodHRwOi8vY3JsLmNv bW9kb2NhNC5jb20vQ09NT0RPRUNDRG9tYWluVmFsaWRhdGlvblNlY3VyZVNlcnZl ckNBMi5jcmwwgYgGCCsGAQUFBwEBBHwwejBRBggrBgEFBQcwAoZFaHR0cDovL2Ny dC5jb21vZG9jYTQuY29tL0NPTU9ET0VDQ0RvbWFpblZhbGlkYXRpb25TZWN1cmVT ZXJ2ZXJDQTIuY3J0MCUGCCsGAQUFBzABhhlodHRwOi8vb2NzcC5jb21vZG9jYTQu Y29tMEgGA1UdEQRBMD+CG3NzbDgwMzAxMy5jbG91ZGZsYXJlc3NsLmNvbYIQKi5o c2Nvc2NkbjAwLm5ldIIOaHNjb3NjZG4wMC5uZXQwggEEBgorBgEEAdZ5AgQCBIH1 BIHyAPAAdQC72d+8H4pxtZOUI5eqkntHOFeVCqtS6BqQlmQ2jh7RhQAAAWfEHVAd AAAEAwBGMEQCIB73izCk6hPeSnyYAjJJD959JRpRHFD2EwNa3Bzo3mcUAiASBZfK xLxlg5H302r4gOVZXrFUaV6Ylpy57rzdrvb4XQB3AHR+2oMxrTMQkSGcziVPQnDC v/1eQiAIxjc1eeYQe8xWAAABZ8QdUHYAAAQDAEgwRgIhAMMxI0kXulz+sMgA7iMy LrjbJLcZ6ViL4e71CpHc99etAiEAzKFdBXtvvJR1ZAj4Nk9WClGbeBmO7EpfKP+4 nMFbIQ4wCgYIKoZIzj0EAwIDSAAwRQIgDIy26mjk1tYYlVCPd0FjUYFZOxvmOEeI 80fVsAsDxboCIQDSGT9x4mQ2edFMyZj9dNcyU/a03gll2KBghevxH3U1dQ== -----END CERTIFICATE----- As you can see the certificate returned has CN=ssl803013.cloudflaressl.com, the proper certificate has CN=www.mx.com as you can compare with openssl output when SNI is sent: $ openssl s_client -connect www.mx.com:443 -servername www.mx.com CONNECTED(00000006) depth=2 C = IE, O = Baltimore, OU = CyberTrust, CN = Baltimore CyberTrust Root verify return:1 depth=1 C = US, ST = CA, L = San Francisco, O = "CloudFlare, Inc.", CN = CloudFlare Inc ECC CA-2 verify return:1 depth=0 C = US, ST = CA, L = San Francisco, O = "CloudFlare, Inc.", CN = www.mx.com verify return:1 --- Certificate chain 0 s:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=www.mx.com i:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=CloudFlare Inc ECC CA-2 1 s:/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=CloudFlare Inc ECC CA-2 i:/C=IE/O=Baltimore/OU=CyberTrust/CN=Baltimore CyberTrust Root --- Server certificate -----BEGIN CERTIFICATE----- MIIEsTCCBFigAwIBAgIQAjcQsekTBs8hJemLjbikNDAKBggqhkjOPQQDAjBvMQsw CQYDVQQGEwJVUzELMAkGA1UECBMCQ0ExFjAUBgNVBAcTDVNhbiBGcmFuY2lzY28x GTAXBgNVBAoTEENsb3VkRmxhcmUsIEluYy4xIDAeBgNVBAMTF0Nsb3VkRmxhcmUg SW5jIEVDQyBDQS0yMB4XDTE4MTAxODAwMDAwMFoXDTE5MTAxODEyMDAwMFowYjEL MAkGA1UEBhMCVVMxCzAJBgNVBAgTAkNBMRYwFAYDVQQHEw1TYW4gRnJhbmNpc2Nv MRkwFwYDVQQKExBDbG91ZEZsYXJlLCBJbmMuMRMwEQYDVQQDEwp3d3cubXguY29t MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEpIwExa1SMy+gfeVr5vkDU6HP75tT R/EXeti75Mp7M8+aWGjndy0frFF99sUbLd7eLU4AvcQ/55Q+IPI70czNmqOCAuEw 
ggLdMB8GA1UdIwQYMBaAFD50LR/PRXUEfj/Aooc+TEODURPGMB0GA1UdDgQWBBTf wVI3nkYN/6HbYZq9m7DuZqxxxzAVBgNVHREEDjAMggp3d3cubXguY29tMA4GA1Ud DwEB/wQEAwIHgDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIweQYDVR0f BHIwcDA2oDSgMoYwaHR0cDovL2NybDMuZGlnaWNlcnQuY29tL0Nsb3VkRmxhcmVJ bmNFQ0NDQTIuY3JsMDagNKAyhjBodHRwOi8vY3JsNC5kaWdpY2VydC5jb20vQ2xv dWRGbGFyZUluY0VDQ0NBMi5jcmwwTAYDVR0gBEUwQzA3BglghkgBhv1sAQEwKjAo BggrBgEFBQcCARYcaHR0cHM6Ly93d3cuZGlnaWNlcnQuY29tL0NQUzAIBgZngQwB AgIwdgYIKwYBBQUHAQEEajBoMCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdp Y2VydC5jb20wQAYIKwYBBQUHMAKGNGh0dHA6Ly9jYWNlcnRzLmRpZ2ljZXJ0LmNv bS9DbG91ZEZsYXJlSW5jRUNDQ0EtMi5jcnQwDAYDVR0TAQH/BAIwADCCAQQGCisG AQQB1nkCBAIEgfUEgfIA8AB2ALvZ37wfinG1k5Qjl6qSe0c4V5UKq1LoGpCWZDaO HtGFAAABZoguW3YAAAQDAEcwRQIhANcAvb1Emol7u2k7LdfkMdUTL8DNU+HNWLZc PjNrfYBfAiBqo3ixk0WfrJ/4X1Esr/DzpasP70RqlvNnhSQpKhUTEwB2AHR+2oMx rTMQkSGcziVPQnDCv/1eQiAIxjc1eeYQe8xWAAABZoguW2oAAAQDAEcwRQIhAPKv tl3iaYpmRBBN9rmcafB3FGBAdQ8ta5y8xPWjpn1cAiAJW45I6ekD3Afp0Nri+qOO 426qMXl9lJTkI+h7seRfEjAKBggqhkjOPQQDAgNHADBEAiBJVUN9XUYl0Hy/f7yn K+ximLR+8xINlban5hHj0PeghAIgXIoKNIKAl7r3la1J1KnWAfaoGgOo86hgSGLv b2tH1ps= -----END CERTIFICATE----- subject=/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=www.mx.com issuer=/C=US/ST=CA/L=San Francisco/O=CloudFlare, Inc./CN=CloudFlare Inc ECC CA-2 --- No client certificate CA names sent Server Temp Key: ECDH, X25519, 253 bits --- SSL handshake has read 2585 bytes and written 304 bytes --- New, TLSv1/SSLv3, Cipher is ECDHE-ECDSA-CHACHA20-POLY1305 Server public key is 256 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-ECDSA-CHACHA20-POLY1305 Session-ID: 2A4A8733FAEB281F19328C80B975907C6DB0FBE37B1D4A0A31D45C21504F10B4 Session-ID-ctx: Master-Key: F2F3881060640EF36D03DBAD46EBA8C3626D0CB2BE1253376DB98668E037D8C3ED03CE468DA5C5C86FFACB1C0C0943C6 TLS session ticket lifetime hint: 64800 (seconds) TLS session ticket: The problem lies within Lib/ssl.py get_server_certificate function: def get_server_certificate(addr, ssl_version=PROTOCOL_TLS, ca_certs=None): """Retrieve the certificate from the server at the specified address, and return it as a PEM-encoded string. If 'ca_certs' is specified, validate the server cert against it. If 'ssl_version' is specified, use it in the connection attempt.""" host, port = addr if ca_certs is not None: cert_reqs = CERT_REQUIRED else: cert_reqs = CERT_NONE context = _create_stdlib_context(ssl_version, cert_reqs=cert_reqs, cafile=ca_certs) with create_connection(addr) as sock: with context.wrap_socket(sock) as sslsock: dercert = sslsock.getpeercert(True) return DER_cert_to_PEM_cert(dercert) The wrap_socket function should be called with server_hostname parameter to send the SNI information. ---------- assignee: christian.heimes components: SSL messages: 336292 nosy: christian.heimes, enki priority: normal severity: normal status: open title: ssl.get_server_certificate should use SNI type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 07:27:13 2019 From: report at bugs.python.org (=?utf-8?b?0JrQuNGA0LjQu9C7INCn0YPRgNC60LjQvQ==?=) Date: Fri, 22 Feb 2019 12:27:13 +0000 Subject: [New-bugs-announce] [issue36077] Inheritance dataclasses fields and default init statement Message-ID: <1550838433.8.0.658564691536.issue36077@roundup.psfhosted.org> New submission from ?????? ?????? : I found a problem when use inherit dataclasses. 
When I define a parent dataclass with field(s) that have default (or default_factory) properties, inherit a child dataclass from that parent, and define a non-default field in the child, I get `TypeError('non-default argument {f.name!r} follows default argument')` in dataclasses.py(466)._init_fn. It happens because the generated __init__ lists all parent class fields as arguments first and then all child class fields. Maybe it needs to define all non-default fields in __init__ before all default ones. ---------- messages: 336297 nosy: Кирилл Чуркин priority: normal severity: normal status: open title: Inheritance dataclasses fields and default init statement type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 09:18:34 2019 From: report at bugs.python.org (Axel) Date: Fri, 22 Feb 2019 14:18:34 +0000 Subject: [New-bugs-announce] [issue36078] argparse: positional with type=int, default=SUPPRESS raise ValueError Message-ID: <1550845114.74.0.376208054151.issue36078@roundup.psfhosted.org> New submission from Axel : Example source: from argparse import ArgumentParser, SUPPRESS ============== parser = ArgumentParser() parser.add_argument('i', nargs='?', type=int, default=SUPPRESS) args = parser.parse_args([]) ============== results in: error: argument integer: invalid int value: '==SUPPRESS==' Expected: args = Namespace() In Lib/argparse.py: line 2399 in _get_value: result = type_func(arg_string) with arg_string = SUPPRESS = '==SUPPRESS==' called by ... line 1836 in take_action: argument_values = self._get_values(action, argument_strings) which is done before checking for SUPPRESS in line 1851: if argument_values is not SUPPRESS: action(...)
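Until the ordering in take_action() is addressed, a workaround sketch (assuming the goal is simply to leave the attribute unset when the positional is omitted) is to skip SUPPRESS for the typed positional and filter unset values afterwards:

```
from argparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument('i', nargs='?', type=int)            # default stays None
args = parser.parse_args([])
ns = {k: v for k, v in vars(args).items() if v is not None}
print(ns)   # {} -- equivalent to the expected empty Namespace()
```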
See below, in particular the lower part: $ python -mpdb setup.py install python -mpdb setup.py install > /panfs/e/vol0/gholl/checkouts/python-hdf4/setup.py(11)() -> """ (Pdb) cont running install running bdist_egg running egg_info running build_src build_src building extension "pyhdf._hdfext" sources build_src: building npy-pkg config files writing python_hdf4.egg-info/PKG-INFO writing dependency_links to python_hdf4.egg-info/dependency_links.txt writing top-level names to python_hdf4.egg-info/top_level.txt reading manifest file 'python_hdf4.egg-info/SOURCES.txt' writing manifest file 'python_hdf4.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'pyhdf._hdfext' extension compiling C sources C compiler: gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC compile options: '-I/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/core/include -I/panfs/e/vol0/gholl/venv/py36/include -I/hpc/rhome/software/python/3.6.5/include/python3.6m -c' extra options: '-DNOSZIP' Traceback (most recent call last): File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 1667, in main pdb._runscript(mainpyfile) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 1548, in _runscript self.run(statement) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/bdb.py", line 434, in run exec(cmd, globals, locals) File "", line 1, in File "/panfs/e/vol0/gholl/checkouts/python-hdf4/setup.py", line 11, in """ File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/core.py", line 171, in setup return old_setup(**new_attr) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/__init__.py", line 129, in setup return distutils.core.setup(**attrs) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/core.py", line 148, in setup dist.run_commands() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/command/install.py", line 62, in run r = self.setuptools_run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/command/install.py", line 56, in setuptools_run self.do_egg_install() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/command/install.py", line 109, in do_egg_install self.run_command('bdist_egg') File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 172, in run cmd = self.call_command('install_lib', warn_dir=0) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 158, in call_command self.run_command(cmdname) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File 
"/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/command/build_ext.py", line 261, in run self.build_extensions() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/command/build_ext.py", line 379, in build_extension **kws) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/ccompiler.py", line 92, in m = lambda self, *args, **kw: func(self, *args, **kw) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/ccompiler.py", line 363, in CCompiler_compile single_compile(o) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/ccompiler.py", line 305, in single_compile if not _needs_build(obj, cc_args, extra_postargs, pp_opts): File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/ccompiler.py", line 64, in _needs_build last_cmdline = lines[-1] IndexError: list index out of range Uncaught exception. 
Entering post mortem debugging Running 'cont' or 'step' will restart the program Traceback (most recent call last): File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 1667, in main pdb._runscript(mainpyfile) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 1548, in _runscript self.run(statement) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/bdb.py", line 434, in run exec(cmd, globals, locals) File "", line 1, in File "/panfs/e/vol0/gholl/checkouts/python-hdf4/setup.py", line 11, in """ File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/core.py", line 171, in setup return old_setup(**new_attr) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/__init__.py", line 129, in setup return distutils.core.setup(**attrs) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/core.py", line 148, in setup dist.run_commands() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/command/install.py", line 62, in run r = self.setuptools_run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/command/install.py", line 56, in setuptools_run self.do_egg_install() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/command/install.py", line 109, in do_egg_install self.run_command('bdist_egg') File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 172, in run cmd = self.call_command('install_lib', warn_dir=0) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/command/bdist_egg.py", line 158, in call_command self.run_command(cmdname) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/setuptools/command/install_lib.py", line 11, in run self.build() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/command/install_lib.py", line 107, in build self.run_command('build_ext') File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/command/build_ext.py", line 261, in run self.build_extensions() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions self._build_extensions_serial() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/command/build_ext.py", line 379, in build_extension **kws) File 
"/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/ccompiler.py", line 92, in m = lambda self, *args, **kw: func(self, *args, **kw) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/ccompiler.py", line 363, in CCompiler_compile single_compile(o) File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/ccompiler.py", line 305, in single_compile if not _needs_build(obj, cc_args, extra_postargs, pp_opts): File "/panfs/e/vol0/gholl/venv/py36/lib/python3.6/site-packages/numpy/distutils/ccompiler.py", line 64, in _needs_build last_cmdline = lines[-1] IndexError: list index out of range During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/hpc/rhome/software/python/3.6.5/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 1694, in pdb.main() File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 1686, in main pdb.interaction(None, t) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 351, in interaction self.print_stack_entry(self.stack[self.curindex]) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 1453, in print_stack_entry self.format_stack_entry(frame_lineno, prompt_prefix)) File "/hpc/rhome/software/python/3.6.5/lib/python3.6/pdb.py", line 453, in message print(msg, file=self.stdout) ValueError: underlying buffer has been detached ---------- messages: 336316 nosy: Gerrit.Holl priority: normal severity: normal status: open title: pdb on setuptools "ValueError: underlying buffer has been detached" versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 10:19:41 2019 From: report at bugs.python.org (Sammy Gillespie) Date: Fri, 22 Feb 2019 15:19:41 +0000 Subject: [New-bugs-announce] [issue36080] Ensurepip fails to install pip into a nested virtual environment (on Windows) Message-ID: <1550848781.33.0.584948420455.issue36080@roundup.psfhosted.org> New submission from Sammy Gillespie : Running Windows 10 Enterprise. Create a virtual environment: > python -m venv .venv Activate that virtual environment and attempt to create another virtual environment using the same command. This new environment will not contain pip (or anything other than just Python). Investigating this further. Running: > python -Im ensurepip --upgrade --default-pip from within the second virtual environment results in: > Requirement already up-to-date: pip in ---------- I want to create a python tool that will build a virtual env, but this restricts me from being able to install that tool into a virtual env (e.g. using pipx). 
---------- messages: 336322 nosy: Sammy Gillespie priority: normal severity: normal status: open title: Ensurepip fails to install pip into a nested virtual environment (on Windows) type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 11:05:17 2019 From: report at bugs.python.org (Rolf Eike Beer) Date: Fri, 22 Feb 2019 16:05:17 +0000 Subject: [New-bugs-announce] [issue36081] Cannot set LDFLAGS containing $ Message-ID: <1550851517.09.0.227487072045.issue36081@roundup.psfhosted.org> New submission from Rolf Eike Beer : My use case is: LDFLAGS=-Wl,-rpath,'$ORIGIN/../lib' This works fine for everything build directly by the Makefile, but for everything that is build through the python distutils this breaks. This is not an issue of the python side, it happens because the Makefile passes the information to python using LDSHARED='$(BLDSHARED)'. At this point the variable is expanded and the $ORIGIN is expanded by the shell (or so) before passing it to python, so python actually received "-Wl,-rpath,/../lib" from the environment variable. I have worked around locally by doing something like $(subst $$,~dollar~,$(BLDSHARED)) and replacing that inside python with \\$ or so. Really hacky, but works for my current setup. ---------- components: Build messages: 336326 nosy: Dakon priority: normal severity: normal status: open title: Cannot set LDFLAGS containing $ type: compile error versions: Python 2.7, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 12:02:23 2019 From: report at bugs.python.org (Juho Pesonen) Date: Fri, 22 Feb 2019 17:02:23 +0000 Subject: [New-bugs-announce] [issue36082] The built-in round() function giving a wrong output Message-ID: <1550854943.4.0.447015578446.issue36082@roundup.psfhosted.org> New submission from Juho Pesonen : As the title says I have got some wrong outputs with the round() built-in function. The bug occurs only with certain numbers though. (I've had problems only with the number 2.5, but I assume that there is more than one number that gives wrong output.) Here are some outputs with the round built-in function: >>>round(2.5) 2 >>>round(2.51) 3 >>>round(3.5) 4 >>>round(1.5) 2 As you can see the number 2.5 gives 2 instead of 3. ---------- messages: 336329 nosy: Goodester priority: normal severity: normal status: open title: The built-in round() function giving a wrong output type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 13:42:43 2019 From: report at bugs.python.org (=?utf-8?q?Miro_Hron=C4=8Dok?=) Date: Fri, 22 Feb 2019 18:42:43 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue36083=5D_Misformated_manpa?= =?utf-8?q?ge=3A_--check-hash-based-pycs_=C2=B4default=C2=B4=7C=C2=B4alway?= =?utf-8?b?c8K0fMK0bmV2ZXLCtA==?= Message-ID: <1550860963.76.0.541506927126.issue36083@roundup.psfhosted.org> New submission from Miro Hron?ok : man python3.7 or man python3.8 says this in synopsis: python [ -B ] [ -b ] [ -d ] [ -E ] [ -h ] [ -i ] [ -I ] [ -m module-name ] [ -q ] [ -O ] [ -OO ] [ -s ] [ -S ] [ -u ] [ -v ] [ -V ] [ -W argument ] [ -x ] [ [ -X option ] -? ] [ --check-hash-based-pycs ?default?|?always?|?never? ] [ -c command | script | - ] [ arguments ] Some words are bold, some are underlined. However the ?default?|?always?|?never? 
bit after --check-hash-based-pycs is misformated. The backticks should not be there, they should be underline instead. The source literally has: [ .B \--check-hash-based-pycs \'default\'|\'always\'|\'never\' ] since the original implementation of PEP 552 in 42aa93b8ff2f7879282b06efc73a31ec7785e602 I think it should be replaced with: [ .B \--check-hash-based-pycs .I default|always|never ] Shall I send a PR? ---------- assignee: docs at python components: Documentation messages: 336342 nosy: docs at python, hroncok priority: normal severity: normal status: open title: Misformated manpage: --check-hash-based-pycs ?default?|?always?|?never? versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 17:46:11 2019 From: report at bugs.python.org (Jake Tesler) Date: Fri, 22 Feb 2019 22:46:11 +0000 Subject: [New-bugs-announce] [issue36084] Threading: add builtin TID attribute to Thread objects Message-ID: <1550875571.41.0.297490361597.issue36084@roundup.psfhosted.org> New submission from Jake Tesler : This functionality adds a native Thread ID to threading.Thread objects. This ID (TID), similar to the PID of a process, is assigned by the OS (kernel) and is generally used for externally monitoring resources consumed by the running thread (or process). This does not replace the `ident` attribute within Thread objects, which is assigned by the Python interpreter and is guaranteed as unique for the lifetime of the Python instance. ---------- messages: 336348 nosy: Jake Tesler priority: normal severity: normal status: open title: Threading: add builtin TID attribute to Thread objects type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 19:28:57 2019 From: report at bugs.python.org (Steve Dower) Date: Sat, 23 Feb 2019 00:28:57 +0000 Subject: [New-bugs-announce] [issue36085] Enable better DLL resolution Message-ID: <1550881737.58.0.129017089721.issue36085@roundup.psfhosted.org> New submission from Steve Dower : So the fundamental problem is that the default DLL search path on Windows changes in various contexts, and the only consistent approach is the most difficult to support with current packaging tools. The result is .pyd files that need to resolve .dll dependencies from directories *other* than where the .pyd file is located. Here's a generic scenario: * my_package.subpackage1.my_module is implemented as my_package/subpackage1/my_module.pyd * my_package.subpackage2.my_module is implemented as my_package/subpackage2/my_module.pyd * my_module.pyd in both cases depends on HelperLib.dll * both modules must end up with the same instance of HelperLib.dll While there are various ways for my_modules.pyd to locate HelperLib.dll, the only totally reliable way is to put HelperLib.dll alongside my_module.pyd. However, because it is needed twice, this means two copies of the DLL, which is unacceptable. With Python 3.8, we are *nearly* dropping support for Windows 7, and I believe we can justify dropping support for Windows 7 without KB2533625 [1], which will have been released over eight years by the time 3.8 releases. This means the DLL search path enhancements are available. Proposal #1: CPython calls SetDefaultDllDirectories() [2] on startup and exposes AddDllDirectory() [3] via the sys or os module. 
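To make Proposal #1 concrete, a hypothetical package __init__.py could register its private DLL directory before importing the extension modules. The os.add_dll_directory() name below is only a guess at how such an API might be exposed; the ".libs" folder and HelperLib.dll follow the scenario above:

```
# my_package/__init__.py -- illustrative sketch only
import os

_libs = os.path.join(os.path.dirname(__file__), ".libs")   # holds HelperLib.dll

if hasattr(os, "add_dll_directory"):    # hypothetical new API wrapping AddDllDirectory()
    os.add_dll_directory(_libs)
else:
    # legacy fallback via PATH, which Proposal #1 would make unnecessary
    os.environ["PATH"] = _libs + os.pathsep + os.environ.get("PATH", "")

# subsequent "from .subpackage1 import my_module" can now resolve HelperLib.dll
```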
This would ensure consistency in DLL search order regardless of security settings, and modules that have their own ".libs" directory have a supported API for adding it to the search path. Past experience of forcing a consistent search path like this is that it has broken many users who expect features like %PATH% to locate DLL dependencies to work. For security reasons, this feature is already deprecated and often disabled (see [4]), so it can't be relied upon, but it makes it impossible for a single package to modify this setting or use the supported method for adding more DLL search directories. Proposal #2: Resolve extension modules by full name Without this proposal, the directory structure looks like: my_package\ -subpackage1\ --__init__.py --my_module.pyd --HelperLib.dll -subpackage2\ --__init__.py --my_module.pyd --HelperLib.dll After this proposal, it could look like: my_package\ -subpackage1 --__init__.py -subpackage2\ --__init__.py -my_package.subpackage1.my_module.pyd -my_package.subpackage2.my_module.pyd -HelperLib.dll Essentially, when searching for modules, allow going up the package hierarchy and locating a fully-qualified name at any level of the import tree. Note that since "import my_package.subpackage1.my_module" implies both "import my_package" and "import my_package.subpackage1", those have to succeed, but then the final part of the import would use subpackage1.__path__ to look for "my_module.pyd" and my_package.__path__ to look for "my_package.subpackage1.my_module.pyd". This allows all extension modules to be co-located in the one (importable) directory, along with a single copy of any shared dependencies. [1]: https://go.microsoft.com/fwlink/p/?linkid=217865 [2]: https://docs.microsoft.com/windows/desktop/api/libloaderapi/nf-libloaderapi-setdefaultdlldirectories [3]: https://docs.microsoft.com/windows/desktop/api/libloaderapi/nf-libloaderapi-adddlldirectory [4]: https://docs.microsoft.com/windows/desktop/Dlls/dynamic-link-library-search-order ---------- assignee: steve.dower components: Windows messages: 336349 nosy: brett.cannon, eric.snow, ncoghlan, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Enable better DLL resolution type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 20:03:02 2019 From: report at bugs.python.org (Jacob Bundgaard) Date: Sat, 23 Feb 2019 01:03:02 +0000 Subject: [New-bugs-announce] [issue36086] Split IDLE into separate feature in Windows installer Message-ID: <1550883782.51.0.779365974532.issue36086@roundup.psfhosted.org> New submission from Jacob Bundgaard : I don't use IDLE to edit Python files, but do use tcl/tk for Python projects on Windows. Therefore, it would be useful for me to be able to install tcl/tk without also installing IDLE. However, in the Windows installer, tcl/tk and IDLE are bundled together into one feature. Splitting them into two features (the IDLE feature requiring the tcl/tk one) would reduce installation time, storage use, Explorer context menu cluttering, etc. for users like me. 
---------- components: Installation messages: 336352 nosy: kimsey0 priority: normal severity: normal status: open title: Split IDLE into separate feature in Windows installer type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 20:44:30 2019 From: report at bugs.python.org (Tony Hammack) Date: Sat, 23 Feb 2019 01:44:30 +0000 Subject: [New-bugs-announce] [issue36087] ThreadPoolExecutor max_workers none issue Message-ID: <1550886270.51.0.0223111900331.issue36087@roundup.psfhosted.org> New submission from Tony Hammack : ThreadPoolExecutor(max_workers=None) throws exception when it should not. Inconsistent with 3.4 documentation. If max_workers=None, then it should use the amount of cpus as threadcount. ---------- components: Library (Lib) messages: 336354 nosy: Tony Hammack priority: normal severity: normal status: open title: ThreadPoolExecutor max_workers none issue type: crash versions: Python 3.4 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 21:17:51 2019 From: report at bugs.python.org (Liyu Gong) Date: Sat, 23 Feb 2019 02:17:51 +0000 Subject: [New-bugs-announce] [issue36088] zipfile cannot handle zip in zip Message-ID: <1550888271.02.0.0237674201375.issue36088@roundup.psfhosted.org> New submission from Liyu Gong : Suppose a.zip is z zip file containing 'abc/def/1.txt' zf = zipfile.ZipFile('a.zip') memf = zf.open('abc/def/1.txt', 'r') zf2 = zipfile.ZipFile(memf) will raise an error. However, when a.zip is a tar file containing 'abc/def/1.txt', the following codes tf = tarfile.open('a.zip') memf = tf.open('abc/def/1.txt', 'r') zf2 = zipfile.ZipFile(memf) works well. Is it a known issue? Thanks! ---------- files: a.zip messages: 336356 nosy: liyugong priority: normal severity: normal status: open title: zipfile cannot handle zip in zip versions: Python 3.6 Added file: https://bugs.python.org/file48164/a.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 22:02:12 2019 From: report at bugs.python.org (Alan Grgic) Date: Sat, 23 Feb 2019 03:02:12 +0000 Subject: [New-bugs-announce] [issue36089] Formatting/Spelling errors in SimpleHTTPServer docs Message-ID: <1550890932.86.0.256234354239.issue36089@roundup.psfhosted.org> New submission from Alan Grgic : The warning heading near the top of https://docs.python.org/2/library/simplehttpserver.html contains improperly formatted text and misspells SimpleHTTPServer as 'SimpleHTTServer'. ---------- assignee: docs at python components: Documentation messages: 336358 nosy: Alan Grgic, docs at python priority: normal severity: normal status: open title: Formatting/Spelling errors in SimpleHTTPServer docs versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Feb 22 22:33:27 2019 From: report at bugs.python.org (=?utf-8?b?5pmv5LqR6bmP?=) Date: Sat, 23 Feb 2019 03:33:27 +0000 Subject: [New-bugs-announce] [issue36090] spelling error in PEP219 introduction Message-ID: <1550892807.64.0.358542044817.issue36090@roundup.psfhosted.org> New submission from ??? : https://www.python.org/dev/peps/pep-0219/#introduction paragraph2 in more that a year? or in more than a year? ---------- messages: 336359 nosy: ??? 
priority: normal severity: normal status: open title: spelling error in PEP219 introduction _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 01:07:17 2019 From: report at bugs.python.org (Henry Chen) Date: Sat, 23 Feb 2019 06:07:17 +0000 Subject: [New-bugs-announce] [issue36091] clean up async generator from types module Message-ID: <1550902037.39.0.489582604413.issue36091@roundup.psfhosted.org> New submission from Henry Chen : the following script: ``` import sys, types def tr(frame, event, arg): print(frame, event, arg) return tr sys.settrace(tr) ``` gives the output: ``` call None exception (, GeneratorExit(), ) return None ``` This is due to Lib/types.py creating an async generator for the sole purpose of getting its type. I'll remove that reference after use to prevent the above message, which is probably benign but perhaps unnerving. ---------- components: Library (Lib) messages: 336368 nosy: scotchka priority: normal severity: normal status: open title: clean up async generator from types module type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 01:51:44 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 23 Feb 2019 06:51:44 +0000 Subject: [New-bugs-announce] [issue36092] unittest.mock's patch.object and patch.dict are not supported on classmethod, propery and staticmethod Message-ID: <1550904704.81.0.829273897144.issue36092@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : While looking into the unittest.mock tests I came across test_patch_descriptor [0] which makes an early return since patch.object and patch.dict are not supported on staticmethod, classmethod and property as noted in the comment. The tests still fail on master. The test was added during initial addition of mock to stdlib (commit 345266aa7e7) and I couldn't find any issues related to this. So I am filing this if someones wants to fix it. 
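For anyone needing this today, the usual hand-rolled workaround is to give patch.object an explicitly wrapped replacement so the descriptor type stays intact while the patch is active; whether the cases that test_patch_descriptor expects can be made to work without such wrapping is what this issue is about. A minimal sketch:

```
from unittest import mock

class Calc:
    @staticmethod
    def double(x):
        return 2 * x

with mock.patch.object(Calc, "double", staticmethod(lambda x: 42)):
    assert Calc.double(1) == 42    # patched, still a staticmethod
assert Calc.double(1) == 2         # original descriptor restored on exit
```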
[0] https://github.com/python/cpython/blob/175421b58cc97a2555e474f479f30a6c5d2250b0/Lib/unittest/test/testmock/testpatch.py#L667 ---------- components: Library (Lib) messages: 336372 nosy: cjw296, mariocj89, michael.foord, xtreak priority: normal severity: normal status: open title: unittest.mock's patch.object and patch.dict are not supported on classmethod, propery and staticmethod type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 02:59:18 2019 From: report at bugs.python.org (Windson Yang) Date: Sat, 23 Feb 2019 07:59:18 +0000 Subject: [New-bugs-announce] [issue36093] UnicodeEncodeError raise from smtplib.verify() method Message-ID: <1550908758.26.0.552141628876.issue36093@roundup.psfhosted.org> New submission from Windson Yang : AFAIK, the email address should support non-ASCII character (from https://stackoverflow.com/questions/760150/can-an-email-address-contain-international-non-english-characters and SMTPUTF8 option from https://docs.python.org/3/library/smtplib.html#smtplib.SMTP.sendmail) >>> import smtplib >>> s = smtplib.SMTP(host='smtp-mail.outlook.com', port=587) >>> s.verify('??@outlook.com') Traceback (most recent call last): File "", line 1, in File "/Users/windson/learn/cpython/Lib/smtplib.py", line 577, in verify self.putcmd("vrfy", _addr_only(address)) File "/Users/windson/learn/cpython/Lib/smtplib.py", line 367, in putcmd self.send(str) File "/Users/windson/learn/cpython/Lib/smtplib.py", line 352, in send s = s.encode(self.command_encoding) UnicodeEncodeError: 'ascii' codec can't encode characters in position 5-6: ordinal not in range(128) I found this issue when I updating https://github.com/python/cpython/pull/8938/files ---------- components: Unicode messages: 336374 nosy: Windson Yang, ezio.melotti, vstinner priority: normal severity: normal status: open title: UnicodeEncodeError raise from smtplib.verify() method type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 12:44:01 2019 From: report at bugs.python.org (=?utf-8?b?6LW15Yab5pix?=) Date: Sat, 23 Feb 2019 17:44:01 +0000 Subject: [New-bugs-announce] [issue36094] When using an SMTP SSL connection, , get ValueError. Message-ID: <1550943841.97.0.129283187576.issue36094@roundup.psfhosted.org> New submission from ??? <200612453 at qq.com>: The following bug occurs when you connect after creating an instance of SMTP_SSL: ``` import smtplib smtp_server = "smtp.163.com" con2 = smtplib.SMTP_SSL() con2.connect(smtp_server, 465) ``` ValueError: server_hostname cannot be an empty string or start with a leading dot. File "E:\code\noUse.py", line 8, in con2.connect(smtp_server, 465) ---------- components: email files: 1.png messages: 336393 nosy: barry, r.david.murray, tyrone-zhao priority: normal severity: normal status: open title: When using an SMTP SSL connection,, get ValueError. type: resource usage versions: Python 3.7 Added file: https://bugs.python.org/file48167/1.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 14:39:17 2019 From: report at bugs.python.org (Brandt Bucher) Date: Sat, 23 Feb 2019 19:39:17 +0000 Subject: [New-bugs-announce] [issue36095] Better NaN sorting. 
Message-ID: <1550950757.55.0.405574866511.issue36095@roundup.psfhosted.org> New submission from Brandt Bucher : Sorting sequences containing NaN values produces an incompletely sorted result. Further, because of the complexity of the timsort, this incomplete sort often silently produces unintuitive, unstable-seeming results that are extremely sensitive to the ordering of the inputs: >>> sorted([3, 1, 2, float('nan'), 2.0, 2, 2.0]) [1, 2, 2.0, 2.0, 3, nan, 2] >>> sorted(reversed([3, 1, 2, float('nan'), 2.0, 2, 2.0])) [1, 2.0, 2, 2.0, nan, 2, 3] The patch I have provided addresses these issues, including for lists containing nested lists/tuples with NaN values. Specifically, it stably sorts NaNs to the end of the list with no changes to the timsort itself (just the element-wise comparison functions): >>> sorted([3, 1, 2, float('nan'), 2.0, 2, 2.0]) [1, 2, 2.0, 2, 2.0, 3, nan] >>> sorted([[3], [1], [2], [float('nan')], [2.0], [2], [2.0]]) [[1], [2], [2.0], [2], [2.0], [3], [nan]] It also includes a new regression test for this behavior. Some other benefits to this patch: * These changes generally result in a sorting performance improvement across data types. The largest increases here are for nested lists, since we add a new unsafe_list_compare function. Other speed increases are due to safe_object_compare's delegation to unsafe comparison functions for objects of the same type. Specifically, the speed impact (positive is faster, negative is slower) is between: * -3% and +3% (10 elements, no PGO) * 0% and +4% (10 elements, PGO) * 0% and +9% (1000 elements, no PGO) * -1% and +9% (1000 elements, PGO) * The current weird NaN-sorting behavior is not documented, so this is not a breaking change. * IEEE754 compliance is maintained. The result is still a stable (arguably, more stable), nondecreasing ordering of the original list. ---------- components: Interpreter Core messages: 336401 nosy: brandtbucher priority: normal severity: normal status: open title: Better NaN sorting. type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 17:53:18 2019 From: report at bugs.python.org (Cheryl Sabella) Date: Sat, 23 Feb 2019 22:53:18 +0000 Subject: [New-bugs-announce] [issue36096] IDLE: Refactor class variables to instance variables in colorizer Message-ID: <1550962398.99.0.0484672984397.issue36096@roundup.psfhosted.org> New submission from Cheryl Sabella : >From Terry's comment on PR11472 for issue 35689: > I don't like the use of class variables to initialize volatile instance state variables. I think it confuses the code a bit. Better, I think, to put them in an `init_state` method called from `__init__`. (I am not sure if method could be used in tests.) Since the tests do not access the class vars directly on the class, they should not be affected. I am leaving this minor refactoring for another issue after merging the tests. Then we can modify the class docstring. 
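For illustration, a minimal sketch of the refactoring pattern described above (the attribute names are stand-ins, not necessarily the colorizer's actual state variables): volatile per-instance state moves out of class variables and into an `init_state()` method called from `__init__()`.

```
class ColorDelegatorSketch:
    def __init__(self):
        self.init_state()

    def init_state(self):
        # Previously these would have been class variables doubling as
        # default instance state; now each instance sets up its own copies.
        self.after_id = None
        self.allow_colorizing = True
        self.colorizing = False
```

A side benefit of this arrangement is that tests can call `init_state()` directly to reset an instance between test cases.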
---------- assignee: terry.reedy components: IDLE messages: 336415 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Refactor class variables to instance variables in colorizer type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 18:48:58 2019 From: report at bugs.python.org (Eric Snow) Date: Sat, 23 Feb 2019 23:48:58 +0000 Subject: [New-bugs-announce] [issue36097] Use only public C-API in _xxsubinterpreters module. Message-ID: <1550965738.11.0.154386205273.issue36097@roundup.psfhosted.org> New submission from Eric Snow : After discussions about our use of the public C-API in stdlib extension modules, I realized that I'd written the _xxsubinterpreters module using internal C-API. However, there's no reason not to stick to the public C-API. Fixing this will require adding a few new "private" functions. ---------- assignee: eric.snow components: Interpreter Core messages: 336419 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: Use only public C-API in _xxsubinterpreters module. versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 19:22:05 2019 From: report at bugs.python.org (MultiSosnooley) Date: Sun, 24 Feb 2019 00:22:05 +0000 Subject: [New-bugs-announce] [issue36098] asyncio: ssl client-server with "slow" read Message-ID: <1550967725.88.0.520289067216.issue36098@roundup.psfhosted.org> New submission from MultiSosnooley : Recently, some awesome contributors added support for implicit ssl mode for aioftp. This produced some issues, and one of them is relevant not to aioftp but to asyncio. Here is a link https://repl.it/@multisosnooley/asyncio-ssl-connection-with-slow-retreiving to reproduce. This code mimics one test of aioftp: the filesystem is slower than the socket write. The reduced reproduction code uses sleep for this purpose. So we have a proper run without ssl (you can test this by removing the ssl argument in the server and client instantiation) and a "buggy" one in the case of ssl. After some digging I realised that something calls connection_lost of the ssl protocol before we have read all the data from the transport. The data is sent by the server, but the client has a buffer (looks like 64k), and since the transport is already gone the read fails. Also, I can say (with the help of wireshark) that in the case of a non-ssl connection the tcp FIN is initiated by the server (as it should be), but in the case of ssl the FIN is sent by the client.
Direct code to reproduce ``` import ssl import asyncio import trustme HOST = "127.0.0.1" PORT = 8021 ca = trustme.CA() server_cert = ca.issue_server_cert(HOST) ssl_server = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) server_cert.configure_cert(ssl_server) ssl_client = ssl.create_default_context(ssl.Purpose.SERVER_AUTH) ca.configure_trust(ssl_client) async def handler(reader, writer): writer.write(b"0" * 4 * 100 * 1024) await writer.drain() writer.close() await asyncio.sleep(1) print("handler done") async def client(): # reader, writer = await asyncio.open_connection(HOST, PORT) reader, writer = await asyncio.open_connection(HOST, PORT, ssl=ssl_client) count = 0 while True: data = await reader.read(8192) count += len(data) print(f"received {len(data)}, total {count}") if not data: break await asyncio.sleep(0.001) writer.close() loop = asyncio.get_event_loop() # start = asyncio.start_server(handler, HOST, PORT) start = asyncio.start_server(handler, HOST, PORT, ssl=ssl_server) server = loop.run_until_complete(start) loop.run_until_complete(client()) server.close() loop.run_until_complete(server.wait_closed()) ``` Output: ``` received 8192, total 8192 received 8192, total 16384 received 8192, total 24576 received 8192, total 32768 received 8192, total 40960 received 8192, total 49152 received 8192, total 57344 received 8192, total 65536 received 8192, total 73728 received 8192, total 81920 received 8192, total 90112 received 8192, total 98304 received 8192, total 106496 received 8192, total 114688 received 8192, total 122880 received 8192, total 131072 received 8192, total 139264 received 8192, total 147456 received 8192, total 155648 received 8192, total 163840 received 8192, total 172032 received 8192, total 180224 received 8192, total 188416 received 8192, total 196608 received 8192, total 204800 received 8192, total 212992 received 8192, total 221184 received 8192, total 229376 received 8192, total 237568 received 8192, total 245760 received 8192, total 253952 received 8192, total 262144 received 8192, total 270336 received 8192, total 278528 received 8192, total 286720 received 8192, total 294912 received 8192, total 303104 received 8192, total 311296 received 8192, total 319488 received 8192, total 327680 received 8192, total 335872 Traceback (most recent call last): File "slow-data-test.py", line 46, in loop.run_until_complete(client()) File "/home/poh/.pyenv/versions/3.6.7/lib/python3.6/asyncio/base_events.py", line 473, in run_until_complete return future.result() File "slow-data-test.py", line 33, in client data = await reader.read(8192) File "/home/poh/.pyenv/versions/3.6.7/lib/python3.6/asyncio/streams.py", line 640, in read self._maybe_resume_transport() File "/home/poh/.pyenv/versions/3.6.7/lib/python3.6/asyncio/streams.py", line 408, in _maybe_resume_transport self._transport.resume_reading() File "/home/poh/.pyenv/versions/3.6.7/lib/python3.6/asyncio/sslproto.py", line 351, in resume_reading self._ssl_protocol._transport.resume_reading() AttributeError: 'NoneType' object has no attribute 'resume_reading' ``` ---------- components: asyncio messages: 336420 nosy: MultiSosnooley, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio: ssl client-server with "slow" read type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Feb 23 22:39:59 2019 From: report at bugs.python.org (Steven D'Aprano) Date: Sun, 24 Feb 
2019 03:39:59 +0000 Subject: [New-bugs-announce] [issue36099] Clarify the difference between mu and xbar in the statistics documentation Message-ID: <1550979599.93.0.0625062939228.issue36099@roundup.psfhosted.org> New submission from Steven D'Aprano : The documentation isn't clear as to the difference between mu and xbar, and why one is used in variance and the other in pvariance. See #36018 for discussion. For the record: mu or μ is the population parameter, i.e. the mean of the entire population, if you could average every individual. xbar or x̄ is the mean of a sample. ---------- assignee: docs at python components: Documentation messages: 336424 nosy: docs at python, steven.daprano priority: normal severity: normal status: open title: Clarify the difference between mu and xbar in the statistics documentation type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 24 04:02:22 2019 From: report at bugs.python.org (Marcos Dione) Date: Sun, 24 Feb 2019 09:02:22 +0000 Subject: [New-bugs-announce] [issue36100] int() and float() should accept any isnumeric() digit Message-ID: <1550998942.15.0.168903018656.issue36100@roundup.psfhosted.org> New submission from Marcos Dione : Following https://blog.lerner.co.il/pythons-str-isdigit-vs-str-isnumeric/, we have this: Python 3.8.0a1+ (heads/master:001fee14e0, Feb 20 2019, 08:28:02) [GCC 8.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> '?????'.isnumeric() True >>> int('?????') Traceback (most recent call last): File "", line 1, in ValueError: invalid literal for int() with base 10: '?????' >>> float('?????') Traceback (most recent call last): File "", line 1, in ValueError: could not convert string to float: '?????' I think Reuven is right, these should be accepted as input. I just wonder if we should do the same for f.i. roman numerics...
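The exact characters in the session above were lost in transit (they show up as '?'), but the distinction the report relies on can be shown with other code points; the examples below are stand-ins, not the reporter's original string.

```
>>> '١٢٣'.isnumeric()   # Arabic-Indic digits, Unicode category Nd
True
>>> int('١٢٣')          # int() already accepts any decimal digits
123
>>> '½'.isnumeric()     # numeric, but not a decimal digit
True
>>> int('½')
Traceback (most recent call last):
  ...
ValueError: invalid literal for int() with base 10: '½'
```

Roughly speaking, int() and float() currently accept only characters that are decimal digits (category Nd), while str.isnumeric() also covers fractions, CJK numerals and the like; the request here is to widen the former to match the latter.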
---------- components: Library (Lib) messages: 336451 nosy: StyXman priority: normal severity: normal status: open title: int() and float() should accept any isnumeric() digit type: behavior versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Feb 24 08:36:09 2019 From: report at bugs.python.org (Ma Lin) Date: Sun, 24 Feb 2019 13:36:09 +0000 Subject: [New-bugs-announce] [issue36101] remove non-ascii characters in docstring Message-ID: <1551015369.9.0.190424926201.issue36101@roundup.psfhosted.org> New submission from Ma Lin : replace ?(\u2019) with '(\x27) ---------- assignee: docs at python components: Documentation messages: 336468 nosy: Ma Lin, docs at python priority: normal severity: normal status: open title: remove non-ascii characters in docstring versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 04:03:37 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 25 Feb 2019 09:03:37 +0000 Subject: [New-bugs-announce] [issue36102] TestSharedMemory fails on AMD64 FreeBSD CURRENT Shared 3.x Message-ID: <1551085417.13.0.962588202568.issue36102@roundup.psfhosted.org> New submission from STINNER Victor : TestSharedMemory fails on AMD64 FreeBSD CURRENT Shared 3.x: * The tests should be skipped on this buildbot worker * Or the feature should be fixed on FreeBSD https://buildbot.python.org/all/#/builders/168/builds/617 Example: ERROR: test_shared_memory_ShareableList_basics (test.test_multiprocessing_forkserver.WithProcessesTestSharedMemory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/test/_test_multiprocessing.py", line 3770, in test_shared_memory_ShareableList_basics sl = shared_memory.ShareableList( File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/multiprocessing/shared_memory.py", line 305, in __init__ self.shm = SharedMemory(name, create=True, size=requested_size) File "/usr/home/buildbot/python/3.x.koobs-freebsd-current/build/Lib/multiprocessing/shared_memory.py", line 88, in __init__ self._fd = _posixshmem.shm_open( OSError: [Errno 22] Invalid argument: 'psm_f66c2bc7b9' Failures: ERROR: test_shared_memory_ShareableList_basics (test.test_multiprocessing_forkserver.WithProcessesTestSharedMemory) ERROR: test_shared_memory_ShareableList_pickling (test.test_multiprocessing_forkserver.WithProcessesTestSharedMemory) ERROR: test_shared_memory_SharedMemoryManager_basics (test.test_multiprocessing_forkserver.WithProcessesTestSharedMemory) ERROR: test_shared_memory_across_processes (test.test_multiprocessing_forkserver.WithProcessesTestSharedMemory) ERROR: test_shared_memory_basics (test.test_multiprocessing_forkserver.WithProcessesTestSharedMemory) ERROR: test_shared_memory_ShareableList_basics (test.test_multiprocessing_fork.WithProcessesTestSharedMemory) ERROR: test_shared_memory_ShareableList_pickling (test.test_multiprocessing_fork.WithProcessesTestSharedMemory) ERROR: test_shared_memory_SharedMemoryManager_basics (test.test_multiprocessing_fork.WithProcessesTestSharedMemory) ERROR: test_shared_memory_across_processes (test.test_multiprocessing_fork.WithProcessesTestSharedMemory) ERROR: test_shared_memory_basics 
(test.test_multiprocessing_fork.WithProcessesTestSharedMemory) ERROR: test_shared_memory_ShareableList_basics (test.test_multiprocessing_spawn.WithProcessesTestSharedMemory) ERROR: test_shared_memory_ShareableList_pickling (test.test_multiprocessing_spawn.WithProcessesTestSharedMemory) ERROR: test_shared_memory_SharedMemoryManager_basics (test.test_multiprocessing_spawn.WithProcessesTestSharedMemory) ERROR: test_shared_memory_across_processes (test.test_multiprocessing_spawn.WithProcessesTestSharedMemory) ERROR: test_shared_memory_basics (test.test_multiprocessing_spawn.WithProcessesTestSharedMemory) ---------- components: Tests messages: 336503 nosy: davin, giampaolo.rodola, pitrou, vstinner priority: normal severity: normal status: open title: TestSharedMemory fails on AMD64 FreeBSD CURRENT Shared 3.x versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 04:27:56 2019 From: report at bugs.python.org (Inada Naoki) Date: Mon, 25 Feb 2019 09:27:56 +0000 Subject: [New-bugs-announce] [issue36103] Increase Message-ID: <1551086876.41.0.525070679929.issue36103@roundup.psfhosted.org> Change by Inada Naoki : ---------- nosy: inada.naoki priority: normal severity: normal status: open title: Increase _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 05:50:03 2019 From: report at bugs.python.org (=?utf-8?q?=C5=81ukasz_Langa?=) Date: Mon, 25 Feb 2019 10:50:03 +0000 Subject: [New-bugs-announce] [issue36104] test_httplib and test_nntplib fail on ARMv7 Ubuntu Message-ID: <1551091803.08.0.0253234252542.issue36104@roundup.psfhosted.org> New submission from ?ukasz Langa : The ARMv7 Ubuntu buildbot is consistently failing since build #2160: https://buildbot.python.org/all/#/builders/106/builds/2160 This looks like a testing environment issue to me rather than a code issue. But I'd like it fixed either way before we get to 3.8.0 beta1 since this is a stable builder. Greg, you can ask Inadasan about whether his dict/OrderedDict changes might have any effect on this failure: https://github.com/python/cpython/commit/c95404ff65dab1469dcd1dfec58ba54a8e7e7b3a That was the only relevant change I observed between the working and the broken build. 
The NNTP test failure looks like this: ====================================================================== ERROR: setUpClass (test.test_nntplib.NetworkedNNTP_SSLTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/test/test_nntplib.py", line 295, in setUpClass cls.server = cls.NNTP_CLASS(cls.NNTP_HOST, timeout=TIMEOUT, File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/nntplib.py", line 1077, in __init__ self.sock = _encrypt_on(self.sock, ssl_context, host) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/nntplib.py", line 292, in _encrypt_on return context.wrap_socket(sock, server_hostname=hostname) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 405, in wrap_socket return self.sslsocket_class._create( File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 853, in _create self.do_handshake() File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 1117, in do_handshake self._sslobj.do_handshake() ssl.SSLError: [SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:1055) The HTTP test failure looks like this: ====================================================================== ERROR: test_networked_good_cert (test.test_httplib.HTTPSTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/test/test_httplib.py", line 1629, in test_networked_good_cert h.request('GET', '/') File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1229, in request self._send_request(method, url, body, headers, encode_chunked) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1275, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1224, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1016, in _send_output self.send(msg) File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 956, in send self.connect() File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/http/client.py", line 1391, in connect self.sock = self._context.wrap_socket(self.sock, File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 405, in wrap_socket return self.sslsocket_class._create( File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 853, in _create self.do_handshake() File "/ssd/buildbot/buildarea/3.x.gps-ubuntu-exynos5-armv7l/build/Lib/ssl.py", line 1117, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: EE certificate key too weak (_ssl.c:1055) ---------- assignee: gregory.p.smith messages: 336511 nosy: gregory.p.smith, inada.naoki, lukasz.langa, zach.ware priority: deferred blocker severity: normal stage: needs patch status: open title: test_httplib and test_nntplib fail on ARMv7 Ubuntu type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 
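Both tracebacks are consistent with a distribution OpenSSL that enforces a higher default security level, which rejects small DH parameters and weak certificate keys; that is an assumption on my part, not something the buildbot logs confirm. If that is the cause, one way to check locally is to relax the level for a test context:

```
import ssl

ctx = ssl.create_default_context()
# OpenSSL 1.1.x cipher-string syntax; SECLEVEL=1 re-allows 1024-bit DH keys
# and weaker certificate keys that SECLEVEL=2 rejects.
ctx.set_ciphers("DEFAULT:@SECLEVEL=1")
```

If the handshakes then succeed, the problem is the remote servers' key sizes and the worker's OpenSSL policy rather than the interpreter change mentioned above.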
05:57:16 2019 From: report at bugs.python.org (Giampaolo Rodola') Date: Mon, 25 Feb 2019 10:57:16 +0000 Subject: [New-bugs-announce] [issue36105] Windows: use GetNativeSystemInfo instead of GetSystemInfo Message-ID: <1551092236.86.0.52473881575.issue36105@roundup.psfhosted.org> New submission from Giampaolo Rodola' : This is what MS doc says about GetSystemInfo: https://docs.microsoft.com/en-us/windows/desktop/api/sysinfoapi/nf-sysinfoapi-getsysteminfo <> $ grep -r GetSystemInfo Modules/_ctypes/malloc_closure.c: GetSystemInfo(&systeminfo); Modules/mmapmodule.c: GetSystemInfo(&si); Modules/mmapmodule.c: GetSystemInfo(&si); Modules/posixmodule.c: GetSystemInfo(&sysinfo); ---------- components: Windows messages: 336512 nosy: eryksun, giampaolo.rodola, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows: use GetNativeSystemInfo instead of GetSystemInfo versions: Python 2.7, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 08:47:03 2019 From: report at bugs.python.org (Dmitrii Pasechnik) Date: Mon, 25 Feb 2019 13:47:03 +0000 Subject: [New-bugs-announce] [issue36106] resolve sinpi() name clash with libm Message-ID: <1551102423.19.0.037549170544.issue36106@roundup.psfhosted.org> New submission from Dmitrii Pasechnik : The standard math library (libm) may follow IEEE-754 recommendation to include an implementation of sinPi(), i.e. sinPi(x):=sin(pi*x). And this triggers a name clash, found by FreeBSD developer Steve Kargl, who worken on putting sinpi into libm used on FreeBSD (it has to be named "sinpi", not "sinPi", cf. e.g. https://en.cppreference.com/w/c/experimental/fpext4) ---------- components: Extension Modules messages: 336519 nosy: dimpase priority: normal pull_requests: 12059 severity: normal status: open title: resolve sinpi() name clash with libm type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 11:58:04 2019 From: report at bugs.python.org (Natanael Copa) Date: Mon, 25 Feb 2019 16:58:04 +0000 Subject: [New-bugs-announce] [issue36107] aarch64 python3 buffer overflow with stack protector on rpi3 (alpine linux) Message-ID: <1551113884.84.0.457828150843.issue36107@roundup.psfhosted.org> New submission from Natanael Copa : Alpine Linux's python 3.6.8 native build on aarch64 gets killed by stack protector when run on Raspberry Pi 3. It does not happen when same binary runs on packet.net's aarch64 machine. I was able to get a backtrace by copying the core. Core was generated by `python3'. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x0000007f86e85d9c in a_crash () at ./src/internal/atomic.h:250 250 ./src/internal/atomic.h: No such file or directory. 
(gdb) bt #0 0x0000007f86e85d9c in a_crash () at ./src/internal/atomic.h:250 #1 __stack_chk_fail () at src/env/__stack_chk_fail.c:17 #2 0x0000007f86cbc068 in _PyObject_CallMethodId_SizeT (o=o at entry=0x7f86bb1a98, name=name at entry=0x7f86e1cb88 , format=format at entry=0x0) at Objects/abstract.c:2677 #3 0x0000007f86d2fbb0 in _io_TextIOWrapper___init___impl (write_through=0, line_buffering=1, newline=, errors=0x7f86d6d810 "strict", encoding=, buffer=0x7f86bb1a98, self=) at ./Modules/_io/textio.c:1017 #4 _io_TextIOWrapper___init__ (self=0x7f86b5e630, args=, kwargs=) at ./Modules/_io/clinic/textio.c.h:173 #5 0x0000007f86cabf94 in type_call (type=, args=0x7f86b2c0a0, kwds=0x0) at Objects/typeobject.c:915 #6 0x0000007f86c5f25c in PyObject_Call (func=0x7f86e083b0 , args=, kwargs=kwargs at entry=0x0) at Objects/abstract.c:2261 #7 0x0000007f86c5f30c in call_function_tail (callable=callable at entry=0x7f86e083b0 , args=, args at entry=0x7f86b2c0a0) at Objects/abstract.c:2512 #8 0x0000007f86c96d5c in callmethod (func=func at entry=0x7f86e083b0 , format=format at entry=0x7f86d7d4de "OsssO", va=..., is_size_t=is_size_t at entry=0) at Objects/abstract.c:2596 #9 0x0000007f86cbc8c8 in _PyObject_CallMethodId (o=o at entry=0x7f86adf098, name=name at entry=0x7f86e1e0c0 , format=format at entry=0x7f86d7d4de "OsssO") at Objects/abstract.c:2640 #10 0x0000007f86cd8dec in create_stdio (io=, fd=, write_mode=, name=, encoding=, errors=, io=, fd=, write_mode=, name=, encoding=, errors=) at Python/pylifecycle.c:1154 #11 0x0000007f86cd91b4 in initstdio () at Python/pylifecycle.c:1277 #12 0x0000007f86d419cc in _Py_InitializeEx_Private (install_sigs=, install_importlib=, install_sigs=, install_importlib=) at Python/pylifecycle.c:449 #13 0x0000007f86d41a70 in Py_InitializeEx (install_sigs=install_sigs at entry=1) at Python/pylifecycle.c:470 #14 0x0000007f86d41a78 in Py_Initialize () at Python/pylifecycle.c:476 #15 0x0000007f86d42c74 in Py_Main (argc=1, argv=0x7f86f10f60) at Modules/main.c:700 #16 0x000000558291db34 in main (argc=1, argv=0x7feb5b3e48) at ./Programs/python.c:69 Downstream reports: https://bugs.alpinelinux.org/issues/9981 https://github.com/gliderlabs/docker-alpine/issues/486 ---------- components: Interpreter Core files: strace.out messages: 336540 nosy: Natanael Copa priority: normal severity: normal status: open title: aarch64 python3 buffer overflow with stack protector on rpi3 (alpine linux) versions: Python 3.6 Added file: https://bugs.python.org/file48168/strace.out _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 14:32:52 2019 From: report at bugs.python.org (Steve Dower) Date: Mon, 25 Feb 2019 19:32:52 +0000 Subject: [New-bugs-announce] [issue36108] Avoid failing the build on race condition in clean Message-ID: <1551123172.45.0.629021519376.issue36108@roundup.psfhosted.org> New submission from Steve Dower : In PCbuild/openssl.props there is a task that deletes some copied files. This *might* fail on rebuilds where a Python process from a previous build has been running (for a very specific example, the rebuild as part of the release build triggered this). We shouldn't need to fail the build in this case, so change the "TreatErrorsAsWarnings" attribute to "false" (Note that this is not fixed by the KillPython step if the still-running executable is "python.exe" but we're building "python_d.exe" in the same folder. But warning and continuing is going to be fine for these particular files.) 
---------- components: Build, Windows keywords: easy (C) messages: 336549 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Avoid failing the build on race condition in clean type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 16:15:44 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Mon, 25 Feb 2019 21:15:44 +0000 Subject: [New-bugs-announce] [issue36109] test_descr fails on AMD64 Windows8 3.x buildbots Message-ID: <1551129344.76.0.345137125147.issue36109@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : https://buildbot.python.org/all/#/builders/32/builds/2198 ====================================================================== ERROR: test_vicious_descriptor_nonsense (test.test_descr.ClassPropertiesAndMethods) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\buildarea\3.x.bolen-windows8\build\lib\test\test_descr.py", line 4341, in test_vicious_descriptor_nonsense self.assertEqual(c.attr, 1) File "D:\buildarea\3.x.bolen-windows8\build\lib\test\test_descr.py", line 4328, in __eq__ del C.attr AttributeError: attr ---------------------------------------------------------------------- Ran 140 tests in 7.836s ---------- components: Tests, Windows messages: 336554 nosy: pablogsal, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_descr fails on AMD64 Windows8 3.x buildbots versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 16:45:42 2019 From: report at bugs.python.org (STINNER Victor) Date: Mon, 25 Feb 2019 21:45:42 +0000 Subject: [New-bugs-announce] [issue36110] test_descr: test_vicious_descriptor_nonsense() fails randomly Message-ID: <1551131142.81.0.685289347637.issue36110@roundup.psfhosted.org> New submission from STINNER Victor : test_vicious_descriptor_nonsense() started to fails randomly: $ ./python -m test -j0 -F test_descr test_descr test_descr test_descr test_descr test_descr Run tests in parallel using 10 child processes 0:00:03 load avg: 1.24 [1/6] test_descr passed 0:00:03 load avg: 1.70 [2/6/1] test_descr failed test test_descr failed -- Traceback (most recent call last): File "/home/vstinner/prog/python/master/Lib/test/test_descr.py", line 4341, in test_vicious_descriptor_nonsense self.assertEqual(c.attr, 1) File "/home/vstinner/prog/python/master/Lib/test/test_descr.py", line 4328, in __eq__ del C.attr AttributeError: attr 0:00:03 load avg: 1.70 [3/6/1] test_descr passed 0:00:04 load avg: 1.70 [4/6/1] test_descr passed 0:00:04 load avg: 1.70 [5/6/1] test_descr passed 0:00:04 load avg: 1.70 [6/6/1] test_descr passed == Tests result: FAILURE == 5 tests OK. 1 test failed: test_descr Total duration: 4 sec 213 ms Tests result: FAILURE -- It seems like it's a regression introduced by this change: a24107b04c1277e3c1105f98aff5bfa3a98b33a0 is the first bad commit commit a24107b04c1277e3c1105f98aff5bfa3a98b33a0 Author: Serhiy Storchaka Date: Mon Feb 25 17:59:46 2019 +0200 bpo-35459: Use PyDict_GetItemWithError() instead of PyDict_GetItem(). 
(GH-11112) :040000 040000 da22597a443c84fb29588a3557f5dd04e292a1cc fb73df9fbfdc1893e9f0bde9fbc6ab6febabbe8f M Include :040000 040000 4b26a84a0e3a813470e34ddef29596da41d3d28f ca6192ea98e014434a32e2a114e42b297408ce00 M Modules :040000 040000 b2dd7d4e832c64ba44781a34093c5d69ea127932 26bfeced0b5776634051ed57d701361252c6de68 M Objects :040000 040000 6b5a0bf3a25434ee3de5c9dae3e880217773c0fb 56265d6d2cd8cb92ba47541909af2edec46196e6 M Python ---------- components: Tests messages: 336562 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: test_descr: test_vicious_descriptor_nonsense() fails randomly versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 17:12:02 2019 From: report at bugs.python.org (Enji Cooper) Date: Mon, 25 Feb 2019 22:12:02 +0000 Subject: [New-bugs-announce] [issue36111] Negative `offset` values are no longer acceptable with implementation of `seek` with python3; should be per POSIX Message-ID: <1551132722.44.0.748659725129.issue36111@roundup.psfhosted.org> New submission from Enji Cooper : I tried using os.SEEK_END in a technical interview, but unfortunately, that didn't work with python 3.x: pinklady:cpython ngie$ python3 Python 3.7.2 (default, Feb 12 2019, 08:15:36) [Clang 10.0.0 (clang-1000.11.45.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> fp = open("configure"); fp.seek(-100, os.SEEK_END) Traceback (most recent call last): File "", line 1, in io.UnsupportedOperation: can't do nonzero end-relative seeks It does however work with 2.x, which is aligned with the POSIX spec implementation, as shown below: pinklady:cpython ngie$ python Python 2.7.15 (default, Oct 2 2018, 11:47:18) [GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.2)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> fp = open("configure"); fp.seek(-100, os.SEEK_END) >>> fp.tell() 501076 >>> os.stat("configure").st_size 501176 >>> ---------- components: IO messages: 336564 nosy: yaneurabeya priority: normal severity: normal status: open title: Negative `offset` values are no longer acceptable with implementation of `seek` with python3; should be per POSIX _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 18:21:50 2019 From: report at bugs.python.org (Totte Karlsson) Date: Mon, 25 Feb 2019 23:21:50 +0000 Subject: [New-bugs-announce] [issue36112] os.path.realpath on windows and substed drives Message-ID: <1551136910.4.0.075710439537.issue36112@roundup.psfhosted.org> New submission from Totte Karlsson : Python's os.path.realpath, when used with a substed drive on the Windows platform, don't resolve the substed path to the *real* path. For example creating a substed drive like this: subst z: C:\Users\Public\Desktop and checking for the real path in python like this: import os myPath = "S:\\" print("Real path of: " + myPath + " is: " + os.path.realpath(myPath) ) prints Real path of: S:\ is: S:\ In the docs for the [subst][1] command, a substed drive is referred to as a *virtual* drive. Virtual, suggesting something being "not real", indicates that the Python *realpath* command is not working properly on Windows. 
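For what it is worth, the subst mapping is visible through the Win32 API, so a ctypes-based sketch like the following (drive letter "S:" assumed, matching the Python snippet above; this is a diagnostic aid, not a proposed realpath fix) shows what the virtual drive actually points at:

```
import ctypes

def subst_target(drive="S:"):
    # For a drive created with e.g. "subst S: C:\Users\Public\Desktop",
    # QueryDosDeviceW typically returns something like r"\??\C:\Users\Public\Desktop".
    buf = ctypes.create_unicode_buffer(1024)
    if not ctypes.windll.kernel32.QueryDosDeviceW(drive, buf, len(buf)):
        raise ctypes.WinError()
    return buf.value
```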
[1]: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/subst ---------- components: Windows messages: 336574 nosy: Totte Karlsson, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.path.realpath on windows and substed drives type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Feb 25 23:42:25 2019 From: report at bugs.python.org (N. Srinivasan) Date: Tue, 26 Feb 2019 04:42:25 +0000 Subject: [New-bugs-announce] [issue36113] Problem With SciPy Computation of sigma Message-ID: <1551156145.2.0.780333517951.issue36113@roundup.psfhosted.org> New submission from N. Srinivasan : """ Construct normal probability plot """ import math import scipy.stats as ss import numpy as np import matplotlib.pyplot as plt import seaborn as sns import prettytable from random import seed seed(100) ################ mu = 0.0 sigma = 1.0 x = np.linspace(mu - 5*sigma, mu + 5*sigma, 10) norm_pdf_x= ss.norm.pdf(x,mu,sigma) plt.plot(x,norm_pdf_x) print("The Std. Dev. of normally Dist. x =", np.std(norm_pdf_x)) print() ########################################### Produces a wrong Result for sigma: The Std. Dev. of normally Dist. x = 0.13142071847657633 Should be 1.0 ---------- components: Library (Lib) files: test_normal.py messages: 336604 nosy: nisthesecond priority: normal severity: normal status: open title: Problem With SciPy Computation of sigma versions: Python 3.7 Added file: https://bugs.python.org/file48170/test_normal.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 01:09:51 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 26 Feb 2019 06:09:51 +0000 Subject: [New-bugs-announce] [issue36114] test_multiprocessing_spawn changes the execution environment Message-ID: <1551161391.14.0.479239981135.issue36114@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : OK (skipped=32) Warning -- files was modified by test_multiprocessing_spawn Before: [] After: ['python.core'] https://buildbot.python.org/all/#/builders/168/builds/632/steps/4/logs/stdio ---------- components: Tests messages: 336615 nosy: pablogsal priority: normal severity: normal status: open title: test_multiprocessing_spawn changes the execution environment versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 01:54:50 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 26 Feb 2019 06:54:50 +0000 Subject: [New-bugs-announce] [issue36115] test_ctypes leaks references and memory blocks Message-ID: <1551164090.87.0.472334012156.issue36115@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : https://buildbot.python.org/all/#/builders/1/builds/515 OK (skipped=89) . 
test_ctypes leaked [72, 72, 72] references, sum=216 test_ctypes leaked [26, 26, 26] memory blocks, sum=78 2 tests failed again: test_ctypes test_inspect == Tests result: FAILURE then FAILURE == ---------- components: Tests, ctypes messages: 336619 nosy: pablogsal priority: normal severity: normal status: open title: test_ctypes leaks references and memory blocks versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 02:13:43 2019 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 26 Feb 2019 07:13:43 +0000 Subject: [New-bugs-announce] [issue36116] test_multiprocessing_spawn fails on AMD64 Windows8 3.x Message-ID: <1551165223.62.0.963183120667.issue36116@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : test test_multiprocessing_spawn failed test_import (test.test_multiprocessing_spawn._TestImportStar) ... ok ====================================================================== FAIL: test_mymanager_context (test.test_multiprocessing_spawn.WithManagerTestMyManager) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\buildarea\3.x.bolen-windows8\build\lib\test\_test_multiprocessing.py", line 2747, in test_mymanager_context self.assertIn(manager._process.exitcode, (0, -signal.SIGTERM)) AssertionError: 3221225477 not found in (0, -15) ---------------------------------------------------------------------- Ran 344 tests in 328.196s FAILED (failures=1, skipped=40) 1 test failed again: test_multiprocessing_spawn == Tests result: FAILURE then FAILURE == https://buildbot.python.org/all/#/builders/32/builds/2204/steps/3/logs/stdio ---------- components: Tests, Windows messages: 336625 nosy: pablogsal, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_multiprocessing_spawn fails on AMD64 Windows8 3.x versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 03:04:22 2019 From: report at bugs.python.org (Brandt Bucher) Date: Tue, 26 Feb 2019 08:04:22 +0000 Subject: [New-bugs-announce] [issue36117] Allow rich comparisons for real-valued complex objects. Message-ID: <1551168262.55.0.429171834399.issue36117@roundup.psfhosted.org> New submission from Brandt Bucher : Currently, it isn't legal to perform <, >, <=, or >= rich comparisons on any complex objects, even though these operations are mathematically well-defined for real numbers. The attached PR addresses this by defining rich comparisons for real-valued complex objects against subclasses of int and float, as well as for decimal.Decimal and fractions.Fraction types. They still raise TypeErrors when either of the operands has a nonzero imaginary part. ---------- components: Interpreter Core messages: 336628 nosy: brandtbucher priority: normal severity: normal status: open title: Allow rich comparisons for real-valued complex objects. type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 03:59:27 2019 From: report at bugs.python.org (Domenico Barbieri) Date: Tue, 26 Feb 2019 08:59:27 +0000 Subject: [New-bugs-announce] [issue36118] Cannot correctly concatenate nested list that contains more than ~45 entries with other nested lists. 
Message-ID: <1551171567.53.0.226883358106.issue36118@roundup.psfhosted.org> New submission from Domenico Barbieri : Here is an example of what happens: >>> x = [["a", "b", ... , "BZ"]] >>> y = [[], [1,2,3,4,5, ... , 99]] >>> y[0] = x[0] >>> print(y[0]) >>> ["a", "b", "c", ... , "BZ", [1,2,3,4,5, ... , 99]] ---------- messages: 336634 nosy: Domenico Barbieri priority: normal severity: normal status: open title: Cannot correctly concatenate nested list that contains more than ~45 entries with other nested lists. type: behavior versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 05:47:13 2019 From: report at bugs.python.org (Andrei Stefan) Date: Tue, 26 Feb 2019 10:47:13 +0000 Subject: [New-bugs-announce] [issue36119] Can't add/append in set/list inside shared dict Message-ID: <1551178033.97.0.988149024615.issue36119@roundup.psfhosted.org> New submission from Andrei Stefan : I'm creating a shared dict for multiprocessing purposes: from multiprocessing import Manager manager = Manager() shared_dict = manager.dict() If I add a set or a list as a value in the dict: shared_dict['test'] = set() or shared_dict['test'] = list() I can't add/append in that set/list inside the shared dictionary: shared_dict['test'].add(1234) or shared_dict['test'].append(1234) The following expression: print(dict(shared_dict)) Will return: {'test': set()} or {'test': []}. But if I add in the set/list using operators: shared_dict['test'] |= {1234} or shared_dict['test'] += [1234] It will work: {'test': {1234}} or {'test': [1234]}. ---------- components: Build files: image (2).png messages: 336642 nosy: andrei2peu priority: normal severity: normal status: open title: Can't add/append in set/list inside shared dict type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file48171/image (2).png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 06:34:43 2019 From: report at bugs.python.org (Jonathan) Date: Tue, 26 Feb 2019 11:34:43 +0000 Subject: [New-bugs-announce] [issue36120] Regression - Concurrent Futures Message-ID: <1551180883.79.0.205102352043.issue36120@roundup.psfhosted.org> New submission from Jonathan : I'm using Concurrent Futures to run some work in parallel (futures.ProcessPoolExecutor) on windows 7 x64. The code works fine in 3.6.3, and 3.5.x before that. I've just upgraded to 3.7.2 and it's giving me these errors: Process SpawnProcess-6: Traceback (most recent call last): File "c:\_libs\Python37\lib\multiprocessing\process.py", line 297, in _bootstrap self.run() File "c:\_libs\Python37\lib\multiprocessing\process.py", line 99, in run self._target(*self._args, **self._kwargs) File "c:\_libs\Python37\lib\concurrent\futures\process.py", line 226, in _process_worker call_item = call_queue.get(block=True) File "c:\_libs\Python37\lib\multiprocessing\queues.py", line 93, in get with self._rlock: File "c:\_libs\Python37\lib\multiprocessing\synchronize.py", line 95, in __enter__ return self._semlock.__enter__() PermissionError: [WinError 5] Access is denied If I switch back to the 3.6.3 venv it works fine again. 
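The report has no self-contained reproducer; for reference, a minimal ProcessPoolExecutor pattern of the kind described looks like this (hypothetical worker function, not the reporter's code). Note that on Windows the `__main__` guard is required because workers are started with the spawn method; leaving it out produces different errors than the PermissionError above, so it is unlikely to be the cause here.

```
from concurrent import futures

def work(n):
    return n * n

if __name__ == "__main__":   # required on Windows (spawn start method)
    with futures.ProcessPoolExecutor() as pool:
        print(list(pool.map(work, range(4))))
```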
---------- messages: 336649 nosy: jonathan-lp priority: normal severity: normal status: open title: Regression - Concurrent Futures versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 06:44:07 2019 From: report at bugs.python.org (Carlos Ramos) Date: Tue, 26 Feb 2019 11:44:07 +0000 Subject: [New-bugs-announce] [issue36121] csv: Non global alternative to csv.field_size_limit Message-ID: <1551181447.91.0.36050465631.issue36121@roundup.psfhosted.org> New submission from Carlos Ramos : The function csv.field_size_limit gets and sets a global variable. It would be useful to change this limit in a per-reader or per-thread basis, so that a library can change it without affecting global state. ---------- components: Extension Modules messages: 336651 nosy: Carlos Ramos priority: normal severity: normal status: open title: csv: Non global alternative to csv.field_size_limit type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 06:51:13 2019 From: report at bugs.python.org (bers) Date: Tue, 26 Feb 2019 11:51:13 +0000 Subject: [New-bugs-announce] [issue36122] Second run of 2to3 continues to modify output Message-ID: <1551181873.79.0.208082027128.issue36122@roundup.psfhosted.org> New submission from bers : I did this on Windows 10: P:\>python --version Python 3.7.2 P:\>echo print 1, 2 > Test.py P:\>python Test.py File "Test.py", line 1 print 1, 2 ^ SyntaxError: Missing parentheses in call to 'print'. Did you mean print(1, 2)? P:\>2to3 -w Test.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored Test.py --- Test.py (original) +++ Test.py (refactored) @@ -1 +1 @@ -print 1, 2 +print(1, 2) RefactoringTool: Files that were modified: RefactoringTool: Test.py P:\>python Test.py 1 2 P:\>2to3 -w Test.py RefactoringTool: Skipping optional fixer: buffer RefactoringTool: Skipping optional fixer: idioms RefactoringTool: Skipping optional fixer: set_literal RefactoringTool: Skipping optional fixer: ws_comma RefactoringTool: Refactored Test.py --- Test.py (original) +++ Test.py (refactored) @@ -1 +1 @@ -print(1, 2) +print((1, 2)) RefactoringTool: Files that were modified: RefactoringTool: Test.py P:\>python Test.py (1, 2) Note how "print 1, 2" first becomes "print(1, 2)" (expected), then becomes "print((1, 2))" in the following run. 
This changes the output of Test.py ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 336653 nosy: bers priority: normal severity: normal status: open title: Second run of 2to3 continues to modify output type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 07:43:13 2019 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 26 Feb 2019 12:43:13 +0000 Subject: [New-bugs-announce] [issue36123] Race condition in test_socket Message-ID: <1551184993.57.0.687748994767.issue36123@roundup.psfhosted.org> New submission from Joannah Nanjekye : Looking at the buildbot failures, there is a race condition in a test_socket test: def _testWithTimeoutTriggeredSend(self): address = self.serv.getsockname() with open(support.TESTFN, 'rb') as file: with socket.create_connection(address, timeout=0.01) as sock: meth = self.meth_from_sock(sock) self.assertRaises(socket.timeout, meth, file) def testWithTimeoutTriggeredSend(self): conn = self.accept_conn() conn.recv(88192) on slow buildbot, create_connection() fails with a timeout exception sometimes because the server fails to start listing in less than 10 ms. https://buildbot.python.org/all/#/builders/167/builds/597 ====================================================================== ERROR: testWithTimeoutTriggeredSend (test.test_socket.SendfileUsingSendTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd10.nondebug/build/Lib/test/test_socket.py", line 5796, in testWithTimeoutTriggeredSend conn = self.accept_conn() File "/usr/home/buildbot/python/3.x.koobs-freebsd10.nondebug/build/Lib/test/test_socket.py", line 5607, in accept_conn conn, addr = self.serv.accept() File "/usr/home/buildbot/python/3.x.koobs-freebsd10.nondebug/build/Lib/socket.py", line 212, in accept fd, addr = self._accept() socket.timeout: timed out ====================================================================== ERROR: testWithTimeoutTriggeredSend (test.test_socket.SendfileUsingSendTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd10.nondebug/build/Lib/test/test_socket.py", line 335, in _tearDown raise exc File "/usr/home/buildbot/python/3.x.koobs-freebsd10.nondebug/build/Lib/test/test_socket.py", line 353, in clientRun test_func() File "/usr/home/buildbot/python/3.x.koobs-freebsd10.nondebug/build/Lib/test/test_socket.py", line 5791, in _testWithTimeoutTriggeredSend with socket.create_connection(address, timeout=0.01) as sock: File "/usr/home/buildbot/python/3.x.koobs-freebsd10.nondebug/build/Lib/socket.py", line 727, in create_connection raise err File "/usr/home/buildbot/python/3.x.koobs-freebsd10.nondebug/build/Lib/socket.py", line 716, in create_connection sock.connect(sa) socket.timeout: timed out Note: Reported my Victor. I created the bug to track. 
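One possible direction for a fix, sketched against the test method quoted above (it assumes the same test class and the test.support module; it is not the actual patch): keep the default timeout while establishing the connection, so a slow listener cannot race it, and apply the tiny timeout only to the send itself.

```
def _testWithTimeoutTriggeredSend(self):
    address = self.serv.getsockname()
    with open(support.TESTFN, 'rb') as file:
        with socket.create_connection(address) as sock:
            sock.settimeout(0.01)   # short timeout only for the send
            meth = self.meth_from_sock(sock)
            self.assertRaises(socket.timeout, meth, file)
```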
---------- components: Tests messages: 336658 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Race condition in test_socket type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 08:46:42 2019 From: report at bugs.python.org (Nick Coghlan) Date: Tue, 26 Feb 2019 13:46:42 +0000 Subject: [New-bugs-announce] [issue36124] Provide convenient C API for storing per-interpreter state Message-ID: <1551188802.37.0.811137397494.issue36124@roundup.psfhosted.org> New submission from Nick Coghlan : (New issue derived from https://bugs.python.org/issue35886#msg336501 ) cffi needs a generally available way to get access to a caching dict for the currently active subinterpreter. Currently, they do that by storing it as an attribute in the builtins namespace: https://bitbucket.org/cffi/cffi/src/07d1803cb17b230571e3155e52082a356b31d44c/c/call_python.c?fileviewer=file-view-default As a result, they had to amend their code to include the CPython internal headers in 3.8.x, in order to regain access to the "builtins" reference. Armin suggested that a nicer way for them to achieve the same end result is if there was a PyInterpreter_GetDict() API, akin to https://docs.python.org/3/c-api/init.html#c.PyThreadState_GetDict That way they could store their cache dict in there in 3.8+, and only use the builtin dict on older Python versions. ---------- messages: 336670 nosy: ncoghlan priority: normal severity: normal status: open title: Provide convenient C API for storing per-interpreter state _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 11:00:10 2019 From: report at bugs.python.org (Ross Burton) Date: Tue, 26 Feb 2019 16:00:10 +0000 Subject: [New-bugs-announce] [issue36125] Cannot cross-compile to more featureful but same tune Message-ID: <1551196810.67.0.249603335877.issue36125@roundup.psfhosted.org> New submission from Ross Burton : My build machine is a Haswell Intel x86-64. I'm cross-compiling to x86-64, with -mtune=Skylake -avx2. During make install PYTHON_FOR_BUILD loads modules from the *build* Lib/ which contain instructions my Haswell can't execute: | _PYTHON_PROJECT_BASE=/data/poky-tmp/master/work/corei7-64-poky-linux/python3/3.7.2-r0/build _PYTHON_HOST_PLATFORM=linux-x86_64 PYTHONPATH=../Python-3.7.2/Lib _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_m_linux_x86_64-linux-gnu python3.7 -v -S -m sysconfig --generate-posix-vars ;\ Illegal instruction ---------- components: Build messages: 336688 nosy: rossburton priority: normal severity: normal status: open title: Cannot cross-compile to more featureful but same tune versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 12:09:57 2019 From: report at bugs.python.org (zasdfgbnm) Date: Tue, 26 Feb 2019 17:09:57 +0000 Subject: [New-bugs-announce] [issue36126] Reference count leakage in structseq_repr Message-ID: <1551200997.63.0.481035129926.issue36126@roundup.psfhosted.org> New submission from zasdfgbnm : In Python 2.7 structseq is not a tuple, and in `structseq_repr` a tuple is created to help extracting items. However when the check at https://github.com/python/cpython/blob/2.7/Objects/structseq.c#L268 fails, the reference count of this tuple is not decreased, causing memory leakage. 
To reproduce, download the attached file, install and run the `quicktest.py` and watch the memory usage. This bug only exists on python 2.7, because on python >3.2, no helper tuple is created. ---------- components: Interpreter Core files: structseq.tar.xz messages: 336699 nosy: zasdfgbnm priority: normal pull_requests: 12082 severity: normal status: open title: Reference count leakage in structseq_repr type: behavior versions: Python 2.7 Added file: https://bugs.python.org/file48172/structseq.tar.xz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 12:30:36 2019 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 26 Feb 2019 17:30:36 +0000 Subject: [New-bugs-announce] [issue36127] Argument Clinic: inline parsing code for functions with keyword parameters Message-ID: <1551202236.03.0.123634318665.issue36127@roundup.psfhosted.org> New submission from Serhiy Storchaka : This is a follow up of issue23867 and issue35582. The proposed PR makes Argument Clinic inlining parsing code for functions with keyword parameters, i.e. functions that use _PyArg_ParseTupleAndKeywordsFast() and _PyArg_ParseStackAndKeywords() now. This saves time for parsing format strings and calling few levels of functions. ---------- assignee: serhiy.storchaka components: Argument Clinic messages: 336700 nosy: larry, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Argument Clinic: inline parsing code for functions with keyword parameters type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 16:05:00 2019 From: report at bugs.python.org (Gregory Szorc) Date: Tue, 26 Feb 2019 21:05:00 +0000 Subject: [New-bugs-announce] [issue36128] ResourceReader for FileLoader inconsistently handles path separators Message-ID: <1551215100.55.0.236644911032.issue36128@roundup.psfhosted.org> New submission from Gregory Szorc : The implementation of the ResourceReader API for the FileLoader class in importlib/_bootstrap_external.py is inconsistent with regards to handling of path separators. Specifically, "is_resource()" returns False if "resource" has a path separator. But "open_resource()" will happily open resources containing a path separator. I would think the two would agree about whether a path with separators is a resource or not. The documentation at https://docs.python.org/3.7/library/importlib.html#importlib.abc.ResourceReader implies that resources in subdirectories should not be allowed. One can easily demonstrate this behavior oddity with Mercurial: (Pdb) p sys.modules['mercurial'].__spec__.loader.get_resource_reader('mercurial').open_resource('help/config.txt') <_io.FileIO name='/home/gps/src/hg/mercurial/help/config.txt' mode='rb' closefd=True> (Pdb) p sys.modules['mercurial'].__spec__.loader.get_resource_reader('mercurial').is_resource('help/config.txt') False The behavior has been present since the functionality was added (https://github.com/python/cpython/pull/5168). 
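The same inconsistency can be observed without Mercurial; in the sketch below, "some_package" and "subdir/data.txt" are placeholders for any installed package that ships a data file in a subdirectory.

```
import importlib.util

spec = importlib.util.find_spec("some_package")
reader = spec.loader.get_resource_reader("some_package")

print(reader.is_resource("subdir/data.txt"))        # False: separators rejected
with reader.open_resource("subdir/data.txt") as f:  # ...but this opens it anyway
    print(f.read(16))
```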
---------- components: Library (Lib) messages: 336712 nosy: barry, indygreg priority: normal severity: normal status: open title: ResourceReader for FileLoader inconsistently handles path separators type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 16:47:13 2019 From: report at bugs.python.org (Gregory Szorc) Date: Tue, 26 Feb 2019 21:47:13 +0000 Subject: [New-bugs-announce] [issue36129] io documentation unclear about flush() and close() semantics for wrapped streams Message-ID: <1551217633.94.0.247260600398.issue36129@roundup.psfhosted.org> New submission from Gregory Szorc : As part of implementing io.RawIOBase/io.BufferedIOBase compatible stream types for python-zstandard, I became confused about the expected behavior of flush() and close() when a stream is wrapping another stream. The documentation doesn't lay out explicitly when flush() and close() on the outer stream should be proxied to the inner stream, if ever. Here are some specific questions (it would be great to have answers to these in the docs): 1. When flush() is called on the outer stream, should flush() be called on the inner stream as well? Or should we simply write() to the inner stream and not perform a flush() on that inner stream? 2. When close() is called on the outer stream, should close() be called on the inner stream as well? 3. If close() is not automatically called on the inner stream during an outer stream's close(), should the outer stream trigger a flush() or any other special behavior on the inner stream? Or should it just write() any lingering data and then go away? 4. Are any of the answers from 1-3 impacted by whether the stream is a reader or writer? (Obviously reader streams don't have a meaningful flush() implementation.) 5. Are any of the answers from 1-3 impacted by whether the stream is in blocking/non-blocking mode? 6. Do any of the answers from 1-3 vary depending on whether behavior is incurred by the outer stream's __exit__? (This issue was inspired by https://github.com/indygreg/python-zstandard/issues/76.) 
---------- components: Interpreter Core messages: 336715 nosy: indygreg priority: normal severity: normal status: open title: io documentation unclear about flush() and close() semantics for wrapped streams type: enhancement versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Feb 26 22:46:52 2019 From: report at bugs.python.org (Anthony Sottile) Date: Wed, 27 Feb 2019 03:46:52 +0000 Subject: [New-bugs-announce] [issue36130] Pdb(skip=[...]) + module without __name__ => TypeError Message-ID: <1551239212.19.0.177135344366.issue36130@roundup.psfhosted.org> New submission from Anthony Sottile : Here's the simplest example I could come up with -- hit this while debugging pytest (which uses attrs which uses similar code to this to make classes) import pdb; pdb.Pdb(skip=['django.*']).set_trace() eval(compile("1", "", "exec"), {}) print('ok!') When running this: $ python3.8 t.py > /home/asottile/workspace/setup-cfg-fmt/t.py(2)() -> eval(compile("1", "", "exec"), {}) (Pdb) n Traceback (most recent call last): File "t.py", line 2, in eval(compile("1", "", "exec"), {}) File "", line 1, in File "/usr/lib/python3.8/bdb.py", line 90, in trace_dispatch return self.dispatch_call(frame, arg) File "/usr/lib/python3.8/bdb.py", line 128, in dispatch_call if not (self.stop_here(frame) or self.break_anywhere(frame)): File "/usr/lib/python3.8/bdb.py", line 203, in stop_here self.is_skipped_module(frame.f_globals.get('__name__')): File "/usr/lib/python3.8/bdb.py", line 194, in is_skipped_module if fnmatch.fnmatch(module_name, pattern): File "/usr/lib/python3.8/fnmatch.py", line 34, in fnmatch name = os.path.normcase(name) File "/usr/lib/python3.8/posixpath.py", line 54, in normcase s = os.fspath(s) TypeError: expected str, bytes or os.PathLike object, not NoneType $ python3.8 --version --version Python 3.8.0a2 (default, Feb 25 2019, 23:11:49) [GCC 7.3.0] ---------- components: Library (Lib) messages: 336728 nosy: Anthony Sottile priority: normal severity: normal status: open title: Pdb(skip=[...]) + module without __name__ => TypeError versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 00:06:02 2019 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Wed, 27 Feb 2019 05:06:02 +0000 Subject: [New-bugs-announce] [issue36131] test.test_urllib2net.TimeoutTest ftp related tests fail due to ftp://www.pythontest.net/ being unavailable Message-ID: <1551243962.9.0.321127985724.issue36131@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I am seeing this error on Windows and Mac CI builds where FTP related tests in test.test_urllib2net.TimeoutTest are failing. It's reproducible locally too where the tests are skipped on Mac and Ubuntu. Not sure if it's random since several PRs in the last few hours fail with this. Appveyor : * https://ci.appveyor.com/project/python/cpython/builds/22675425#L2817 VSTS builds : * https://dev.azure.com/Python/cpython/_build/results?buildId=38631 * https://dev.azure.com/Python/cpython/_build/results?buildId=38625 $ ./python.exe -m unittest -vv test.test_urllib2net.TimeoutTest test_ftp_basic (test.test_urllib2net.TimeoutTest) ... skipped "Resource 'ftp://www.pythontest.net/' is not available" test_ftp_default_timeout (test.test_urllib2net.TimeoutTest) ... 
skipped "Resource 'ftp://www.pythontest.net/' is not available" test_ftp_no_timeout (test.test_urllib2net.TimeoutTest) ... /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/encodings/idna.py:163: ResourceWarning: unclosed for label in labels[:-1]: ResourceWarning: Enable tracemalloc to get the object allocation traceback /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/encodings/idna.py:163: ResourceWarning: unclosed for label in labels[:-1]: ResourceWarning: Enable tracemalloc to get the object allocation traceback /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/encodings/idna.py:163: ResourceWarning: unclosed for label in labels[:-1]: ResourceWarning: Enable tracemalloc to get the object allocation traceback /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/encodings/idna.py:163: ResourceWarning: unclosed for label in labels[:-1]: ResourceWarning: Enable tracemalloc to get the object allocation traceback /Users/karthikeyansingaravelan/stuff/python/cpython/Lib/encodings/idna.py:163: ResourceWarning: unclosed for label in labels[:-1]: ResourceWarning: Enable tracemalloc to get the object allocation traceback skipped "Resource 'ftp://www.pythontest.net/' is not available" test_ftp_timeout (test.test_urllib2net.TimeoutTest) ... skipped "Resource 'ftp://www.pythontest.net/' is not available" test_http_basic (test.test_urllib2net.TimeoutTest) ... ok test_http_default_timeout (test.test_urllib2net.TimeoutTest) ... ok test_http_no_timeout (test.test_urllib2net.TimeoutTest) ... ok test_http_timeout (test.test_urllib2net.TimeoutTest) ... ok ---------------------------------------------------------------------- Ran 8 tests in 33.617s OK (skipped=4) ---------- components: Tests messages: 336729 nosy: benjamin.peterson, pablogsal, vstinner, xtreak priority: normal severity: normal status: open title: test.test_urllib2net.TimeoutTest ftp related tests fail due to ftp://www.pythontest.net/ being unavailable type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 06:10:31 2019 From: report at bugs.python.org (Andrew P. Lentvorski, Jr.) Date: Wed, 27 Feb 2019 11:10:31 +0000 Subject: [New-bugs-announce] [issue36132] Python cannot access hci_channel field in sockaddr_hci Message-ID: <1551265831.42.0.956760803512.issue36132@roundup.psfhosted.org> New submission from Andrew P. Lentvorski, Jr. : On Linux, sockaddr_hci is: struct sockaddr_hci { sa_family_t hci_family; unsigned short hci_dev; unsigned short hci_channel; }; Unfortunately, it seems like python does not allow any way to initialize hci_channel, so you can't use a user channel socket (hci_channel == 0) or a monitor channel. There is probably a larger discussion of how to enable people to use a new field that appears in a structure like this, but that's above my pay grade ... Even worse, this appears to have been known for a while (since 2013 at least! by Chromium), but while people complained, nobody actually took the time to file it upstream with Python. So, I'm filing it upstream. Hopefully this is easy to fix by someone who knows what's up. Thanks. 
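To see the gap from the Python side, the sketch below shows what can and cannot be expressed today (it assumes a Linux build that exposes the AF_BLUETOOTH constants and needs CAP_NET_RAW/root to actually run; the accepted bind() address format is the very thing under discussion, so treat the details as illustrative rather than authoritative):

    import socket

    s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI)

    # Today only the hci_dev field can be passed through bind()...
    s.bind((0,))

    # ...whereas also selecting hci_channel (e.g. HCI_CHANNEL_USER) would need
    # something along the lines of the following, which is currently rejected:
    # s.bind((0, 1))

    s.close()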
See:
https://chromium.googlesource.com/chromiumos/platform/btsocket/+/factory-4455.B
https://github.com/w3h/isf/blob/master/lib/thirdparty/scapy/layers/bluetooth.py

class BluetoothUserSocket(SuperSocket):
    desc = "read/write H4 over a Bluetooth user channel"
    def __init__(self, adapter=0):
        # s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI)
        # s.bind((0,1))

        # yeah, if only
        # thanks to Python's weak ass socket and bind implementations, we have
        # to call down into libc with ctypes

        sockaddr_hcip = POINTER(sockaddr_hci)
        cdll.LoadLibrary("libc.so.6")
        libc = CDLL("libc.so.6")

        socket_c = libc.socket
        socket_c.argtypes = (c_int, c_int, c_int); socket_c.restype = c_int

        bind = libc.bind
        bind.argtypes = (c_int, POINTER(sockaddr_hci), c_int)
        bind.restype = c_int

        ########
        ## actual code

        s = socket_c(31, 3, 1) # (AF_BLUETOOTH, SOCK_RAW, HCI_CHANNEL_USER)
        if s < 0:
            raise BluetoothSocketError("Unable to open PF_BLUETOOTH socket")

        sa = sockaddr_hci()
        sa.sin_family = 31     # AF_BLUETOOTH
        sa.hci_dev = adapter   # adapter index
        sa.hci_channel = 1     # HCI_USER_CHANNEL

        r = bind(s, sockaddr_hcip(sa), sizeof(sa))

---------- components: Library (Lib) messages: 336779 nosy: bsder priority: normal severity: normal status: open title: Python cannot access hci_channel field in sockaddr_hci versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 06:40:15 2019 From: report at bugs.python.org (Fabian Dill) Date: Wed, 27 Feb 2019 11:40:15 +0000 Subject: [New-bugs-announce] [issue36133] ThreadPoolExecutor and ProcessPoolExecutor, dynamic worker count Message-ID: <1551267615.45.0.917307104051.issue36133@roundup.psfhosted.org> New submission from Fabian Dill : The request is that the _max_workers attribute of the pools be exposed as a proper interface that would allow changing the worker count after initialisation. ---------- components: Library (Lib) messages: 336745 nosy: Fabian Dill priority: normal severity: normal status: open title: ThreadPoolExecutor and ProcessPoolExecutor, dynamic worker count versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 07:37:54 2019 From: report at bugs.python.org (Apoorv Sachan) Date: Wed, 27 Feb 2019 12:37:54 +0000 Subject: [New-bugs-announce] [issue36134] test failure : test_re; recipe for target 'test' failed Message-ID: <1551271074.84.0.0671486163913.issue36134@roundup.psfhosted.org> New submission from Apoorv Sachan : ## while building python3.7 from source code on debian9.8 stretch.
hardware : intel corei5 3337U 4 Gib -- physical ram//4 Gib -- swap space following the instructions given in the README.rst file: user at host $ ./configure --enable-optimizations #---works fine : no issues user at host $ make #---works fine : noissues user at host $ make test produces this FAIL: test_locale_flag (test.test_re.ReTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/apoos-maximus/packages/Python-3.7.2/Lib/test/test_re.py", line 1540, in test_locale_flag self.assertTrue(pat.match(bletter)) AssertionError: None is not true ---------------------------------------------------------------------- Ran 123 tests in 1.048s FAILED (failures=1, skipped=2) test test_re failed 1 test failed again: test_re == Tests result: FAILURE then FAILURE == 384 tests OK. 1 test failed: test_re 29 tests skipped: test_bz2 test_ctypes test_curses test_dbm_gnu test_dbm_ndbm test_devpoll test_gdb test_gzip test_idle test_kqueue test_lzma test_msilib test_ossaudiodev test_readline test_smtpnet test_sqlite test_ssl test_startfile test_tcl test_tix test_tk test_ttk_guionly test_ttk_textonly test_turtle test_winconsoleio test_winreg test_winsound test_zipfile64 test_zlib 1 re-run test: test_re 2 tests run no tests: test_dtrace test_future4 Total duration: 4 min 44 sec Tests result: FAILURE then FAILURE Makefile:1074: recipe for target 'test' failed make: *** [test] Error 2 #on rerunning the same test user at host $ make test TESTOPTS="-v test_re" FAIL: test_locale_flag (test.test_re.ReTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/apoos-maximus/packages/Python-3.7.2/Lib/test/test_re.py", line 1540, in test_locale_flag self.assertTrue(pat.match(bletter)) AssertionError: None is not true ---------------------------------------------------------------------- Ran 123 tests in 0.750s FAILED (failures=1, skipped=2) test test_re failed 1 test failed again: test_re == Tests result: FAILURE then FAILURE == 1 test failed: test_re 1 re-run test: test_re Total duration: 1 sec 646 ms Tests result: FAILURE then FAILURE Makefile:1074: recipe for target 'test' failed make: *** [test] Error 2 ###### this is how the bug could be reproduced PS: do have a look at the traceback ::it makes no sense to me, it might to you ! ---------- components: Build messages: 336747 nosy: apoos-maximus priority: normal severity: normal status: open title: test failure : test_re; recipe for target 'test' failed type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 07:52:51 2019 From: report at bugs.python.org (Apoorv Sachan) Date: Wed, 27 Feb 2019 12:52:51 +0000 Subject: [New-bugs-announce] [issue36135] altinstall error Makefile:1140: recipe for target 'altinstall' failed Message-ID: <1551271971.31.0.932732600535.issue36135@roundup.psfhosted.org> New submission from Apoorv Sachan : #as directed in the README.rst file #doing a 'make altinstall' instead of install to install python3.7.2 along side other versions ....this is to be performed after './configure' 'make' and 'make test' commands. please refer issue 36134 :titled :: (two issues could be related) "test failure : test_re; recipe for target 'test' failed" as there was also a make test failure prior to 'make altinstall' -------terminal-------------- user at host $ make altinstall .... #... everything goes well.... #... 
process ends with this traceback . . . .

Traceback (most recent call last):
  File "/home/apoos-maximus/packages/Python-3.7.2/Lib/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/apoos-maximus/packages/Python-3.7.2/Lib/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/apoos-maximus/packages/Python-3.7.2/Lib/ensurepip/__main__.py", line 5, in
    sys.exit(ensurepip._main())
  File "/home/apoos-maximus/packages/Python-3.7.2/Lib/ensurepip/__init__.py", line 204, in _main
    default_pip=args.default_pip,
  File "/home/apoos-maximus/packages/Python-3.7.2/Lib/ensurepip/__init__.py", line 117, in _bootstrap
    return _run_pip(args + [p[0] for p in _PROJECTS], additional_paths)
  File "/home/apoos-maximus/packages/Python-3.7.2/Lib/ensurepip/__init__.py", line 27, in _run_pip
    import pip._internal
zipimport.ZipImportError: can't decompress data; zlib not available
Makefile:1140: recipe for target 'altinstall' failed
make: *** [altinstall] Error 1
=========end==============

I do end up with a Python 3.7 installation and still can't make sense of the failure. ---------- components: Installation messages: 336749 nosy: apoos-maximus priority: normal severity: normal status: open title: altinstall error Makefile:1140: recipe for target 'altinstall' failed type: compile error versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 09:21:26 2019 From: report at bugs.python.org (STINNER Victor) Date: Wed, 27 Feb 2019 14:21:26 +0000 Subject: [New-bugs-announce] [issue36136] Windows: python._pth sets isolated mode late during Python initialization Message-ID: <1551277286.04.0.945215476701.issue36136@roundup.psfhosted.org> New submission from STINNER Victor : The read_pth_file() of PC/getpathp.c sets Py_IsolatedFlag and Py_NoSiteFlag to 1 in Python 3.6. calculate_path() checked if a file with a "._pth" extension exists in "dllpath" or "progpath". I deeply refactored the Python initialization in Python 3.7 and I'm not sure whether I introduced a regression or not. In Python 3.7, _PyCoreConfig_Read() calls config_init_path_config() which indirectly calls read_pth_file(). pymain_read_conf_impl() detects if Py_IsolatedFlag and Py_NoSiteFlag have been modified and stores the new values in cmdline->isolated and cmdline->no_site_import. Later, cmdline_set_global_config() sets Py_IsolatedFlag and Py_NoSiteFlag; and _PyCoreConfig_SetGlobalConfig() sets Py_IgnoreEnvironmentFlag. The problem is the relationship between isolated/cmdline.Py_IsolatedFlag, no_site_import/cmdline.Py_NoSiteFlag and Py_IgnoreEnvironmentFlag/config.ignore_environment. The isolated mode must set Py_NoSiteFlag to 1 and Py_IgnoreEnvironmentFlag to 1. For example, pymain_read_conf_impl() uses:

    /* Set Py_IgnoreEnvironmentFlag for Py_GETENV() */
    Py_IgnoreEnvironmentFlag = config->ignore_environment || cmdline->isolated;

But it's done before calling _PyCoreConfig_Read(). Moreover, _PyCoreConfig_Read() reads PYTHONxxx environment variables before indirectly calling read_pth_file(), and so PYTHONxxx env vars are read whereas they are supposed to be ignored. Calling read_pth_file() earlier is challenging since it depends on configuration parameters which are set before calling it. At the end, I'm not sure if it's a real issue. I'm not sure if there is a regression in Python 3.7. -- But the code in Python 3.8 changed a lot again: _PyCoreConfig_Read() is now responsible for reading all environment variables.
In Python 3.8, read_pth_file() uses a _PyPathConfig structure to set isolated and site_import parameters. These parameters are then copied to _PyCoreConfig in _PyCoreConfig_CalculatePathConfig(). Moreover, _PyCoreConfig_Read() is more explicit with the relationship between isolated, use_environment and user_site_directory. The function *starts* with:

    if (config->isolated > 0) {
        config->use_environment = 0;
        config->user_site_directory = 0;
    }

Problems (inconsistencies) arise if isolated is set from 0 to 1 after this code. ---------- components: Interpreter Core messages: 336760 nosy: vstinner priority: normal severity: normal status: open title: Windows: python._pth sets isolated mode late during Python initialization versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 09:34:57 2019 From: report at bugs.python.org (Mika Fischer) Date: Wed, 27 Feb 2019 14:34:57 +0000 Subject: [New-bugs-announce] [issue36137] SSL verification fails for some sites inside windows docker container Message-ID: <1551278097.55.0.344471071893.issue36137@roundup.psfhosted.org> New submission from Mika Fischer : Inside a Windows docker container, SSL verification fails for some but not all hosts. See this issue over in the docker repo: https://github.com/docker-library/python/issues/359 Maybe you guys could shed some light on what the possible cause could be. To reproduce, install Docker for Windows and then: This works:
```
docker run -ti python:3.7-windowsservercore-1809 python -c "import urllib.request as r; r.urlopen('https://bootstrap.pypa.io').close()"
```
This doesn't:
```
docker run -ti python:3.7-windowsservercore-1809 python -c "import urllib.request as r; r.urlopen('https://google.com').close()"
Traceback (most recent call last):
  File "C:\Python\lib\urllib\request.py", line 1317, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "C:\Python\lib\http\client.py", line 1229, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "C:\Python\lib\http\client.py", line 1275, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "C:\Python\lib\http\client.py", line 1224, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "C:\Python\lib\http\client.py", line 1016, in _send_output
    self.send(msg)
  File "C:\Python\lib\http\client.py", line 956, in send
    self.connect()
  File "C:\Python\lib\http\client.py", line 1392, in connect
    server_hostname=server_hostname)
  File "C:\Python\lib\ssl.py", line 412, in wrap_socket
    session=session
  File "C:\Python\lib\ssl.py", line 853, in _create
    self.do_handshake()
  File "C:\Python\lib\ssl.py", line 1117, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 1, in
  File "C:\Python\lib\urllib\request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python\lib\urllib\request.py", line 525, in open
    response = self._open(req, data)
  File "C:\Python\lib\urllib\request.py", line 543, in _open
    '_open', req)
  File "C:\Python\lib\urllib\request.py", line 503, in _call_chain
    result = func(*args)
  File "C:\Python\lib\urllib\request.py", line 1360, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File
"C:\Python\lib\urllib\request.py", line 1319, in do_open raise URLError(err) urllib.error.URLError: ``` ---------- assignee: christian.heimes components: SSL messages: 336761 nosy: Mika Fischer, christian.heimes priority: normal severity: normal status: open title: SSL verification fails for some sites inside windows docker container type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 09:54:37 2019 From: report at bugs.python.org (Paul Ganssle) Date: Wed, 27 Feb 2019 14:54:37 +0000 Subject: [New-bugs-announce] [issue36138] Improve documentation about converting datetime.timedelta to scalars Message-ID: <1551279277.78.0.420201199405.issue36138@roundup.psfhosted.org> New submission from Paul Ganssle : In a recent python-dev thread, there was some confusion about how to get something like `timedelta.total_microseconds()`. There is already an existing, supported idiom for this, which is that `timedelta` implements division: td = timedelta(hours=1) num_microseconds = td / timedelta(microseconds=1) In this e-mail ( https://mail.python.org/pipermail/python-dev/2019-February/156351.html ), Nick Coghlan proposed that we update the documentation and there were no objections, quoting: * In the "Supported Operations" section of https://docs.python.org/3/library/datetime.html#timedelta-objects, change "Division (3) of t2 by t3." to "Division (3) of overall duration t2 by interval unit t3." * In the total_seconds() documentation, add a sentence "For interval units other than seconds, use the division form directly (e.g. `td / timedelta(microseconds=1)`)" I am starting this issue to track that change. ---------- assignee: docs at python components: Documentation, Library (Lib) messages: 336765 nosy: belopolsky, docs at python, ncoghlan, p-ganssle priority: normal severity: normal status: open title: Improve documentation about converting datetime.timedelta to scalars versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 13:00:41 2019 From: report at bugs.python.org (Davide Rizzo) Date: Wed, 27 Feb 2019 18:00:41 +0000 Subject: [New-bugs-announce] [issue36139] release GIL on mmap dealloc Message-ID: <1551290441.84.0.237778309898.issue36139@roundup.psfhosted.org> New submission from Davide Rizzo : munmap() can take a long time. I think mmap_object_dealloc can trivially release the GIL around this operation. Something similar was already mentioned in https://bugs.python.org/issue1572968 but a general patch was never provided. The dealloc case alone is significant enough to deserve fixing. ---------- components: Library (Lib) messages: 336775 nosy: davide.rizzo priority: normal severity: normal status: open title: release GIL on mmap dealloc versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 13:38:06 2019 From: report at bugs.python.org (Zackery Spytz) Date: Wed, 27 Feb 2019 18:38:06 +0000 Subject: [New-bugs-announce] [issue36140] An incorrect check in _msi.c's msidb_getsummaryinformation() Message-ID: <1551292686.09.0.0441775599344.issue36140@roundup.psfhosted.org> New submission from Zackery Spytz : msidb_getsummaryinformation() checks the wrong variable after calling PyObject_NEW(). 
---------- components: Windows messages: 336776 nosy: ZackerySpytz, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: An incorrect check in _msi.c's msidb_getsummaryinformation() type: behavior versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 14:51:13 2019 From: report at bugs.python.org (muhzi) Date: Wed, 27 Feb 2019 19:51:13 +0000 Subject: [New-bugs-announce] [issue36141] configure: error: could not find pthreads on your system during cross compilation Message-ID: <1551297073.48.0.367193754314.issue36141@roundup.psfhosted.org> New submission from muhzi : I am facing a problem while trying to compile Python for android armv7a using the latest NDK version (clang). The configure script fails to find pthread library, which should be bundled in libc. Here is the full configure output: Building for armv7a-linux-androideabi configure: loading site script ./config.site checking build system type... x86_64-pc-linux-gnu checking host system type... armv7a-unknown-linux-androideabi checking for python3.7... python3.7 checking for python interpreter for cross build... python3.7 checking for --enable-universalsdk... no checking for --with-universal-archs... no checking MACHDEP... checking for --without-gcc... no checking for --with-icc... no checking for armv7a-linux-androideabi-gcc... armv7a-linux-androideabi16-clang checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... yes checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether armv7a-linux-androideabi16-clang accepts -g... yes checking for armv7a-linux-androideabi16-clang option to accept ISO C89... none needed checking how to run the C preprocessor... armv7a-linux-androideabi16-clang -E checking for grep that handles long lines and -e... /bin/grep checking for a sed that does not truncate output... /bin/sed checking for --with-cxx-main=... no checking for the platform triplet based on compiler characteristics... none checking for -Wl,--no-as-needed... yes checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking minix/config.h usability... no checking minix/config.h presence... no checking for minix/config.h... no checking whether it is safe to define __EXTENSIONS__... yes checking for the Android API level... 16 checking for the Android arm ABI... 7 checking for --with-suffix... checking for case-insensitive build directory... no checking LIBRARY... libpython$(VERSION)$(ABIFLAGS).a checking LINKCC... $(PURIFY) $(MAINCC) checking for GNU ld... yes checking for --enable-shared... yes checking for --enable-profiling... no checking LDLIBRARY... libpython$(LDVERSION).so checking for armv7a-linux-androideabi-ar... arm-linux-androideabi-ar checking for armv7a-linux-androideabi-readelf... arm-linux-androideabi-readelf checking for a BSD-compatible install... /usr/bin/install -c checking for a thread-safe mkdir -p... /bin/mkdir -p checking for --with-pydebug... no checking for --with-assertions... 
no checking for --enable-optimizations... no checking for --with-lto... no checking for -llvm-profdata... no checking for -Wextra... yes checking whether armv7a-linux-androideabi16-clang accepts and needs -fno-strict-aliasing... no checking if we can turn off armv7a-linux-androideabi16-clang unused result warning... yes checking if we can turn off armv7a-linux-androideabi16-clang unused parameter warning... yes checking if we can turn off armv7a-linux-androideabi16-clang missing field initializers warning... yes checking if we can turn off armv7a-linux-androideabi16-clang invalid function cast warning... no checking if we can turn on armv7a-linux-androideabi16-clang mixed sign comparison warning... yes checking if we can turn on armv7a-linux-androideabi16-clang unreachable code warning... yes checking if we can turn on armv7a-linux-androideabi16-clang strict-prototypes warning... yes checking if we can make implicit function declaration an error in armv7a-linux-androideabi16-clang... yes checking whether pthreads are available without options... no checking whether armv7a-linux-androideabi16-clang accepts -Kpthread... no checking whether armv7a-linux-androideabi16-clang accepts -Kthread... no checking whether armv7a-linux-androideabi16-clang accepts -pthread... no checking whether armv7a-linux-androideabi16-clang++ also accepts flags for thread support... no checking for ANSI C header files... (cached) yes checking asm/types.h usability... yes checking asm/types.h presence... yes checking for asm/types.h... yes checking crypt.h usability... no checking crypt.h presence... no checking for crypt.h... no checking conio.h usability... no checking conio.h presence... no checking for conio.h... no checking direct.h usability... no checking direct.h presence... no checking for direct.h... no checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking errno.h usability... yes checking errno.h presence... yes checking for errno.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking grp.h usability... yes checking grp.h presence... yes checking for grp.h... yes checking ieeefp.h usability... no checking ieeefp.h presence... no checking for ieeefp.h... no checking io.h usability... no checking io.h presence... no checking for io.h... no checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking libintl.h usability... no checking libintl.h presence... no checking for libintl.h... no checking process.h usability... no checking process.h presence... no checking for process.h... no checking pthread.h usability... yes checking pthread.h presence... yes checking for pthread.h... yes checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking shadow.h usability... no checking shadow.h presence... no checking for shadow.h... no checking signal.h usability... yes checking signal.h presence... yes checking for signal.h... yes checking stropts.h usability... no checking stropts.h presence... no checking for stropts.h... no checking termios.h usability... yes checking termios.h presence... yes checking for termios.h... yes checking for unistd.h... (cached) yes checking utime.h usability... yes checking utime.h presence... yes checking for utime.h... yes checking poll.h usability... yes checking poll.h presence... yes checking for poll.h... yes checking sys/devpoll.h usability... no checking sys/devpoll.h presence... 
no checking for sys/devpoll.h... no checking sys/epoll.h usability... yes checking sys/epoll.h presence... yes checking for sys/epoll.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/audioio.h usability... no checking sys/audioio.h presence... no checking for sys/audioio.h... no checking sys/xattr.h usability... yes checking sys/xattr.h presence... yes checking for sys/xattr.h... yes checking sys/bsdtty.h usability... no checking sys/bsdtty.h presence... no checking for sys/bsdtty.h... no checking sys/event.h usability... no checking sys/event.h presence... no checking for sys/event.h... no checking sys/file.h usability... yes checking sys/file.h presence... yes checking for sys/file.h... yes checking sys/ioctl.h usability... yes checking sys/ioctl.h presence... yes checking for sys/ioctl.h... yes checking sys/kern_control.h usability... no checking sys/kern_control.h presence... no checking for sys/kern_control.h... no checking sys/loadavg.h usability... no checking sys/loadavg.h presence... no checking for sys/loadavg.h... no checking sys/lock.h usability... no checking sys/lock.h presence... no checking for sys/lock.h... no checking sys/mkdev.h usability... no checking sys/mkdev.h presence... no checking for sys/mkdev.h... no checking sys/modem.h usability... no checking sys/modem.h presence... no checking for sys/modem.h... no checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/random.h usability... yes checking sys/random.h presence... yes checking for sys/random.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking sys/sendfile.h usability... yes checking sys/sendfile.h presence... yes checking for sys/sendfile.h... yes checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking sys/statvfs.h usability... yes checking sys/statvfs.h presence... yes checking for sys/statvfs.h... yes checking for sys/stat.h... (cached) yes checking sys/syscall.h usability... yes checking sys/syscall.h presence... yes checking for sys/syscall.h... yes checking sys/sys_domain.h usability... no checking sys/sys_domain.h presence... no checking for sys/sys_domain.h... no checking sys/termio.h usability... no checking sys/termio.h presence... no checking for sys/termio.h... no checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking for sys/types.h... (cached) yes checking sys/uio.h usability... yes checking sys/uio.h presence... yes checking for sys/uio.h... yes checking sys/un.h usability... yes checking sys/un.h presence... yes checking for sys/un.h... yes checking sys/utsname.h usability... yes checking sys/utsname.h presence... yes checking for sys/utsname.h... yes checking sys/wait.h usability... yes checking sys/wait.h presence... yes checking for sys/wait.h... yes checking pty.h usability... yes checking pty.h presence... yes checking for pty.h... yes checking libutil.h usability... no checking libutil.h presence... no checking for libutil.h... no checking sys/resource.h usability... yes checking sys/resource.h presence... yes checking for sys/resource.h... yes checking netpacket/packet.h usability... yes checking netpacket/packet.h presence... 
yes checking for netpacket/packet.h... yes checking sysexits.h usability... yes checking sysexits.h presence... yes checking for sysexits.h... yes checking bluetooth.h usability... no checking bluetooth.h presence... no checking for bluetooth.h... no checking linux/tipc.h usability... yes checking linux/tipc.h presence... yes checking for linux/tipc.h... yes checking linux/random.h usability... yes checking linux/random.h presence... yes checking for linux/random.h... yes checking spawn.h usability... yes checking spawn.h presence... yes checking for spawn.h... yes checking util.h usability... no checking util.h presence... no checking for util.h... no checking alloca.h usability... yes checking alloca.h presence... yes checking for alloca.h... yes checking endian.h usability... yes checking endian.h presence... yes checking for endian.h... yes checking sys/endian.h usability... yes checking sys/endian.h presence... yes checking for sys/endian.h... yes checking sys/sysmacros.h usability... yes checking sys/sysmacros.h presence... yes checking for sys/sysmacros.h... yes checking for dirent.h that defines DIR... yes checking for library containing opendir... no checking whether sys/types.h defines makedev... no checking for sys/mkdev.h... (cached) no checking for sys/sysmacros.h... (cached) yes checking bluetooth/bluetooth.h usability... no checking bluetooth/bluetooth.h presence... no checking for bluetooth/bluetooth.h... no checking for net/if.h... yes checking for linux/netlink.h... yes checking for linux/vm_sockets.h... yes checking for linux/can.h... yes checking for linux/can/raw.h... yes checking for linux/can/bcm.h... yes checking for clock_t in time.h... yes checking for makedev... no checking for le64toh... no checking for mode_t... yes checking for off_t... yes checking for pid_t... yes checking for size_t... yes checking for uid_t in sys/types.h... yes checking for ssize_t... yes checking for __uint128_t... no checking size of int... 4 checking size of long... 4 checking size of long long... 8 checking size of void *... 4 checking size of short... 2 checking size of float... 4 checking size of double... 8 checking size of fpos_t... 8 checking size of size_t... 4 checking size of pid_t... 4 checking size of uintptr_t... 4 checking for long double support... yes checking size of long double... 8 checking size of _Bool... 1 checking size of off_t... 8 checking whether to enable large file support... yes checking size of time_t... 4 checking for pthread_t... yes checking size of pthread_t... 4 checking size of pthread_key_t... 4 checking whether pthread_key_t is compatible with int... yes checking for --enable-framework... no checking for dyld... no checking the extension of shared libraries... .so checking LDSHARED... $(CC) -shared checking CCSHARED... checking LINKFORSHARED... -pie -Xlinker -export-dynamic checking CFLAGSFORSHARED... $(CCSHARED) checking SHLIBS... $(LIBS) checking for sendfile in -lsendfile... no checking for dlopen in -ldl... no checking for shl_load in -ldld... no checking uuid/uuid.h usability... no checking uuid/uuid.h presence... no checking for uuid/uuid.h... no checking uuid.h usability... no checking uuid.h presence... no checking for uuid.h... no checking for uuid_generate_time_safe... no checking for uuid_create... no checking for uuid_enc_be... no checking for library containing sem_init... no checking for textdomain in -lintl... no checking aligned memory access is required... yes checking for --with-hash-algorithm... 
default checking for --with-address-sanitizer... no checking for --with-memory-sanitizer... no checking for --with-undefined-behavior-sanitizer... no checking for t_open in -lnsl... no checking for socket in -lsocket... no checking for --with-libs... no checking for armv7a-linux-androideabi-pkg-config... no checking for pkg-config... /usr/bin/pkg-config configure: WARNING: using cross tools not prefixed with host triplet checking pkg-config is at least version 0.9.0... yes checking for --with-system-expat... yes checking for --with-system-ffi... yes configure: WARNING: --with(out)-system-ffi is ignored on this platform checking for --with-system-libmpdec... no checking for --enable-loadable-sqlite-extensions... no checking for --with-tcltk-includes... default checking for --with-tcltk-libs... default checking for --with-dbmliborder... checking for _POSIX_THREADS in unistd.h... yes checking for pthread_create in -lpthread... checking for pthread_detach... no checking for pthread_create in -lpthreads... no checking for pthread_create in -lc_r... no checking for __pthread_create_system in -lpthread... no checking for pthread_create in -lcma... no configure: error: could not find pthreads on your system ---------- messages: 336779 nosy: muhzi, xdegaye priority: normal severity: normal status: open title: configure: error: could not find pthreads on your system during cross compilation versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 20:42:19 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 28 Feb 2019 01:42:19 +0000 Subject: [New-bugs-announce] [issue36142] Add a new _PyPreConfig step to Python initialization to setup memory allocator and encodings Message-ID: <1551318139.47.0.0428953967493.issue36142@roundup.psfhosted.org> New submission from STINNER Victor : I added a _PyCoreConfig structure to Python 3.7 which contains almost all parameters used to configure Python. Problems: _PyCoreConfig uses bytes and Unicode strings (char* and wchar_t*) whereas it is also used to setup the memory allocator and (filesystem, locale and stdio) encodings. I propose to add a new _PyPreConfig which is the "strict minimum" configuration to setup encodings and the memory allocator. In practice, it also contains parameters which directly or indirectly impacts the allocator and encodings. For example, isolated impacts use_environment which impacts the allocator (PYTHONMALLOC environment variable). Another example: dev_mode=1 sets the allocator to "debug". The command line arguments are now parsed twice. _PyPreConfig only parses a few parameters like -E, -I and -X. A temporary _PyPreCmdline is used to store command line arguments like -X options. I moved structures closer to where they are used. "Global" _PyMain structure has been removed. _PyCmdline now lives way shorter than previously and is moved from main.c to coreconfig.c. The idea is to better control when and how memory is allocated. In term of API, we get something like: _PyCoreConfig config = _PyCoreConfig_INIT; config.preconfig.stdio_encoding = "iso8859-1"; config.preconfig.stdio_errors = "replace"; config.user_site_directory = 0; ... _PyInitError err = _Py_InitializeFromConfig(&config); if (_Py_INIT_FAILED(err)) { _Py_ExitInitError(err); } ... Py_Finalize(); return 0; "config.preconfig.stdio_errors" syntax isn't great, but it's simpler to implement than duplicating all _PyPreConfig fields into _PyCoreConfig. 
---------- components: Interpreter Core messages: 336791 nosy: vstinner priority: normal severity: normal status: open title: Add a new _PyPreConfig step to Python initialization to setup memory allocator and encodings versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 23:11:10 2019 From: report at bugs.python.org (Guido van Rossum) Date: Thu, 28 Feb 2019 04:11:10 +0000 Subject: [New-bugs-announce] [issue36143] AUto-generate Lib/keyword.py Message-ID: <1551327070.18.0.354102770805.issue36143@roundup.psfhosted.org> New submission from Guido van Rossum : The stdib keyword.py module must be regenerated after adding/removing keywords from the grammar. While this is rare, we now generate everything else derived from the grammar. Hopefully someone can add some rules to the Makefile to auto-generate this one too when regen-grammar is run. This is probably an easy project for a beginning contributor. ---------- components: Build messages: 336797 nosy: gvanrossum priority: normal severity: normal status: open title: AUto-generate Lib/keyword.py versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Feb 27 23:18:58 2019 From: report at bugs.python.org (Brandt Bucher) Date: Thu, 28 Feb 2019 04:18:58 +0000 Subject: [New-bugs-announce] [issue36144] Dictionary addition. Message-ID: <1551327538.36.0.964853059958.issue36144@roundup.psfhosted.org> New submission from Brandt Bucher : ...as discussed in python-ideas. Semantically: d1 + d2 <-> d3 = d1.copy(); d3.update(d2); d3 d1 += d2 <-> d1.update(d2) Attached is a working implementation with new/fixed tests for consideration. I've also updated collections.UserDict with the new __add__/__radd__/__iadd__ methods. ---------- components: Interpreter Core messages: 336798 nosy: brandtbucher priority: normal severity: normal status: open title: Dictionary addition. type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 06:51:25 2019 From: report at bugs.python.org (muhzi) Date: Thu, 28 Feb 2019 11:51:25 +0000 Subject: [New-bugs-announce] [issue36145] android arm cross compilation fails, h Message-ID: <1551354685.49.0.372861646836.issue36145@roundup.psfhosted.org> New submission from muhzi : This is a follow up of #36141, I'm trying to build python for android armv7a. The problem was the configure script fails to find pthread_create in the android headers. Now after getting past the configuration, I get a build error: armv7a-linux-androideabi16-clang -c -mfloat-abi=softfp -mfpu=vfpv3-d16 -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -I. -I./Include -DPy_BUILD_CORE -o Python/pytime.o Python/pytime.c Python/pytime.c:911:9: error: implicit declaration of function 'pytime_fromtimespec' is invalid in C99 [-Werror,-Wimplicit-function-declaration] if (pytime_fromtimespec(tp, &ts, raise) < 0) { ^ Python/pytime.c:911:9: note: did you mean 'pytime_fromtimeval'? 
Python/pytime.c:336:1: note: 'pytime_fromtimeval' declared here pytime_fromtimeval(_PyTime_t *tp, struct timeval *tv, int raise) ^ Python/pytime.c:911:9: warning: this function declaration is not a prototype [-Wstrict-prototypes] if (pytime_fromtimespec(tp, &ts, raise) < 0) { ^ The declaration for pytime_fromtimespec needs the token HAVE_CLOCK_GETTIME to be defined to 1, which isn't the case (as per pyconfig.h). I checked the android headers and time.h seems to have the necessary function for this token. I uploaded pyconfig.h for reference (attached). ---------- components: Cross-Build files: pyconfig.h messages: 336828 nosy: Alex.Willmer, muhzi, xdegaye priority: normal severity: normal status: open title: android arm cross compilation fails, h type: compile error versions: Python 3.7 Added file: https://bugs.python.org/file48178/pyconfig.h _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 10:37:34 2019 From: report at bugs.python.org (STINNER Victor) Date: Thu, 28 Feb 2019 15:37:34 +0000 Subject: [New-bugs-announce] [issue36146] Refactor setup.py Message-ID: <1551368254.14.0.622163548129.issue36146@roundup.psfhosted.org> New submission from STINNER Victor : The detect_modules() method of setup.py became longer and longer over the years. It is now 1128 lines long. It's way too long: it becomes very hard to track the lifetime of a variable and many variables are overriden on purpose or not. Shorter functions help to track the lifetime of variables, ease review and reduce the number of bugs. ---------- components: Build messages: 336841 nosy: vstinner priority: normal severity: normal status: open title: Refactor setup.py versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 13:03:20 2019 From: report at bugs.python.org (Charalampos Stratakis) Date: Thu, 28 Feb 2019 18:03:20 +0000 Subject: [New-bugs-announce] [issue36147] [2.7] Coverity scan: Modules/_ctypes/cfield.c , Variable "result" going out of scope Message-ID: <1551377000.6.0.726806174953.issue36147@roundup.psfhosted.org> New submission from Charalampos Stratakis : Coverity scan on python2 resulted in this error. Python-2.7.15/Modules/_ctypes/cfield.c:1297: alloc_fn: Storage is returned from allocation function "PyString_FromString". Python-2.7.15/Objects/stringobject.c:143:5: alloc_fn: Storage is returned from allocation function "PyObject_Malloc". Python-2.7.15/Objects/obmalloc.c:982:5: alloc_fn: Storage is returned from allocation function "malloc". Python-2.7.15/Objects/obmalloc.c:982:5: return_alloc_fn: Directly returning storage allocated by "malloc". Python-2.7.15/Objects/stringobject.c:143:5: var_assign: Assigning: "op" = "PyObject_Malloc(37UL + size)". Python-2.7.15/Objects/stringobject.c:164:5: return_alloc: Returning allocated memory "op". Python-2.7.15/Modules/_ctypes/cfield.c:1297: var_assign: Assigning: "result" = storage returned from "PyString_FromString((char *)ptr)". Python-2.7.15/Modules/_ctypes/cfield.c:1311: leaked_storage: Variable "result" going out of scope leaks the storage it points to. 1309| } else 1310| /* cannot shorten the result */ 1311|-> return PyString_FromStringAndSize(ptr, size); 1312| } 1313| This was fixed on python3 with https://github.com/python/cpython/commit/19b52545df898ec911c44e29f75badb902924c0b Partially backporting this change for this file should fix the issue. 
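For context on the issue36144 proposal above, the semantics it describes can already be spelled out by hand; the snippet below shows only the existing equivalent idioms, not the proposed operator itself:

    >>> d1 = {"a": 1, "b": 2}
    >>> d2 = {"b": 20, "c": 30}
    >>> d3 = d1.copy(); d3.update(d2)    # what d1 + d2 is proposed to mean
    >>> d3
    {'a': 1, 'b': 20, 'c': 30}
    >>> {**d1, **d2}                     # an equivalent spelling that already exists
    {'a': 1, 'b': 20, 'c': 30}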
---------- components: Extension Modules messages: 336859 nosy: cstratak, vstinner priority: normal severity: normal status: open title: [2.7] Coverity scan: Modules/_ctypes/cfield.c , Variable "result" going out of scope versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 13:35:43 2019 From: report at bugs.python.org (sls) Date: Thu, 28 Feb 2019 18:35:43 +0000 Subject: [New-bugs-announce] [issue36148] smtplib.SMTP.sendmail: mta status codes only accessible by local variables Message-ID: <1551378943.89.0.92811496764.issue36148@roundup.psfhosted.org> New submission from sls : MTA status codes (visible via setdebuglevel(1)) are not accessible, as the `sendmail` method stores them only locally (code, resp). I suggest storing the MTA status codes, for instance "250, b'2.0.0 Ok: queued as XYZ" etc., in an instance attribute (tuple) so they can be accessed on SMTP sessions. As an email developer, this information is very important to me. ---------- components: Library (Lib) messages: 336864 nosy: sls priority: normal severity: normal status: open title: smtplib.SMTP.sendmail: mta status codes only accessible by local variables type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 13:45:18 2019 From: report at bugs.python.org (Thomas Wouters) Date: Thu, 28 Feb 2019 18:45:18 +0000 Subject: [New-bugs-announce] [issue36149] use of uninitialised memory in cPickle. Message-ID: <1551379518.13.0.26926379608.issue36149@roundup.psfhosted.org> New submission from Thomas Wouters : There is a bug in cPickle that makes it use uninitialised memory when reading a truncated pickle from a file (an actual C FILE*, not just any file-like object). Using MemorySanitizer:

% ./python
Python 2.7.15 (default, redacted, redacted)
[GCC 4.2.1 Compatible Clang (redacted)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tempfile, cPickle
>>> f = tempfile.TemporaryFile()
>>> f.write('F')
>>> f.seek(0)
>>> cPickle.load(f)
Uninitialized bytes in __interceptor_strlen at offset 1 inside [0x701000001b50, 3)
==23453==WARNING: MemorySanitizer: use-of-uninitialized-value
    #0 0x561a9110d51b in PyString_FromFormatV Objects/stringobject.c:241:22
    #1 0x561a912ba454 in PyErr_Format Python/errors.c:567:14
    #2 0x561a91303bc8 in PyOS_string_to_double Python/pystrtod.c
    #3 0x561a90b4a7d7 in load_float Modules/cPickle.c:3618:9
    #4 0x561a90b48ca7 in load Modules/cPickle.c:4779:17
    #5 0x561a90b56d9c in cpm_load Modules/cPickle.c:5758:11
    #6 0x561a91260b89 in call_function Python/ceval.c:4379:17

The problem is Modules/cPickle:readline_file end-of-file handling logic:

    for (; i < (self->buf_size - 1); i++) {
        if (feof(self->fp) ||
            (self->buf[i] = getc(self->fp)) == '\n') {
            self->buf[i + 1] = '\0';
            *s = self->buf;
            return i + 1;
        }
    }

When feof(self->fp) becomes true, the code skips over writing to self->buf[i] (which hasn't been written to yet), only writes to self->buf[i+1], and returns self->buf. There is an additional problem that the code fails to check for the EOF return of getc(), which means it may write EOF-cast-to-a-char to self->buf[i] without realising it's meant to be an EOF. (EOF is usually -1, but I don't remember if that's a valid cast or undefined behaviour.) Theoretically this could cause invalid, truncated pickles to be read successfully, but with the wrong value for the last item.
(It could perhaps do even worse things with files that change while cPickle is reading them, but that's harder to reason about.) (Use of uninitialised memory is a potential security issue, although so is using pickles, so I'm conflicted about the bug type...) ---------- components: Interpreter Core messages: 336865 nosy: twouters priority: normal severity: normal status: open title: use of uninitialised memory in cPickle. type: security versions: Python 2.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 14:13:41 2019 From: report at bugs.python.org (Zackery Spytz) Date: Thu, 28 Feb 2019 19:13:41 +0000 Subject: [New-bugs-announce] [issue36150] Possible assertion failures due to _ctypes.c's PyCData_reduce() Message-ID: <1551381221.9.0.581708361836.issue36150@roundup.psfhosted.org> New submission from Zackery Spytz : The PyBytes_FromStringAndSize() and PyObject_GetAttrString() calls in PyCData_reduce() are not checked for failure. ---------- components: Extension Modules, ctypes messages: 336866 nosy: ZackerySpytz priority: normal severity: normal status: open title: Possible assertion failures due to _ctypes.c's PyCData_reduce() type: crash versions: Python 2.7, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 16:07:43 2019 From: report at bugs.python.org (Aiden Zhou) Date: Thu, 28 Feb 2019 21:07:43 +0000 Subject: [New-bugs-announce] [issue36151] Incorrect answer when calculating 11//3 Message-ID: <1551388063.52.0.300657450058.issue36151@roundup.psfhosted.org> New submission from Aiden Zhou : When calculating 11//3, the last digit of the decimals is wrongly rounded to 5 as obviously it needs to be rounded to 7. IDLE (Python 3.7 32-bit) ---------- assignee: terry.reedy components: IDLE files: Screenshot (102)_LI.jpg messages: 336868 nosy: azihdoeun, terry.reedy priority: normal severity: normal status: open title: Incorrect answer when calculating 11//3 type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file48179/Screenshot (102)_LI.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 18:51:49 2019 From: report at bugs.python.org (Cheryl Sabella) Date: Thu, 28 Feb 2019 23:51:49 +0000 Subject: [New-bugs-announce] [issue36152] IDLE: Remove close_when_done from colorizer close() Message-ID: <1551397909.36.0.802357976023.issue36152@roundup.psfhosted.org> New submission from Cheryl Sabella : Remove the unused `close_when_done` parameter from `close()` in `colorizer.ColorDelegator()`. * The second parameter to close() is called `close_when_done` and it is expected to contain a toplevel widget that has a destroy() method. * Originally, the editor window had code that would send self.top (if colorizing was in process) as the value for this parameter: doh = colorizing and self.top self.color.close(doh) # Cancel colorization * This was changed via this commit (https://github.com/python/cpython/commit/8ce8a784bd672ba42975dec752848392ff9a7797) in 2007 to instead be: self.color.close(False) self.color = None The value of `False` made it so the destroy code in colorizer wouldn't be run even though `None` or leaving the parameter off would have been more clear. In any case, this `close_when_done` hasn't been used since 2007. 
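Regarding issue36151 above, a plain interpreter session separates the two operators involved: 11 // 3 is floor division and yields an exact integer, while 11 / 3 is true division and yields the nearest representable binary float, whose repr legitimately ends in ...65; only decimal arithmetic rounds the final digit up to 7:

    >>> 11 // 3                      # floor division: exact integer quotient
    3
    >>> 11 / 3                       # true division: nearest binary float
    3.6666666666666665
    >>> from decimal import Decimal
    >>> Decimal(11) / Decimal(3)     # decimal arithmetic rounds the last digit up
    Decimal('3.666666666666666666666666667')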
---------- assignee: cheryl.sabella components: IDLE messages: 336880 nosy: cheryl.sabella, terry.reedy priority: normal severity: normal status: open title: IDLE: Remove close_when_done from colorizer close() type: enhancement versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 19:49:32 2019 From: report at bugs.python.org (Sridhar Iyer) Date: Fri, 01 Mar 2019 00:49:32 +0000 Subject: [New-bugs-announce] [issue36153] Freeze support documentation is misleading. Message-ID: <1551401372.79.0.134624139836.issue36153@roundup.psfhosted.org> New submission from Sridhar Iyer : The documentation on freeze_support listed on https://docs.python.org/3/library/multiprocessing.html need to be fixed This mentions: "Calling freeze_support() has no effect when invoked on any operating system other than Windows. In addition, if the module is being run normally by the Python interpreter on Windows (the program has not been frozen), then freeze_support() has no effect." This is not true. Sklearn/tensorflow libraries tend to cause an infinite loop when frozen with pyinstaller (tested on python 3.6 on ubuntu 14.04). freeze_support is the only way to get around the situation and should be included before including any other module that includes a multiprocessing library (not just in main). ---------- assignee: docs at python components: Documentation messages: 336881 nosy: Sridhar Iyer, docs at python priority: normal severity: normal status: open title: Freeze support documentation is misleading. versions: Python 2.7, Python 3.4, Python 3.5, Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Feb 28 23:42:26 2019 From: report at bugs.python.org (kellena) Date: Fri, 01 Mar 2019 04:42:26 +0000 Subject: [New-bugs-announce] [issue36154] Python quit unexpectedly error Message-ID: <1551415346.96.0.363606290461.issue36154@roundup.psfhosted.org> New submission from kellena : I'm on a Mac running Mojave, version 10.14.3. I installed Python 3.7. I'm now getting a "Python quit unexpectedly" error consistently and I don't know why or if I did something to cause it. I've tried uninstalling and reinstalling but I'm still getting the error. I've checked online resources such as Stack Overflow and "reinstall" is pretty much the standard suggestion, which I've already done. Here's the full error. 
Please tell me how to fix this: > Process: Python [7756] > Path: /Library/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python > Identifier: Python > Version: 3.7.2 (3.7.2) > Code Type: X86-64 (Native) > Parent Process: Code Helper [785] > Responsible: Python [7756] > User ID: 501 > > Date/Time: 2019-02-27 16:27:29.136 -0600 > OS Version: Mac OS X 10.14.3 (18D109) > Report Version: 12 > Bridge OS Version: 3.3 (16P3133) > Anonymous UUID: DB5CA6AD-7D99-F9D4-0475-D401946AB77E > > Sleep/Wake UUID: D7171268-D6D9-464C-B881-A97EE3C3AB2D > > Time Awake Since Boot: 17000 seconds > Time Since Wake: 420 seconds > > System Integrity Protection: enabled > > Crashed Thread: 0 Dispatch queue: com.apple.main-thread > > Exception Type: EXC_CRASH (SIGABRT) > Exception Codes: 0x0000000000000000, 0x0000000000000000 > Exception Note: EXC_CORPSE_NOTIFY > > Application Specific Information: > dyld2 mode > abort() called > > Thread 0 Crashed:: Dispatch queue: com.apple.main-thread > 0 libsystem_kernel.dylib 0x00007fff6b87723e __pthread_kill + 10 > 1 libsystem_pthread.dylib 0x00007fff6b92dc1c pthread_kill + 285 > 2 libsystem_c.dylib 0x00007fff6b7e01c9 abort + 127 > 3 org.python.python 0x00000001049ff420 fatal_error + 592 > 4 org.python.python 0x00000001049fe784 _Py_FatalInitError + 36 > 5 org.python.python 0x0000000104a27031 pymain_main + 7921 > 6 org.python.python 0x0000000104a2713a _Py_UnixMain + 58 > 7 libdyld.dylib 0x00007fff6b737ed9 start + 1 > > Thread 0 crashed with X86 Thread State (64-bit): > rax: 0x0000000000000000 rbx: 0x000000010b3955c0 rcx: 0x00007ffeeb313e88 rdx: 0x0000000000000000 > rdi: 0x0000000000000307 rsi: 0x0000000000000006 rbp: 0x00007ffeeb313ec0 rsp: 0x00007ffeeb313e88 > r8: 0x0000000000000000 r9: 0x00000000b7d055e4 r10: 0x0000000000000000 r11: 0x0000000000000206 > r12: 0x0000000000000307 r13: 0x00007fff9e5139a0 r14: 0x0000000000000006 r15: 0x000000000000002d > rip: 0x00007fff6b87723e rfl: 0x0000000000000206 cr2: 0x00007fff9e511188 > > Logical CPU: 0 > Error Code: 0x02000148 > Trap Number: 133 > > > Binary Images: > 0x1048eb000 - 0x1048ebfff +org.python.python (3.7.2 - 3.7.2) <122E5A60-3D65-3759-AD3F-54658AA10863> /Library/Frameworks/Python.framework/Versions/3.7/Resources/Python.app/Contents/MacOS/Python > 0x1048f3000 - 0x104accff7 +org.python.python (3.7.2, [c] 2001-2018 Python Software Foundation. 
- 3.7.2) <779A7040-54B9-3956-85A6-C3CFE0C5A52A> /Library/Frameworks/Python.framework/Versions/3.7/Python > 0x10b2df000 - 0x10b35da87 dyld (655.1) <3EBA447F-A546-366B-B302-8DC3B21A3E30> /usr/lib/dyld > 0x7fff3e4af000 - 0x7fff3e8fcfef com.apple.CoreFoundation (6.9 - 1562) <02A2C178-9FF6-385C-A9C5-7F4FC9D66311> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation > 0x7fff687f9000 - 0x7fff687faff7 libDiagnosticMessagesClient.dylib (107) <15210AC0-61F9-3F9D-A159-A009F62EB537> /usr/lib/libDiagnosticMessagesClient.dylib > 0x7fff68bab000 - 0x7fff68bacffb libSystem.B.dylib (1252.200.5) /usr/lib/libSystem.B.dylib > 0x7fff68e05000 - 0x7fff68e5cff7 libc++.1.dylib (400.9.4) /usr/lib/libc++.1.dylib > 0x7fff68e5d000 - 0x7fff68e72fff libc++abi.dylib (400.17) <446F4748-8A89-3D2E-AE1C-27EEBE93A8AB> /usr/lib/libc++abi.dylib > 0x7fff69ac1000 - 0x7fff69d24ffb libicucore.A.dylib (62109.0.1) /usr/lib/libicucore.A.dylib > 0x7fff6a653000 - 0x7fff6add9fe7 libobjc.A.dylib (750.1) <804715F4-F52D-34D0-8FEC-A25DC08513C3> /usr/lib/libobjc.A.dylib > 0x7fff6b536000 - 0x7fff6b548ffb libz.1.dylib (70.200.4) <15F7B40A-424C-33BB-BF2C-7E8195128B78> /usr/lib/libz.1.dylib > 0x7fff6b5b9000 - 0x7fff6b5bdff3 libcache.dylib (81) <704331AC-E43D-343A-8C24-39201142AF27> /usr/lib/system/libcache.dylib > 0x7fff6b5be000 - 0x7fff6b5c8ff3 libcommonCrypto.dylib (60118.220.1) <9C865644-EE9A-3662-AB77-7C8A5E561784> /usr/lib/system/libcommonCrypto.dylib > 0x7fff6b5c9000 - 0x7fff6b5d0fff libcompiler_rt.dylib (63.4) <817772E3-E836-3FFD-A39B-BDCD1C357221> /usr/lib/system/libcompiler_rt.dylib > 0x7fff6b5d1000 - 0x7fff6b5daff3 libcopyfile.dylib (146.200.3) <5C5C4F35-DAB7-3CF1-940F-F47192AB8289> /usr/lib/system/libcopyfile.dylib > 0x7fff6b5db000 - 0x7fff6b65ffdf libcorecrypto.dylib (602.230.1) /usr/lib/system/libcorecrypto.dylib > 0x7fff6b6e6000 - 0x7fff6b720ff7 libdispatch.dylib (1008.220.2) <2FDB1401-5119-3DF0-91F5-F4E105F00CD7> /usr/lib/system/libdispatch.dylib > 0x7fff6b721000 - 0x7fff6b750ff3 libdyld.dylib (655.1) <90C801E7-5D05-37A8-810C-B58E8C53953A> /usr/lib/system/libdyld.dylib > 0x7fff6b751000 - 0x7fff6b751ffb libkeymgr.dylib (30) /usr/lib/system/libkeymgr.dylib > 0x7fff6b75f000 - 0x7fff6b75fff7 liblaunch.dylib (1336.240.2) /usr/lib/system/liblaunch.dylib > 0x7fff6b760000 - 0x7fff6b765fff libmacho.dylib (921) <6ADB99F3-D142-3A0A-B3CE-031354766ACC> /usr/lib/system/libmacho.dylib > 0x7fff6b766000 - 0x7fff6b768ffb libquarantine.dylib (86.220.1) <58524FD7-63C5-38E0-9D90-845A79551C14> /usr/lib/system/libquarantine.dylib > 0x7fff6b769000 - 0x7fff6b76aff3 libremovefile.dylib (45.200.2) /usr/lib/system/libremovefile.dylib > 0x7fff6b76b000 - 0x7fff6b782ff3 libsystem_asl.dylib (356.200.4) <33C62769-1242-3BC1-9459-13CBCDECC7FE> /usr/lib/system/libsystem_asl.dylib > 0x7fff6b783000 - 0x7fff6b783fff libsystem_blocks.dylib (73) <152EDADF-7D94-35F2-89B7-E66DCD945BBA> /usr/lib/system/libsystem_blocks.dylib > 0x7fff6b784000 - 0x7fff6b80cfff libsystem_c.dylib (1272.200.26) /usr/lib/system/libsystem_c.dylib > 0x7fff6b80d000 - 0x7fff6b810ff7 libsystem_configuration.dylib (963.200.27) <94898525-ECC8-3CC9-B312-CBEAAC305E32> /usr/lib/system/libsystem_configuration.dylib > 0x7fff6b811000 - 0x7fff6b814ff7 libsystem_coreservices.dylib (66) <10818C17-70E1-328E-A3E3-C3EB81AEC590> /usr/lib/system/libsystem_coreservices.dylib > 0x7fff6b815000 - 0x7fff6b81bffb libsystem_darwin.dylib (1272.200.26) <07468CF7-982F-37C4-83D0-D5E602A683AA> /usr/lib/system/libsystem_darwin.dylib > 0x7fff6b81c000 - 0x7fff6b822ff7 libsystem_dnssd.dylib (878.240.1) 
<5FEA5E1E-E80F-3616-AD33-8E936D61F31A> /usr/lib/system/libsystem_dnssd.dylib > 0x7fff6b823000 - 0x7fff6b86fff3 libsystem_info.dylib (517.200.9) <54B65F21-2E93-3579-9B72-6637A03245D9> /usr/lib/system/libsystem_info.dylib > 0x7fff6b870000 - 0x7fff6b898ff7 libsystem_kernel.dylib (4903.241.1) /usr/lib/system/libsystem_kernel.dylib > 0x7fff6b899000 - 0x7fff6b8e4ff7 libsystem_m.dylib (3158.200.7) /usr/lib/system/libsystem_m.dylib > 0x7fff6b8e5000 - 0x7fff6b909ff7 libsystem_malloc.dylib (166.220.1) <4777DC06-F9C6-356E-82AB-86A1C6D62F3A> /usr/lib/system/libsystem_malloc.dylib > 0x7fff6b90a000 - 0x7fff6b915ff3 libsystem_networkextension.dylib (767.240.1) <4DB0D4A2-83E7-3638-AAA0-39CECD5C25F8> /usr/lib/system/libsystem_networkextension.dylib > 0x7fff6b916000 - 0x7fff6b91dfff libsystem_notify.dylib (172.200.21) <65B3061D-41D7-3485-B217-A861E05AD50B> /usr/lib/system/libsystem_notify.dylib > 0x7fff6b91e000 - 0x7fff6b927fef libsystem_platform.dylib (177.200.16) <83DED753-51EC-3B8C-A98D-883A5184086B> /usr/lib/system/libsystem_platform.dylib > 0x7fff6b928000 - 0x7fff6b932fff libsystem_pthread.dylib (330.230.1) <80CC5992-823E-327E-BB6E-9D4568B84161> /usr/lib/system/libsystem_pthread.dylib > 0x7fff6b933000 - 0x7fff6b936ff7 libsystem_sandbox.dylib (851.230.3) /usr/lib/system/libsystem_sandbox.dylib > 0x7fff6b937000 - 0x7fff6b939ff3 libsystem_secinit.dylib (30.220.1) <5964B6D2-19D4-3CF9-BDBC-4EB1D42348F1> /usr/lib/system/libsystem_secinit.dylib > 0x7fff6b93a000 - 0x7fff6b941ff7 libsystem_symptoms.dylib (820.237.2) <487E1794-4C6E-3B1B-9C55-95B1A5FF9B90> /usr/lib/system/libsystem_symptoms.dylib > 0x7fff6b942000 - 0x7fff6b957ff7 libsystem_trace.dylib (906.220.1) <4D4BA88A-FA32-379D-8860-33838723B35F> /usr/lib/system/libsystem_trace.dylib > 0x7fff6b959000 - 0x7fff6b95effb libunwind.dylib (35.4) /usr/lib/system/libunwind.dylib > 0x7fff6b95f000 - 0x7fff6b98ffff libxpc.dylib (1336.240.2) /usr/lib/system/libxpc.dylib > > External Modification Summary: > Calls made by other processes targeting this process: > task_for_pid: 0 > thread_create: 0 > thread_set_state: 0 > Calls made by this process: > task_for_pid: 0 > thread_create: 0 > thread_set_state: 0 > Calls made by all processes on this machine: > task_for_pid: 25582 > thread_create: 0 > thread_set_state: 0 > > VM Region Summary: > ReadOnly portion of Libraries: Total=237.2M resident=0K(0%) swapped_out_or_unallocated=237.2M(100%) > Writable regions: Total=35.5M written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=35.5M(100%) > > VIRTUAL REGION > REGION TYPE SIZE COUNT (non-coalesced) > =========== ======= ======= > Kernel Alloc Once 8K 2 > MALLOC 19.2M 9 > MALLOC guard page 16K 4 > STACK GUARD 4K 2 > Stack 16.0M 2 > __DATA 4244K 47 > __LINKEDIT 216.9M 5 > __TEXT 20.4M 45 > __UNICODE 564K 2 > shared memory 8K 3 > =========== ======= ======= > TOTAL 277.1M 111 > > Model: MacBookPro15,1, BootROM 220.240.2.0.0 (iBridge: 16.16.3133.0.0,0), 6 processors, Intel Core i7, 2.2 GHz, 16 GB, SMC > Graphics: Intel UHD Graphics 630, Intel UHD Graphics 630, Built-In > Graphics: Radeon Pro 555X, Radeon Pro 555X, PCIe > Memory Module: BANK 0/ChannelA-DIMM0, 8 GB, DDR4, 2400 MHz, SK Hynix, - > Memory Module: BANK 2/ChannelB-DIMM0, 8 GB, DDR4, 2400 MHz, SK Hynix, - > AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x7BF), wl0: Sep 18 2018 16:24:57 version 9.130.86.7.32.6.21 FWID 01-83a3fe91 > Bluetooth: Version 6.0.10f1, 3 services, 27 devices, 1 incoming serial ports > Network Service: Wi-Fi, AirPort, en0 > USB Device: USB 3.1 Bus > USB Device: iBridge Bus > 
USB Device: iBridge DFR brightness > USB Device: iBridge Display > USB Device: Apple Internal Keyboard / Trackpad > USB Device: Headset > USB Device: iBridge ALS > USB Device: iBridge FaceTime HD Camera (Built-in) > USB Device: iBridge > Thunderbolt Bus: MacBook Pro, Apple Inc., 34.6 > Thunderbolt Bus: MacBook Pro, Apple Inc., 34.6 ---------- messages: 336892 nosy: kellena priority: normal severity: normal status: open title: Python quit unexpectedly error type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________
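The call order described in the freeze_support report (issue36153) above would look roughly like the following. This is a hedged sketch of the reporter's suggested pattern, not proposed documentation wording: work() is a placeholder, and the commented-out imports stand in for sklearn/tensorflow-style libraries that start multiprocessing machinery at import time.

    # Sketch of the call order described in issue36153: freeze_support() runs
    # before importing heavyweight libraries, not only inside the main block.
    import multiprocessing

    # Per the documentation this is a no-op unless the program is frozen on
    # Windows, which is exactly the claim the report disputes.
    multiprocessing.freeze_support()

    # Placeholder imports standing in for sklearn / tensorflow style libraries:
    # import sklearn
    # import tensorflow

    def work(x):
        return x * x

    if __name__ == "__main__":
        with multiprocessing.Pool(2) as pool:
            print(pool.map(work, range(5)))

Whether this ordering is what the documentation should recommend is the open question in the report; the sketch only illustrates what "called before importing any other module that uses multiprocessing" means in practice.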