From report at bugs.python.org Wed Jul 1 02:58:32 2020
From: report at bugs.python.org (Yunfan Zhan)
Date: Wed, 01 Jul 2020 06:58:32 +0000
Subject: [New-bugs-announce] [issue41180] marshal load bypasses code.__new__ audit event
Message-ID: <1593586712.19.0.6550600191.issue41180@roundup.psfhosted.org>

New submission from Yunfan Zhan :

While `code.__new__` is being audited, using `marshal.loads` to create a code object will trigger no events. Therefore, either `marshal.load(s)` itself should raise an audit event, or `code.__new__` should be triggered when the marshal type is TYPE_CODE. Considering that importing from a pyc file also relies on unmarshalling code objects, and that such imports are already audited as `import`, I'm also wondering whether auditing twice should be avoided for performance.

----------
messages: 372733
nosy: steve.dower, tkmk
priority: normal
severity: normal
status: open
title: marshal load bypasses code.__new__ audit event
type: security
versions: Python 3.10, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jul 1 06:32:21 2020
From: report at bugs.python.org (STINNER Victor)
Date: Wed, 01 Jul 2020 10:32:21 +0000
Subject: [New-bugs-announce] [issue41181] [macOS] Build macOS installer with LTO and PGO optimizations
Message-ID: <1593599541.73.0.325479735185.issue41181@roundup.psfhosted.org>

New submission from STINNER Victor :

Link Time Optimization (LTO) and Profile-Guided Optimization (PGO) have a major impact on Python performance: they make Python between 10% and 30% faster (a coarse estimate). Currently, macOS installers distributed on python.org are built with Clang 6.0 without LTO or PGO. I propose to enable LTO and PGO to make these binaries faster. IMO we should build all new Python macOS installers with these optimizations. The attached PR adds the flags.

Python 3.9.0b3 binary:

$ python3.9
Python 3.9.0b3 (v3.9.0b3:b484871ba7, Jun 9 2020, 16:05:25)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
configure options:

>>> import sysconfig; print(sysconfig.get_config_var('CONFIG_ARGS'))
'-C' '--enable-framework' '--enable-universalsdk=/' '--with-universal-archs=intel-64' '--with-computed-gotos' '--without-ensurepip' '--with-tcltk-includes=-I/tmp/_py/libraries/usr/local/include' '--with-tcltk-libs=-ltcl8.6 -ltk8.6' 'LDFLAGS=-g' 'CFLAGS=-g' 'CC=gcc'

Compiler flags:

>>> sysconfig.get_config_var('PY_CFLAGS') + sysconfig.get_config_var('PY_CFLAGS_NODIST')
'-Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch x86_64 -g-std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I/Users/sysadmin/build/v3.9.0b3/Include/internal'

Linker flags:

>>> sysconfig.get_config_var('PY_LDFLAGS') + sysconfig.get_config_var('PY_LDFLAGS_NODIST')
'-arch x86_64 -g'

----------
components: Build
messages: 372743
nosy: vstinner
priority: normal
severity: normal
status: open
title: [macOS] Build macOS installer with LTO and PGO optimizations
type: performance
versions: Python 3.10, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jul 1 07:08:19 2020
From: report at bugs.python.org (Abhijeet Kasurde)
Date: Wed, 01 Jul 2020 11:08:19 +0000
Subject: [New-bugs-announce] [issue41182] DefaultSelector fails to detect selector on VMware ESXi
Message-ID: <1593601699.76.0.361945783635.issue41182@roundup.psfhosted.org>

New submission from Abhijeet Kasurde :

When DefaultSelector is used on VMware ESXi, it fails with

>>> import selectors
>>> selector = selectors.DefaultSelector()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/build/mts/release/bora-4887370/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/selectors.py", line 390, in __init__
OSError: [Errno 38] Function not implemented

After debugging, I found that it is using a Selector which is not implemented for the ESXi kernel. Change the DefaultSelector mechanism to fall back to the 'select' implementation when choosing the default selector.

----------
components: Library (Lib)
messages: 372746
nosy: akasurde
priority: normal
severity: normal
status: open
title: DefaultSelector fails to detect selector on VMware ESXi
versions: Python 3.5

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jul 1 10:35:46 2020
From: report at bugs.python.org (Larry Hastings)
Date: Wed, 01 Jul 2020 14:35:46 +0000
Subject: [New-bugs-announce] [issue41183] Workaround or fix for SSL "EE_KEY_TOO_SMALL" test failures
Message-ID: <1593614146.46.0.384684767233.issue41183@roundup.psfhosted.org>

New submission from Larry Hastings :

I'm testing 3.5.10rc1 on a freshly installed Linux (Pop!_OS 20.04), and I'm getting a lot of these test failures:

ssl.SSLError: [SSL: EE_KEY_TOO_SMALL] ee key too small (_ssl.c:2951)

Apparently the 2048-bit keys used in the tests are considered "too small" by brand-new builds of the SSL library. Christian: you upgraded the test suite keys to 3072 bits back in 2018 (issue #34542), but didn't backport this as far as 3.5 because it was in security-fixes-only mode. I experimented with taking your patch to 3.6 and applying it to 3.5, but 80% of the patches didn't apply cleanly. Could you either backport this upgrade to 3.5 (I'll happily accept the PR), or advise me on how to otherwise mitigate the problem?
I don't really want to turn off all those tests. Thanks!

----------
assignee: christian.heimes
components: Tests
messages: 372755
nosy: christian.heimes, larry
priority: high
severity: normal
stage: needs patch
status: open
title: Workaround or fix for SSL "EE_KEY_TOO_SMALL" test failures
type: crash
versions: Python 3.5

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jul 1 11:24:15 2020
From: report at bugs.python.org (Anthropologist)
Date: Wed, 01 Jul 2020 15:24:15 +0000
Subject: [New-bugs-announce] [issue41184] Reconciling IDLE's Comment Out Region / Uncomment Region with PEP 8 guidelines for commenting
Message-ID: <1593617055.59.0.538237347639.issue41184@roundup.psfhosted.org>

New submission from Anthropologist :

IDLE's Comment Out Region formatting tool currently adds two octothorpe characters followed by no spaces. The Uncomment Region tool removes up to 2 octothorpes but does not remove spaces. The Python Style Guide (PEP 8) specifies that: "An inline comment is a comment on the same line as a statement. Inline comments should be separated by at least two spaces from the statement. They should start with a # and a single space."

I propose reconciling these conflicting approaches to commenting out code, either by changing IDLE's behavior (i.e., instead of ##, Comment Out Region should insert # followed by a space, per PEP 8), or by amending the Python Style Guide to provide clear instructions about commenting out code, which is arguably a different use of comments than a single-line comment for the sake of explaining code. If the resolution involves changing IDLE's behavior, the Uncomment Region feature should also be changed to remove the space character after the octothorpe. As it currently stands, this feature fails to uncomment code that has been manually commented out following the guidelines in PEP 8.

----------
assignee: docs at python
components: Documentation, IDLE
messages: 372757
nosy: anthropologist, docs at python, terry.reedy
priority: normal
severity: normal
status: open
title: Reconciling IDLE's Comment Out Region / Uncomment Region with PEP 8 guidelines for commenting
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jul 1 12:05:44 2020
From: report at bugs.python.org (Marco Barisione)
Date: Wed, 01 Jul 2020 16:05:44 +0000
Subject: [New-bugs-announce] [issue41185] lib2to3 generation of pickle files is racy
Message-ID: <1593619544.94.0.691494373061.issue41185@roundup.psfhosted.org>

New submission from Marco Barisione :

The generation of pickle files in load_grammar in lib2to3/pgen2/driver.py is racy, as other processes may end up reading a half-written pickle file. This is reproducible with the command line tool, but it's easier to reproduce by importing lib2to3. You just need different processes importing lib2to3 at the same time to make this happen; see the attached reproducer.

I tried with Python 3.9 for completeness and, while it happens there as well, it seems to be less frequent on my computer than when using Python 3.6 (2% failure rate instead of 50% failure rate).
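For reference, a minimal sketch of the usual write-to-a-temporary-file-then-rename pattern that avoids the half-written-pickle problem described above; this is not the actual lib2to3 code, and the helper name is illustrative:

```python
import os
import pickle
import tempfile

def dump_pickle_atomically(obj, path):
    """Write obj to path so that concurrent readers never see a partial file."""
    # Write into a temporary file in the same directory, then rename it into
    # place; os.replace() swaps the file in atomically on POSIX and Windows.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(obj, f)
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)
        raise
```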
----------
components: 2to3 (2.x to 3.x conversion tool)
files: pool.py
messages: 372760
nosy: barisione
priority: normal
severity: normal
status: open
title: lib2to3 generation of pickle files is racy
versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9
Added file: https://bugs.python.org/file49284/pool.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jul 1 12:37:10 2020
From: report at bugs.python.org (Bar Harel)
Date: Wed, 01 Jul 2020 16:37:10 +0000
Subject: [New-bugs-announce] [issue41186] distutils.version epoch compatibility
Message-ID: <1593621430.02.0.104429243352.issue41186@roundup.psfhosted.org>

New submission from Bar Harel :

Is distutils.version aware of the PEP 440 epoch version modifier? I haven't seen any reference to this. AFAIK pypa packaging does take it into consideration, but should the stdlib also care about it? I would like to believe pip takes it into consideration, but I'm not sure whether it uses distutils, or whether setuptools has its own versioning scheme together with pkg_resources.

----------
components: Distutils
messages: 372765
nosy: bar.harel, dstufft, eric.araujo
priority: normal
severity: normal
status: open
title: distutils.version epoch compatibility
type: behavior
versions: Python 3.10, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jul 1 12:49:25 2020
From: report at bugs.python.org (Serhiy Storchaka)
Date: Wed, 01 Jul 2020 16:49:25 +0000
Subject: [New-bugs-announce] [issue41187] Convert the _msi module to Argument Clinic
Message-ID: <1593622165.78.0.318018387297.issue41187@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

The proposed PR converts the _msi module to Argument Clinic.

* Fixes deprecation warnings with the "u" format.
* Adds signatures.
* Adds meaningful docstrings.

----------
components: Argument Clinic, Windows
messages: 372769
nosy: larry, paul.moore, serhiy.storchaka, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: Convert the _msi module to Argument Clinic
type: enhancement
versions: Python 3.10

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Jul 1 13:43:02 2020
From: report at bugs.python.org (William Pickard)
Date: Wed, 01 Jul 2020 17:43:02 +0000
Subject: [New-bugs-announce] [issue41188] Prepare CPython for opaque PyObject structure.
Message-ID: <1593625382.35.0.32807501167.issue41188@roundup.psfhosted.org>

New submission from William Pickard :

The goal of issue 39573 is to make "PyObject" an opaque structure in the limited API. To do that, a few mandatory changes will be required to CPython in order to allow for a seamless implementation. Namely:

1) User types need to get away from directly referencing PyObject.
- This can be done by adding a new variable to "PyTypeObject" whose value is an offset from the object's pointer to the type's internal structure.
-- Example: 'PyType_Type.tp_obj_offset' would be the value of "sizeof(PyVarObject)".
-- For custom types with another base: the value would be calculated from the base's "tp_obj_offset" + "tp_basicsize".
2) Create a linkable static library to facilitate method calls requiring internal implementation.
- This static library will be implementation-defined, i.e. using internal methods specific to the runtime that created it.
- Public facing methods will use generic names for example, "PyObject_GetType" will get the object's ob_type (or whatever the target runtime calls it). ---------- components: C API, Interpreter Core messages: 372775 nosy: WildCard65, vstinner priority: normal severity: normal status: open title: Prepare CPython for opaque PyObject structure. type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 1 14:03:54 2020 From: report at bugs.python.org (Iman Sharafodin) Date: Wed, 01 Jul 2020 18:03:54 +0000 Subject: [New-bugs-announce] [issue41189] An exploitable segmentation fault in _PyEval_EvalFrameDefault Message-ID: <1593626634.77.0.865849185728.issue41189@roundup.psfhosted.org> New submission from Iman Sharafodin : Python 3.6 (June 27, 2020) (https://www.python.org/ftp/python/3.6.11/Python-3.6.11.tgz). I found an exploitable segmentation fault in Python 3.6.11 (I validated that by using GDB's Exploitable plugin). Please find the attachment. #0 0x0000000000b63bf4 in _PyEval_EvalFrameDefault (f=, throwflag=) at Python/ceval.c:3667 #1 0x0000000000b5bc5b in PyEval_EvalFrameEx (throwflag=0, f=0x7ffff7f66c50) at Python/ceval.c:754 #2 _PyEval_EvalCodeWithName (_co=_co at entry=0x7ffff7ef5030, globals=globals at entry=0x7ffff7f62168, locals=locals at entry=0x7ffff7f62168, args=args at entry=0x0, argcount=argcount at entry=0, kwnames=kwnames at entry=0x0, kwargs=0x0, kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at Python/ceval.c:4166 #3 0x0000000000b6100b in PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0, locals=locals at entry=0x7ffff7f62168, globals=globals at entry=0x7ffff7f62168, _co=_co at entry=0x7ffff7ef5030) at Python/ceval.c:4187 #4 PyEval_EvalCode (co=co at entry=0x7ffff7ef5030, globals=globals at entry=0x7ffff7f62168, locals=locals at entry=0x7ffff7f62168) at Python/ceval.c:731 ---------- files: ExploitableCrash.pyc messages: 372776 nosy: Iman Sharafodin priority: normal severity: normal status: open title: An exploitable segmentation fault in _PyEval_EvalFrameDefault versions: Python 3.6 Added file: https://bugs.python.org/file49285/ExploitableCrash.pyc _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 1 14:15:29 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 01 Jul 2020 18:15:29 +0000 Subject: [New-bugs-announce] [issue41190] msilib: SetProperty() accepts str, but GetProperty() returns bytes Message-ID: <1593627329.36.0.902328564176.issue41190@roundup.psfhosted.org> New submission from Serhiy Storchaka : There is an inconsistency in the _msi.SummaryInformation class. Its method SetProperty() accepts str, but GetProperty() returns bytes (encoded with the Windows ANSI encoding). Since os.fsencode()/os.fsdecode() now use UTF-8 encoding, it is not so easy to convert between bytes and str. Also, the encoding with the Windows ANSI encoding is lossy, so it may be that there is a loss in GetProperty(). 
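A short Windows-only sketch of the asymmetry issue41190 describes; the .msi file name is illustrative, and "mbcs" is the codec name for the Windows ANSI code page mentioned in the report:

```python
# Windows-only sketch; the file name is illustrative.
import msilib

db = msilib.OpenDatabase("example.msi", msilib.MSIDBOPEN_CREATE)
si = db.GetSummaryInformation(5)                             # allow up to 5 property updates
si.SetProperty(msilib.PID_TITLE, "Installation Database")    # str goes in
raw = si.GetProperty(msilib.PID_TITLE)                       # bytes come out
# Round-tripping requires an explicit decode with the Windows ANSI code page,
# not os.fsdecode(), which now assumes UTF-8.
title = raw.decode("mbcs") if isinstance(raw, bytes) else raw
print(type(raw), title)
```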
---------- components: Windows messages: 372777 nosy: paul.moore, serhiy.storchaka, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: msilib: SetProperty() accepts str, but GetProperty() returns bytes type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 1 16:07:20 2020 From: report at bugs.python.org (Matthieu Dartiailh) Date: Wed, 01 Jul 2020 20:07:20 +0000 Subject: [New-bugs-announce] [issue41191] PyType_FromModuleAndSpec is not mentioned in 3.9 What's new Message-ID: <1593634040.83.0.8340731803.issue41191@roundup.psfhosted.org> New submission from Matthieu Dartiailh : Looking at the What's new for Python 3.9 I noticed that there was no mention of PEP 573. The added functions are properly documented and should probably be mentioned in the What's new. ---------- assignee: docs at python components: Documentation messages: 372790 nosy: docs at python, mdartiailh priority: normal severity: normal status: open title: PyType_FromModuleAndSpec is not mentioned in 3.9 What's new versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 2 02:27:32 2020 From: report at bugs.python.org (Saiyang Gou) Date: Thu, 02 Jul 2020 06:27:32 +0000 Subject: [New-bugs-announce] [issue41192] Some audit events are undocumented Message-ID: <1593671252.05.0.986281783812.issue41192@roundup.psfhosted.org> New submission from Saiyang Gou : Currently the following audit events are not documented on docs.python.org: - _winapi.CreateFile - _winapi.CreateJunction - _winapi.CreateNamedPipe - _winapi.CreatePipe - _winapi.CreateProcess - _winapi.OpenProcess - _winapi.TerminateProcess - ctypes.PyObj_FromPtr - object.__getattr__ - object.__setattr__ - object.__delattr__ - function.__new__ - setopencodehook - builtins.id - os.walk - os.fwalk - pathlib.Path.glob - pathlib.Path.rglob I'm going to create a PR to add them to the documentation. However, for `_winapi` events and `ctypes.PyObj_FromPtr`, I cannot find corresponding sections in the documentation to put the `audit-event` rst directive. How should we document them and make them show up in the audit events table? ---------- assignee: docs at python components: Documentation messages: 372806 nosy: docs at python, gousaiyang priority: normal severity: normal status: open title: Some audit events are undocumented type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 2 05:13:51 2020 From: report at bugs.python.org (=?utf-8?q?Zbyszek_J=C4=99drzejewski-Szmek?=) Date: Thu, 02 Jul 2020 09:13:51 +0000 Subject: [New-bugs-announce] [issue41193] traceback when exiting on read-only file system Message-ID: <1593681231.27.0.642911112114.issue41193@roundup.psfhosted.org> New submission from Zbyszek J?drzejewski-Szmek : [Originally reported as https://bugzilla.redhat.com/show_bug.cgi?id=1852941.] $ touch ~/foo touch: cannot touch '/home/fedora/foo': Read-only file system $ python Python 3.9.0b3 (default, Jun 10 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> ^D Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/usr/lib64/python3.9/site.py", line 462, in write_history readline.write_history_file(history) OSError: [Errno 30] Read-only file system Looking at /usr/lib64/python3.9/site.py, it already silently skips PermissionError. If a user is running with the file system in ro mode, they almost certainly are aware of the fact, since this is done either on purpose or as a result of disk corruption, and the traceback from python is not useful. Suppression of PermissionError was added in b2499669ef2e6dc9a2cdb49b4dc498e078167e26. Version-Release number of selected component (if applicable): python3-3.9.0~b3-1.fc33.x86_64 ---------- messages: 372832 nosy: zbysz priority: normal severity: normal status: open title: traceback when exiting on read-only file system versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 2 05:28:39 2020 From: report at bugs.python.org (Arcadiy Ivanov) Date: Thu, 02 Jul 2020 09:28:39 +0000 Subject: [New-bugs-announce] [issue41194] SIGSEGV in Python 3.9.0b3 in Python-ast.c:1412 Message-ID: <1593682119.68.0.42848291784.issue41194@roundup.psfhosted.org> New submission from Arcadiy Ivanov : Built with pyenv on Fedora 32. Discovered while testing PyBuilder for 3.9 compatibility. $ abrt gdb e6ad9db GNU gdb (GDB) Fedora 9.1-5.fc32 Copyright (C) 2020 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-redhat-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: . Find the GDB manual and other documentation resources online at: . For help, type "help". Type "apropos word" to search for commands related to "word". No symbol table is loaded. Use the "file" command. No symbol table is loaded. Use the "file" command. Reading symbols from /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/bin/python... warning: exec file is newer than core file. [New LWP 450349] warning: Unexpected size of section `.reg-xstate/450349' in core file. [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `/home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.'. Program terminated with signal SIGSEGV, Segmentation fault. warning: Unexpected size of section `.reg-xstate/450349' in core file. #0 0x00000000005d73d3 in init_types () at Python/Python-ast.c:1412 1412 Python/Python-ast.c: No such file or directory. 
>From To Syms Read Shared Object Library 0x00007ff01d934050 0x00007ff01d948d69 Yes (*) /lib64/libcrypt.so.2 0x00007ff01d917af0 0x00007ff01d926b95 Yes (*) /lib64/libpthread.so.0 0x00007ff01d90b270 0x00007ff01d90c1c9 Yes (*) /lib64/libdl.so.2 0x00007ff01d9053f0 0x00007ff01d905db0 Yes (*) /lib64/libutil.so.1 0x00007ff01d7cd3d0 0x00007ff01d868078 Yes (*) /lib64/libm.so.6 0x00007ff01d619670 0x00007ff01d76780f Yes (*) /lib64/libc.so.6 0x00007ff01d9a8110 0x00007ff01d9c8574 Yes (*) /lib64/ld-linux-x86-64.so.2 0x00007ff01d9740e0 0x00007ff01d974fcc Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_heapq.cpython-39-x86_64-linux-gnu.so 0x00007ff01d9963b0 0x00007ff01d999b2a Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/zlib.cpython-39-x86_64-linux-gnu.so 0x00007ff01d97d5f0 0x00007ff01d98abd8 Yes (*) /lib64/libz.so.1 0x00007ff0101ed2d0 0x00007ff0101ee31c Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_bz2.cpython-39-x86_64-linux-gnu.so 0x00007ff0101a3570 0x00007ff0101af996 Yes (*) /lib64/libbz2.so.1 0x00007ff0101e3480 0x00007ff0101e5dd6 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_lzma.cpython-39-x86_64-linux-gnu.so 0x00007ff0101b99f0 0x00007ff0101d1076 Yes (*) /lib64/liblzma.so.5 0x00007ff01019d270 0x00007ff01019dc02 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/grp.cpython-39-x86_64-linux-gnu.so 0x00007ff01018c5e0 0x00007ff0101947f5 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/math.cpython-39-x86_64-linux-gnu.so 0x00007ff010185130 0x00007ff010185e31 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_bisect.cpython-39-x86_64-linux-gnu.so 0x00007ff01017d170 0x00007ff010180ae1 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_sha512.cpython-39-x86_64-linux-gnu.so 0x00007ff0101772c0 0x00007ff0101781dd Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_random.cpython-39-x86_64-linux-gnu.so 0x00007ff01009c570 0x00007ff0100aa101 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_datetime.cpython-39-x86_64-linux-gnu.so 0x00007ff01004d3f0 0x00007ff0100528dc Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_json.cpython-39-x86_64-linux-gnu.so 0x00007ff0100463d0 0x00007ff010047953 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_posixsubprocess.cpython-39-x86_64-linux-gnu.so 0x00007ff01003d3f0 0x00007ff01003fa46 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/select.cpython-39-x86_64-linux-gnu.so 0x00007ff00fff0510 0x00007ff00fff4e4e Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_struct.cpython-39-x86_64-linux-gnu.so 0x00007ff00ffd48c0 0x00007ff00ffe336f Yes 
/home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_pickle.cpython-39-x86_64-linux-gnu.so 0x00007ff00ffb97a0 0x00007ff00ffc36be Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_socket.cpython-39-x86_64-linux-gnu.so 0x00007ff00ff69540 0x00007ff00ff6ec2c Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/array.cpython-39-x86_64-linux-gnu.so 0x00007ff00ff220d0 0x00007ff00ff22431 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_opcode.cpython-39-x86_64-linux-gnu.so 0x00007ff00fdc8260 0x00007ff00fdcb03c Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/binascii.cpython-39-x86_64-linux-gnu.so 0x00007ff00fd8d4d0 0x00007ff00fdb3925 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/pyexpat.cpython-39-x86_64-linux-gnu.so 0x00007ff00fa3b680 0x00007ff00fa401f1 Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_hashlib.cpython-39-x86_64-linux-gnu.so 0x00007ff00f9898d0 0x00007ff00f9d616a Yes (*) /lib64/libssl.so.1.1 0x00007ff00f6f8000 0x00007ff00f8a26c0 Yes (*) /lib64/libcrypto.so.1.1 0x00007ff00fa2d260 0x00007ff00fa32cdf Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_blake2.cpython-39-x86_64-linux-gnu.so 0x00007ff00f61d260 0x00007ff00f6272da Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_ssl.cpython-39-x86_64-linux-gnu.so 0x00007ff00fa10a70 0x00007ff00fa1f92d Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_ctypes.cpython-39-x86_64-linux-gnu.so 0x00007ff00f4cd2c0 0x00007ff00f4d1d4c Yes (*) /lib64/libffi.so.6 0x00007ff0102b8290 0x00007ff0102b8dce Yes /home/arcivanov/Documents/src/arcivanov/pybuilder/target/venv/test/cpython-3.9.0.beta.3/lib/python3.9/lib-dynload/_multiprocessing.cpython-39-x86_64-linux-gnu.so (*): Shared library is missing debugging information. 
#0 0x00000000005d73d3 in init_types () at Python/Python-ast.c:1412 #1 0x00000000005eb519 in PyAST_Check (obj=0x1e6eea0) at Python/Python-ast.c:10460 #2 0x00000000005fd618 in builtin_compile_impl (module=, feature_version=, optimize=-1, dont_inherit=0, flags=0, mode=, filename=0x7ff00f2a0c30, source=0x1e6eea0) at Python/bltinmodule.c:784 #3 builtin_compile (module=, args=, nargs=, kwnames=) at Python/clinic/bltinmodule.c.h:274 #4 0x00000000005c9eaa in cfunction_vectorcall_FASTCALL_KEYWORDS (func=0x7ff01058cc20, args=0x19e47e0, nargsf=, kwnames=) at Objects/methodobject.c:440 #5 0x00000000004240f0 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775811, args=0x19e47e0, callable=0x7ff01058cc20, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #6 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775811, args=0x19e47e0, callable=0x7ff01058cc20) at ./Include/cpython/abstract.h:127 #7 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #8 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3490 #9 0x000000000041d86b in _PyEval_EvalFrame (throwflag=0, f=0x19e4640, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #10 function_code_fastcall (tstate=0x18b85a0, co=, args=, nargs=2, globals=) at Objects/call.c:329 #11 0x00000000004240f0 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775810, args=0x19f60c0, callable=0x7ff01023c940, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #12 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775810, args=0x19f60c0, callable=0x7ff01023c940) at ./Include/cpython/abstract.h:127 #13 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #14 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3490 #15 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x19f5ef0, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #16 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=1, kwnames=0x7ff0104cb3b8, kwargs=0x19e57f0, kwcount=1, kwstep=1, defs=0x7ff010306298, defcount=2, kwdefs=0x0, closure=0x0, name=0x7ff0103407f0, qualname=0x7ff0103407f0) at Python/ceval.c:4299 #17 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at Objects/call.c:395 #18 0x0000000000424185 in _PyObject_VectorcallTstate (kwnames=0x7ff0104cb3a0, nargsf=9223372036854775809, args=0x19e57e8, callable=0x7ff01023c9d0, tstate=) at ./Include/cpython/abstract.h:118 #19 PyObject_Vectorcall (kwnames=0x7ff0104cb3a0, nargsf=9223372036854775809, args=0x19e57e8, callable=0x7ff01023c9d0) at ./Include/cpython/abstract.h:127 #20 call_function (kwnames=0x7ff0104cb3a0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #21 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3507 #22 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x19e5640, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #23 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=4, kwnames=0x0, kwargs=0x19f4748, kwcount=0, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7ff0104be170, qualname=0x7ff0102fc750) at Python/ceval.c:4299 #24 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at Objects/call.c:395 #25 0x00000000004243fe in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775812, args=0x19f4728, callable=0x7ff0102290d0, tstate=0x18b85a0) at 
./Include/cpython/abstract.h:118 #26 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775812, args=0x19f4728, callable=0x7ff0102290d0) at ./Include/cpython/abstract.h:127 #27 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #28 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3476 #29 0x000000000041d86b in _PyEval_EvalFrame (throwflag=0, f=0x19f45b0, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #30 function_code_fastcall (tstate=0x18b85a0, co=, args=, nargs=1, globals=) at Objects/call.c:329 #31 0x00000000005b630c in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=1, args=0x19f4508, callable=0x7ff0103239d0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #32 method_vectorcall (method=, args=0x19f4510, nargsf=, kwnames=0x0) at Objects/classobject.c:53 #33 0x00000000004240f0 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775808, args=0x19f4510, callable=0x7ff00f414c40, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #34 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775808, args=0x19f4510, callable=0x7ff00f414c40) at ./Include/cpython/abstract.h:127 #35 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #36 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3490 #37 0x000000000041d86b in _PyEval_EvalFrame (throwflag=0, f=0x19f4390, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #38 function_code_fastcall (tstate=0x18b85a0, co=, args=, nargs=2, globals=) at Objects/call.c:329 #39 0x00000000004243fe in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775810, args=0x1f86ff0, callable=0x7ff0103a81f0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #40 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775810, args=0x1f86ff0, callable=0x7ff0103a81f0) at ./Include/cpython/abstract.h:127 #41 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #42 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3476 #43 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x1f86e10, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #44 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=2, kwnames=0x0, kwargs=0x7fff023b8630, kwcount=0, kwstep=1, defs=0x7ff0103e5148, defcount=1, kwdefs=0x0, closure=0x0, name=0x7ff010580ab0, qualname=0x7ff0103db870) at Python/ceval.c:4299 #45 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at Objects/call.c:395 #46 0x00000000005b6274 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=2, args=0x7fff023b8620, callable=0x7ff0103a83a0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #47 method_vectorcall (method=, args=, nargsf=, kwnames=0x0) at Objects/classobject.c:83 #48 0x0000000000422a8e in do_call_core (kwdict=0x7ff00f2d5e00, callargs=0x7ff00ece3b20, func=0x7ff00f33dd40, tstate=) at Python/ceval.c:5092 #49 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3552 #50 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x1cb1040, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #51 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=2, kwnames=0x0, kwargs=0x7fff023b8970, kwcount=0, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7ff0105bc1f0, qualname=0x7ff0103daa80) at Python/ceval.c:4299 #52 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, 
stack=, nargsf=, kwnames=) at Objects/call.c:395 #53 0x0000000000434233 in _PyObject_FastCallDictTstate (tstate=0x18b85a0, callable=0x7ff0103a8550, args=0x7fff023b8960, nargsf=2, kwargs=0x0) at Objects/call.c:118 #54 0x00000000004353ed in _PyObject_Call_Prepend (tstate=tstate at entry=0x18b85a0, callable=callable at entry=0x7ff0103a8550, obj=obj at entry=0x7ff01014bc10, args=args at entry=0x7ff010202b50, kwargs=kwargs at entry=0x0) at Objects/call.c:488 #55 0x00000000004887e1 in slot_tp_call (self=self at entry=0x7ff01014bc10, args=args at entry=0x7ff010202b50, kwds=kwds at entry=0x0) at Objects/typeobject.c:6663 #56 0x0000000000434065 in _PyObject_MakeTpCall (tstate=0x18b85a0, callable=0x7ff01014bc10, args=, nargs=, keywords=0x0) at Objects/call.c:191 #57 0x0000000000424ef1 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x7ff0101531e8, callable=0x7ff01014bc10, tstate=0x18b85a0) at ./Include/cpython/abstract.h:116 #58 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x7ff0101531e8, callable=0x7ff01014bc10, tstate=0x18b85a0) at ./Include/cpython/abstract.h:103 #59 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775809, args=0x7ff0101531e8, callable=0x7ff01014bc10) at ./Include/cpython/abstract.h:127 #60 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #61 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3490 #62 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x7ff010153040, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #63 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=2, kwnames=0x0, kwargs=0x7fff023b8cf0, kwcount=0, kwstep=1, defs=0x7ff0103a59b8, defcount=1, kwdefs=0x0, closure=0x0, name=0x7ff010580ab0, qualname=0x7ff0103a7670) at Python/ceval.c:4299 #64 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at Objects/call.c:395 #65 0x00000000005b6274 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=2, args=0x7fff023b8ce0, callable=0x7ff0103ad430, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #66 method_vectorcall (method=, args=, nargsf=, kwnames=0x0) at Objects/classobject.c:83 #67 0x0000000000422a8e in do_call_core (kwdict=0x7ff01014e1c0, callargs=0x7ff010202250, func=0x7ff01014e680, tstate=) at Python/ceval.c:5092 #68 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3552 #69 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x7ff01021d200, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #70 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=2, kwnames=0x0, kwargs=0x7fff023b9030, kwcount=0, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7ff0105bc1f0, qualname=0x7ff0103ab080) at Python/ceval.c:4299 #71 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at Objects/call.c:395 #72 0x0000000000434233 in _PyObject_FastCallDictTstate (tstate=0x18b85a0, callable=0x7ff0103ad310, args=0x7fff023b9020, nargsf=2, kwargs=0x0) at Objects/call.c:118 #73 0x00000000004353ed in _PyObject_Call_Prepend (tstate=tstate at entry=0x18b85a0, callable=callable at entry=0x7ff0103ad310, obj=obj at entry=0x7ff01014bb50, args=args at entry=0x7ff0102191c0, kwargs=kwargs at entry=0x0) at Objects/call.c:488 #74 0x00000000004887e1 in slot_tp_call (self=self at entry=0x7ff01014bb50, args=args at entry=0x7ff0102191c0, kwds=kwds at entry=0x0) at 
Objects/typeobject.c:6663 #75 0x0000000000434065 in _PyObject_MakeTpCall (tstate=0x18b85a0, callable=0x7ff01014bb50, args=, nargs=, keywords=0x0) at Objects/call.c:191 #76 0x0000000000424ef1 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x7ff01014ff08, callable=0x7ff01014bb50, tstate=0x18b85a0) at ./Include/cpython/abstract.h:116 #77 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x7ff01014ff08, callable=0x7ff01014bb50, tstate=0x18b85a0) at ./Include/cpython/abstract.h:103 #78 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775809, args=0x7ff01014ff08, callable=0x7ff01014bb50) at ./Include/cpython/abstract.h:127 #79 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #80 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3490 #81 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x7ff01014fd60, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #82 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=2, kwnames=0x0, kwargs=0x7fff023b93b0, kwcount=0, kwstep=1, defs=0x7ff0103a59b8, defcount=1, kwdefs=0x0, closure=0x0, name=0x7ff010580ab0, qualname=0x7ff0103a7670) at Python/ceval.c:4299 #83 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at Objects/call.c:395 #84 0x00000000005b6274 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=2, args=0x7fff023b93a0, callable=0x7ff0103ad430, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #85 method_vectorcall (method=, args=, nargsf=, kwnames=0x0) at Objects/classobject.c:83 #86 0x0000000000422a8e in do_call_core (kwdict=0x7ff01014e3c0, callargs=0x7ff010219460, func=0x7ff01014e5c0, tstate=) at Python/ceval.c:5092 #87 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3552 #88 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x7ff01021d040, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #89 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=2, kwnames=0x0, kwargs=0x7fff023b96f0, kwcount=0, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7ff0105bc1f0, qualname=0x7ff0103ab080) at Python/ceval.c:4299 #90 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at Objects/call.c:395 #91 0x0000000000434233 in _PyObject_FastCallDictTstate (tstate=0x18b85a0, callable=0x7ff0103ad310, args=0x7fff023b96e0, nargsf=2, kwargs=0x0) at Objects/call.c:118 #92 0x00000000004353ed in _PyObject_Call_Prepend (tstate=tstate at entry=0x18b85a0, callable=callable at entry=0x7ff0103ad310, obj=obj at entry=0x7ff01048edf0, args=args at entry=0x7ff010202b80, kwargs=kwargs at entry=0x0) at Objects/call.c:488 #93 0x00000000004887e1 in slot_tp_call (self=self at entry=0x7ff01048edf0, args=args at entry=0x7ff010202b80, kwds=kwds at entry=0x0) at Objects/typeobject.c:6663 #94 0x0000000000434065 in _PyObject_MakeTpCall (tstate=0x18b85a0, callable=0x7ff01048edf0, args=, nargs=, keywords=0x0) at Objects/call.c:191 #95 0x0000000000424ef1 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x19f07e8, callable=0x7ff01048edf0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:116 #96 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x19f07e8, callable=0x7ff01048edf0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:103 #97 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775809, 
args=0x19f07e8, callable=0x7ff01048edf0) at ./Include/cpython/abstract.h:127 #98 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #99 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3490 #100 0x000000000041d86b in _PyEval_EvalFrame (throwflag=0, f=0x19f05f0, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #101 function_code_fastcall (tstate=0x18b85a0, co=, args=, nargs=2, globals=) at Objects/call.c:329 #102 0x00000000004243fe in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775810, args=0x7ff01014f940, callable=0x7ff0103231f0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #103 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775810, args=0x7ff01014f940, callable=0x7ff0103231f0) at ./Include/cpython/abstract.h:127 #104 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #105 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3476 #106 0x000000000041d86b in _PyEval_EvalFrame (throwflag=0, f=0x7ff01014f7c0, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #107 function_code_fastcall (tstate=0x18b85a0, co=, args=, nargs=1, globals=) at Objects/call.c:329 #108 0x00000000004243fe in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775809, args=0x19e32f0, callable=0x7ff0103238b0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:118 #109 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775809, args=0x19e32f0, callable=0x7ff0103238b0) at ./Include/cpython/abstract.h:127 #110 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #111 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3476 #112 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x19e3110, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #113 _PyEval_EvalCode (tstate=tstate at entry=0x18b85a0, _co=, globals=, locals=locals at entry=0x0, args=, argcount=1, kwnames=0x0, kwargs=0x7fff023b9d78, kwcount=0, kwstep=1, defs=0x7ff010324058, defcount=11, kwdefs=0x7ff0103bbb80, closure=0x0, name=0x7ff0105bc530, qualname=0x7ff0103b5440) at Python/ceval.c:4299 #114 0x0000000000434fd6 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at Objects/call.c:395 #115 0x0000000000434233 in _PyObject_FastCallDictTstate (tstate=0x18b85a0, callable=0x7ff010323310, args=0x7fff023b9d70, nargsf=1, kwargs=0x0) at Objects/call.c:118 #116 0x00000000004353ed in _PyObject_Call_Prepend (tstate=tstate at entry=0x18b85a0, callable=callable at entry=0x7ff010323310, obj=obj at entry=0x7ff01048ee20, args=args at entry=0x7ff0105c0040, kwargs=kwargs at entry=0x0) at Objects/call.c:488 #117 0x000000000048d86d in slot_tp_init (self=0x7ff01048ee20, args=0x7ff0105c0040, kwds=0x0) at Objects/typeobject.c:6903 #118 0x0000000000482907 in type_call (type=, type at entry=0x19a2dd0, args=args at entry=0x7ff0105c0040, kwds=kwds at entry=0x0) at Objects/typeobject.c:1023 #119 0x0000000000434065 in _PyObject_MakeTpCall (tstate=0x18b85a0, callable=0x19a2dd0, args=, nargs=, keywords=0x0) at Objects/call.c:191 #120 0x0000000000426ad0 in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775808, args=0x1914d68, callable=0x19a2dd0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:116 #121 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775808, args=0x1914d68, callable=0x19a2dd0, tstate=0x18b85a0) at ./Include/cpython/abstract.h:103 #122 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775808, args=0x1914d68, callable=0x19a2dd0) at 
./Include/cpython/abstract.h:127 #123 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at Python/ceval.c:5044 #124 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3459 #125 0x00000000004dda68 in _PyEval_EvalFrame (throwflag=0, f=0x1914bf0, tstate=0x18b85a0) at ./Include/internal/pycore_ceval.h:40 #126 _PyEval_EvalCode (tstate=0x18b85a0, _co=_co at entry=0x7ff010495240, globals=globals at entry=0x7ff01052d8c0, locals=locals at entry=0x7ff01052d8c0, args=args at entry=0x0, argcount=argcount at entry=0, kwnames=0x0, kwargs=0x0, kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at Python/ceval.c:4299 #127 0x00000000004ddd76 in _PyEval_EvalCodeWithName (qualname=0x0, name=0x0, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwstep=2, kwcount=0, kwargs=0x0, kwnames=0x0, argcount=0, args=0x0, locals=0x7ff01052d8c0, globals=0x7ff01052d8c0, _co=0x7ff010495240) at Python/ceval.c:4331 #128 PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0, locals=0x7ff01052d8c0, globals=0x7ff01052d8c0, _co=0x7ff010495240) at Python/ceval.c:4347 #129 PyEval_EvalCode (co=co at entry=0x7ff010495240, globals=globals at entry=0x7ff01052d8c0, locals=locals at entry=0x7ff01052d8c0) at Python/ceval.c:809 #130 0x000000000051ad31 in run_eval_code_obj (locals=0x7ff01052d8c0, globals=0x7ff01052d8c0, co=0x7ff010495240, tstate=0x18b85a0) at Python/pythonrun.c:1178 #131 run_mod (mod=mod at entry=0x192ecf8, filename=filename at entry=0x7ff0104d8cb0, globals=globals at entry=0x7ff01052d8c0, locals=locals at entry=0x7ff01052d8c0, flags=flags at entry=0x7fff023ba1d8, arena=arena at entry=0x7ff01055ef70) at Python/pythonrun.c:1199 #132 0x000000000051cb1f in PyRun_FileExFlags (fp=0x18b54f0, filename_str=, start=, globals=0x7ff01052d8c0, locals=0x7ff01052d8c0, closeit=1, flags=0x7fff023ba1d8) at Python/pythonrun.c:1116 #133 0x000000000051cc91 in PyRun_SimpleFileExFlags (fp=fp at entry=0x18b54f0, filename=, closeit=closeit at entry=1, flags=flags at entry=0x7fff023ba1d8) at Python/pythonrun.c:438 #134 0x000000000051d1e4 in PyRun_AnyFileExFlags (fp=fp at entry=0x18b54f0, filename=, closeit=closeit at entry=1, flags=flags at entry=0x7fff023ba1d8) at Python/pythonrun.c:87 #135 0x0000000000427d01 in pymain_run_file (cf=0x7fff023ba1d8, config=0x18b6590) at Modules/main.c:369 #136 pymain_run_python (exitcode=exitcode at entry=0x7fff023ba300) at Modules/main.c:553 #137 0x0000000000428288 in Py_RunMain () at Modules/main.c:632 #138 pymain_main (args=0x7fff023ba2c0) at Modules/main.c:662 #139 Py_BytesMain (argc=, argv=) at Modules/main.c:686 #140 0x00007ff01d61b042 in __libc_start_main () from /lib64/libc.so.6 #141 0x000000000042705e in _start () at ./Include/object.h:430 Missing separate debuginfos, use: dnf debuginfo-install bzip2-libs-1.0.8-2.fc32.x86_64 glibc-2.31-2.fc32.x86_64 libffi-3.1-24.fc32.x86_64 libxcrypt-4.4.16-3.fc32.x86_64 openssl-libs-1.1.1g-1.fc32.x86_64 xz-libs-5.2.5-1.fc32.x86_64 zlib-1.2.11-21.fc32.x86_64 ---------- messages: 372833 nosy: arcivanov priority: normal severity: normal status: open title: SIGSEGV in Python 3.9.0b3 in Python-ast.c:1412 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 2 06:28:17 2020 From: report at bugs.python.org (Matthew Hughes) Date: Thu, 02 Jul 2020 10:28:17 +0000 Subject: [New-bugs-announce] [issue41195] Interface to OpenSSL's security level Message-ID: 
<1593685697.4.0.691411084209.issue41195@roundup.psfhosted.org>

New submission from Matthew Hughes :

While investigating Python's SSL I noticed there was no interface for interacting with OpenSSL's SSL_CTX_{get,set}_security_level (https://www.openssl.org/docs/manmaster/man3/SSL_CTX_get_security_level.html), so I thought I'd look into adding one (see attached patch). I'd be happy to put up a PR, but I have no idea if this feature would actually be desired.

----------
assignee: christian.heimes
components: SSL
files: add_ssl_context_security_level.patch
keywords: patch
messages: 372839
nosy: christian.heimes, mhughes
priority: normal
severity: normal
status: open
title: Interface to OpenSSL's security level
type: enhancement
Added file: https://bugs.python.org/file49291/add_ssl_context_security_level.patch

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jul 2 14:06:42 2020
From: report at bugs.python.org (aku911)
Date: Thu, 02 Jul 2020 18:06:42 +0000
Subject: [New-bugs-announce] [issue41196] APPDATA directory is different in store installed python
Message-ID: <1593713202.41.0.223352950453.issue41196@roundup.psfhosted.org>

Change by aku911 :

----------
components: Windows
nosy: aku911, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: APPDATA directory is different in store installed python
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jul 2 14:22:53 2020
From: report at bugs.python.org (=?utf-8?b?0KDQsNC80LfQsNC9INCRLg==?=)
Date: Thu, 02 Jul 2020 18:22:53 +0000
Subject: [New-bugs-announce] [issue41197] Async magic methods in contextlib.closing
Message-ID: <1593714173.91.0.0149218020987.issue41197@roundup.psfhosted.org>

New submission from Рамзан Б. :

# Async magic methods in contextlib.closing

I think `__aenter__` and `__aexit__` methods should be added to `contextlib.closing`, so that we can use `contextlib.closing` in async code too. For example:

```python3
class SomeAPI:
    ...
    async def request(self):
        pass

    async def close(self):
        await self.session.close()


async with closing(SomeAPI()) as api:
    response = await api.request()
    print(response)
```

Also these methods can be moved to another class (like `asyncclosing`, along the lines of `asynccontextmanager`).

----------
components: Library (Lib)
messages: 372871
nosy: Рамзан Б.
priority: normal
pull_requests: 20434
severity: normal
status: open
title: Async magic methods in contextlib.closing
type: enhancement
versions: Python 3.10, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jul 2 16:06:26 2020
From: report at bugs.python.org (Carlos Neves)
Date: Thu, 02 Jul 2020 20:06:26 +0000
Subject: [New-bugs-announce] [issue41198] Round built-in function does not show zeros according to significant figures and calculates different numbers of odd and even
Message-ID: <1593720386.55.0.143813033698.issue41198@roundup.psfhosted.org>

New submission from Carlos Neves :

Hi,

I am observing unexpected behavior with the round built-in function regarding (1) significant figures in analytical methods and (2) the number of odd and even numbers obtained from this function.

https://docs.python.org/3/library/functions.html#round
https://en.wikipedia.org/wiki/Significant_figures
1. Significant Figures
======================

For example, when I say 1.20 in analytical methods, I am confident about the last digit, the zero. It has a meaning. But, when I use Python,

>>> round (1.203, 2)
1.2
>>>

the zero does not appear. This does not occur when the second digit is not zero.

>>> round (1.213, 2)
1.21
>>>

The zero should be printed like the other numbers to be consistent with the significant figures. Maybe other functions could present the same behavior.

2. Rounding procedure
=====================

I wrote the following code to test the number of odd and even numbers produced during a rounding procedure. I should get half odd and half even numbers. But the result using the round function is different. We observed 5 more even numbers and 5 fewer odd numbers. This behavior causes a systematic error.

https://en.wikipedia.org/wiki/Rounding

I hope this contributes to the improvement of the code. Thanks in advance.

######################################################
# This code counts the number of odd and even numbers with different procedures:
# truncate, round simple and round function
# Test condition: rounding with one digit after the decimal point.

import numpy as np

even_0 = 0
odd_0 = 0
even_1 = 0
odd_1 = 0
even_2 = 0
odd_2 = 0
even_3 = 0
odd_3 = 0

# generate 1000 numbers from 0.000 up to 1 with step of 0.001
x = np.arange(0,1,0.001)

# printing
for i in range(len(x)):
    x_truncated = int((x[i]*10)+0.0)/10  # no rounding
    x_rounded_simple = int((x[i]*10)+0.5)/10  # rounding up at 5
    x_rounded_function = round(x[i],1)  # rounding by function with one digit after the decimal point

    # counting odd and even numbers
    if int(x[i]*1000) % 2 == 0:
        even_0 += 1
    else:
        odd_0 += 1
    if int(x_truncated*10) % 2 == 0:
        even_1 += 1
    else:
        odd_1 += 1
    if int(x_rounded_simple*10) % 2 == 0:
        even_2 += 1
    else:
        odd_2 += 1
    if int(x_rounded_function*10) % 2 == 0:
        even_3 += 1
    else:
        odd_3 += 1

    print ("{0:.3f} {1:.1f} {2:.1f} {3:.1f}".format((x[i]), x_truncated, x_rounded_simple, x_rounded_function))

print ("Result:")
print ("Raw: Even={0}, Odd={1}".format(even_0,odd_0))
print ("Truncated: Even={0}, Odd={1}".format(even_1,odd_1))
print ("Rounded simple: Even={0}, Odd={1}".format(even_2,odd_2))
print ("Rounded Function: Even={0}, Odd={1}".format(even_3,odd_3))
######################################################

Output
...
0.995 0.9 1.0 1.0
0.996 0.9 1.0 1.0
0.997 0.9 1.0 1.0
0.998 0.9 1.0 1.0
0.999 0.9 1.0 1.0
Result:
Raw: Even=500, Odd=500
Truncated: Even=500, Odd=500
Rounded simple: Even=500, Odd=500
Rounded Function: Even=505, Odd=495
----

----------
components: Library (Lib)
messages: 372878
nosy: Carlos Neves, lemburg, mark.dickinson, rhettinger, stutzbach
priority: normal
severity: normal
status: open
title: Round built-in function does not show zeros according to significant figures and calculates different numbers of odd and even
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Jul 2 21:10:13 2020
From: report at bugs.python.org (Jackstraw)
Date: Fri, 03 Jul 2020 01:10:13 +0000
Subject: [New-bugs-announce] [issue41199] Docstring convention not followed for dataclasses documentation page
Message-ID: <1593738613.53.0.928151938182.issue41199@roundup.psfhosted.org>

New submission from Jackstraw :

The InventoryItem class on the dataclasses page [https://docs.python.org/3.7/library/dataclasses.html#module-dataclasses] has single quotes for the docstring, as opposed to double quotes (as described by PEP 257).
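For comparison, the shape of the change the report asks for, paraphrasing the documentation's InventoryItem example with a double-quoted docstring as PEP 257 recommends:

```python
from dataclasses import dataclass

@dataclass
class InventoryItem:
    """Class for keeping track of an item in inventory."""  # double quotes, per PEP 257
    name: str
    unit_price: float
    quantity_on_hand: int = 0

    def total_cost(self) -> float:
        return self.unit_price * self.quantity_on_hand
```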
----------
assignee: docs at python
components: Documentation
messages: 372899
nosy: JackStraw, docs at python
priority: normal
severity: normal
status: open
title: Docstring convention not followed for dataclasses documentation page
type: enhancement
versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jul 3 06:48:09 2020
From: report at bugs.python.org (Bruce Day)
Date: Fri, 03 Jul 2020 10:48:09 +0000
Subject: [New-bugs-announce] [issue41200] Add pickle.loads fuzz test
Message-ID: <1593773289.43.0.165410950375.issue41200@roundup.psfhosted.org>

New submission from Bruce Day :

Add a pickle.loads(x) fuzz test.

----------
components: Tests
messages: 372916
nosy: Bruce Day
priority: normal
pull_requests: 20438
severity: normal
status: open
title: Add pickle.loads fuzz test
type: security
versions: Python 3.10

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jul 3 08:26:22 2020
From: report at bugs.python.org (David Srebnick)
Date: Fri, 03 Jul 2020 12:26:22 +0000
Subject: [New-bugs-announce] [issue41201] Long integer arithmetic
Message-ID: <1593779182.53.0.05415381128.issue41201@roundup.psfhosted.org>

New submission from David Srebnick :

The following program is one way of computing the sum of digits in a number. It works properly for the first case, but fails for the second one.

def digitsum(num):
    digsum = 0
    tnum = num
    while tnum > 0:
        print("tnum = %d, digsum = %d" % (tnum,digsum))
        digsum += (tnum % 10)
        tnum = int((tnum - (tnum % 10)) / 10)
    return digsum

print(digitsum(9999999999999999))
print(digitsum(99999999999999999))

----------
messages: 372925
nosy: David Srebnick
priority: normal
severity: normal
status: open
title: Long integer arithmetic
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jul 3 10:26:06 2020
From: report at bugs.python.org (tomaszdrozdz)
Date: Fri, 03 Jul 2020 14:26:06 +0000
Subject: [New-bugs-announce] [issue41202] Allow providing a custom exception handler to asyncio.run()
Message-ID: <1593786366.9.0.0322970631337.issue41202@roundup.psfhosted.org>

New submission from tomaszdrozdz :

I wish we had:

asyncio.run(coro, *, debug=False, exception_handler=None)

so we could provide a custom exception handler function for the loop.

----------
components: asyncio
messages: 372934
nosy: asvetlov, tomaszdrozdz, yselivanov
priority: normal
severity: normal
status: open
title: Allow providing a custom exception handler to asyncio.run()
type: enhancement
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Fri Jul 3 14:23:06 2020
From: report at bugs.python.org (Patrick Reader)
Date: Fri, 03 Jul 2020 18:23:06 +0000
Subject: [New-bugs-announce] [issue41203] Replace references to OS X in documentation with macOS
Message-ID: <1593800586.51.0.799645858165.issue41203@roundup.psfhosted.org>

New submission from Patrick Reader :

Since 10.12 (Sierra, released in 2016), macOS is no longer called OS X. References to OS X in the documentation should be updated to reflect this. This is now especially important because macOS 11 (Big Sur) is now in preview, and the X meaning 10 in Roman numerals is now completely incorrect (not just the wrong name).
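Looking back at issue41201 above: the failure comes from the expression int((tnum - (tnum % 10)) / 10), which performs true (float) division. Above 2**53 not every integer is exactly representable as a float, so for the 17-digit input the quotient comes back inexact and the remaining digits are computed from a wrong value. Integer floor division keeps the arithmetic exact; a corrected sketch:

```python
def digitsum(num):
    digsum = 0
    tnum = num
    while tnum > 0:
        digsum += tnum % 10
        tnum //= 10   # floor division stays exact for arbitrarily large ints
    return digsum

print(digitsum(9999999999999999))    # 144
print(digitsum(99999999999999999))   # 153
```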
---------- assignee: docs at python components: Documentation messages: 372951 nosy: docs at python, pxeger priority: normal severity: normal status: open title: Replace references to OS X in documentation with macOS type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 3 17:47:00 2020 From: report at bugs.python.org (Brad Larsen) Date: Fri, 03 Jul 2020 21:47:00 +0000 Subject: [New-bugs-announce] [issue41204] Use of unitialized variable `fields` along error path in code generated from asdl_c.py Message-ID: <1593812820.7.0.967275410849.issue41204@roundup.psfhosted.org> New submission from Brad Larsen : In commit b1cc6ba73 from earlier today, an error-handling path can now read an uninitialized variable. https://github.com/python/cpython/commit/b1cc6ba73a51d5cc3aeb113b5e7378fb50a0e20a#diff-fa7f27df4c8df1055048e78340f904c4R695-R697 In particular, asdl_c.py is used to generate C source, and when building that code with Clang 10, there is the attached warning. Likely fix: initialize `fields` to `NULL`. Also, perhaps a CI loop that has `-Werror=sometimes-uninitialized` would help detect these. Compiler warning: Python/Python-ast.c:1147:9: warning: variable 'fields' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized] if (state == NULL) { ^~~~~~~~~~~~~ Python/Python-ast.c:1210:16: note: uninitialized use occurs here Py_XDECREF(fields); ^~~~~~ ./Include/object.h:520:51: note: expanded from macro 'Py_XDECREF' #define Py_XDECREF(op) _Py_XDECREF(_PyObject_CAST(op)) ^~ ./Include/object.h:112:41: note: expanded from macro '_PyObject_CAST' #define _PyObject_CAST(op) ((PyObject*)(op)) ^~ Python/Python-ast.c:1147:5: note: remove the 'if' if its condition is always false if (state == NULL) { ^~~~~~~~~~~~~~~~~~~~ Python/Python-ast.c:1145:35: note: initialize the variable 'fields' to silence this warning PyObject *key, *value, *fields; ^ = NULL 1 warning generated. ---------- components: Interpreter Core messages: 372963 nosy: blarsen, vstinner priority: normal severity: normal status: open title: Use of unitialized variable `fields` along error path in code generated from asdl_c.py type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 3 18:51:25 2020 From: report at bugs.python.org (JD-Veiga) Date: Fri, 03 Jul 2020 22:51:25 +0000 Subject: [New-bugs-announce] [issue41205] Documentation Decimal power 0 to the 0 is Nan (versus 0 to the 0 which is 1) Message-ID: <1593816685.46.0.773129097137.issue41205@roundup.psfhosted.org> New submission from JD-Veiga : Hi, I would like to propose a minor addition to the `decimal` package documentation page (https://docs.python.org/3/library/decimal.html). I think that we should add a paragraph in `context.power(x, y, modulo=None)` stating that `Decimal(0) ** Decimal(0)` returns `Decimal('NaN')` instead of `1` --as `0 ** 0` does-- or `1.0` --in case of `0.0 ** 0.0`. Indeed, `0 ** 0` being `NaN` is mentioned as an example of operations raising `InvalidOperation` exceptions. However, I think that this is not enough to indicate the different behaviour of decimal versus int and float numbers. Moreover, in the case of the `%` and `//` operators, there are clear remarks on these differences (See: "There are some small differences between arithmetic on Decimal objects and arithmetic on integers and floats [...]" on the page).
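For reference, a short sketch of the behaviour being described, assuming the default decimal context (where the InvalidOperation trap is enabled) and a copy of it with the trap disabled:

```python
from decimal import Decimal, getcontext, InvalidOperation

print(0 ** 0)      # 1   (int)
print(0.0 ** 0.0)  # 1.0 (float)

try:
    Decimal(0) ** Decimal(0)   # default context: InvalidOperation is trapped and raised
except InvalidOperation as exc:
    print("InvalidOperation:", exc)

ctx = getcontext().copy()
ctx.traps[InvalidOperation] = False        # with the trap disabled, a quiet NaN is returned
print(ctx.power(Decimal(0), Decimal(0)))   # NaN
```

This only illustrates the difference the report wants documented, not the wording of the proposed paragraph.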
Thank you and sorry for the inconvenience. ---------- assignee: docs at python components: Documentation messages: 372969 nosy: JD-Veiga, docs at python priority: normal severity: normal status: open title: Documentation Decimal power 0 to the 0 is Nan (versus 0 to the 0 which is 1) type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 3 19:14:58 2020 From: report at bugs.python.org (David Bremner) Date: Fri, 03 Jul 2020 23:14:58 +0000 Subject: [New-bugs-announce] [issue41206] behaviour change with EmailMessage.set_content Message-ID: <1593818098.6.0.402799066413.issue41206@roundup.psfhosted.org> New submission from David Bremner : Works in 3.8.3, but not in 3.8.4rc1 from email.message import EmailMessage msg = EmailMessage() msg.set_content("") Apparently now at least one newline is required. ---------- components: email messages: 372971 nosy: barry, bremner, r.david.murray priority: normal severity: normal status: open title: behaviour change with EmailMessage.set_content versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 3 21:25:52 2020 From: report at bugs.python.org (Jason R. Coombs) Date: Sat, 04 Jul 2020 01:25:52 +0000 Subject: [New-bugs-announce] [issue41207] distutils.command.build_ext raises FileNotFoundError Message-ID: <1593825952.85.0.0742954618161.issue41207@roundup.psfhosted.org> New submission from Jason R. Coombs : In [pypa/setuptools#2228](https://github.com/pypa/setuptools/issues/2228), by adopting the distutils codebase from a late release of CPython, Setuptools stumbled onto an API-breaking change in distutils rooted at issue39763. Details are in the Setuptools investigation, but to summarize: - distutils.ccompiler.CCompiler.compile declares "raises CompileError on failure" and calls `self._compile`, implemented by subclasses. - In at least `distutils.unixcompiler.UnixCCompiler._compile`, `distutils.spawn.spawn` is called (through CCompiler.spawn). - Since GH-18743, `distutils.spawn.spawn` calls `subprocess.Popen` which raises FileNotFoundError when the target executable doesn't exist. - Programs trapping CompileError but not FileNotFoundError will crash where once they had error handling. Setuptools discovered this behavior in the 48.0 release when it incorporated these distutils changes into a vendored release of Setuptools, but the failures exhibited will apply to all builds (including pyyaml) on Python 3.9. ---------- assignee: lukasz.langa components: Distutils keywords: 3.9regression messages: 372973 nosy: dstufft, eric.araujo, jaraco, lukasz.langa priority: release blocker severity: normal status: open title: distutils.command.build_ext raises FileNotFoundError type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 4 07:56:29 2020 From: report at bugs.python.org (Iman Sharafodin) Date: Sat, 04 Jul 2020 11:56:29 +0000 Subject: [New-bugs-announce] [issue41208] An exploitable segmentation fault in marshal module Message-ID: <1593863789.9.0.238701347914.issue41208@roundup.psfhosted.org> New submission from Iman Sharafodin : It seems that all versions of Python 3 are vulnerable to de-marshaling the attached file (Python file is included). 
I've tested it on Python 3.10.0a0 (heads/master:b40e434, Jul 4 2020), Python 3.6.11 and Python 3.7.2. This is due to a lack of proper validation at Objects/tupleobject.c:413 (heads/master:b40e434). This is the result of GDB's Exploitable plugin (it's exploitable): Description: Access violation during branch instruction Short description: BranchAv (4/22) Hash: e04b830dfb409a8bbf67bff96ff0df44.4d31b48b56e0c02ed51520182d91a457 Exploitability Classification: EXPLOITABLE Explanation: The target crashed on a branch instruction, which may indicate that the control flow is tainted. Other tags: AccessViolation (21/22) ---------- components: Interpreter Core files: Crash.zip messages: 372990 nosy: Iman Sharafodin priority: normal severity: normal status: open title: An exploitable segmentation fault in marshal module type: security versions: Python 3.10 Added file: https://bugs.python.org/file49295/Crash.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 4 20:51:17 2020 From: report at bugs.python.org (James McCorkindale) Date: Sun, 05 Jul 2020 00:51:17 +0000 Subject: [New-bugs-announce] [issue41209] Scripts Folder is Empty Message-ID: <1593910277.95.0.176606187377.issue41209@roundup.psfhosted.org> New submission from James McCorkindale : The Scripts folder is empty and I'm not sure what's wrong. I'm trying to install modules and looked it up on the internet, and I think I need this folder not to be empty. How should I fix this? Thanks, James ---------- components: Windows messages: 373007 nosy: Jamesss04, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Scripts Folder is Empty type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 4 21:51:25 2020 From: report at bugs.python.org (Hiroshi Miura) Date: Sun, 05 Jul 2020 01:51:25 +0000 Subject: [New-bugs-announce] [issue41210] LZMADecompressor.decompress(FORMAT_RAW) truncate output when input is paticular LZMA+BCJ data Message-ID: <1593913885.93.0.828443713776.issue41210@roundup.psfhosted.org> New submission from Hiroshi Miura : When decompressing a particular archive, the result is truncated by the last word. The attached test data has an uncompressed size of 12800 bytes and is compressed using the LZMA1+BCJ algorithm into 11327 bytes. The data is a payload of a 7zip archive. Here is pytest code to reproduce it. .. code-block:: def test_lzma_raw_decompressor_lzmabcj(): filters = [] filters.append({'id': lzma.FILTER_X86}) filters.append(lzma._decode_filter_properties(lzma.FILTER_LZMA1, b']\x00\x00\x01\x00')) decompressor = lzma.LZMADecompressor(format=lzma.FORMAT_RAW, filters=filters) with testdata_path.joinpath('lzmabcj.bin').open('rb') as infile: out = decompressor.decompress(infile.read(11327)) assert len(out) == 12800 The test fails: len(out) becomes 12796 bytes, lacking the last 4 bytes, which should be b'\x00\x00\x00\x00'. When specifying the filters as a single LZMA1 decompression, I got the expected length of data, 12800 bytes. (*1) When creating test data with LZMA2+BCJ and examining it, I got the expected data. When specifying the filters as a single LZMA2 decompression against the LZMA2+BCJ payload, the result is exactly the same as the (*1) data. This indicates that the LZMA1/LZMA2 --> BCJ pipeline is in doubt.
After investigation and understanding that _lzmamodule.c is a thin wrapper around liblzma, I found the problem can be reproduced in liblzma. I've reported it to the upstream xz-devel ML with test code: https://www.mail-archive.com/xz-devel at tukaani.org/msg00370.html ---------- components: Extension Modules files: lzmabcj.bin messages: 373008 nosy: miurahr priority: normal severity: normal status: open title: LZMADecompressor.decompress(FORMAT_RAW) truncate output when input is paticular LZMA+BCJ data versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49296/lzmabcj.bin _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 4 22:53:03 2020 From: report at bugs.python.org (Inada Naoki) Date: Sun, 05 Jul 2020 02:53:03 +0000 Subject: [New-bugs-announce] [issue41211] PyLong_FromUnicodeObject document is wrong Message-ID: <1593917583.65.0.783056936428.issue41211@roundup.psfhosted.org> New submission from Inada Naoki : ``` .. c:function:: PyObject* PyLong_FromUnicodeObject(PyObject *u, int base) Convert a sequence of Unicode digits in the string *u* to a Python integer value. The Unicode string is first encoded to a byte string using :c:func:`PyUnicode_EncodeDecimal` and then converted using :c:func:`PyLong_FromString`. ``` PyUnicode_EncodeDecimal is not actually used. It uses the private and undocumented `_PyUnicode_TransformDecimalAndSpaceToASCII` function instead. The documentation of PyFloat_FromString() doesn't mention how it converts a Unicode string to digits. Let's remove the second sentence. ---------- assignee: docs at python components: Documentation messages: 373009 nosy: docs at python, inada.naoki priority: normal severity: normal status: open title: PyLong_FromUnicodeObject document is wrong versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 5 04:41:53 2020 From: report at bugs.python.org (Ben Griffin) Date: Sun, 05 Jul 2020 08:41:53 +0000 Subject: [New-bugs-announce] [issue41212] Emoji Unicode failing in standard release of Python 3.8.3 / tkinter 8.6.8 Message-ID: <1593938513.79.0.0840613068197.issue41212@roundup.psfhosted.org> New submission from Ben Griffin : https://stackoverflow.com/questions/62713741/tkinter-and-32-bit-unicode-duplicating-any-fix Emoji are doubling up when using canvas.create_text(). This is reported to work on tcl/tk 8.6.10, but there's no way to upgrade tcl/tk using the standard installs from the python.org site. ---------- components: Tkinter files: Emoji.py.txt messages: 373019 nosy: Ben Griffin priority: normal severity: normal status: open title: Emoji Unicode failing in standard release of Python 3.8.3 / tkinter 8.6.8 type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49297/Emoji.py.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 5 05:09:23 2020 From: report at bugs.python.org (Amine) Date: Sun, 05 Jul 2020 09:09:23 +0000 Subject: [New-bugs-announce] [issue41213] Cannot locate MSBuild.exe on PATH or as MSBUILD variable Message-ID: <1593940163.76.0.968846019884.issue41213@roundup.psfhosted.org> New submission from Amine : I am running Windows 10.
I HAVE to use Python 3.6 for a project. I downloaded it and followed the instructions in PCBuild/readme.txt and I got the following output. (I would like to mention I checked issue #33675, but I had no idea what it is talking about; I am not using a "Buildbot".) C:\Users\Amine\Downloads\Python-3.6.11\Python-3.6.11\PCbuild>build.bat -e Using "C:\Users\Amine\Downloads\Python-3.6.11\Python-3.6.11\PCbuild\\..\externals\pythonx86\tools\python.exe" (found in externals directory) Fetching external libraries... bzip2-1.0.6 already exists, skipping. openssl-1.0.2q already exists, skipping. sqlite-3.21.0.0 already exists, skipping. tcl-core-8.6.6.0 already exists, skipping. tk-8.6.6.0 already exists, skipping. tix-8.4.3.6 already exists, skipping. xz-5.2.2 already exists, skipping. Fetching external binaries... nasm-2.11.06 already exists, skipping. Finished. Cannot locate MSBuild.exe on PATH or as MSBUILD variable C:\Users\Amine\Downloads\Python-3.6.11\Python-3.6.11\PCbuild> ---------- components: Windows messages: 373020 nosy: Amen8, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Cannot locate MSBuild.exe on PATH or as MSBUILD variable type: compile error versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 5 06:54:27 2020 From: report at bugs.python.org (Vincent LE GARREC) Date: Sun, 05 Jul 2020 10:54:27 +0000 Subject: [New-bugs-announce] [issue41214] -O0: Segmentation fault in _PyArg_UnpackStack Message-ID: <1593946467.6.0.0689011340862.issue41214@roundup.psfhosted.org> New submission from Vincent LE GARREC : In Gentoo, I compile my system with -O0. When I compile Apache Serf, Python 3.7.8 crashes. When I compile Python 3.7 with -O2, Python doesn't crash when compiling Serf. It's the first time that -O0 has caused a program crash. I ran the test suite and I don't have any problem. Please find enclosed the backtrace. I can reproduce it every time, so if you want me to do some tests, I can do it. I already reported it to Gentoo (https://bugs.gentoo.org/730312) but it seems it's not related to them. ---------- files: backtrace.log.gz messages: 373026 nosy: Vincent LE GARREC priority: normal severity: normal status: open title: -O0: Segmentation fault in _PyArg_UnpackStack type: crash versions: Python 3.7 Added file: https://bugs.python.org/file49298/backtrace.log.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 5 07:31:23 2020 From: report at bugs.python.org (Michael Felt) Date: Sun, 05 Jul 2020 11:31:23 +0000 Subject: [New-bugs-announce] [issue41215] AIX: build fails for xlc/xlC since new PEG parser Message-ID: <1593948683.77.0.741960073898.issue41215@roundup.psfhosted.org> New submission from Michael Felt : As the bots were both running - based on gcc - this was not noticed immediately. issue40334 implements PEP 617, the new PEG parser for CPython.
Using bisect I located: commit c5fc15685202cda73f7c3f5c6f299b0945f58508 (HEAD, refs/bisect/bad) Author: Pablo Galindo Date: Wed Apr 22 23:29:27 2020 +0100 bpo-40334: PEP 617 implementation: New PEG parser for CPython (GH-19503) Co-authored-by: Guido van Rossum Co-authored-by: Lysandros Nikolaou +++ the make status (abbreviated) is: root at x066:[/data/prj/python/py39-3.10]slibclean; rm -rf *; buildaix --without-computed-gotos; print; date; ./python -E -S -m sysconfig --generate-posix-vars; ./python + CPPFLAGS="-I/opt/include" CFLAGS="-I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5"\ ../git/py39-3.10/configure\ --prefix=/opt \ --sysconfdir=/var/py39/etc\ --sharedstatedir=/var/py39/com\ --localstatedir=/var/py39\ --mandir=/usr/share/man\ --infodir=/opt/share/info/py39 --without-computed-gotos\ > .buildaix/configure.out + /usr/bin/make > .buildaix/make.out 1500-016: (W) WARNING: Compiler problem occurred while compiling _PyPegen_clear_memo_statistics: A file or directory in the path name does not exist.. 1500-034: (S) Cannot create object file. make: 1254-004 The error code from the last command is 1. +++ The complete make.out (stdout) is: root at x066:[/data/prj/python/py39-3.10]cat .buildaix/make* xlc_r -c -DNDEBUG -O -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I../git/py39-3.10/Include/internal -IObjects -IInclude -IPython -I. -I../git/py39-3.10/Include -I/opt/include -I/opt/include -DPy_BUILD_CORE -o Programs/python.o ../git/py39-3.10/Programs/python.c xlc_r -c -DNDEBUG -O -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I../git/py39-3.10/Include/internal -IObjects -IInclude -IPython -I. -I../git/py39-3.10/Include -I/opt/include -I/opt/include -DPy_BUILD_CORE -o Parser/acceler.o ../git/py39-3.10/Parser/acceler.c xlc_r -c -DNDEBUG -O -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I../git/py39-3.10/Include/internal -IObjects -IInclude -IPython -I. -I../git/py39-3.10/Include -I/opt/include -I/opt/include -DPy_BUILD_CORE -o Parser/grammar1.o ../git/py39-3.10/Parser/grammar1.c xlc_r -c -DNDEBUG -O -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I../git/py39-3.10/Include/internal -IObjects -IInclude -IPython -I. -I../git/py39-3.10/Include -I/opt/include -I/opt/include -DPy_BUILD_CORE -o Parser/listnode.o ../git/py39-3.10/Parser/listnode.c xlc_r -c -DNDEBUG -O -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I../git/py39-3.10/Include/internal -IObjects -IInclude -IPython -I. -I../git/py39-3.10/Include -I/opt/include -I/opt/include -DPy_BUILD_CORE -o Parser/node.o ../git/py39-3.10/Parser/node.c xlc_r -c -DNDEBUG -O -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I../git/py39-3.10/Include/internal -IObjects -IInclude -IPython -I. -I../git/py39-3.10/Include -I/opt/include -I/opt/include -DPy_BUILD_CORE -o Parser/parser.o ../git/py39-3.10/Parser/parser.c xlc_r -c -DNDEBUG -O -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I../git/py39-3.10/Include/internal -IObjects -IInclude -IPython -I. -I../git/py39-3.10/Include -I/opt/include -I/opt/include -DPy_BUILD_CORE -o Parser/token.o ../git/py39-3.10/Parser/token.c xlc_r -c -DNDEBUG -O -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I/opt/include -O2 -qmaxmem=-1 -qarch=pwr5 -I../git/py39-3.10/Include/internal -IObjects -IInclude -IPython -I. 
-I../git/py39-3.10/Include -I/opt/include -I/opt/include -DPy_BUILD_CORE -o Parser/pegen/pegen.o ../git/py39-3.10/Parser/pegen/pegen.c ******* After commit a25f3c4c8f7d4878918ce1d3d67db40ae255ccc6 (HEAD) Author: Pablo Galindo Date: Thu Apr 23 01:38:11 2020 +0100 bpo-40334: Fix builds outside the source directory and regenerate autoconf files (GH-19667) /bin/sh: 7405692 Segmentation fault(coredump) make: 1254-004 The error code from the last command is 139. Stop. /usr/bin/make returned an error Sun Jul 5 11:23:39 UTC 2020 Segmentation fault(coredump) Python 3.9.0a5+ (default, Jul 5 2020, 11:23:33) [C] on aix Type "help", "copyright", "credits" or "license" for more information. *********** The above includes aixtools at x064:[/data/prj/python/git/py39-3.10]git checkout 458004bf7914f96b20bb76bc3584718cf83f652e Previous HEAD position was a25f3c4c8f bpo-40334: Fix builds outside the source directory and regenerate autoconf files (GH-19667) HEAD is now at 458004bf79 bpo-40334: Fix errors in parse_string.c with old compilers (GH-19666) This still crashes with the message: 1500-016: (W) WARNING: Compiler problem occurred while compiling _PyPegen_clear_memo_statistics: A file or directory in the path name does not exist.. 1500-034: (S) Cannot create object file. ************ I'll add more debug info in a followup - starting at: Previous HEAD position was a25f3c4c8f bpo-40334: Fix builds outside the source directory and regenerate autoconf files (GH-19667) ---------- components: Build messages: 373027 nosy: David.Edelsohn, Michael.Felt, pablogsal priority: normal severity: normal status: open title: AIX: build fails for xlc/xlC since new PEG parser type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 5 20:31:03 2020 From: report at bugs.python.org (Andrej Klychin) Date: Mon, 06 Jul 2020 00:31:03 +0000 Subject: [New-bugs-announce] [issue41216] eval don't load local variable in dict and list comprehensions. Message-ID: <1593995463.39.0.732635658928.issue41216@roundup.psfhosted.org> New submission from Andrej Klychin : I'm not sure if it is a bug or a feature of comprehensions or eval, but intuitively it seems like it should work. def foo(baz): return eval("[baz for _ in range(10)]") foo(3) Traceback (most recent call last): File "", line 1, in File "", line 2, in foo File "", line 1, in File "", line 1, in NameError: name 'baz' is not defined def bar(baz): return eval("{i: baz for i in range(10)}") bar(3) Traceback (most recent call last): File "", line 1, in File "", line 2, in bar File "", line 1, in File "", line 1, in NameError: name 'baz' is not defined ---------- components: Interpreter Core messages: 373054 nosy: Andy_kl priority: normal severity: normal status: open title: eval don't load local variable in dict and list comprehensions.
type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 04:10:51 2020 From: report at bugs.python.org (Csaba Nemes) Date: Mon, 06 Jul 2020 08:10:51 +0000 Subject: [New-bugs-announce] [issue41217] Obsolete note for default asyncio event loop on Windows Message-ID: <1594023051.16.0.859268804549.issue41217@roundup.psfhosted.org> New submission from Csaba Nemes : In the documentation of asyncio, a Note is present in the "Running Subprocesses" section: "The default asyncio event loop on Windows does not support subprocesses"; however, since 3.8 the default event loop on Windows is ProactorEventLoop, which does support subprocesses. This note should be removed. ---------- assignee: docs at python components: Documentation, asyncio messages: 373076 nosy: asvetlov, docs at python, waszil, yselivanov priority: normal severity: normal status: open title: Obsolete note for default asyncio event loop on Windows versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 11:00:52 2020 From: report at bugs.python.org (Matthias Bussonnier) Date: Mon, 06 Jul 2020 15:00:52 +0000 Subject: [New-bugs-announce] [issue41218] PyCF_ALLOW_TOP_LEVEL_AWAIT + list comprehension set .CO_COROUTINE falg. Message-ID: <1594047652.43.0.575402047323.issue41218@roundup.psfhosted.org> New submission from Matthias Bussonnier : As far as I can tell, sometime in 3.8.x (likely 3.8.3) the result of the following snippet changed: import ast import inspect cell = '[x for x in l]' code = compile(cell, "<>", "exec", flags=getattr(ast,'PyCF_ALLOW_TOP_LEVEL_AWAIT', 0x0)) inspect.CO_COROUTINE & code.co_flags == inspect.CO_COROUTINE It used to be False in 3.8.2, I believe, and is True after. This is problematic when you try to detect top-level await code. ---------- components: Interpreter Core messages: 373128 nosy: mbussonn priority: normal severity: normal status: open title: PyCF_ALLOW_TOP_LEVEL_AWAIT + list comprehension set .CO_COROUTINE falg. versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 12:47:19 2020 From: report at bugs.python.org (cere blanco) Date: Mon, 06 Jul 2020 16:47:19 +0000 Subject: [New-bugs-announce] [issue41219] Mimetypes doesn't support audio/webm Message-ID: <1594054039.75.0.283470806341.issue41219@roundup.psfhosted.org> New submission from cere blanco : Mimetypes doesn't support audio/webm ---------- components: Library (Lib) messages: 373140 nosy: cere blanco priority: normal severity: normal status: open title: Mimetypes doesn't support audio/webm type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 12:51:10 2020 From: report at bugs.python.org (Itay azolay) Date: Mon, 06 Jul 2020 16:51:10 +0000 Subject: [New-bugs-announce] [issue41220] add optional make_key argument to lru_cache Message-ID: <1594054270.54.0.203199023298.issue41220@roundup.psfhosted.org> New submission from Itay azolay : I'd like to add an optional argument to lru_cache. This argument is a user-given function that will replace the default behaviour of creating a key from the args/kwds of the function.
for example: def my_make_key(my_list): return my_list[0] @lru_cache(128, make_key=my_make_key) def cached_func(my_list): return sum(my_list) This will create a cached function that accepts arguments that need not be immutable. Also, it will allow users to add custom functions based on knowledge about the expected function input, without the need to create custom classes and/or override __hash__ ---------- components: Library (Lib) messages: 373141 nosy: Itayazolay priority: normal severity: normal status: open title: add optional make_key argument to lru_cache type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 13:33:53 2020 From: report at bugs.python.org (Manuel Jacob) Date: Mon, 06 Jul 2020 17:33:53 +0000 Subject: [New-bugs-announce] [issue41221] Output of print() might get truncated in unbuffered mode Message-ID: <1594056833.64.0.146611581422.issue41221@roundup.psfhosted.org> New submission from Manuel Jacob : Without unbuffered mode, it works as expected: % python -c "import sys; sys.stdout.write('x'*4294967296)" | wc -c 4294967296 % python -c "import sys; print('x'*4294967296)" | wc -c 4294967297 With unbuffered mode, writes get truncated to 2147479552 bytes on my Linux machine: % python -u -c "import sys; sys.stdout.write('x'*4294967296)" | wc -c 2147479552 % python -u -c "import sys; print('x'*4294967296)" | wc -c 2147479553 I didn't try, but it's probably an even bigger problem on Windows, where writes might be limited to 32767 bytes: https://github.com/python/cpython/blob/v3.9.0b4/Python/fileutils.c#L1585 Without unbuffered mode, `sys.stdout.buffer` is an `io.BufferedWriter` object. % python -c 'import sys; print(sys.stdout.buffer)' <_io.BufferedWriter name=''> With unbuffered mode, `sys.stdout.buffer` is an `io.FileIO` object. % python -u -c 'import sys; print(sys.stdout.buffer)' <_io.FileIO name='' mode='wb' closefd=False> `io.BufferedWriter` implements the `io.BufferedIOBase` interface. `io.BufferedIOBase.write()` is documented to write all passed bytes. `io.FileIO` implements the `io.RawIOBase` interface. `io.RawIOBase.write()` is documented to be able to write fewer bytes than passed. `io.TextIOWrapper.write()` is not documented to write all characters it has been passed, but e.g. `print()` relies on that. To fix the problem, it has to be ensured that either * `sys.stdout.buffer` is an object that guarantees that all bytes passed to its `write()` method are written (e.g. deriving from `io.BufferedIOBase`), or * `io.TextIOWrapper` calls the `write()` method of its underlying binary stream until all bytes have been written, or * users of `io.TextIOWrapper` call `write()` until all characters have been written. In the first two possibilities it probably makes sense to tighten the contract of `io.TextIOBase.write` to guarantee that all passed characters are written.
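As an illustration of the third option above, a minimal write-all helper might look like the following; this is only a sketch of how callers of a raw stream could loop, not how io.TextIOWrapper is actually implemented, and the helper name is made up:

```python
import sys

def write_all(raw, data: bytes) -> None:
    """Keep calling raw.write() until every byte has been accepted.

    io.RawIOBase.write() may write fewer bytes than given, and may return
    None if the stream is non-blocking and nothing could be written yet.
    """
    view = memoryview(data)
    while view:
        n = raw.write(view)
        if n is None:     # nothing written this time, try again
            continue
        view = view[n:]   # drop the bytes that were accepted

# In unbuffered mode (python -u), sys.stdout.buffer is a raw FileIO object:
write_all(sys.stdout.buffer, b"hello from the raw stream\n")
```

The same looping idea is what the second option would place inside io.TextIOWrapper itself.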
---------- components: IO messages: 373151 nosy: mjacob priority: normal severity: normal status: open title: Output of print() might get truncated in unbuffered mode type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 15:54:02 2020 From: report at bugs.python.org (Yann Dirson) Date: Mon, 06 Jul 2020 19:54:02 +0000 Subject: [New-bugs-announce] [issue41222] Undocumented behaviour change of POpen.stdout.readine with bufsize=0 or =1 Message-ID: <1594065242.21.0.841046846995.issue41222@roundup.psfhosted.org> New submission from Yann Dirson : On a Popen object created with bufsize=0, stdout.readline() does buffered reading with Python 3, whereas in 2.7 it did char-by-char reading. See attached example. As a result, a poll including the stdout object suffers a behaviour change when stdout is ready for writing and there is more than one line of data available. In both cases we get notified by poll() that data is available on the fd and we can stdout.readline() and get back to our polling loop. Then: * with Python 2, poll() then returns immediately and stdout.readline() will then return the next line * with Python 3, poll() now blocks Running the attached example under strace reveals the underlying difference: write(4, "go\n", 3) = 3 poll([{fd=5, events=POLLIN|POLLERR|POLLHUP}], 1, -1) = 1 ([{fd=5, revents=POLLIN}]) -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "x", 1) = 1 -read(5, "\n", 1) = 1 -fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(0x88, 0x2), ...}) = 0 +read(5, "xxxxxxxxxxxx\nyyyyyyyyyyyyyyy\naaa"..., 8192) = 74 write(1, ">xxxxxxxxxxxx\n", 14) = 14 We can see a buffered read, which explains the behaviour difference. When changing to bufsize=1, strace does not show a difference here. This is especially troubling, as the first note in https://docs.python.org/3/library/io.html#class-hierarchy mentions that even in buffered mode there is an unoptimized readline() implementation. ---------- components: IO files: testproc-unbuffered.py messages: 373165 nosy: yann priority: normal severity: normal status: open title: Undocumented behaviour change of POpen.stdout.readine with bufsize=0 or =1 type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49299/testproc-unbuffered.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 17:19:56 2020 From: report at bugs.python.org (jakirkham) Date: Mon, 06 Jul 2020 21:19:56 +0000 Subject: [New-bugs-announce] [issue41223] `object`-backed `memoryview`'s `tolist` errors Message-ID: <1594070396.59.0.711560769808.issue41223@roundup.psfhosted.org> New submission from jakirkham : When working with an `object`-backed `memoryview`, it seems we are unable to coerce it to a `list`. This would be useful as it would provide a way to get the underlying `object`s into something a bit easier to work with.
``` In [1]: import numpy In [2]: a = numpy.array(["abc", "def", "ghi"], dtype=object) In [3]: m = memoryview(a) In [4]: m.tolist() --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) in ----> 1 m.tolist() NotImplementedError: memoryview: format O not supported ``` ---------- messages: 373175 nosy: jakirkham priority: normal severity: normal status: open title: `object`-backed `memoryview`'s `tolist` errors versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 20:49:29 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 07 Jul 2020 00:49:29 +0000 Subject: [New-bugs-announce] [issue41224] Document is_annotated() in the symtable module Message-ID: <1594082969.57.0.57546378767.issue41224@roundup.psfhosted.org> New submission from Joannah Nanjekye : The function is_annotated() in symtable is not documented. I am using this opportunity to also add docstrings to the methods that did not have them already. ---------- assignee: docs at python components: Documentation messages: 373200 nosy: docs at python, nanjekyejoannah priority: normal severity: normal status: open title: Document is_annotated() in the symtable module type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 20:57:55 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 07 Jul 2020 00:57:55 +0000 Subject: [New-bugs-announce] [issue41225] Add a test for get_id() in the symtable module Message-ID: <1594083475.77.0.464939543595.issue41225@roundup.psfhosted.org> New submission from Joannah Nanjekye : The method get_id() in the symtable module is implemented and documented but not tested. ---------- messages: 373201 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Add a test for get_id() in the symtable module _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 22:01:27 2020 From: report at bugs.python.org (jakirkham) Date: Tue, 07 Jul 2020 02:01:27 +0000 Subject: [New-bugs-announce] [issue41226] Supporting `strides` in `memoryview.cast` Message-ID: <1594087287.71.0.356108904694.issue41226@roundup.psfhosted.org> New submission from jakirkham : Currently one can reshape a `memoryview` using `.cast(...)` like so... ``` In [1]: m = memoryview(b"abcdef") In [2]: m2 = m.cast("B", (2, 3)) ``` However, it is not currently possible to specify the `strides` when reshaping the `memoryview`. This would be useful if the `memoryview` should be F-order or otherwise strided. To that end, syntax like this would be useful...
``` In [1]: m = memoryview(b"abcdef") In [2]: m2 = m.cast("B", (2, 3), (1, 2)) ``` ---------- messages: 373202 nosy: jakirkham priority: normal severity: normal status: open title: Supporting `strides` in `memoryview.cast` versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 6 23:33:36 2020 From: report at bugs.python.org (Faris Chugthai) Date: Tue, 07 Jul 2020 03:33:36 +0000 Subject: [New-bugs-announce] [issue41227] minor typo in asyncio transport protocol Message-ID: <1594092816.12.0.525496493671.issue41227@roundup.psfhosted.org> New submission from Faris Chugthai : The penultimate sentence in the asyncio transport docs reads: > The subprocess is created by th loop.subprocess_exec() method: It should be `created by the`. The sentence can be seen here: https://docs.python.org/3.10/library/asyncio-protocol.html?highlight=call_soon#loop-subprocess-exec-and-subprocessprotocol ---------- assignee: docs at python components: Documentation, asyncio messages: 373204 nosy: Faris Chugthai, asvetlov, docs at python, yselivanov priority: normal severity: normal status: open title: minor typo in asyncio transport protocol versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 7 02:37:39 2020 From: report at bugs.python.org (Nima Dini) Date: Tue, 07 Jul 2020 06:37:39 +0000 Subject: [New-bugs-announce] [issue41228] Fix the typo in the description of calendar.monthcalendar(year, month) Message-ID: <1594103859.96.0.219751919991.issue41228@roundup.psfhosted.org> Change by Nima Dini : ---------- assignee: docs at python components: Documentation nosy: docs at python, ndini priority: normal pull_requests: 20516 severity: normal status: open title: Fix the typo in the description of calendar.monthcalendar(year, month) type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 7 08:08:08 2020 From: report at bugs.python.org (JIanqiu Tao) Date: Tue, 07 Jul 2020 12:08:08 +0000 Subject: [New-bugs-announce] [issue41229] Asynchronous generator memory leak Message-ID: <1594123688.2.0.0110904549514.issue41229@roundup.psfhosted.org> New submission from JIanqiu Tao : The resources used by an asynchronous generator can't be released properly when it works with the "asend" method. Besides, in Python 3.7-, a RuntimeError was raised when asyncio.run completes, but the message is puzzling: RuntimeError: can't send non-None value to a just-started coroutine In Python 3.8+, no exception is shown. Python 3.5 doesn't support yield in async functions, so it seems unaffected?
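The attached leak.py is not shown in this digest, so the following is only a guessed sketch of the kind of pattern described above: an async generator driven with asend() and never closed, plus the explicit aclose() that lets its cleanup run. The generator body and names are assumptions:

```python
import asyncio

async def agen():
    try:
        while True:
            yield "data"          # pretend each yield holds some resource
    finally:
        print("cleaned up")       # only runs if the generator is closed

async def main():
    gen = agen()
    await gen.asend(None)         # the first asend() must send None
    await gen.asend(None)
    # Without this, the finally block may never run and resources can leak:
    await gen.aclose()

asyncio.run(main())
```

Whether the interpreter version shows a RuntimeError or stays silent, explicitly closing the generator (or using contextlib.aclosing on 3.10+) avoids relying on finalization at interpreter exit.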
---------- components: Interpreter Core, asyncio files: leak.py messages: 373221 nosy: asvetlov, yselivanov, zkonge priority: normal severity: normal status: open title: Asynchronous generator memory leak type: resource usage versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49302/leak.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 7 08:17:37 2020 From: report at bugs.python.org (Saumitra Verma) Date: Tue, 07 Jul 2020 12:17:37 +0000 Subject: [New-bugs-announce] [issue41230] IDLE intellisense Message-ID: <1594124257.05.0.294353928422.issue41230@roundup.psfhosted.org> New submission from Saumitra Verma : There should be a simple autocomplete (IntelliSense) in IDLE. ---------- assignee: terry.reedy components: IDLE messages: 373222 nosy: Saumitra Verma, terry.reedy priority: normal severity: normal status: open title: IDLE intellisense type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 7 13:29:15 2020 From: report at bugs.python.org (David Caro) Date: Tue, 07 Jul 2020 17:29:15 +0000 Subject: [New-bugs-announce] [issue41231] Type annotations lost when using wraps by default Message-ID: <1594142955.77.0.852569543435.issue41231@roundup.psfhosted.org> New submission from David Caro : In version 3.2, bpo-8814 introduced copying the __annotations__ property from the wrapped function to the wrapper by default. That would be the desired behavior when your wrapper function has the same signature as the function it wraps, but in some cases (for example, with the contextlib.asynccontextmanager function) the return value is different, and then the __annotations__ property will have invalid information: In [2]: from contextlib import asynccontextmanager In [3]: @asynccontextmanager ...: async def mytest() -> int: ...: return 1 ...: In [4]: mytest.__annotations__ Out[4]: {'return': int} I propose changing the behavior of wraps to only assign __annotations__ by default if there is no __annotations__ already in the wrapper function. That would fit most default cases, but would allow preserving the __annotations__ of the wrapper function when the types are explicitly specified, allowing contextlib.asynccontextmanager to be changed to use the proper types (returning an AsyncContextManager) and keep the __annotations__ valid. I'll try to get a POC and attach it to the issue, but please comment with your ideas too. ---------- components: Library (Lib) messages: 373233 nosy: David Caro priority: normal severity: normal status: open title: Type annotations lost when using wraps by default type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 7 18:00:22 2020 From: report at bugs.python.org (Thor Whalen) Date: Tue, 07 Jul 2020 22:00:22 +0000 Subject: [New-bugs-announce] [issue41232] Python `functools.wraps` doesn't deal with defaults correctly Message-ID: <1594159222.49.0.432639092165.issue41232@roundup.psfhosted.org> New submission from Thor Whalen : # PROBLEM When using `functools.wraps`, the signature claims the wrappee's defaults, but the wrapping function still uses its own original defaults. Why might that be the desirable default?
# PROPOSED SOLUTION Adding '__defaults__', '__kwdefaults__' to `WRAPPER_ASSIGNMENTS` so that actual defaults can be consistent with signature defaults. # Code Demo ```python from functools import wraps from inspect import signature def g(a: float, b=10): return a * b def f(a: int, b=1): return a * b assert f(3) == 3 f = wraps(g)(f) assert str(signature(f)) == '(a: float, b=10)' # signature says that b defaults to 10 assert f.__defaults__ == (1,) # ... but it's a lie! assert f(3) == 3 != g(3) == 30 # ... and one that can lead to problems! ``` Why is this so? Because `functools.wraps` updates the `__signature__` (including annotations and defaults), `__annotations__`, but not `__defaults__`, which Python apparently looks to in order to assign defaults. One solution is to politely ask wraps to include these defaults. ```python from functools import wraps, WRAPPER_ASSIGNMENTS, partial my_wraps = partial(wraps, assigned=(list(WRAPPER_ASSIGNMENTS) + ['__defaults__', '__kwdefaults__'])) def g(a: float, b=10): return a * b def f(a: int, b=1): return a * b assert f(3) == 3 f = my_wraps(g)(f) assert f(3) == 30 == g(3) assert f.__defaults__ == (10,) # ... because now got g defaults! ``` Wouldn't it be better to get this out of the box? When would I want the defaults that are actually used to be different from those mentioned in the signature?
---------- assignee: docs at python components: Documentation messages: 373260 nosy: docs at python, eric.araujo, ezio.melotti, mdk, willingc, yyyyyyyan priority: normal severity: normal status: open title: Missing links to errnos on Built-in Exceptions page type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 7 18:51:30 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 07 Jul 2020 22:51:30 +0000 Subject: [New-bugs-announce] [issue41234] Remove symbol.sym_name Message-ID: <1594162290.58.0.321462635772.issue41234@roundup.psfhosted.org> New submission from Joannah Nanjekye : symbol.sym_name was already removed yet still documented in library/symbol.rst. I suggest completely removing the docs too since the module is non-existing. ---------- components: Library (Lib) messages: 373261 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Remove symbol.sym_name type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 7 23:55:25 2020 From: report at bugs.python.org (Zackery Spytz) Date: Wed, 08 Jul 2020 03:55:25 +0000 Subject: [New-bugs-announce] [issue41235] Incorrect error handling in SSLContext.load_dh_params() Message-ID: <1594180525.01.0.333549128836.issue41235@roundup.psfhosted.org> New submission from Zackery Spytz : If the SSL_CTX_set_tmp_dh() call fails, SSLContext.load_dh_params() returns None with a live exception. It should return NULL in this case. ---------- assignee: christian.heimes components: Extension Modules, SSL messages: 373271 nosy: ZackerySpytz, christian.heimes priority: normal severity: normal status: open title: Incorrect error handling in SSLContext.load_dh_params() type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 02:05:16 2020 From: report at bugs.python.org (Baozhen Chen) Date: Wed, 08 Jul 2020 06:05:16 +0000 Subject: [New-bugs-announce] [issue41236] "about" button in MacOS caused an error Message-ID: <1594188316.81.0.4505690312.issue41236@roundup.psfhosted.org> New submission from Baozhen Chen : when clicking the MacOS menubar-File-About Python, it appears >>> can't invoke "tk_messageBox" command: application has been destroyed while executing "tk_messageBox -icon info -type ok -title [mc "About Widget Demo"] -message [mc "Tk widget demonstration application"] -detail "[mc "Copyright \u00a9..." 
(procedure "tkAboutDialog" line 2) invoked from within "tkAboutDialog" ---------- components: macOS messages: 373278 nosy: Baozhen Chen, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: "about" button in MacOS caused an error type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 02:09:25 2020 From: report at bugs.python.org (Christoph Gohlke) Date: Wed, 08 Jul 2020 06:09:25 +0000 Subject: [New-bugs-announce] [issue41237] Access violation in python39.dll!meth_dealloc on exit Message-ID: <1594188565.24.0.738467793313.issue41237@roundup.psfhosted.org> New submission from Christoph Gohlke : When testing extension packages on Python 3.9.0b4 for Windows, I often get access violations in `python39.dll!meth_dealloc` during interpreter exit. The crash only happens when heap verification is turned on (`"C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\gflags.exe" /p /enable python.exe`; https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/gflags). The debug build does not crash. Could be related to "PEP 573: Module State Access from C Extension Methods", https://bugs.python.org/issue38787 To reproduce the crash with numpy-1.19.0 (built from source): ``` Python 3.9.0b4 (tags/v3.9.0b4:69dec9c, Jul 2 2020, 21:46:36) [MSC v.1924 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy;numpy.test() NumPy version 1.19.0 10364 passed, 438 skipped, 108 deselected, 17 xfailed, 1 xpassed in 243.52s (0:04:03) True >>> exit() Windows fatal exception: access violation Current thread 0x00002688 (most recent call first): ``` The crash is at: ``` static void meth_dealloc(PyCFunctionObject *m) { _PyObject_GC_UNTRACK(m); if (m->m_weakreflist != NULL) { PyObject_ClearWeakRefs((PyObject*) m); } Py_XDECREF(m->m_self); Py_XDECREF(m->m_module); Py_XDECREF(PyCFunction_GET_CLASS(m)); /* <-- crash */ PyObject_GC_Del(m); } ``` Call Stack: ``` > python39.dll!meth_dealloc(PyCFunctionObject * m) Line 169 C [Inline Frame] python39.dll!_Py_Dealloc(_object *) Line 2209 C [Inline Frame] python39.dll!_Py_DECREF(_object *) Line 430 C [Inline Frame] python39.dll!_Py_XDECREF(_object * op) Line 497 C [Inline Frame] python39.dll!free_keys_object(_dictkeysobject *) Line 598 C [Inline Frame] python39.dll!dictkeys_decref(_dictkeysobject *) Line 333 C python39.dll!dict_dealloc(PyDictObject * mp) Line 2026 C [Inline Frame] python39.dll!_Py_Dealloc(_object *) Line 2209 C [Inline Frame] python39.dll!_Py_DECREF(_object *) Line 430 C python39.dll!_PyInterpreterState_ClearModules(_is * interp) Line 768 C python39.dll!_PyImport_Cleanup(_ts * tstate) Line 628 C python39.dll!Py_FinalizeEx() Line 1432 C python39.dll!Py_Exit(int sts) Line 2433 C python39.dll!handle_system_exit() Line 696 C python39.dll!_PyErr_PrintEx(_ts * tstate, int set_sys_last_vars) Line 708 C [Inline Frame] python39.dll!PyErr_Print() Line 807 C python39.dll!PyRun_InteractiveLoopFlags(_iobuf * fp, const char * filename_str, PyCompilerFlags * flags) Line 137 C python39.dll!PyRun_AnyFileExFlags(_iobuf * fp, const char * filename, int closeit, PyCompilerFlags * flags) Line 81 C python39.dll!pymain_run_stdin(PyConfig * config, PyCompilerFlags * cf) Line 509 C python39.dll!pymain_run_python(int * exitcode) Line 597 C python39.dll!Py_RunMain() Line 675 C python39.dll!Py_Main(int argc, wchar_t * * argv) Line 716 C [External Code] ``` Locals: ``` - m
0x000001fbea75ddb0 {ob_base={ob_refcnt=0 ob_type=0x00007fff5a1e5650 {python39.dll!_typeobject PyCFunction_Type} {...} } ...} PyCFunctionObject * - ob_base {ob_refcnt=0 ob_type=0x00007fff5a1e5650 {python39.dll!_typeobject PyCFunction_Type} {ob_base={ob_base=...} ...} } _object ob_refcnt 0 __int64 - ob_type 0x00007fff5a1e5650 {python39.dll!_typeobject PyCFunction_Type} {ob_base={ob_base={ob_refcnt=23 ob_type=...} ...} ...} _typeobject * + ob_base {ob_base={ob_refcnt=23 ob_type=0x00007fff5a1e7c60 {python39.dll!_typeobject PyType_Type} {ob_base={ob_base=...} ...} } ...} PyVarObject + tp_name 0x00007fff5a16d8a0 "builtin_function_or_method" const char * tp_basicsize 56 __int64 tp_itemsize 0 __int64 tp_dealloc 0x00007fff59e2c260 {python39.dll!meth_dealloc(PyCFunctionObject *)} void(*)(_object *) tp_vectorcall_offset 48 __int64 tp_getattr 0x0000000000000000 _object *(*)(_object *, char *) tp_setattr 0x0000000000000000 int(*)(_object *, char *, _object *) + tp_as_async 0x0000000000000000 PyAsyncMethods * tp_repr 0x00007fff59e5c430 {python39.dll!meth_repr(PyCFunctionObject *)} _object *(*)(_object *) + tp_as_number 0x0000000000000000 PyNumberMethods * + tp_as_sequence 0x0000000000000000 PySequenceMethods * + tp_as_mapping 0x0000000000000000 PyMappingMethods * tp_hash 0x00007fff59f44768 {python39.dll!meth_hash(PyCFunctionObject *)} __int64(*)(_object *) tp_call 0x00007fff59e28790 {python39.dll!cfunction_call(_object *, _object *, _object *)} _object *(*)(_object *, _object *, _object *) tp_str 0x00007fff59e5b978 {python39.dll!object_str(_object *)} _object *(*)(_object *) tp_getattro 0x00007fff59e2fc40 {python39.dll!PyObject_GenericGetAttr(_object *, _object *)} _object *(*)(_object *, _object *) tp_setattro 0x00007fff59e92520 {python39.dll!PyObject_GenericSetAttr(_object *, _object *, _object *)} int(*)(_object *, _object *, _object *) + tp_as_buffer 0x0000000000000000 PyBufferProcs * tp_flags 808960 unsigned long + tp_doc 0x0000000000000000 const char * tp_traverse 0x00007fff59eb0eb0 {python39.dll!meth_traverse(PyCFunctionObject *, int(*)(_object *, void *), void *)} int(*)(_object *, int(*)(_object *, void *), void *) tp_clear 0x0000000000000000 int(*)(_object *) tp_richcompare 0x00007fff59f653a4 {python39.dll!meth_richcompare(_object *, _object *, int)} _object *(*)(_object *, _object *, int) tp_weaklistoffset 40 __int64 tp_iter 0x0000000000000000 _object *(*)(_object *) tp_iternext 0x0000000000000000 _object *(*)(_object *) + tp_methods 0x00007fff5a20dad0 {python39.dll!PyMethodDef meth_methods[2]} {ml_name=0x00007fff5a1673d0 "__reduce__" ...} PyMethodDef * + tp_members 0x00007fff5a20da80 {python39.dll!PyMemberDef meth_members[2]} {name=0x00007fff5a038210 "__module__" ...} PyMemberDef * + tp_getset 0x00007fff5a20d990 {python39.dll!PyGetSetDef meth_getsets[6]} {name=0x00007fff5a038208 "__doc__" get=...} PyGetSetDef * + tp_base 0x00007fff5a1e7e00 {python39.dll!_typeobject PyBaseObject_Type} {ob_base={ob_base={ob_refcnt=10617 ob_type=...} ...} ...} _typeobject * + tp_dict 0x000001fbb3b2a240 {ob_refcnt=1 ob_type=0x00007fff5a1db8b0 {python39.dll!_typeobject PyDict_Type} {ob_base=...} } _object * tp_descr_get 0x0000000000000000 _object *(*)(_object *, _object *, _object *) tp_descr_set 0x0000000000000000 int(*)(_object *, _object *, _object *) tp_dictoffset 0 __int64 tp_init 0x00007fff59e26378 {python39.dll!object_init(_object *, _object *, _object *)} int(*)(_object *, _object *, _object *) tp_alloc 0x00007fff59e98130 {python39.dll!PyType_GenericAlloc(_typeobject *, __int64)} _object *(*)(_typeobject 
*, __int64) tp_new 0x0000000000000000 _object *(*)(_typeobject *, _object *, _object *) tp_free 0x00007fff59e2c610 {python39.dll!PyObject_GC_Del(void *)} void(*)(void *) tp_is_gc 0x0000000000000000 int(*)(_object *) + tp_bases 0x000001fbb3b04910 {ob_refcnt=1 ob_type=0x00007fff5a1e7920 {python39.dll!_typeobject PyTuple_Type} {...} } _object * + tp_mro 0x000001fbb3b2a400 {ob_refcnt=1 ob_type=0x00007fff5a1e7920 {python39.dll!_typeobject PyTuple_Type} {...} } _object * + tp_cache 0x0000000000000000 _object * + tp_subclasses 0x000001fbb3b2a4c0 {ob_refcnt=1 ob_type=0x00007fff5a1db8b0 {python39.dll!_typeobject PyDict_Type} {ob_base=...} } _object * + tp_weaklist 0x000001fbb3b29db0 {ob_refcnt=1 ob_type=0x00007fff5a1e8620 {python39.dll!_typeobject _PyWeakref_RefType} {...} } _object * tp_del 0x0000000000000000 void(*)(_object *) tp_version_tag 40 unsigned int tp_finalize 0x0000000000000000 void(*)(_object *) tp_vectorcall 0x0000000000000000 _object *(*)(_object *, _object * const *, unsigned __int64, _object *) - m_ml 0x000001fbe8ec3fe0 {ml_name=??? ml_meth=??? ml_flags=??? ...} PyMethodDef * ml_name ml_meth ml_flags ml_doc - m_self 0x000001fbea74eed0 {ob_refcnt=2181481948096 ob_type=0x00007fff5a1da1f0 {python39.dll!_typeobject PyCapsule_Type} {...} } _object * ob_refcnt 2181481948096 __int64 - ob_type 0x00007fff5a1da1f0 {python39.dll!_typeobject PyCapsule_Type} {ob_base={ob_base={ob_refcnt=3 ob_type=...} ...} ...} _typeobject * + ob_base {ob_base={ob_refcnt=3 ob_type=0x00007fff5a1e7c60 {python39.dll!_typeobject PyType_Type} {ob_base={ob_base=...} ...} } ...} PyVarObject + tp_name 0x00007fff5a16bb68 "PyCapsule" const char * tp_basicsize 48 __int64 tp_itemsize 0 __int64 tp_dealloc 0x00007fff59f3c880 {python39.dll!capsule_dealloc(_object *)} void(*)(_object *) tp_vectorcall_offset 0 __int64 tp_getattr 0x0000000000000000 _object *(*)(_object *, char *) tp_setattr 0x0000000000000000 int(*)(_object *, char *, _object *) + tp_as_async 0x0000000000000000 PyAsyncMethods * tp_repr 0x00007fff5a00093c {python39.dll!capsule_repr(_object *)} _object *(*)(_object *) + tp_as_number 0x0000000000000000 PyNumberMethods * + tp_as_sequence 0x0000000000000000 PySequenceMethods * + tp_as_mapping 0x0000000000000000 PyMappingMethods * tp_hash 0x00007fff59e93c1c {python39.dll!_Py_HashPointer(const void *)} __int64(*)(_object *) tp_call 0x0000000000000000 _object *(*)(_object *, _object *, _object *) tp_str 0x00007fff59e5b978 {python39.dll!object_str(_object *)} _object *(*)(_object *) tp_getattro 0x00007fff59e2fc40 {python39.dll!PyObject_GenericGetAttr(_object *, _object *)} _object *(*)(_object *, _object *) tp_setattro 0x00007fff59e92520 {python39.dll!PyObject_GenericSetAttr(_object *, _object *, _object *)} int(*)(_object *, _object *, _object *) + tp_as_buffer 0x0000000000000000 PyBufferProcs * tp_flags 4096 unsigned long + tp_doc 0x00007fff5a122650 "Capsule objects let you wrap a C \"void *\" pointer in a Python\nobject. They're a way of passing data through the Python interpreter\nwithout creating your own custom type.\n\nCapsules are used for commun... 
const char * tp_traverse 0x0000000000000000 int(*)(_object *, int(*)(_object *, void *), void *) tp_clear 0x0000000000000000 int(*)(_object *) tp_richcompare 0x00007fff59e839c4 {python39.dll!object_richcompare(_object *, _object *, int)} _object *(*)(_object *, _object *, int) tp_weaklistoffset 0 __int64 tp_iter 0x0000000000000000 _object *(*)(_object *) tp_iternext 0x0000000000000000 _object *(*)(_object *) + tp_methods 0x0000000000000000 PyMethodDef * + tp_members 0x0000000000000000 PyMemberDef * + tp_getset 0x0000000000000000 PyGetSetDef * + tp_base 0x00007fff5a1e7e00 {python39.dll!_typeobject PyBaseObject_Type} {ob_base={ob_base={ob_refcnt=10617 ob_type=...} ...} ...} _typeobject * + tp_dict 0x000001fbb3b2d780 {ob_refcnt=1 ob_type=0x00007fff5a1db8b0 {python39.dll!_typeobject PyDict_Type} {ob_base=...} } _object * tp_descr_get 0x0000000000000000 _object *(*)(_object *, _object *, _object *) tp_descr_set 0x0000000000000000 int(*)(_object *, _object *, _object *) tp_dictoffset 0 __int64 tp_init 0x00007fff59e26378 {python39.dll!object_init(_object *, _object *, _object *)} int(*)(_object *, _object *, _object *) tp_alloc 0x00007fff59e98130 {python39.dll!PyType_GenericAlloc(_typeobject *, __int64)} _object *(*)(_typeobject *, __int64) tp_new 0x0000000000000000 _object *(*)(_typeobject *, _object *, _object *) tp_free 0x00007fff59e50cd0 {python39.dll!PyObject_Free(void *)} void(*)(void *) tp_is_gc 0x0000000000000000 int(*)(_object *) + tp_bases 0x000001fbb3b04b50 {ob_refcnt=1 ob_type=0x00007fff5a1e7920 {python39.dll!_typeobject PyTuple_Type} {...} } _object * + tp_mro 0x000001fbb3b2d7c0 {ob_refcnt=1 ob_type=0x00007fff5a1e7920 {python39.dll!_typeobject PyTuple_Type} {...} } _object * + tp_cache 0x0000000000000000 _object * + tp_subclasses 0x0000000000000000 _object * + tp_weaklist 0x000001fbb3b2f090 {ob_refcnt=1 ob_type=0x00007fff5a1e8620 {python39.dll!_typeobject _PyWeakref_RefType} {...} } _object * tp_del 0x0000000000000000 void(*)(_object *) tp_version_tag 0 unsigned int tp_finalize 0x0000000000000000 void(*)(_object *) tp_vectorcall 0x0000000000000000 _object *(*)(_object *, _object * const *, unsigned __int64, _object *) - m_module 0x000001fbea75e3f0 {ob_refcnt=9 ob_type=0x00007fff5a1e8140 {python39.dll!_typeobject PyUnicode_Type} {...} } _object * ob_refcnt 9 __int64 - ob_type 0x00007fff5a1e8140 {python39.dll!_typeobject PyUnicode_Type} {ob_base={ob_base={ob_refcnt=495 ob_type=...} ...} ...} _typeobject * + ob_base {ob_base={ob_refcnt=495 ob_type=0x00007fff5a1e7c60 {python39.dll!_typeobject PyType_Type} {ob_base={...} ...} } ...} PyVarObject + tp_name 0x00007fff5a036c8c "str" const char * tp_basicsize 80 __int64 tp_itemsize 0 __int64 tp_dealloc 0x00007fff59e2c830 {python39.dll!unicode_dealloc(_object *)} void(*)(_object *) tp_vectorcall_offset 0 __int64 tp_getattr 0x0000000000000000 _object *(*)(_object *, char *) tp_setattr 0x0000000000000000 int(*)(_object *, char *, _object *) + tp_as_async 0x0000000000000000 PyAsyncMethods * tp_repr 0x00007fff59ea4cac {python39.dll!unicode_repr(_object *)} _object *(*)(_object *) + tp_as_number 0x00007fff5a20f9f0 {python39.dll!PyNumberMethods unicode_as_number} {nb_add=0x0000000000000000 nb_subtract=...} PyNumberMethods * + tp_as_sequence 0x00007fff5a20f8c0 {python39.dll!PySequenceMethods unicode_as_sequence} {sq_length=0x00007fff59e6c498 {python39.dll!unicode_length(_object *)} ...} PySequenceMethods * + tp_as_mapping 0x00007fff5a20f950 {python39.dll!PyMappingMethods unicode_as_mapping} {mp_length=0x00007fff59e6c498 
{python39.dll!unicode_length(_object *)} ...} PyMappingMethods * tp_hash 0x00007fff59e95970 {python39.dll!unicode_hash(_object *)} __int64(*)(_object *) tp_call 0x0000000000000000 _object *(*)(_object *, _object *, _object *) tp_str 0x00007fff59ea68c0 {python39.dll!unicode_str(_object *)} _object *(*)(_object *) tp_getattro 0x00007fff59e2fc40 {python39.dll!PyObject_GenericGetAttr(_object *, _object *)} _object *(*)(_object *, _object *) tp_setattro 0x00007fff59e92520 {python39.dll!PyObject_GenericSetAttr(_object *, _object *, _object *)} int(*)(_object *, _object *, _object *) + tp_as_buffer 0x0000000000000000 PyBufferProcs * tp_flags 269227008 unsigned long + tp_doc 0x00007fff5a141740 "str(object='') -> str\nstr(bytes_or_buffer[, encoding[, errors]]) -> str\n\nCreate a new string object from the given object. If encoding or\nerrors is specified, then the object must expose a data buffer... const char * tp_traverse 0x0000000000000000 int(*)(_object *, int(*)(_object *, void *), void *) tp_clear 0x0000000000000000 int(*)(_object *) tp_richcompare 0x00007fff59e828a0 {python39.dll!PyUnicode_RichCompare(_object *, _object *, int)} _object *(*)(_object *, _object *, int) tp_weaklistoffset 0 __int64 tp_iter 0x00007fff59ea7618 {python39.dll!unicode_iter(_object *)} _object *(*)(_object *) tp_iternext 0x0000000000000000 _object *(*)(_object *) + tp_methods 0x00007fff5a20fb70 {python39.dll!PyMethodDef unicode_methods[51]} {ml_name=0x00007fff5a039c58 "encode" ...} PyMethodDef * + tp_members 0x0000000000000000 PyMemberDef * + tp_getset 0x0000000000000000 PyGetSetDef * + tp_base 0x00007fff5a1e7e00 {python39.dll!_typeobject PyBaseObject_Type} {ob_base={ob_base={ob_refcnt=10617 ob_type=...} ...} ...} _typeobject * + tp_dict 0x000001fbb3b19a40 {ob_refcnt=1 ob_type=0x00007fff5a1db8b0 {python39.dll!_typeobject PyDict_Type} {ob_base=...} } _object * tp_descr_get 0x0000000000000000 _object *(*)(_object *, _object *, _object *) tp_descr_set 0x0000000000000000 int(*)(_object *, _object *, _object *) tp_dictoffset 0 __int64 tp_init 0x00007fff59e26378 {python39.dll!object_init(_object *, _object *, _object *)} int(*)(_object *, _object *, _object *) tp_alloc 0x00007fff59e98130 {python39.dll!PyType_GenericAlloc(_typeobject *, __int64)} _object *(*)(_typeobject *, __int64) tp_new 0x00007fff59e263d8 {python39.dll!unicode_new(_typeobject *, _object *, _object *)} _object *(*)(_typeobject *, _object *, _object *) tp_free 0x00007fff59e50cd0 {python39.dll!PyObject_Free(void *)} void(*)(void *) tp_is_gc 0x0000000000000000 int(*)(_object *) + tp_bases 0x000001fbb3b04610 {ob_refcnt=1 ob_type=0x00007fff5a1e7920 {python39.dll!_typeobject PyTuple_Type} {...} } _object * + tp_mro 0x000001fbb3b19cc0 {ob_refcnt=1 ob_type=0x00007fff5a1e7920 {python39.dll!_typeobject PyTuple_Type} {...} } _object * + tp_cache 0x0000000000000000 _object * + tp_subclasses 0x000001fbb94c5f00 {ob_refcnt=1 ob_type=0x00007fff5a1db8b0 {python39.dll!_typeobject PyDict_Type} {ob_base=...} } _object * + tp_weaklist 0x000001fbb3b1fe50 {ob_refcnt=1 ob_type=0x00007fff5a1e8620 {python39.dll!_typeobject _PyWeakref_RefType} {...} } _object * tp_del 0x0000000000000000 void(*)(_object *) tp_version_tag 4 unsigned int tp_finalize 0x0000000000000000 void(*)(_object *) tp_vectorcall 0x0000000000000000 _object *(*)(_object *, _object * const *, unsigned __int64, _object *) - m_weakreflist 0x0000000000000000 _object * ob_refcnt ob_type vectorcall 0x0000000000000000 _object *(*)(_object *, _object * const *, unsigned __int64, _object *) ``` ---------- components: 
Interpreter Core, Windows messages: 373279 nosy: cgohlke, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Access violation in python39.dll!meth_dealloc on exit versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 03:32:46 2020 From: report at bugs.python.org (=?utf-8?q?Pawe=C5=82 Miech?=) Date: Wed, 08 Jul 2020 07:32:46 +0000 Subject: [New-bugs-announce] [issue41238] Python 3 shelve.DbfilenameShelf is generating 164 times larger files than Python 2.7 when storing dicts Message-ID: <1594193566.99.0.171960738347.issue41238@roundup.psfhosted.org> New submission from Paweł Miech : I'm porting some code from Python 2.7 to Python 3.8. There is some code that is using shelve.DbfilenameShelf to store some nested dictionaries with sets. I found out that compared with Python 2.7, Python 3.8 shelve generates files that are approximately 164 times larger on disk. The Python 3.8 file is 2 027 520 bytes, while the Python 2.7 file is 12 288 bytes. Code sample: Filename: test_anydbm.py #!/usr/bin/env python import datetime import shelve import sys import time from os import path def main(): print(sys.version) fname = 'shelf_test_{}'.format(datetime.datetime.now().isoformat()) bucket = shelve.DbfilenameShelf(fname, "n") now = time.time() limit = 1000 key = 'some key > some key > other' top_dict = {} to_store = { 1: { 'page_item_numbers': set(), 'products_on_page': None } } for i in range(limit): to_store[1]['page_item_numbers'].add(i) top_dict[key] = to_store bucket[key] = top_dict end = time.time() db_file = False try: fsize = path.getsize(fname) except Exception as e: print("file not found? {}".format(e)) try: fsize = path.getsize(fname + '.db') db_file = True except Exception as e: print("file not found? {}".format(e)) fsize = None print("Stored {} in {} filesize {}".format(limit, end - now, fsize)) print(fname) bucket.close() bucket = shelve.DbfilenameShelf(fname, flag="r") if db_file: fname += '.db' print("In file {} {}".format(fname, len(list(bucket.items())))) Output of running it in docker image: Dockerfile: FROM python:2-jessie VOLUME /scripts CMD scripts/test_anydbm.py 2.7.16 (default, Jul 10 2019, 03:39:20) [GCC 4.9.2] Stored 1000 in 0.0814290046692 filesize 12288 shelf_test_2020-07-08T07:26:23.778769 In file shelf_test_2020-07-08T07:26:23.778769 1 So you can see file size: 12 288 And now running the same thing in Python 3 Dockerfile: FROM python:3.8-slim-buster VOLUME /scripts CMD scripts/test_anydbm.py 3.8.3 (default, Jun 9 2020, 17:49:41) [GCC 8.3.0] Stored 1000 in 0.02681446075439453 filesize 2027520 shelf_test_2020-07-08T07:27:18.068638 In file shelf_test_2020-07-08T07:27:18.068638 1 Notice file size: 2 027 520 Why is this happening? Is this a bug? If I'd like to fix it, do you have some ideas about causes of this? ---------- components: Library (Lib) files: test_anydbm.py messages: 373284 nosy: Paweł
Miech priority: normal severity: normal status: open title: Python 3 shelve.DbfilenameShelf is generating 164 times larger files than Python 2.7 when storing dicts type: resource usage versions: Python 3.8 Added file: https://bugs.python.org/file49304/test_anydbm.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 04:27:07 2020 From: report at bugs.python.org (Wu Wenyan) Date: Wed, 08 Jul 2020 08:27:07 +0000 Subject: [New-bugs-announce] [issue41239] SSL Certificate verify failed in Python3.6/3.7 Message-ID: <1594196827.07.0.525531183608.issue41239@roundup.psfhosted.org> New submission from Wu Wenyan : I am running the following code in python3.6 to connect to a storage. [root at controller wuwy]# python3 Python 3.6.8 (default, Jan 11 2019, 02:17:16) [GCC 8.2.1 20180905 (Red Hat 8.2.1-3)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pywbem >>> ip = '193.168.11.113' >>> user = '193_160_28_29' >>> password = '193_160_28_29' >>> url = 'https://193.168.11.113:5989' >>> ca_certs = '/home/ca.cer' >>> conn = pywbem.WBEMConnection(url,(user, password),default_namespace='root/example',ca_certs=ca_certs,no_verification=False) >>> conn.EnumerateInstances('EXAMPLE_StorageProduct') And I am getting the below error. Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.6/site-packages/pywbem/cim_operations.py", line 1919, in EnumerateInstances **extra) File "/usr/local/lib/python3.6/site-packages/pywbem/cim_operations.py", line 1232, in _imethodcall conn_id=self.conn_id) File "/usr/local/lib/python3.6/site-packages/pywbem/cim_http.py", line 776, in wbem_request client.endheaders() File "/usr/lib64/python3.6/http/client.py", line 1234, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/lib64/python3.6/http/client.py", line 1026, in _send_output self.send(msg) File "/usr/local/lib/python3.6/site-packages/pywbem/cim_http.py", line 461, in send self.connect() # pylint: disable=no-member File "/usr/local/lib/python3.6/site-packages/pywbem/cim_http.py", line 619, in connect return self.sock.connect((self.host, self.port)) File "/usr/lib64/python3.6/ssl.py", line 1064, in connect self._real_connect(addr, False) File "/usr/lib64/python3.6/ssl.py", line 1055, in _real_connect self.do_handshake() File "/usr/lib64/python3.6/ssl.py", line 1032, in do_handshake self._sslobj.do_handshake() File "/usr/lib64/python3.6/ssl.py", line 648, in do_handshake raise ValueError("check_hostname needs server_hostname " ValueError: check_hostname needs server_hostname argument When I am running the same code in python3.7, error changed. 
Traceback (most recent call last): File "", line 1, in File "/usr/python3/lib/python3.7/site-packages/pywbem/_cim_operations.py", line 2494, in EnumerateInstances **extra) File "/usr/python3/lib/python3.7/site-packages/pywbem/_cim_operations.py", line 1763, in _imethodcall conn_id=self.conn_id) File "/usr/python3/lib/python3.7/site-packages/pywbem/_cim_http.py", line 824, in wbem_request client.endheaders() File "/usr/python3/lib/python3.7/http/client.py", line 1224, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/usr/python3/lib/python3.7/http/client.py", line 1016, in _send_output self.send(msg) File "/usr/python3/lib/python3.7/site-packages/pywbem/_cim_http.py", line 483, in send self.connect() # pylint: disable=no-member File "/usr/python3/lib/python3.7/site-packages/pywbem/_cim_http.py", line 661, in connect conn_id=conn_id) pywbem._exceptions.ConnectionError: SSL error : [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '193.168.11.113'. (_ssl.c:1045); OpenSSL version: OpenSSL 1.1.1c FIPS 28 May 2019 This code works fine with python2.7 version. And I checked the CN and SAN of the certificate, seems no problem here. So could anyone tell me what's the problem here? ---------- assignee: christian.heimes components: SSL files: 19316811113.crt messages: 373286 nosy: Chirs, christian.heimes priority: normal severity: normal status: open title: SSL Certificate verify failed in Python3.6/3.7 type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file49305/19316811113.crt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 05:08:25 2020 From: report at bugs.python.org (Nghia Minh) Date: Wed, 08 Jul 2020 09:08:25 +0000 Subject: [New-bugs-announce] [issue41240] Use the same kind of quotation mark in f-string Message-ID: <1594199305.52.0.531988229747.issue41240@roundup.psfhosted.org> New submission from Nghia Minh : I want to use the same type of quotation mark in f-string, like this: f'Hey, {' this quote is wrong.'}' Currently we have to: f'But, {" this quote is right."}' ---------- components: Library (Lib) messages: 373296 nosy: Nghia Minh priority: normal severity: normal status: open title: Use the same kind of quotation mark in f-string type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 10:25:35 2020 From: report at bugs.python.org (Wansoo Kim) Date: Wed, 08 Jul 2020 14:25:35 +0000 Subject: [New-bugs-announce] [issue41241] Unnecessary Type casting in 'if condition' Message-ID: <1594218335.75.0.478641917124.issue41241@roundup.psfhosted.org> New submission from Wansoo Kim : Hello! When using 'if syntax', casting condition to bool type is unnecessary. Rather, it only occurs overhead. https://github.com/python/cpython/blob/b26a0db8ea2de3a8a8e4b40e69fc8642c7d7cb68/Lib/asyncio/futures.py#L118 If you look at the link above, the `val` has been cast to bool type. This works well without bool casting. This issue is my first issue. So if you have a problem, please tell me! Thanks You! 
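For illustration only (not from the report), a micro-benchmark of the overhead being described; absolute numbers are machine-dependent:

```python
import timeit

# Both branches behave identically for any object; the explicit bool() call
# only adds a global name lookup and a function call on top of the truth test
# that `if` performs anyway.
print(timeit.timeit("1 if x else 0", setup="x = []"))
print(timeit.timeit("1 if bool(x) else 0", setup="x = []"))
```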
---------- components: asyncio messages: 373309 nosy: asvetlov, ys19991, yselivanov priority: normal severity: normal status: open title: Unnecessary Type casting in 'if condition' type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 10:49:19 2020 From: report at bugs.python.org (Wansoo Kim) Date: Wed, 08 Jul 2020 14:49:19 +0000 Subject: [New-bugs-announce] [issue41242] When concating strings, I think it is better to use += than join the list Message-ID: <1594219759.44.0.547271428934.issue41242@roundup.psfhosted.org> New submission from Wansoo Kim : Hello I think it's better to use += than list.join() when concating strings. This is more intuitive than other methods. Also, I personally think it is not good for one variable to change to another type during runtime. https://github.com/python/cpython/blob/b26a0db8ea2de3a8a8e4b40e69fc8642c7d7cb68/Lib/asyncio/base_events.py#L826 If you look at the link above, `msg` was a list type at first, in the end become a str type. ---------- components: asyncio messages: 373310 nosy: asvetlov, ys19991, yselivanov priority: normal severity: normal status: open title: When concating strings, I think it is better to use += than join the list type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 11:45:02 2020 From: report at bugs.python.org (Deon) Date: Wed, 08 Jul 2020 15:45:02 +0000 Subject: [New-bugs-announce] [issue41243] Android Game Message-ID: <1594223102.88.0.458247505326.issue41243@roundup.psfhosted.org> New submission from Deon : Download FIFA 14 apk https://apkgreat.com/fifa-14-apk/ ---------- components: Interpreter Core messages: 373315 nosy: Deon257 priority: normal severity: normal status: open title: Android Game type: security versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 12:56:39 2020 From: report at bugs.python.org (Wansoo Kim) Date: Wed, 08 Jul 2020 16:56:39 +0000 Subject: [New-bugs-announce] [issue41244] Change to use str.join() instead of += when concatenating string Message-ID: <1594227399.77.0.946194436689.issue41244@roundup.psfhosted.org> New submission from Wansoo Kim : https://bugs.python.org/issue41242 According to BPO-41242, it is better to use join than += when concatenating multiple strings. https://github.com/python/cpython/blob/b26a0db8ea2de3a8a8e4b40e69fc8642c7d7cb68/Lib/asyncio/queues.py#L82 However, the link above uses += in the same pattern. I think we'd better change this to `str.join()` ---------- components: asyncio messages: 373317 nosy: asvetlov, ys19991, yselivanov priority: normal severity: normal status: open title: Change to use str.join() instead of += when concatenating string type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 13:19:22 2020 From: report at bugs.python.org (Mark Dickinson) Date: Wed, 08 Jul 2020 17:19:22 +0000 Subject: [New-bugs-announce] [issue41245] cmath module documentation is misleading on branch cuts Message-ID: <1594228762.89.0.0337466076831.issue41245@roundup.psfhosted.org> New submission from Mark Dickinson : The documentation for the cmath module is misleading on the behaviour near branch cuts. 
For example, the documentation for cmath.acos says: Return the arc cosine of x. There are two branch cuts: One extends right from 1 along the real axis to ∞, continuous from below. The other extends left from -1 along the real axis to -∞, continuous from above. That "continuous from below" and "continuous from above" language is misleading; in fact what happens on the vast majority of systems (those for which the floating-point format used is IEEE 754 binary64) is that, if the imaginary part of x is zero, the sign of x is used to determine which side of the branch cut x lies on. ---------- messages: 373323 nosy: mark.dickinson priority: normal severity: normal status: open title: cmath module documentation is misleading on branch cuts _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 13:20:08 2020 From: report at bugs.python.org (Tony) Date: Wed, 08 Jul 2020 17:20:08 +0000 Subject: [New-bugs-announce] [issue41246] IOCP Proactor same socket overlapped callbacks Message-ID: <1594228808.35.0.788748619653.issue41246@roundup.psfhosted.org> New submission from Tony : In IocpProactor I saw that the callbacks to the functions recv, recv_into, recvfrom, sendto, send and sendfile all give the same callback function for when the overlapped operation is done. I just wanted cleaner code so I made a static function inside the class that I give to each of these functions as the overlapped callbacks. ---------- messages: 373324 nosy: tontinton priority: normal severity: normal status: open title: IOCP Proactor same socket overlapped callbacks _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 15:00:14 2020 From: report at bugs.python.org (Tony) Date: Wed, 08 Jul 2020 19:00:14 +0000 Subject: [New-bugs-announce] [issue41247] asyncio module better caching for set and get_running_loop Message-ID: <1594234814.85.0.697243284436.issue41247@roundup.psfhosted.org> New submission from Tony : There is a cache variable for the running loop holder, but once set_running_loop is called the variable is set to NULL, so the next time get_running_loop would have to query a dictionary to receive the running loop holder. I thought why not always cache the latest set_running_loop? The only issue I thought of here is in the details of the implementation: I have too little experience in python to know if there could be a context switch to get_running_loop while set_running_loop is running. If a context switch is possible there then this issue would be way harder to solve, but it is still solvable. ---------- messages: 373333 nosy: tontinton priority: normal severity: normal status: open title: asyncio module better caching for set and get_running_loop _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 15:17:31 2020 From: report at bugs.python.org (Mischiew Rithe) Date: Wed, 08 Jul 2020 19:17:31 +0000 Subject: [New-bugs-announce] [issue41248] Python manual forced in maximized window Message-ID: <1594235851.53.0.79291256825.issue41248@roundup.psfhosted.org> New submission from Mischiew Rithe : In versions 3.8.1 and 3.8.3-amd64 (only versions tested), on Windows, the "Python 3.8 Manuals" opens in a maximized window. This is unexpected and undesired, as the user cannot see the other windows, including the one he's developing from (IDE, editor, shell, ...).
The problem seems to come from the shortcut created in the menu, which forces a maximized window. It has to be set to "normal window" instead. ---------- assignee: docs at python components: Documentation messages: 373335 nosy: Mischiew Rithe, docs at python priority: normal severity: normal status: open title: Python manual forced in maximized window type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 19:47:38 2020 From: report at bugs.python.org (Keith Blaha) Date: Wed, 08 Jul 2020 23:47:38 +0000 Subject: [New-bugs-announce] [issue41249] TypedDict inheritance doesn't work with get_type_hints and postponed evaluation of annotations across modules Message-ID: <1594252058.32.0.479050209218.issue41249@roundup.psfhosted.org> New submission from Keith Blaha : Copied from https://github.com/python/typing/issues/737 I came across this issue while using inheritance to express required keys in a TypedDict, as is recommended by the docs. It's most easily explained by a minimal example I cooked up. Let's say we have a module foo.py: from __future__ import annotations from typing import Optional from typing_extensions import TypedDict class Foo(TypedDict): a: Optional[int] And another module bar.py: from __future__ import annotations from typing import get_type_hints from foo import Foo class Bar(Foo, total=False): b: int print(get_type_hints(Bar)) Note that both foo.py and bar.py have adopted postponed evaluation of annotations (PEP 563) by using the __future__ import. If we execute bar.py, we get the error message NameError: name 'Optional' is not defined. This is due to the combination of: get_type_hints relies on the MRO to resolve types: https://github.com/python/cpython/blob/3.7/Lib/typing.py#L970 TypedDict does not preserve the original bases, so Foo is not in the MRO for Bar: typing/typing_extensions/src_py3/typing_extensions.py Line 1652 in d79edde tp_dict = super(_TypedDictMeta, cls).__new__(cls, name, (dict,), ns) Thus, get_type_hints is unable to resolve the types for annotations that are only imported in foo.py. I ran this example using typing_extensions 3.7.4.2 (released via #709) and Python 3.7.3, but it seems like this would be an issue using the current main branches of both repositories as well. I'm wondering what the right approach is to tackling this issue. It is of course solvable by defining Bar in foo.py instead, but it isn't ideal or intuitive to always need to inherit from a TypedDict in the same module. I was thinking that similarly to __required_keys__ and __optional_keys__, the TypedDict could preserve its original bases in a new dunder attribute, and get_type_hints could work off of that instead of MRO when it is dealing with a TypedDict. I would be willing to contribute the PRs to implement this if the design is acceptable, but am open to other ideas as well. 
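A small sketch of the mechanism described above, plus one possible stopgap; this assumes the foo.py/bar.py layout from the example and is not a proposed fix:

```python
# Run from bar.py in the example above, after Bar is defined.
from typing import get_type_hints
import foo

print(Bar.__mro__)           # Foo is absent: the MRO is just Bar -> dict -> object
print(Bar.__annotations__)   # merged annotations, still strings because of PEP 563

# Stopgap: pass the namespace of the module that defined the inherited
# annotations, so 'Optional' can be resolved when the strings are evaluated.
print(get_type_hints(Bar, globalns=vars(foo)))
```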
---------- components: Library (Lib) messages: 373360 nosy: keithblaha priority: normal severity: normal status: open title: TypedDict inheritance doesn't work with get_type_hints and postponed evaluation of annotations across modules type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 21:30:21 2020 From: report at bugs.python.org (wyz23x2) Date: Thu, 09 Jul 2020 01:30:21 +0000 Subject: [New-bugs-announce] [issue41250] Number separators in different places Message-ID: <1594258221.41.0.149493736122.issue41250@roundup.psfhosted.org> New submission from wyz23x2 : The current syntax is this for thousand separators: f'{var:,}' It will return this when var is 1234567: '1,234,567' But sometimes we need a way to insert them in other places. For example: 123456789 ? '1,2345,6789' (4) 62938757312 ? '6,29387,57312' (5) This could be done like this: Idea 1: Add a new method to string: string.sep(num: int_or_float, interval: int_or_iterable = 3, sepchar: str = ',') >>> import string >>> string.sep(1234567, 3) '1,234,567' >>> string.sep(1234567890, range(1, 4)) '1,23,456,7890' >>> string.sep('Hello') TypeError: Invalid number 'Hello' >>> string.sep(12345678, sepchar=' ') '12 345 678' >>> string.sep(123456789, 4, '|') '1|2345|6789' Idea 2: (Not as powerful as above) (Future) >>> f'{123456789:4,}' '1,2345,6789' >>> f'{62938757312:5,}' '6,29387,57312' >>> f'{1234567:,}' # Equal to f'{1234567:3,}' '1,234,567' (Current) >>> f'{12345678:5,}' # 5 discarded '12,345,678' ---------- components: Interpreter Core, Library (Lib) messages: 373367 nosy: wyz23x2 priority: normal severity: normal status: open title: Number separators in different places versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 21:59:15 2020 From: report at bugs.python.org (wyz23x2) Date: Thu, 09 Jul 2020 01:59:15 +0000 Subject: [New-bugs-announce] [issue41251] __future__.barry_as_FLUFL.getMandatoryRelease() is wrong Message-ID: <1594259955.25.0.153567227259.issue41251@roundup.psfhosted.org> New submission from wyz23x2 : __future__.barry_as_FLUFL turns x!=y into x<>y. But the doc by help() says: Help on _Feature in module __future__ object: class _Feature(builtins.object) --snip-- | getMandatoryRelease(self) | Return release in which this feature will become mandatory. | | This is a 5-tuple, of the same form as sys.version_info, or, if | the feature was dropped, is None. --snip-- Since <> is dropped, __future__.barry_as_FLUFL.getMandatoryRelease() should be None. But it instead returns (4, 0, 0, 'alpha', 0), which means it will become default in Python 4 and drop != (!= is invalid after the __future__ import). That shouldn't be right. 
---------- messages: 373369 nosy: wyz23x2 priority: normal severity: normal status: open title: __future__.barry_as_FLUFL.getMandatoryRelease() is wrong _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 8 23:53:04 2020 From: report at bugs.python.org (Zackery Spytz) Date: Thu, 09 Jul 2020 03:53:04 +0000 Subject: [New-bugs-announce] [issue41252] Incorrect reference counting in _ssl.c's _servername_callback() Message-ID: <1594266784.28.0.798839997492.issue41252@roundup.psfhosted.org> New submission from Zackery Spytz : In _servername_callback(), servername_bytes will be used after being decrefed if PyUnicode_FromEncodedObject() fails. ---------- assignee: christian.heimes components: Extension Modules, SSL messages: 373371 nosy: ZackerySpytz, christian.heimes priority: normal severity: normal status: open title: Incorrect reference counting in _ssl.c's _servername_callback() type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 00:37:30 2020 From: report at bugs.python.org (Faris Chugthai) Date: Thu, 09 Jul 2020 04:37:30 +0000 Subject: [New-bugs-announce] [issue41253] unittest -h shows a flag -s but it doesn't work Message-ID: <1594269450.68.0.624090697882.issue41253@roundup.psfhosted.org> New submission from Faris Chugthai : I'm not 100% sure what's happening here but running: `python -m unittest -h` shows a flag `-s` as does `python -m unittest discover -h`. When run as: `python -m unittest discover -s test` the command runs correctly but when run as `python -m unittest -s test` the command fails. ```sh $ python -m unittest -s test usage: python -m unittest [-h] [-v] [-q] [--locals] [-f] [-c] [-b] [-k TESTNAMEPATTERNS] [tests [tests ...] python -m unittest: error: unrecognized arguments: -s ``` Which I believe to be a bug as the help generated by the discover subcommand indicates that the flag -s should be recognized. ---------- components: Tests messages: 373372 nosy: Faris Chugthai priority: normal severity: normal status: open title: unittest -h shows a flag -s but it doesn't work type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 02:46:58 2020 From: report at bugs.python.org (Miki Tebeka) Date: Thu, 09 Jul 2020 06:46:58 +0000 Subject: [New-bugs-announce] [issue41254] Add to/from string methods to datetime.timedelta Message-ID: <1594277218.28.0.16887324914.issue41254@roundup.psfhosted.org> New submission from Miki Tebeka : I suggest adding datetime.timedelta methods that convert to/from str. The reason is that I have several places where configuration contains various timeouts. I'd like to write '50ms' and not 0.05 which is more human readable. See https://golang.org/pkg/time/#ParseDuration for how Go does it. 
---------- components: Library (Lib) messages: 373380 nosy: tebeka priority: normal severity: normal status: open title: Add to/from string methods to datetime.timedelta type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 04:40:33 2020 From: report at bugs.python.org (Matthew Hughes) Date: Thu, 09 Jul 2020 08:40:33 +0000 Subject: [New-bugs-announce] [issue41255] Argparse.parse_args exits on unrecognized option with exit_on_error=False Message-ID: <1594284033.21.0.100091861021.issue41255@roundup.psfhosted.org> New submission from Matthew Hughes : >>> import argparse >>> parser = argparse.ArgumentParser(exit_on_error=False) >>> parser.parse_args(["--unknown"]) usage: [-h] : error: unrecognized arguments: --unknown The docs https://docs.python.org/3.10/library/argparse.html#exit-on-error say: > Normally, when you pass an invalid argument list to the parse_args() method of an ArgumentParser, it will exit with error info. > If the user would like catch errors manually, the feature can be enable by setting exit_on_error to False: This description _appears_ to be at odds with the observed behavior. ---------- components: Library (Lib) messages: 373382 nosy: mhughes priority: normal severity: normal status: open title: Argparse.parse_args exits on unrecognized option with exit_on_error=False type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 05:57:08 2020 From: report at bugs.python.org (kunaltyagi) Date: Thu, 09 Jul 2020 09:57:08 +0000 Subject: [New-bugs-announce] [issue41256] activate script created by venv is not smart enough Message-ID: <1594288628.21.0.90149093497.issue41256@roundup.psfhosted.org> New submission from kunaltyagi : TLDR: `activate` script should be able to: * inform user if it has been run and not sourced * act as a placeholder to detect the shell being used and source the necessary `activate.{SHELL}` instead of throwing an error --- It's mildly infuriating that `activate` on different setups needs to be called differently. The lack of messages when it's not sourced is also beginner unfriendly. Both the issues are relatively easy to fix. First, making it shell agnostic. We can move the contents of `activate` to `activate.sh` and change `activate` to contain code like: ```sh [ $FISH_VERSION ] && . activate.fish [ $BASH_VERSION ] && . activate.sh ... ``` This of course will fail hard when you try to `. /bin/activate`. Finding the path of the file is not trivial, but doable. If we assume `dirname` is not present on the system, we can use `/activate.`. Making the "sourced or ran" logic shell agnostic is slightly easier to accomplish due to `$_`, `$0`, `$BASH_SOURCE`. It'll possibly take a non-trivial amount of code to accomplish something this trivial, but it'll save people with custom shells 3 keystrokes and make the workflow smoother. 
---------- components: Library (Lib) messages: 373386 nosy: kunaltyagi priority: normal severity: normal status: open title: activate script created by venv is not smart enough versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 09:23:34 2020 From: report at bugs.python.org (Saber Hayati) Date: Thu, 09 Jul 2020 13:23:34 +0000 Subject: [New-bugs-announce] [issue41257] mimetypes.guess_extension('video/x-matroska') return wrong value Message-ID: <1594301014.01.0.923541225883.issue41257@roundup.psfhosted.org> New submission from Saber Hayati : Code: import mimetypes print(mimetypes.guess_extension('video/x-matroska')) # return '.mpv' instead of '.mkv' Python 3.8.3 on Linux ---------- components: 2to3 (2.x to 3.x conversion tool) messages: 373402 nosy: Saber Hayati priority: normal severity: normal status: open title: mimetypes.guess_extension('video/x-matroska') return wrong value type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 10:09:03 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 09 Jul 2020 14:09:03 +0000 Subject: [New-bugs-announce] [issue41258] CVE-2020-14422 Message-ID: <1594303743.77.0.551962800467.issue41258@roundup.psfhosted.org> Change by STINNER Victor : ---------- nosy: vstinner priority: normal severity: normal status: open title: CVE-2020-14422 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 12:42:02 2020 From: report at bugs.python.org (Rim Chatti) Date: Thu, 09 Jul 2020 16:42:02 +0000 Subject: [New-bugs-announce] [issue41259] Find adverbs is not correct Message-ID: <1594312922.82.0.0846663616251.issue41259@roundup.psfhosted.org> New submission from Rim Chatti : re.findall(r"\w+ly", text) does not find all adverbs in a sentence, it finds words that contain ly (not ending with ly) : if text= 'andlying clearly', output: [andly, clearly] which is wrong! the right way to do this is re.findall(r'\b(\w+ly)\b',text) output= clearly! 
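Checking both patterns against the example text:

```python
import re

text = "andlying clearly"
print(re.findall(r"\w+ly", text))      # ['andly', 'clearly'] -- matches inside 'andlying' too
print(re.findall(r"\b\w+ly\b", text))  # ['clearly'] -- only whole words ending in 'ly'
```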
---------- assignee: docs at python components: Documentation messages: 373406 nosy: Rim Chatti, docs at python priority: normal severity: normal status: open title: Find adverbs is not correct versions: Python 3.5 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 13:01:48 2020 From: report at bugs.python.org (Anthony Sottile) Date: Thu, 09 Jul 2020 17:01:48 +0000 Subject: [New-bugs-announce] [issue41260] datetime: strftime method takes different keyword argument: fmt (pure) or format (C) Message-ID: <1594314108.83.0.615817711225.issue41260@roundup.psfhosted.org> New submission from Anthony Sottile : C: https://github.com/python/cpython/blob/8b33961e4bc4020d8b2d5b949ad9d5c669300e89/Modules/_datetimemodule.c#L3183 pure python: https://github.com/python/cpython/blob/8b33961e4bc4020d8b2d5b949ad9d5c669300e89/Lib/datetime.py#L927 this makes it difficult to properly type in mypy: https://github.com/python/typeshed/blob/209b6bb127f61fe173a60776e23883ac450cf1c8/stdlib/2and3/datetime.pyi#L55 and calling with `.strftime(fmt=...)` or `.strftime(format=...)` is inconsistent (that said, it should _probably_ be a positional-only argument) ---------- components: Extension Modules, Library (Lib) messages: 373407 nosy: Anthony Sottile priority: normal severity: normal status: open title: datetime: strftime method takes different keyword argument: fmt (pure) or format (C) type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 14:25:50 2020 From: report at bugs.python.org (Arcadiy Ivanov) Date: Thu, 09 Jul 2020 18:25:50 +0000 Subject: [New-bugs-announce] [issue41261] 3.9-dev SEGV in object_recursive_isinstance in ast.literal_eval Message-ID: <1594319150.59.0.418042347071.issue41261@roundup.psfhosted.org> New submission from Arcadiy Ivanov : "Short" reproducer: repro.py: ``` import sys from os import getcwd, chdir from runpy import run_path def smoke_test(script, *args): old_argv = list(sys.argv) del sys.argv[:] sys.argv.append(script) sys.argv.extend(args) old_modules = dict(sys.modules) old_meta_path = list(sys.meta_path) old_cwd = getcwd() try: return run_path(script, run_name="__main__") except SystemExit as e: if e.code: print("Test did not exit successfully") finally: del sys.argv[:] sys.argv.extend(old_argv) sys.modules.clear() sys.modules.update(old_modules) del sys.meta_path[:] sys.meta_path.extend(old_meta_path) chdir(old_cwd) smoke_test("script.py") smoke_test("script.py") ``` script.py: ``` import sys import subprocess import ast _PYTHON_INFO_SCRIPT = """import platform, sys, os, sysconfig _executable = os.path.normcase(os.path.abspath(getattr(sys, "_base_executable", sys.executable))) _platform = sys.platform if _platform == "linux2": _platform = "linux" print({ "_platform": _platform, "_os_name": os.name, "_executable": (_executable, ), "_exec_dir": os.path.normcase(os.path.abspath(os.path.dirname(_executable))), "_name": platform.python_implementation(), "_type": platform.python_implementation().lower(), "_version": tuple(sys.version_info), "_is_pypy": "__pypy__" in sys.builtin_module_names, "_is_64bit": (getattr(sys, "maxsize", None) or getattr(sys, "maxint")) > 2 ** 32, "_versioned_dir_name": "%s-%s" % (platform.python_implementation().lower(), ".".join(str(f) for f in sys.version_info)), "_environ": dict(os.environ), "_darwin_python_framework": sysconfig.get_config_var("PYTHONFRAMEWORK") }) 
""" result = subprocess.check_output([sys.executable, "-c", _PYTHON_INFO_SCRIPT], universal_newlines=True) python_info = ast.literal_eval(result) print(python_info) ``` ---------- components: Interpreter Core messages: 373418 nosy: arcivanov priority: normal severity: normal status: open title: 3.9-dev SEGV in object_recursive_isinstance in ast.literal_eval type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 15:48:54 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 09 Jul 2020 19:48:54 +0000 Subject: [New-bugs-announce] [issue41262] Convert memoryview to Argument Clinic Message-ID: <1594324134.91.0.173002376427.issue41262@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR converts Objects/memoryobject.c to Argument Clinic. Advantages: * Highly optimized code is used to parse arguments instead of slow PyArg_ParseTupleAndKeywords(). * All future improvements of argument parsing (better performance or errors handling) will be automatically applied to memoryobject. Previously Argument Clinic was used for memoryview.hex(). ---------- components: Extension Modules messages: 373425 nosy: serhiy.storchaka, skrah priority: normal severity: normal status: open title: Convert memoryview to Argument Clinic type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 9 16:21:09 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 09 Jul 2020 20:21:09 +0000 Subject: [New-bugs-announce] [issue41263] Convert code.__new__ to Argument Clinic Message-ID: <1594326069.58.0.0732196430915.issue41263@roundup.psfhosted.org> New submission from Serhiy Storchaka : Argument Clinic is already used for code.replace(). The proposed PR converts code.__new__() to Argument Clinic. ---------- components: Argument Clinic, Installation messages: 373428 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Convert code.__new__ to Argument Clinic versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 02:55:52 2020 From: report at bugs.python.org (Wansoo Kim) Date: Fri, 10 Jul 2020 06:55:52 +0000 Subject: [New-bugs-announce] [issue41264] Do not use the name of the built-in function as a variable. Message-ID: <1594364152.69.0.806564284031.issue41264@roundup.psfhosted.org> New submission from Wansoo Kim : Using the name of the built-in function as a variable can cause unexpected problems. ``` # example type = 'Hello' ... type('Happy') Traceback (most recent call last): File "", line 1, in TypeError: 'str' object is not callable ``` You can go back without any problems right now, but you may have problems later when others make corrections. This code can be returned without any problems right now, but it may cause problems later when others make a change. In the Lib/xml/etree function/_default, assign a value for the type. ``` ... type = self._doctype[1] if type == "PUBLIC" and n == 4: name, type, pubis, system = self._doctype ... ``` ---------- messages: 373442 nosy: ys19991 priority: normal severity: normal status: open title: Do not use the name of the built-in function as a variable. 
type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 04:45:33 2020 From: report at bugs.python.org (Ma Lin) Date: Fri, 10 Jul 2020 08:45:33 +0000 Subject: [New-bugs-announce] [issue41265] lzma/bz2 module: inefficient buffer growth algorithm Message-ID: <1594370733.96.0.701626485898.issue41265@roundup.psfhosted.org> New submission from Ma Lin : lzma/bz2 modules are using the same buffer growth algorithm: [1][2] newsize = size + (size >> 3) + 6; lzma/bz2 modules' default output buffer is 8192 bytes [3][4], so the growth step is below. For many cases, maybe the buffer is resized too many times. Is it possible to design a new growth algorithm that grows faster when the size is not very large. 1: 8,196 bytes 2: 9,226 bytes 3: 10,385 bytes 4: 11,689 bytes 5: 13,156 bytes 6: 14,806 bytes 7: 16,662 bytes 8: 18,750 bytes 9: 21,099 bytes 10: 23,742 bytes 11: 26,715 bytes 12: 30,060 bytes 13: 33,823 bytes 14: 38,056 bytes 15: 42,819 bytes 16: 48,177 bytes 17: 54,205 bytes 18: 60,986 bytes 19: 68,615 bytes 20: 77,197 bytes 21: 86,852 bytes 22: 97,714 bytes 23: 109,934 bytes 24: 123,681 bytes 25: 139,147 bytes 26: 156,546 bytes 27: 176,120 bytes 28: 198,141 bytes 29: 222,914 bytes 30: 250,784 bytes 31: 282,138 bytes 32: 317,411 bytes 33: 357,093 bytes 34: 401,735 bytes 35: 451,957 bytes 36: 508,457 bytes 37: 572,020 bytes 38: 643,528 bytes 39: 723,975 bytes 40: 814,477 bytes 41: 916,292 bytes 42: 1,030,834 bytes 43: 1,159,694 bytes 44: 1,304,661 bytes 45: 1,467,749 bytes 46: 1,651,223 bytes 47: 1,857,631 bytes 48: 2,089,840 bytes 49: 2,351,076 bytes 50: 2,644,966 bytes 51: 2,975,592 bytes 52: 3,347,547 bytes 53: 3,765,996 bytes 54: 4,236,751 bytes 55: 4,766,350 bytes 56: 5,362,149 bytes 57: 6,032,423 bytes 58: 6,786,481 bytes 59: 7,634,797 bytes 60: 8,589,152 bytes 61: 9,662,802 bytes 62: 10,870,658 bytes 63: 12,229,496 bytes 64: 13,758,189 bytes 65: 15,477,968 bytes 66: 17,412,720 bytes 67: 19,589,316 bytes 68: 22,037,986 bytes 69: 24,792,740 bytes 70: 27,891,838 bytes 71: 31,378,323 bytes 72: 35,300,619 bytes 73: 39,713,202 bytes 74: 44,677,358 bytes 75: 50,262,033 bytes 76: 56,544,793 bytes 77: 63,612,898 bytes 78: 71,564,516 bytes 79: 80,510,086 bytes 80: 90,573,852 bytes 81: 101,895,589 bytes 82: 114,632,543 bytes 83: 128,961,616 bytes 84: 145,081,824 bytes 85: 163,217,058 bytes 86: 183,619,196 bytes 87: 206,571,601 bytes 88: 232,393,057 bytes 89: 261,442,195 bytes 90: 294,122,475 bytes 91: 330,887,790 bytes 92: 372,248,769 bytes 93: 418,779,871 bytes 94: 471,127,360 bytes 95: 530,018,286 bytes 96: 596,270,577 bytes 97: 670,804,405 bytes 98: 754,654,961 bytes 99: 848,986,837 bytes 100: 955,110,197 bytes [1] lzma buffer growth algorithm: https://github.com/python/cpython/blob/v3.9.0b4/Modules/_lzmamodule.c#L133 [2] bz2 buffer growth algorithm: https://github.com/python/cpython/blob/v3.9.0b4/Modules/_bz2module.c#L121 [3] lzma default buffer size: https://github.com/python/cpython/blob/v3.9.0b4/Modules/_lzmamodule.c#L124 [4] bz2 default buffer size: https://github.com/python/cpython/blob/v3.9.0b4/Modules/_bz2module.c#L109 ---------- components: Library (Lib) messages: 373454 nosy: malin priority: normal severity: normal status: open title: lzma/bz2 module: inefficient buffer growth algorithm type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 04:49:55 2020 From: report at 
bugs.python.org (wyz23x2) Date: Fri, 10 Jul 2020 08:49:55 +0000 Subject: [New-bugs-announce] [issue41266] Wrong hint when class methods and builtins named same Message-ID: <1594370995.93.0.883531134582.issue41266@roundup.psfhosted.org> New submission from wyz23x2 : There is a function hex(number, /), and float objects have a method hex(). When something like 1.3.hex( is typed, the yellow box's first line contains hex(number, /). But the method is actually hex(), no arguments. It confuses users. And when 1.3.list( is typed, there isn't a list() method in floats, but the hint still pops up and shows the __doc__ for list(iterable=(), /). ---------- assignee: terry.reedy components: IDLE messages: 373455 nosy: terry.reedy, wyz23x2 priority: normal severity: normal status: open title: Wrong hint when class methods and builtins named same versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 06:43:52 2020 From: report at bugs.python.org (owais) Date: Fri, 10 Jul 2020 10:43:52 +0000 Subject: [New-bugs-announce] [issue41267] Attribute error: Pandas module doesn't have 'Plotting' attribute Message-ID: <1594377832.45.0.135300974033.issue41267@roundup.psfhosted.org> New submission from owais : Hello... I am deploying a django application over the AWS Cloud Server EC2 windows instance having AMD64 architecture. I have installed python 3.7.6 over the server and install all modules (numpy pandas matplotlib django etc) using pip. I have set the sqlite server on AWS. I changed the path variables from local dir to server dir and also change the database server name. but When I run the server from cmd by typing "python manage.py runserver" I am facing an issue/error that says pandas module does not have plotting module. Whereas I checked in site-pakages under pandas folder, there is a subfolder named as 'plotting'. I have again installed the newer version of pip by upgrade pip command then install the pandas but no success acheived. Kindly help me out I am attaching 2 pictures, one is the error and other is project directory where all scripts are present. ---------- components: Windows files: Error.zip messages: 373461 nosy: owais.ali, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Attribute error: Pandas module doesn't have 'Plotting' attribute type: performance versions: Python 3.7 Added file: https://bugs.python.org/file49312/Error.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 08:47:44 2020 From: report at bugs.python.org (Hugo van Kemenade) Date: Fri, 10 Jul 2020 12:47:44 +0000 Subject: [New-bugs-announce] [issue41268] 3.9-dev regression? TypeError: exec_module() missing 1 required positional argument: 'module' Message-ID: <1594385264.3.0.0834998947861.issue41268@roundup.psfhosted.org> New submission from Hugo van Kemenade : For the past 3 months we've been testing Pillow on Travis CI using 3.9-dev, which Travis builds nightly from the 3.9 branch. Two days ago the 3.9-dev build passed, but it failed yesterday with the same Pillow commit, and all subsequent builds. 
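A quick diagnostic that may help narrow this down (purely illustrative, not a fix): run it with the same interpreter the Django app uses to confirm which pandas installation is actually imported.

```python
import sys
import pandas

print(sys.executable)                    # interpreter actually running the app
print(pandas.__version__, pandas.__file__)
print(hasattr(pandas, "plotting"))       # expected to be True for a healthy install
```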
* Last pass: https://travis-ci.org/github/python-pillow/Pillow/jobs/706015838 * First fail: https://travis-ci.org/github/hugovk/Pillow/jobs/706476038 Diffing the logs, here's the CPython commits for each: platform linux 3.9.0b4+ (heads/3.9:edeaf61, Jul 7 2020, 06:29:52) platform linux 3.9.0b4+ (heads/3.9:1d1c574, Jul 8 2020, 07:05:21) Diff of those commits: https://github.com/python/cpython/compare/edeaf61b6827ab3a8673aff1fb7717917f08f003..1d1c5743400bdf384ec83eb6ba5b39a355d121e3 It's also failing with the most recent: platform linux 3.9.0b4+ (heads/3.9:e689789, Jul 9 2020, 07:57:24) I didn't see anything obvious to cause it, so thought I'd better report it here to be on the safe side. Our tracking issue: https://github.com/python-pillow/Pillow/issues/4769 Here's the traceback: Finished processing dependencies for Pillow==7.3.0.dev0 python3 selftest.py Traceback (most recent call last): File "/home/travis/build/hugovk/Pillow/selftest.py", line 6, in from PIL import Image, features File "", line 1007, in _find_and_load File "", line 986, in _find_and_load_unlocked File "", line 664, in _load_unlocked File "", line 627, in _load_backward_compatible File "", line 259, in load_module File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/Pillow-7.3.0.dev0-py3.9-linux-x86_64.egg/PIL/Image.py", line 94, in File "", line 1007, in _find_and_load File "", line 986, in _find_and_load_unlocked File "", line 664, in _load_unlocked File "", line 627, in _load_backward_compatible File "", line 259, in load_module File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/Pillow-7.3.0.dev0-py3.9-linux-x86_64.egg/PIL/_imaging.py", line 8, in File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/Pillow-7.3.0.dev0-py3.9-linux-x86_64.egg/PIL/_imaging.py", line 7, in __bootstrap__ TypeError: exec_module() missing 1 required positional argument: 'module' Makefile:59: recipe for target 'install-coverage' failed make: *** [install-coverage] Error 1 Traceback (most recent call last): File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 495, in _importconftest return self._conftestpath2mod[key] KeyError: PosixPath('/home/travis/build/hugovk/Pillow/conftest.py') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/python/3.9-dev/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/python/3.9-dev/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pytest/__main__.py", line 7, in raise SystemExit(pytest.main()) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 105, in main config = _prepareconfig(args, plugins) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 257, in _prepareconfig return pluginmanager.hook.pytest_cmdline_parse( File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__ return self._hookexec(self, self.get_hookimpls(), kwargs) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/manager.py", line 84, in self._inner_hookexec = lambda hook, 
methods, kwargs: hook.multicall( File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/callers.py", line 203, in _multicall gen.send(outcome) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/helpconfig.py", line 90, in pytest_cmdline_parse config = outcome.get_result() File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result raise ex[1].with_traceback(ex[2]) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall res = hook_impl.function(*args) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 836, in pytest_cmdline_parse self.parse(args) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1044, in parse self._preparse(args, addopts=addopts) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1001, in _preparse self.hook.pytest_load_initial_conftests( File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__ return self._hookexec(self, self.get_hookimpls(), kwargs) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/manager.py", line 84, in self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall( File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall return outcome.get_result() File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result raise ex[1].with_traceback(ex[2]) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall res = hook_impl.function(*args) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 899, in pytest_load_initial_conftests self.pluginmanager._set_initial_conftests(early_config.known_args_namespace) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 441, in _set_initial_conftests self._try_load_conftest(anchor) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 447, in _try_load_conftest self._getconftestmodules(anchor) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 473, in _getconftestmodules mod = self._importconftest(conftestpath) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 520, in _importconftest self.consider_conftest(mod) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 575, in consider_conftest self.register(conftestmodule, name=conftestmodule.__file__) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 386, in register self.consider_module(plugin) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 581, in consider_module self._import_plugin_specs(getattr(mod, "pytest_plugins", [])) File 
"/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 586, in _import_plugin_specs self.import_plugin(import_spec) File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/config/__init__.py", line 613, in import_plugin __import__(importspec) File "", line 1007, in _find_and_load File "", line 986, in _find_and_load_unlocked File "", line 680, in _load_unlocked File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/_pytest/assertion/rewrite.py", line 152, in exec_module exec(co, module.__dict__) File "/home/travis/build/hugovk/Pillow/Tests/helper.py", line 14, in from PIL import Image, ImageMath, features File "", line 1007, in _find_and_load File "", line 986, in _find_and_load_unlocked File "", line 664, in _load_unlocked File "", line 627, in _load_backward_compatible File "", line 259, in load_module File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/Pillow-7.3.0.dev0-py3.9-linux-x86_64.egg/PIL/Image.py", line 94, in File "", line 1007, in _find_and_load File "", line 986, in _find_and_load_unlocked File "", line 664, in _load_unlocked File "", line 627, in _load_backward_compatible File "", line 259, in load_module File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/Pillow-7.3.0.dev0-py3.9-linux-x86_64.egg/PIL/_imaging.py", line 8, in File "/home/travis/virtualenv/python3.9-dev/lib/python3.9/site-packages/Pillow-7.3.0.dev0-py3.9-linux-x86_64.egg/PIL/_imaging.py", line 7, in __bootstrap__ TypeError: exec_module() missing 1 required positional argument: 'module' ---------- messages: 373464 nosy: hugovk priority: normal severity: normal status: open title: 3.9-dev regression? TypeError: exec_module() missing 1 required positional argument: 'module' versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 09:43:09 2020 From: report at bugs.python.org (Ivan) Date: Fri, 10 Jul 2020 13:43:09 +0000 Subject: [New-bugs-announce] [issue41269] Wrong subtraction calculations Message-ID: <1594388589.34.0.510717006355.issue41269@roundup.psfhosted.org> New submission from Ivan : I've started to learn python and tried command: print(-2.989 + 2) it gives me result of -0.9889999999999999 same error can be observed with numbers from 4 and below like: print(-2.989 + 4) 1.0110000000000001 print(-2.989 + 3) 0.01100000000000012 print(-2.989 + 1) -1.9889999999999999 Numbers above 4 seam to work fine ---------- components: Windows files: python error.jpg messages: 373465 nosy: Svabo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Wrong subtraction calculations type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49313/python error.jpg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 09:58:42 2020 From: report at bugs.python.org (Seth Sims) Date: Fri, 10 Jul 2020 13:58:42 +0000 Subject: [New-bugs-announce] [issue41270] NamedTemporaryFile is not its own iterator. Message-ID: <1594389522.96.0.51886326593.issue41270@roundup.psfhosted.org> New submission from Seth Sims : _TemporaryFileWrapper does not proxy __next__ to the underlying file object. There was a discussion on the mailing list in 2016 mentioning this, however it seems it was dropped without a consensus. 
Biopython encountered this issue (referenced below) and we agree it violates our assumptions about how the NamedTemporaryFile should work. I think it would be fairly trivial to fix by just returning `self.file.readline()`. Mailing list thread: https://mail.python.org/pipermail/python-list/2016-July/862590.html Biopython discussion: https://github.com/biopython/biopython/pull/3031 ---------- components: Library (Lib) messages: 373467 nosy: Seth Sims priority: normal severity: normal status: open title: NamedTemporaryFile is not its own iterator. type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 15:54:06 2020 From: report at bugs.python.org (Cooper Lees) Date: Fri, 10 Jul 2020 19:54:06 +0000 Subject: [New-bugs-announce] [issue41271] Add support for io_uring to cpython Message-ID: <1594410846.99.0.0025674439126.issue41271@roundup.psfhosted.org> New submission from Cooper Lees : Would adding support for io_uring in Linux to stdlib IO and/or asyncio make sense? More info on io_uring: - https://kernel.dk/io_uring.pdf - https://lwn.net/Articles/810414/ ---------- components: IO messages: 373477 nosy: cooperlees priority: normal severity: normal status: open title: Add support for io_uring to cpython type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 16:44:00 2020 From: report at bugs.python.org (catrudis) Date: Fri, 10 Jul 2020 20:44:00 +0000 Subject: [New-bugs-announce] [issue41272] New clause in FOR and WHILE instead of ELSE Message-ID: <1594413840.56.0.248219439277.issue41272@roundup.psfhosted.org> New submission from catrudis : The ELSE-clause on FOR and WHILE has unclear syntax. I suggest a new clause instead: if COND: ... [elif COND: ...] [else: ...] This IF-like clause must come immediately after the FOR or WHILE loop (only a comment is allowed between them). It looks like a regular IF, but COND is special: COND may be "break", "pass" or "finally". "if break:" - the break operator was used to exit the loop. "if pass:" - the loop body executed 0 times. "if finally:" - the loop body executed 0 or more times (the "pass" case is included in the "finally" case). For compatibility, a bare "else:" means "if finally:". It's a backward-compatible enhancement with no new keyword: the combinations "break", "pass" or "finally" after "if"/"elif" cannot occur in the current syntax. ---------- components: Interpreter Core messages: 373479 nosy: catrudis priority: normal severity: normal status: open title: New clause in FOR and WHILE instead of ELSE type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 17:08:56 2020 From: report at bugs.python.org (Tony) Date: Fri, 10 Jul 2020 21:08:56 +0000 Subject: [New-bugs-announce] [issue41273] asyncio: proactor read transport: use recv_into instead of recv Message-ID: <1594415336.97.0.770801888844.issue41273@roundup.psfhosted.org> New submission from Tony : Using recv_into instead of recv in the transport's _loop_reading will speed up the process. From what I checked it's about a 120% performance increase. This is simply because a new buffer should not be allocated each time we call recv; doing so is really wasteful.
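(Illustration only, not the actual proactor transport code: a minimal sketch of the buffer-reuse idea behind this request. recv() allocates a fresh bytes object on every call, while recv_into() keeps refilling one buffer allocated up front; the helper names below are made up.)
```
import socket

CHUNK = 256 * 1024

def read_with_recv(sock: socket.socket) -> bytes:
    """Each iteration allocates a brand-new bytes object."""
    data = bytearray()
    while True:
        piece = sock.recv(CHUNK)      # new allocation on every call
        if not piece:
            return bytes(data)
        data += piece

def read_with_recv_into(sock: socket.socket) -> bytes:
    """One buffer is allocated once and refilled in place."""
    buf = bytearray(CHUNK)
    view = memoryview(buf)
    data = bytearray()
    while True:
        n = sock.recv_into(view)      # fills the existing buffer, no new allocation
        if n == 0:
            return bytes(data)
        data += view[:n]
```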
---------- messages: 373483 nosy: tontinton priority: normal severity: normal status: open title: asyncio: proactor read transport: use recv_into instead of recv _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 17:44:43 2020 From: report at bugs.python.org (Juan Jimenez) Date: Fri, 10 Jul 2020 21:44:43 +0000 Subject: [New-bugs-announce] [issue41274] Better way to random.seed()? Message-ID: <1594417483.64.0.247162665029.issue41274@roundup.psfhosted.org> New submission from Juan Jimenez : I have invented a new way to seed the random number generator with about as random a source of seeds as can be found: hashes generated from high cadence, high resolution images of the surface of the Sun. These are captured by the Solar Dynamics Observatory's (SDO) Atmospheric Imaging Assembly's (AIA) cameras at various frequencies. I wrote the POC code in Python and can be seen at https://github.com/flybd5/heliorandom. The HelioViewer project liked the idea and modified their API to do essentially what my POC code does at https://api.helioviewer.org/?action=getRandomSeed. Perhaps a solarseed() call could be created for the library to get seeds that way rather than from the system clock? ---------- components: Library (Lib) messages: 373487 nosy: flybd5 priority: normal severity: normal status: open title: Better way to random.seed()? type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 10 22:41:58 2020 From: report at bugs.python.org (JustAnotherArchivist) Date: Sat, 11 Jul 2020 02:41:58 +0000 Subject: [New-bugs-announce] [issue41275] Clarify whether Futures can be awaited multiple times Message-ID: <1594435318.81.0.780838422603.issue41275@roundup.psfhosted.org> New submission from JustAnotherArchivist : While the situation is clear regarding coroutine objects (#25887), as far as I can see, the documentation doesn't specify whether asyncio.Futures can be awaited multiple times. The code has always (at least since the integration into CPython) allowed for it since Future.__await__ simply returns Future.result() if it is already done. Is this guaranteed/intended behaviour, as also implied by some of the comments on #25887, or is it considered an implementation detail? Here are the only two things I found in the documentation regarding this: > library/asyncio-task: When a Future object is awaited it means that the coroutine will wait until the Future is resolved in some other place. > library/asyncio-future: Future is an awaitable object. Coroutines can await on Future objects until they either have a result or an exception set, or until they are cancelled. Neither of these say anything about awaiting a Future that is already resolved, i.e. has a result, has an exception, or was cancelled. If this is intended to be guaranteed, it should be mentioned in the Future documentation. If it is considered an implementation detail, it's probably not necessary to explicitly mention this anywhere, but it might be a good idea to add another line to e.g. the asyncio.wait example on how to correctly retrieve the result of an already-awaited Future/Task. 
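(For reference, a small self-contained example of the behaviour in question: with the current implementation an already-resolved Future can be awaited any number of times and keeps returning its result; whether that is guaranteed is exactly what this issue asks to have documented.)
```
import asyncio

async def main():
    fut = asyncio.get_running_loop().create_future()
    fut.set_result(42)
    print(await fut)   # 42
    print(await fut)   # 42 again: __await__ simply returns result() once the future is done

asyncio.run(main())
```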
---------- components: asyncio messages: 373504 nosy: JustAnotherArchivist, asvetlov, yselivanov priority: normal severity: normal status: open title: Clarify whether Futures can be awaited multiple times type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 11 00:02:02 2020 From: report at bugs.python.org (Calvin Davis) Date: Sat, 11 Jul 2020 04:02:02 +0000 Subject: [New-bugs-announce] [issue41276] Min / Max returns different values depending on parameter order Message-ID: <1594440122.62.0.379783406728.issue41276@roundup.psfhosted.org> New submission from Calvin Davis : See attached image The behavior of min() (and probably max and other related functions) changes depending on the order of the parameters it sorts. In the image, I sorted two tuples, coordinate points, with the same Y value and different X values. When the X values were in increasing order, finding the minimum x value and minimum y value were the same. However if the list was reversed, finding the minimum x and y values in the list provided different results. ---------- assignee: terry.reedy components: IDLE files: yqzRk0Y.png messages: 373512 nosy: Calvin Davis, terry.reedy priority: normal severity: normal status: open title: Min / Max returns different values depending on parameter order type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file49314/yqzRk0Y.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 11 01:45:59 2020 From: report at bugs.python.org (Pablo Dumas) Date: Sat, 11 Jul 2020 05:45:59 +0000 Subject: [New-bugs-announce] [issue41277] documentation: os.setxattr() errno EEXIST and ENODATA Message-ID: <1594446359.56.0.57719330174.issue41277@roundup.psfhosted.org> New submission from Pablo Dumas : Shouldn't os.setxattr() errno EEXIST be when "XATTR_CREATE was specified, and the attribute exists already" and errno ENODATA be when "XATTR_REPLACE was specified, and the attribute does not exist."? In the current os module documentation, it's the other way around: "If XATTR_REPLACE is given and the attribute does not exist, EEXISTS will be raised. If XATTR_CREATE is given and the attribute already exists, the attribute will not be created and ENODATA will be raised."... Thanks! ---------- messages: 373516 nosy: w0rthle$$ priority: normal severity: normal status: open title: documentation: os.setxattr() errno EEXIST and ENODATA type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 11 03:12:44 2020 From: report at bugs.python.org (Alex) Date: Sat, 11 Jul 2020 07:12:44 +0000 Subject: [New-bugs-announce] [issue41278] Wrong Completion on Editing Mode of IDLE Message-ID: <1594451564.07.0.601184325808.issue41278@roundup.psfhosted.org> New submission from Alex <2423067593 at qq.com>: When I type (on editing mode, not interacting mode) __main__. + Ctrl+Space, the completion window shows 'idlelib'. 
---------- assignee: terry.reedy components: IDLE messages: 373518 nosy: Alex-Python-Programmer, terry.reedy priority: normal severity: normal status: open title: Wrong Completion on Editing Mode of IDLE type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 11 12:23:40 2020 From: report at bugs.python.org (Tony) Date: Sat, 11 Jul 2020 16:23:40 +0000 Subject: [New-bugs-announce] [issue41279] Convert StreamReaderProtocol to a BufferedProtocol Message-ID: <1594484620.37.0.437755734421.issue41279@roundup.psfhosted.org> New submission from Tony : This will greatly increase performance, from my internal tests it was about 150% on linux. Using read_into instead of read will make it so we do not allocate a new buffer each time data is received. ---------- messages: 373526 nosy: tontinton priority: normal severity: normal status: open title: Convert StreamReaderProtocol to a BufferedProtocol _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 11 18:46:12 2020 From: report at bugs.python.org (Tom Forbes) Date: Sat, 11 Jul 2020 22:46:12 +0000 Subject: [New-bugs-announce] [issue41280] lru_cache on 0-arity functions should default to maxsize=None Message-ID: <1594507572.96.0.37330867627.issue41280@roundup.psfhosted.org> New submission from Tom Forbes : `functools.lru_cache` has a maxsize=128 default for all functions. If a function has no arguments then this maxsize default is redundant and should be set to `maxsize=None`: ``` @functools.lru_cache() def function_with_no_args(): pass ``` Currently you need to add `maxsize=None` manually, and ensure that it is also updated if you alter the function to add arguments. ---------- components: Library (Lib) messages: 373542 nosy: Tom Forbes priority: normal severity: normal status: open title: lru_cache on 0-arity functions should default to maxsize=None type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 01:26:29 2020 From: report at bugs.python.org (yyyyyyyan) Date: Sun, 12 Jul 2020 05:26:29 +0000 Subject: [New-bugs-announce] [issue41281] Wrong/missing code formats in datetime documentation Message-ID: <1594531589.9.0.00878151530973.issue41281@roundup.psfhosted.org> New submission from yyyyyyyan : The datetime page in the docs is missing a lot of needed backquotes syntax for inline code samples. There are some wrong role links too, due to ambiguity in the text roles. ---------- assignee: docs at python components: Documentation messages: 373547 nosy: docs at python, eric.araujo, ezio.melotti, mdk, willingc, yyyyyyyan priority: normal severity: normal status: open title: Wrong/missing code formats in datetime documentation type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 04:30:25 2020 From: report at bugs.python.org (Jason R. Coombs) Date: Sun, 12 Jul 2020 08:30:25 +0000 Subject: [New-bugs-announce] [issue41282] Deprecate and remove distutils Message-ID: <1594542625.3.0.789939298104.issue41282@roundup.psfhosted.org> New submission from Jason R. Coombs : Setuptools has adopted distutils as outlined in [pypa/packaging-problems#127](https://github.com/pypa/packaging-problems/issues/127). 
Although there are some straggling issues, the current release of Setuptools fully obviates distutils if a certain environment variable is set. Soon, that behavior will be the default. Additionally, the distutils codebase remains maintained at [pypa/distutils](https://github.com/pypa/distutils) in a form suitable for releasing as a third-party package, should the need arise (i.e. pip install distutils). The plan now is to freeze, deprecate, and in Python N + 0.1, remove distutils. Already, Setuptools is identifying emergent bugs and other defects in distutils and providing fixes for them (issue41207, [pypa/setuptools#2212](https://github.com/pypa/setuptools/issues/2212)). Keeping these changes in sync across three repos and different supported versions is tedious, so I'd like to move forward with the deprecation process as soon as possible. ---------- components: Distutils messages: 373548 nosy: dstufft, eric.araujo, jaraco priority: normal severity: normal status: open title: Deprecate and remove distutils versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 05:20:31 2020 From: report at bugs.python.org (Adam Eltawla) Date: Sun, 12 Jul 2020 09:20:31 +0000 Subject: [New-bugs-announce] [issue41283] The parameter name for imghdr.what in the documentation is wrong Message-ID: <1594545631.96.0.481890798822.issue41283@roundup.psfhosted.org> New submission from Adam Eltawla : I noticed that the parameter name for imghdr.what in the documentation is wrong. Link: https://docs.python.org/3.8/library/imghdr.html?highlight=imghdr The documentation has: imghdr.what(filename, h=None) In reality: def what(file, h=None). It is 'file', not 'filename'. ---------- assignee: docs at python components: Documentation messages: 373551 nosy: aeltawela, docs at python priority: normal severity: normal status: open title: The parameter name for imghdr.what in the documentation is wrong type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 09:20:43 2020 From: report at bugs.python.org (Wansoo Kim) Date: Sun, 12 Jul 2020 13:20:43 +0000 Subject: [New-bugs-announce] [issue41284] High Level API for json file parsing Message-ID: <1594560043.3.0.756696786523.issue41284@roundup.psfhosted.org> New submission from Wansoo Kim : Many Python users use the following snippet to read a JSON file.
```
import json

with open(filepath, 'r') as f:
    data = json.load(f)
```
I suggest providing this snippet as a function.
```
data = json.read(filepath)
```
Reading JSON is a very frequent task for Python users. I think it is worth providing this as a high-level API. ---------- components: Library (Lib) messages: 373552 nosy: ys19991 priority: normal severity: normal status: open title: High Level API for json file parsing versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 10:23:24 2020 From: report at bugs.python.org (Michiel de Hoon) Date: Sun, 12 Jul 2020 14:23:24 +0000 Subject: [New-bugs-announce] [issue41285] memoryview does not support subclassing Message-ID: <1594563804.44.0.878376005542.issue41285@roundup.psfhosted.org> New submission from Michiel de Hoon : Currently memoryview does not support subclassing: >>> class B(memoryview): pass ...
Traceback (most recent call last): File "", line 1, in TypeError: type 'memoryview' is not an acceptable base type Subclassing memoryview can be useful when - class A supports the buffer protocol; - class B wraps class A and should support the buffer protocol provided by class A; - class A does not support subclassing. In this situation, class B(memoryview): def __new__(cls, a): return super(B, cls).__new__(cls, a) where a is an instance of class A, would let instances of B support the buffer protocol provided by a. Is there any particular reason why memoryview does not support subclassing? ---------- components: C API messages: 373554 nosy: mdehoon priority: normal severity: normal status: open title: memoryview does not support subclassing type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 12:15:24 2020 From: report at bugs.python.org (=?utf-8?q?Bo=C5=A1tjan_Mejak?=) Date: Sun, 12 Jul 2020 16:15:24 +0000 Subject: [New-bugs-announce] [issue41286] Built-in platform module does not offer to check for processor instructions Message-ID: <1594570524.81.0.0843136035191.issue41286@roundup.psfhosted.org> New submission from Bo?tjan Mejak : The platform module does not offer to check whether a processor supports the POPCNT or BMI/BMI2 processor instructions. Am I missing something or is it actually missing this feature? ---------- components: Library (Lib) messages: 373563 nosy: PedanticHacker priority: normal severity: normal status: open title: Built-in platform module does not offer to check for processor instructions type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 16:34:43 2020 From: report at bugs.python.org (Sergei Izmailov) Date: Sun, 12 Jul 2020 20:34:43 +0000 Subject: [New-bugs-announce] [issue41287] __doc__ attribute is not set in property-derived classes Message-ID: <1594586083.6.0.902653661287.issue41287@roundup.psfhosted.org> New submission from Sergei Izmailov : MRE: class Property(property): pass print(Property(None, None, None, "hello").__doc__) Expected: hello Actual: None ---------- messages: 373571 nosy: Sergei Izmailov priority: normal severity: normal status: open title: __doc__ attribute is not set in property-derived classes _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 17:36:09 2020 From: report at bugs.python.org (Iman Sharafodin) Date: Sun, 12 Jul 2020 21:36:09 +0000 Subject: [New-bugs-announce] [issue41288] Pickle crashes using a crafted datetime object Message-ID: <1594589769.15.0.967548182299.issue41288@roundup.psfhosted.org> New submission from Iman Sharafodin : The following code generates a segfault on the Pickle module [it's a crafted datetime object] (Python 3.10.0a0 (heads/master:b40e434, Jul 4 2020), Python 3.6.11 and Python 3.7.2): import io import pickle hex_string = "8004952A000000000000008C086461746574696D65948C086461746574696D65949388430A07B2010100000000000092059452942E" myb = bytes.fromhex(hex_string) f = io.BytesIO(myb) print(f) data = pickle.load(f) print(data) print('We have segfault but we cannot see!') ---------- components: Interpreter Core messages: 373573 nosy: Iman Sharafodin priority: 
normal severity: normal status: open title: Pickle crashes using a crafted datetime object type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 12 19:56:17 2020 From: report at bugs.python.org (Neil Godber) Date: Sun, 12 Jul 2020 23:56:17 +0000 Subject: [New-bugs-announce] [issue41289] '%' character in help= for argparse causes ValueError: incomplete format Message-ID: <1594598177.06.0.634893124159.issue41289@roundup.psfhosted.org> New submission from Neil Godber : A '%' character in help= for argparse causes "ValueError: incomplete format". I am attempting to use the percentage character in my help string but get the above error. Presumably argparse assumes % denotes formatting when this is not the case. I have tried f-strings, escaped and raw strings, none of which rectify the issue. The only solution is that developers cannot use the % character in argparse help strings, which is something of an oddity. ---------- components: Interpreter Core messages: 373578 nosy: Neil Godber priority: normal severity: normal status: open title: '%' character in help= for argparse causes ValueError: incomplete format type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 13 07:22:25 2020 From: report at bugs.python.org (Allayna Wilson) Date: Mon, 13 Jul 2020 11:22:25 +0000 Subject: [New-bugs-announce] [issue41290] ipaddress module doesn't recognize 100.64.0.0/10 as a private network Message-ID: <1594639345.1.0.636804345302.issue41290@roundup.psfhosted.org> New submission from Allayna Wilson : import IPv4Address as n4
In [10]: n4('100.64.0.0/24').is_private
Out[10]: False

In [11]: n4('100.64.0.0/10').is_private
Out[11]: False
https://en.wikipedia.org/wiki/Reserved_IP_addresses#IPv4 keep this on the dl though I don't want anybody else using this /10. I'm tired of people's crap always overlapping with my private networks. ---------- components: Extension Modules, Library (Lib) messages: 373592 nosy: Allayna Wilson priority: normal severity: normal status: open title: ipaddress module doesn't recognize 100.64.0.0/10 as a private network type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 13 10:04:01 2020 From: report at bugs.python.org (Giovanni Pizzi) Date: Mon, 13 Jul 2020 14:04:01 +0000 Subject: [New-bugs-announce] [issue41291] Race conditions when opening and deleting a file on Mac OS X Message-ID: <1594649041.52.0.898762861377.issue41291@roundup.psfhosted.org> New submission from Giovanni Pizzi : Hello, when deleting (with `os.remove`/`os.unlink`) a file and opening it in a different process at (almost) the same time, on Mac OS X I relatively often get a file that is empty instead of either a `FileNotFoundError` exception or an open handle with the original content, i.e. at least one of the two operations (unlinking or opening) seems to be non-atomic. (With "empty" I mean that if I open the file in mode 'rb', `fhandle.read()` returns b''.) More specifically, after quite some debugging I noticed that what happens is that if I stat the file descriptor, `st_ino` is zero. This can be reproduced very easily.
I set up a GitHub repository here: https://github.com/giovannipizzi/concurrent-delete-race-problem with simple examples and tests (and both a Python implementation and a C implementation). There are also GitHub Actions that run on Mac, Ubuntu and Windows, and apparently the problem exists only on Mac (and happens on all versions). For completeness I attach also here a tar.gz with the two very short python files that just need be run in parallel - in my Mac OS X (macOS 10.14.6, 15" 2016 MacBook Pro, python 3.6) I get the error essentially at every run. Important: while much harder to reproduce, I can get the same error and behaviour also with the (I think equivalent) C code. Therefore, this seems to be more a problem with the Mac OS standard libraries? My first question is: has anybody ever seen this problem? It seems to me quite easy to reproduce, but I'm surprised that I couldn't find any reference in the internet even after searching for various hours (maybe I used the wrong keywords?) Second question: should this be reported directly to Apple? Third question: Even if this is a bug, would it make sense to implement a patch in python that raises some exception if st_ino is zero when opening a file? Or am I simplifying too much and in some conditions st_ino=0 is valid on some types of mounted filesystems? Is there some other way to have a more atomic behaviour for this race condition in python? Thanks a lot! ---------- components: IO, macOS files: concurrency-tests.tar.gz messages: 373606 nosy: Giovanni Pizzi, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Race conditions when opening and deleting a file on Mac OS X versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49315/concurrency-tests.tar.gz _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 13 10:49:36 2020 From: report at bugs.python.org (Michel Samia) Date: Mon, 13 Jul 2020 14:49:36 +0000 Subject: [New-bugs-announce] [issue41292] Dead link in Windows FAQ Message-ID: <1594651776.92.0.628399384515.issue41292@roundup.psfhosted.org> New submission from Michel Samia : In https://github.com/python/cpython/blob/master/Doc/faq/windows.rst, the link "https://anthony-tuininga.github.io/cx_Freeze/" is dead. 
The new valid URL is probably https://cx-freeze.readthedocs.io/en/latest/ ---------- assignee: docs at python components: Documentation messages: 373610 nosy: Michel Samia, docs at python priority: normal severity: normal status: open title: Dead link in Windows FAQ versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 13 16:26:50 2020 From: report at bugs.python.org (Pavel Trukhanov) Date: Mon, 13 Jul 2020 20:26:50 +0000 Subject: [New-bugs-announce] [issue41293] fix confusing example in hashlib docs Message-ID: <1594672010.58.0.839432407468.issue41293@roundup.psfhosted.org> New submission from Pavel Trukhanov : The documentation found in https://docs.python.org/3/library/hashlib.html#hash-algorithms gives us the following two examples:
```
For example, to obtain the digest of the byte string b'Nobody inspects the spammish repetition':

>>> import hashlib
>>> m = hashlib.sha256()
>>> m.update(b"Nobody inspects")
>>> m.update(b" the spammish repetition")
>>> m.digest()
b'\x03\x1e\xdd}Ae\x15\x93\xc5\xfe\\\x00o\xa5u+7\xfd\xdf\xf7\xbcN\x84:\xa6\xaf\x0c\x95\x0fK\x94\x06'
>>> m.digest_size
32
>>> m.block_size
64

More condensed:

>>> hashlib.sha224(b"Nobody inspects the spammish repetition").hexdigest()
'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2'
```
It's confusing because the two examples use different algorithms - sha256 and sha224, respectively. Also, the first one calls `.digest()` while the other calls `.hexdigest()`, so the two outputs are not comparable. ---------- assignee: docs at python components: Documentation messages: 373619 nosy: Pavel Trukhanov, docs at python priority: normal severity: normal status: open title: fix confusing example in hashlib docs _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 13 19:30:12 2020 From: report at bugs.python.org (William Pickard) Date: Mon, 13 Jul 2020 23:30:12 +0000 Subject: [New-bugs-announce] [issue41294] Allow '__qualname__' to be an instance of 'DynamicClassAttribute' Message-ID: <1594683012.02.0.626450993111.issue41294@roundup.psfhosted.org> New submission from William Pickard : Currently within Python, the attribute '__qualname__' is restricted to being a string-valued attribute. This makes it rather cumbersome for anyone who wants to implement '__qualname__' as a property instead of a plain attribute (especially if '__slots__' are used). Python also has the type 'DynamicClassAttribute', whose only first-party user is the 'enum' module, BUT it only supports shadow get requests. Therefore, I'm requesting both changing DynamicClassAttribute to support __set__ and __del__ to function like __get__ AND allowing __qualname__ to be an instance of DynamicClassAttribute.
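(A quick self-contained illustration of the restriction being described; the class name is made up, and as far as I can tell type creation currently rejects a non-string '__qualname__' in the class namespace:)
```
try:
    class Demo:
        @property
        def __qualname__(self):   # a property instead of a plain string
            return "Demo"
except TypeError as exc:
    print(exc)   # e.g. "type __qualname__ must be a str, not property"
```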
---------- messages: 373621 nosy: WildCard65 priority: normal severity: normal status: open title: Allow '__qualname__' to be an instance of 'DynamicClassAttribute' type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 14 10:57:16 2020 From: report at bugs.python.org (kam193) Date: Tue, 14 Jul 2020 14:57:16 +0000 Subject: [New-bugs-announce] [issue41295] CPython 3.8.4 regression on __setattr__ in multiinheritance with metaclasses Message-ID: <1594738636.27.0.745451783859.issue41295@roundup.psfhosted.org> New submission from kam193 : CPython 3.8.4 broke a previously correct custom __setattr__ implementation when a metaclass inherits from multiple classes, including a non-metaclass. The following code:
```
class Meta(type):
    def __setattr__(cls, key, value):
        type.__setattr__(cls, key, value)

class OtherClass:
    pass

class DefaultMeta(OtherClass, Meta):
    pass

obj = DefaultMeta('A', (object,), {})
obj.test = True
print(obj.test)
```
works in Python up to 3.8.3, but in 3.8.4 it raises:
Traceback (most recent call last):
  File "repr.py", line 13, in <module>
    obj.test = True
  File "repr.py", line 3, in __setattr__
    type.__setattr__(cls, key, value)
TypeError: can't apply this __setattr__ to DefaultMeta object
This change affects e.g. https://github.com/pallets/flask-sqlalchemy/issues/852 ---------- messages: 373637 nosy: kam193 priority: normal severity: normal status: open title: CPython 3.8.4 regression on __setattr__ in multiinheritance with metaclasses type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 14 15:03:59 2020 From: report at bugs.python.org (john passaro) Date: Tue, 14 Jul 2020 19:03:59 +0000 Subject: [New-bugs-announce] [issue41296] unittest.mock: patched mocks do not propagate calls to parent when set via setattr Message-ID: <1594753439.43.0.455060750875.issue41296@roundup.psfhosted.org> New submission from john passaro : I expected the following code to print True:
import os
from unittest.mock import call, Mock, patch

parent = Mock()
parent.getenv = patch('os.getenv')
with parent.getenv:
    os.getenv('FOO', 'bar')
expected = [call.getenv('FOO', 'bar')]
print(expected, '==', parent.mock_calls, ':', expected == parent.mock_calls)
It works fine if you replace the statement `parent.getenv = patch('os.getenv')` with:
parent.getenv = patch('os.getenv', _mock_parent=parent,
                      _mock_new_parent=parent, _mock_new_name='getenv')
Background: I was trying to make assertions about a mocked call within a mocked context generator.
# c.py
from . import a
from . import b
def func():
    with a.context('c'):
        b.operation()
# test_c.py
from package.c import func
from unittest.mock import Mock, call, patch
parent = Mock()
parent.context = patch('package.a.context')
parent.operation = patch('package.b.operation')
with parent.context, parent.operation:
    func()
assert parent.mock_calls == [
   call.context('c'),
   call.context().__enter__(),
   call.operation(),
   call.context().__exit__(None, None, None),
]
in other words, to ensure the correct order of the __enter__/__exit__ calls relative to the actual operation. I have my workaround but it's very awkward looking. ---------- components: Library (Lib) messages: 373654 nosy: thinkingmachin6 priority: normal severity: normal status: open title: unittest.mock: patched mocks do not propagate calls to parent when set via setattr type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 14 17:49:24 2020 From: report at bugs.python.org (alex c) Date: Tue, 14 Jul 2020 21:49:24 +0000 Subject: [New-bugs-announce] [issue41297] Remove doctest import from heapq Message-ID: <1594763364.19.0.785950162701.issue41297@roundup.psfhosted.org> New submission from alex c : heapq.py imports doctest in the last 4 lines to perform unit tests: if __name__ == "__main__": import doctest # pragma: no cover print(doctest.testmod()) # pragma: no cover This disrupts dependency tracking modules and software, like modulegraph and pyinstaller, as doctest brings in many dependencies (including ctypes as of 3.8). This functionality could be factored out into a separate unit-testing file. ---------- components: Tests messages: 373658 nosy: alexchandel priority: normal severity: normal status: open title: Remove doctest import from heapq type: resource usage versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 14 18:02:26 2020 From: report at bugs.python.org (Eryk Sun) Date: Tue, 14 Jul 2020 22:02:26 +0000 Subject: [New-bugs-announce] [issue41298] Enable handling logoff and shutdown Windows console events Message-ID: <1594764146.97.0.760016525687.issue41298@roundup.psfhosted.org> New submission from Eryk Sun : A console script should be able to handle Windows console logoff and shutdown events with relatively simple ctypes code, such as the following: import ctypes kernel32 = ctypes.WinDLL('kernel32', use_last_error=True) CTRL_C_EVENT = 0 CTRL_BREAK_EVENT = 1 CTRL_CLOSE_EVENT = 2 CTRL_LOGOFF_EVENT = 5 CTRL_SHUTDOWN_EVENT = 6 @ctypes.WINFUNCTYPE(ctypes.c_int, ctypes.c_ulong) def console_ctrl_handler(event): if event == CTRL_SHUTDOWN_EVENT: return handle_shutdown() if event == CTRL_LOGOFF_EVENT: return handle_logoff() if event == CTRL_CLOSE_EVENT: return handle_close() if event == CTRL_BREAK_EVENT: return handle_break() if event == CTRL_C_EVENT: return handle_cancel() return False # chain to next handler if not kernel32.SetConsoleCtrlHandler(console_ctrl_handler, True): raise ctypes.WinError(ctypes.get_last_error()) As of 3.9, it's not possible for python.exe to receive the above logoff and shutdown events via ctypes. In these two cases, the console doesn't even get to send a close event, so a console script cannot exit gracefully. The session server (csrss.exe) doesn't send the logoff and shutdown console events to python.exe because it's seen as a GUI process, which is expected to handle WM_QUERYENDSESSION and WM_ENDSESSION instead. That requires creating a hidden window and running a message loop, which is not nearly as simple as a console control handler. The system registers python.exe as a GUI process because user32.dll is loaded, which means the process is ready to interact with the desktop in every way, except for the final step of actually creating UI objects. 
In particular, loading user32.dll causes the system to extend the process and its threads with additional kernel data structures for use by win32k.sys (e.g. a message queue for each thread). It also opens handles for and connects to the session's "WinSta0" interactive window station (a container for an atom table, clipboard, and desktops) and "Default" desktop (a container for UI objects such as windows, menus, and hooks). (The process can connect to a different desktop or window station if set in the lpDesktop field of the process startup info. Also, if the process access token is for a service or batch logon, by default it connects to a non-interactive window station that's named for the logon ID. For example, the SYSTEM logon ID is 0x3e7, so a SYSTEM service or batch process gets connected to "Service-0x0-3e7$".) Prior to 3.9, python3x.dll loads shlwapi.dll (the lightweight shell API) to access the Windows path functions PathCanonicalizeW and PathCombineW. shlwapi.dll in turn loads user32.dll. 3.9+ is one step closer to the non-GUI goal because it no longer depends on shlwapi.dll. Instead it always uses the newer PathCchCanonicalizeEx and PathCchCombineEx functions from api-ms-win-core-path-l1-1-0.dll, which is implemented by the base API (kernelbase.dll) instead of the shell API. The next hurdle is extension modules, especially the _ctypes extension module, since it's needed for the console control handler. _ctypes.pyd loads ole32.dll, which in turn loads user32.dll. This is just to call ProgIDFromCLSID, which is rarely used. I see no reason that ole32.dll can't be delay loaded or just manually link to ProgIDFromCLSID on first use via GetModuleHandleW / LoadLibraryExW and GetProcAddress. I did a quick patch to implement the latter, and, since user32.dll no longer gets loaded, the console control handler is enabled for console logoff and shutdown events. So this is the minimal fix to resolve this issue in 3.9+. --- Additional modules winsound loads user32.dll for MessageBeep. The Beep and PlaySound functions don't require user32.dll, so winsound is still useful if it gets delay loaded. _ssl and _hashlib depend on libcrypto, which loads user32.dll for MessageBoxW, GetProcessWindowStation and GetUserObjectInformationW. The latter two are called in OPENSSL_isservice [1] in order to get the window station name. If StandardError isn't a valid file handle, OPENSSL_isservice determines whether an error should be reported as an event or interactively shown with a message box. user32.dll can be delay loaded for this, which, if I'm reading the source right, will never occur as long as StandardError is a valid file. 
[1]: https://github.com/openssl/openssl/blob/e7fb44e7c3f7a37ff83a6b69ba51a738e549bf5c/crypto/cryptlib.c#L193 ---------- components: Extension Modules, Library (Lib), Windows, ctypes messages: 373659 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Enable handling logoff and shutdown Windows console events type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 14 18:15:57 2020 From: report at bugs.python.org (SD) Date: Tue, 14 Jul 2020 22:15:57 +0000 Subject: [New-bugs-announce] [issue41299] Python3 threading.Event().wait time is twice as large as Python27 Message-ID: <1594764957.86.0.842448865524.issue41299@roundup.psfhosted.org> New submission from SD : The overhead in Python 3 for threading.Event().wait() is much larger than Python 2. I am trying to run a script at 60hz which worked correctly in Python2 but does not in Python 3. Here is a minimal example to demonstrate: #!/usr/bin/env python import threading import time def sample_thread(stop_ev): while not stop_ev.is_set(): t2 = time.time() stop_ev.wait(0.016999959945) print((time.time() - t2)) def main(): stop_ev = threading.Event() sample_t = threading.Thread(target=sample_thread, args=(stop_ev, )) sample_t.start() # Other stuff here, sleep is just dummy time.sleep(14) stop_ev.set() print('End reached.') if __name__ == '__main__': main() Python 2.7.0 consistently prints : 0.0169999599457 0.0169999599457 0.0170001983643 0.0169999599457 0.0169999599457 0.0169999599457 0.0169999599457 0.0169999599457 Python 3.8.2 waits much longer 0.031026363372802734 0.0320279598236084 0.031026363372802734 0.031026840209960938 0.031527042388916016 0.031026601791381836 0.03103041648864746 0.03302431106567383 ---------- messages: 373660 nosy: SD priority: normal severity: normal status: open title: Python3 threading.Event().wait time is twice as large as Python27 type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 14 21:23:31 2020 From: report at bugs.python.org (nirinA raseliarison) Date: Wed, 15 Jul 2020 01:23:31 +0000 Subject: [New-bugs-announce] [issue41300] IDLE: missing import io in iomenu.py Message-ID: <1594776211.42.0.863093629881.issue41300@roundup.psfhosted.org> New submission from nirinA raseliarison : idle cannot save file with non ascii character, leading to: Exception in Tkinter callback Traceback (most recent call last): File "/usr/lib64/python3.8/tkinter/__init__.py", line 1883, in __call__ return self.func(*args) File "/usr/lib64/python3.8/idlelib/multicall.py", line 176, in handler r = l[i](event) File "/usr/lib64/python3.8/idlelib/iomenu.py", line 199, in save else: File "/usr/lib64/python3.8/idlelib/iomenu.py", line 232, in writefile text = self.fixnewlines() File "/usr/lib64/python3.8/idlelib/iomenu.py", line 271, in encode encoded = chars.encode('ascii', 'replace') NameError: name 'io' is not defined just adding `import io` seems to fix this. 
---------- assignee: terry.reedy components: IDLE messages: 373664 nosy: nirinA raseliarison, terry.reedy priority: normal severity: normal status: open title: IDLE: missing import io in iomenu.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 03:59:49 2020 From: report at bugs.python.org (Yashwanth Barad) Date: Wed, 15 Jul 2020 07:59:49 +0000 Subject: [New-bugs-announce] [issue41301] Assignment operation of list is not working as expected Message-ID: <1594799989.7.0.436076447294.issue41301@roundup.psfhosted.org> Change by Yashwanth Barad : ---------- components: Windows files: List_assignment_in_functions.py nosy: paul.moore, steve.dower, tim.golden, yashwanthbarad at gmail.com, zach.ware priority: normal severity: normal status: open title: Assignment operation of list is not working as expected type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49317/List_assignment_in_functions.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 06:54:56 2020 From: report at bugs.python.org (Felix Yan) Date: Wed, 15 Jul 2020 10:54:56 +0000 Subject: [New-bugs-announce] [issue41302] _decimal failed to build with system libmpdec 2.5 Message-ID: <1594810496.64.0.595713191405.issue41302@roundup.psfhosted.org> New submission from Felix Yan : In bpo-40874, mpdecimal.h in the vendored libmpdec has defines of UNUSED while the standalone released version of mpdecimal 2.5.0 doesn't. This breaks the _decimal module build with a system libmpdec because UNUSED is undefined. Errors are like:
cpython/Modules/_decimal/_decimal.c:277:36: error: expected ‘;’, ‘,’ or ‘)’ before ‘UNUSED’
  277 | dec_traphandler(mpd_context_t *ctx UNUSED) /* GCOV_NOT_REACHED */
      |                                    ^~~~~~
Reproducible in both the 3.8 branch and master (didn't test 3.9, but it should be affected too). ---------- components: Extension Modules messages: 373676 nosy: felixonmars, skrah priority: normal severity: normal status: open title: _decimal failed to build with system libmpdec 2.5 type: compile error versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 08:43:19 2020 From: report at bugs.python.org (nooB) Date: Wed, 15 Jul 2020 12:43:19 +0000 Subject: [New-bugs-announce] [issue41303] perf_counter result does not count system sleep time in Mac OS Message-ID: <1594816999.52.0.200760723432.issue41303@roundup.psfhosted.org> New submission from nooB : The documentation for time.perf_counter says it "does include time elapsed during sleep". https://docs.python.org/3.8/library/time.html#time.perf_counter But it does not work as expected on Mac OS. I learnt that perf_counter uses the underlying C function "mach_absolute_time".
------------------------- import time time.get_clock_info('perf_counter') namespace(adjustable=False, implementation='mach_absolute_time()', monotonic=True, resolution=1e-09) ------------------------- The documentation for "mach_absolute_time" clearly states that "this clock does not increment while the system is asleep" https://developer.apple.com/documentation/kernel/1462446-mach_absolute_time FWIW, Mac kernel does offer another function which returns monotonic ticks "including while the system is asleep" https://developer.apple.com/documentation/kernel/1646199-mach_continuous_time But it seems to be available only on Mac 10.12+. ---------- components: macOS messages: 373690 nosy: ned.deily, nooB, ronaldoussoren priority: normal severity: normal status: open type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 10:41:17 2020 From: report at bugs.python.org (Jimmy Girardet) Date: Wed, 15 Jul 2020 14:41:17 +0000 Subject: [New-bugs-announce] [issue41304] python 38 embed ignore python38._pth file Message-ID: <1594824077.92.0.734722675541.issue41304@roundup.psfhosted.org> New submission from Jimmy Girardet : Hi, With python embed unziped in `38` directory(python 3.8.4): ``` # python38._pth python38.zip . ..\\app # Uncomment to run site.main() automatically #import site ``` ``` PS C:\Users\jimmy\rien\embed> .\38\python.exe -c "import sys;print(sys.path);import hello" ['', 'C:\\Users\\jimmy\\rien\\embed\\38\\python38.zip', 'C:\\Users\\jimmy\\rien\\embed\\38\\DLLs', 'C:\\Users\\jimmy\\ri en\\embed\\38\\lib', 'C:\\Users\\jimmy\\rien\\embed\\38'] Traceback (most recent call last): File "", line 1, in ModuleNotFoundError: No module named 'hello' '\\app' is not added to sys.path. it is under python 3. ``` Note It's working under python 3.7.8 ---------- components: Interpreter Core messages: 373698 nosy: jgirardet priority: normal severity: normal status: open title: python 38 embed ignore python38._pth file versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 12:16:44 2020 From: report at bugs.python.org (Tony) Date: Wed, 15 Jul 2020 16:16:44 +0000 Subject: [New-bugs-announce] [issue41305] Add StreamReader.readinto() Message-ID: <1594829804.0.0.326071195505.issue41305@roundup.psfhosted.org> New submission from Tony : Add a StreamReader.readinto(buf) function. Exactly like StreamReader.read() with *n* being equal to the length of buf. Instead of allocating a new buffer, copy the read buffer into buf. 
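(A rough sketch of the requested semantics, expressed here on top of the existing read() API; a real implementation would presumably copy straight out of the StreamReader's internal buffer rather than allocating an intermediate bytes object. The helper below is hypothetical, not the proposed patch.)
```
import asyncio

async def readinto(reader: asyncio.StreamReader, buf: bytearray) -> int:
    """Read up to len(buf) bytes into buf and return the number of bytes read."""
    data = await reader.read(len(buf))   # still allocates; shown only for the semantics
    n = len(data)
    buf[:n] = data                       # copy into the caller-supplied buffer
    return n
```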
---------- messages: 373702 nosy: tontinton priority: normal severity: normal status: open title: Add StreamReader.readinto() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 15:12:33 2020 From: report at bugs.python.org (Felix Yan) Date: Wed, 15 Jul 2020 19:12:33 +0000 Subject: [New-bugs-announce] [issue41306] test_tk failure on Arch Linux Message-ID: <1594840353.83.0.567403527611.issue41306@roundup.psfhosted.org> New submission from Felix Yan : test_from (tkinter.test.test_tkinter.test_widgets.ScaleTest) is currently failing on Arch Linux, and in at least one other place: https://python-build-standalone.readthedocs.io/en/latest/status.html The error looks like:
======================================================================
FAIL: test_from (tkinter.test.test_tkinter.test_widgets.ScaleTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3.8/tkinter/test/test_tkinter/test_widgets.py", line 939, in test_from
    self.checkFloatParam(widget, 'from', 100, 14.9, 15.1, conv=float_round)
  File "/usr/lib/python3.8/tkinter/test/widget_tests.py", line 106, in checkFloatParam
    self.checkParam(widget, name, value, conv=conv, **kwargs)
  File "/usr/lib/python3.8/tkinter/test/widget_tests.py", line 64, in checkParam
    self.assertEqual2(widget.cget(name), expected, eq=eq)
  File "/usr/lib/python3.8/tkinter/test/widget_tests.py", line 47, in assertEqual2
    self.assertEqual(actual, expected, msg)
AssertionError: 14.9 != 15.0
It's the only failure in the current Python 3.8.4 release's test suite here. Also adding Python 3.9 and 3.10 as I am able to reproduce it on master too. ---------- components: Tests messages: 373710 nosy: felixonmars priority: normal severity: normal status: open title: test_tk failure on Arch Linux versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 15:18:03 2020 From: report at bugs.python.org (Dieter Maurer) Date: Wed, 15 Jul 2020 19:18:03 +0000 Subject: [New-bugs-announce] [issue41307] "email.message.Message.as_bytes": fails to correctly handle "charset" Message-ID: <1594840683.21.0.418149762397.issue41307@roundup.psfhosted.org> New submission from Dieter Maurer : In the transcript below, "ms" and "mb" should be equivalent:
>>> from email import message_from_string, message_from_bytes
>>> mt = """\
... Mime-Version: 1.0
... Content-Type: text/plain; charset=UTF-8
... Content-Transfer-Encoding: 8bit
...
... ä
... """
>>> ms = message_from_string(mt)
>>> mb = message_from_bytes(mt.encode("UTF-8"))
But "mb.as_bytes" succeeds while "ms.as_bytes" raises a "UnicodeEncodeError":
>>> mb.as_bytes()
b'Mime-Version: 1.0\nContent-Type: text/plain; charset=UTF-8\nContent-Transfer-Encoding: 8bit\n\n\xc3\xa4\n'
>>> ms.as_bytes()
Traceback (most recent call last):
  ...
  File "/usr/local/lib/python3.9/email/generator.py", line 155, in _write_lines
    self.write(line)
  File "/usr/local/lib/python3.9/email/generator.py", line 406, in write
    self._fp.write(s.encode('ascii', 'surrogateescape'))
UnicodeEncodeError: 'ascii' codec can't encode character '\xe4' in position 0: ordinal not in range(128)
Apparently, "as_bytes" ignores the "charset" parameter from the "Content-Type" header (it should use "utf-8", not "ascii", for the encoding).
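(One possible workaround until this is resolved, a sketch that assumes the declared charset really is UTF-8: serialize via as_string(), which writes to text, and encode the result manually, sidestepping the ascii/surrogateescape step shown in the traceback:)
```
from email import message_from_string

mt = ("Mime-Version: 1.0\n"
      "Content-Type: text/plain; charset=UTF-8\n"
      "Content-Transfer-Encoding: 8bit\n"
      "\n"
      "ä\n")
ms = message_from_string(mt)
raw = ms.as_string().encode("utf-8")   # succeeds where ms.as_bytes() raises
```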
---------- components: email messages: 373711 nosy: barry, dmaurer, r.david.murray priority: normal severity: normal status: open title: "email.message.Message.as_bytes": fails to correctly handle "charset" type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 17:21:16 2020 From: report at bugs.python.org (Steve Dower) Date: Wed, 15 Jul 2020 21:21:16 +0000 Subject: [New-bugs-announce] [issue41308] socket.connect() slow to time out on Windows Message-ID: <1594848076.72.0.0566311490676.issue41308@roundup.psfhosted.org> New submission from Steve Dower : When connecting to localhost, socket.connect() takes two seconds on Windows (the default) to time out, but on Linux (including WSL) it times out immediately. Test code (assuming port 9999 has no listener): >>> import socket >>> socket.socket().connect(('localhost', 9999)) For a remote host, the timeout is approx 10s on Windows and 20s on WSL (I didn't test on a native Linux box). I'm told the correct fix is to specify TCP_INITIAL_RTO_NO_SYN_RETRANSMISSIONS [1] when connecting to localhost. ---------- components: Windows messages: 373725 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: test needed status: open title: socket.connect() slow to time out on Windows type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 17:56:13 2020 From: report at bugs.python.org (Steve Dower) Date: Wed, 15 Jul 2020 21:56:13 +0000 Subject: [New-bugs-announce] [issue41309] test_subprocess timing out randomly on Windows Message-ID: <1594850173.83.0.809593966079.issue41309@roundup.psfhosted.org> New submission from Steve Dower : Spotted at https://dev.azure.com/Python/cpython/_build/results?buildId=66387&view=logs&j=d554cd63-f8f4-5b2d-871b-33e4ea76e915&t=5a14d0eb-dbd4-5b80-f5d0-7909f950a1cc&l=1859 test_empty_input (test.test_asyncio.test_subprocess.SubprocessProactorTests) ... ok test_exec_loop_deprecated (test.test_asyncio.test_subprocess.SubprocessProactorTests) ... ok test_kill (test.test_asyncio.test_subprocess.SubprocessProactorTests) ... ok Warning -- Unraisable exception Warning -- Unraisable exception Timeout (0:20:00)! 
Thread 0x00001eac (most recent call first): File "D:\a\1\s\lib\asyncio\windows_events.py", line 779 in _poll File "D:\a\1\s\lib\asyncio\windows_events.py", line 430 in select File "D:\a\1\s\lib\asyncio\base_events.py", line 1854 in _run_once File "D:\a\1\s\lib\asyncio\base_events.py", line 596 in run_forever File "D:\a\1\s\lib\asyncio\windows_events.py", line 316 in run_forever File "D:\a\1\s\lib\asyncio\base_events.py", line 629 in run_until_complete File "D:\a\1\s\lib\test\test_asyncio\test_subprocess.py", line 302 in test_pause_reading File "D:\a\1\s\lib\unittest\case.py", line 549 in _callTestMethod File "D:\a\1\s\lib\unittest\case.py", line 592 in run File "D:\a\1\s\lib\unittest\case.py", line 652 in __call__ File "D:\a\1\s\lib\unittest\suite.py", line 122 in run File "D:\a\1\s\lib\unittest\suite.py", line 84 in __call__ File "D:\a\1\s\lib\unittest\suite.py", line 122 in run File "D:\a\1\s\lib\unittest\suite.py", line 84 in __call__ File "D:\a\1\s\lib\unittest\suite.py", line 122 in run File "D:\a\1\s\lib\unittest\suite.py", line 84 in __call__ File "D:\a\1\s\lib\unittest\suite.py", line 122 in run File "D:\a\1\s\lib\unittest\suite.py", line 84 in __call__ File "D:\a\1\s\lib\unittest\suite.py", line 122 in run File "D:\a\1\s\lib\unittest\suite.py", line 84 in __call__ File "D:\a\1\s\lib\unittest\runner.py", line 176 in run File "D:\a\1\s\lib\test\support\__init__.py", line 975 in _run_suite File "D:\a\1\s\lib\test\support\__init__.py", line 1098 in run_unittest File "D:\a\1\s\lib\test\libregrtest\runtest.py", line 211 in _test_module File "D:\a\1\s\lib\test\libregrtest\runtest.py", line 236 in _runtest_inner2 File "D:\a\1\s\lib\test\libregrtest\runtest.py", line 272 in _runtest_inner File "D:\a\1\s\lib\test\libregrtest\runtest.py", line 155 in _runtest File "D:\a\1\s\lib\test\libregrtest\runtest.py", line 195 in runtest File "D:\a\1\s\lib\test\libregrtest\main.py", line 319 in rerun_failed_tests File "D:\a\1\s\lib\test\libregrtest\main.py", line 696 in _main File "D:\a\1\s\lib\test\libregrtest\main.py", line 639 in main File "D:\a\1\s\lib\test\libregrtest\main.py", line 717 in main File "D:\a\1\s\lib\test\__main__.py", line 2 in File "D:\a\1\s\lib\runpy.py", line 87 in _run_code File "D:\a\1\s\lib\runpy.py", line 197 in _run_module_as_main ##[error]Cmd.exe exited with code '1'. ---------- components: Tests, Windows messages: 373726 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_subprocess timing out randomly on Windows type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 18:21:55 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Wed, 15 Jul 2020 22:21:55 +0000 Subject: [New-bugs-announce] [issue41310] micro-optimization: increase our float parsing speed by Nx Message-ID: <1594851715.46.0.792624118962.issue41310@roundup.psfhosted.org> New submission from Gregory P. Smith : See https://lemire.me/blog/2020/03/10/fast-float-parsing-in-practice/ for inspiration and a reference (possibly a library to use, but if not the techniques still apply). Primarily usable when creating the float objects from the string data as is common in python code and particularly common in JSON serialized data. 
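(For context, a rough baseline measurement of the code path such an optimization would target; the corpus size and format string below are arbitrary choices for the sketch:)
```
import random
import timeit

values = [f"{random.uniform(-1e6, 1e6):.17g}" for _ in range(10_000)]

def parse_all():
    total = 0.0
    for s in values:
        total += float(s)        # the str -> float conversion being discussed
    return total

secs = timeit.timeit(parse_all, number=100)
print(f"{len(values) * 100 / secs / 1e6:.2f} million floats parsed per second")
```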
---------- components: Interpreter Core messages: 373730 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: micro-optimization: increase our float parsing speed by Nx type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 15 20:10:31 2020 From: report at bugs.python.org (Oscar Benjamin) Date: Thu, 16 Jul 2020 00:10:31 +0000 Subject: [New-bugs-announce] [issue41311] Add a function to get a random sample from an iterable (reservoir sampling) Message-ID: <1594858231.04.0.366290640753.issue41311@roundup.psfhosted.org> New submission from Oscar Benjamin : The random.choice/random.sample functions will only accept a sequence to select from. Can there be a function in the random module for selecting from an arbitrary iterable? It is possible to make an efficient function that can make random selections from an arbitrary iterable e.g.: from math import exp, log, floor from random import random, randrange from itertools import islice def sample_iter(iterable, k=1): """Select k items uniformly from iterable. Returns the whole population if there are k or fewer items """ iterator = iter(iterable) values = list(islice(iterator, k)) W = exp(log(random())/k) while True: # skip is geometrically distributed skip = floor( log(random())/log(1-W) ) selection = list(islice(iterator, skip, skip+1)) if selection: values[randrange(k)] = selection[0] W *= exp(log(random())/k) else: return values https://en.wikipedia.org/wiki/Reservoir_sampling#An_optimal_algorithm This could be used for random sampling from sets/dicts or also to choose something like a random line from a text file. The algorithm needs to fully consume the iterable but does so efficiently using islice. In the case of a dict this is faster than converting to a list and using random.choice: In [2]: n = 6 In [3]: d = dict(zip(range(10**n), range(10**n))) In [4]: %timeit sample_iter(d) 15.5 ms ± 326 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [5]: %timeit list(d) 26.1 ms ± 1.72 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) In [6]: %timeit sample_iter(d, 2) 15.8 ms ± 427 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [7]: %timeit sample_iter(d, 20) 17.6 ms ± 2.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) In [8]: %timeit sample_iter(d, 100) 19.9 ms ± 297 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) This was already discussed on python-ideas: https://mail.python.org/archives/list/python-ideas at python.org/thread/4OZTRD7FLXXZ6R6RU4BME6DYR3AXHOBD/ ---------- components: Library (Lib) messages: 373733 nosy: oscarbenjamin priority: normal severity: normal status: open title: Add a function to get a random sample from an iterable (reservoir sampling) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 16 00:17:41 2020 From: report at bugs.python.org (Charles Machalow) Date: Thu, 16 Jul 2020 04:17:41 +0000 Subject: [New-bugs-announce] [issue41312] add !p to pprint.pformat() in str.format() an f-strings Message-ID: <1594873061.38.0.656587142662.issue41312@roundup.psfhosted.org> New submission from Charles Machalow : Right now in str.format(), we have !s, !r, and !a to allow us to call str(), repr(), and ascii() respectively on the given expression.
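For context, an added illustration of the existing conversions mentioned above (this example is not part of the report):

```python
# The current conversion flags call str(), repr() and ascii() on the value
# before it is formatted; the same flags work in str.format() and f-strings.
value = "café"
print(f"{value!s}")            # café
print(f"{value!r}")            # 'café'
print(f"{value!a}")            # 'caf\xe9'
print("{0!r}".format(value))   # 'café'
```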
I'm proposing that we add a !p conversion to have pprint.pformat() be called to convert the given expression to a 'pretty' string. Calling ``` print(f"My dict: {d!p}") ``` is a lot more concise than: ``` import pprint print(f"My dict: {pprint.pformat(d)}") ``` We may even be able to have a static attribute stored to change the various default kwargs of pprint.pformat(). ---------- components: IO, Library (Lib) messages: 373738 nosy: Charles Machalow priority: normal severity: normal status: open title: add !p to pprint.pformat() in str.format() an f-strings type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 16 03:47:19 2020 From: report at bugs.python.org (wyz23x2) Date: Thu, 16 Jul 2020 07:47:19 +0000 Subject: [New-bugs-announce] [issue41313] OverflowError still raised when int limited in sys.maxsize Message-ID: <1594885639.25.0.559043335267.issue41313@roundup.psfhosted.org> New submission from wyz23x2 : Consider this code: import sys sys.setrecursionlimit(sys.maxsize) Causes this: OverflowError: Python int too large to convert to C int So what is the limit? It should be sys.maxsize. These 2 also don't work: sys.setrecursionlimit(sys.maxsize-1) sys.setrecursionlimit(sys.maxsize//2) That is a big difference with at least 50%. ---------- components: C API, Library (Lib) messages: 373747 nosy: wyz23x2 priority: normal severity: normal status: open title: OverflowError still raised when int limited in sys.maxsize versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 16 06:34:24 2020 From: report at bugs.python.org (wyz23x2) Date: Thu, 16 Jul 2020 10:34:24 +0000 Subject: [New-bugs-announce] [issue41314] __future__ doc and PEP 563 conflict Message-ID: <1594895664.43.0.284456467746.issue41314@roundup.psfhosted.org> New submission from wyz23x2 : In https://docs.python.org/3/library/__future__.html: annotations | 3.7.0b1 | *4.0* | PEP 563: Postponed evaluation of annotations In PEP 563: Starting with Python 3.7, a __future__ import is required to use the described functionality. No warnings are raised. In Python *3.10* this will become the default behavior. Use of annotations incompatible with this PEP is no longer supported. Python 4.0 or 3.10? Not clear. ---------- messages: 373753 nosy: wyz23x2 priority: normal severity: normal status: open title: __future__ doc and PEP 563 conflict _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 16 10:13:33 2020 From: report at bugs.python.org (Jean Abou Samra) Date: Thu, 16 Jul 2020 14:13:33 +0000 Subject: [New-bugs-announce] [issue41315] Add mathematical functions as wrapper to decimal.Decimal methods Message-ID: <1594908813.57.0.321710561647.issue41315@roundup.psfhosted.org> New submission from Jean Abou Samra : Common mathematical functions such as sqrt(), exp(), etc. are available for decimal numbers as methods of decimal.Decimal instances (like https://docs.python.org/3/library/decimal.html#decimal.Decimal.exp). This does not pair well with the math and cmath modules as well as NumPy and SymPy which all define these as functions. It also makes it harder to switch to decimals instead of floats when you realize that your program lacks arithmetic precision. 
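To make the mismatch concrete, here is an added illustration (not part of the report):

```python
# The same operations are spelled as functions for floats but as methods for
# Decimal, so generic numeric code cannot simply swap the type in.
import math
from decimal import Decimal

print(math.sqrt(2.0))         # 1.4142135623730951  (function)
print(Decimal(2).sqrt())      # 1.414213562373095048801688724  (method)
print(math.exp(1.0))          # 2.718281828459045
print(Decimal(1).exp())       # 2.718281828459045235360287471

# math.sqrt() accepts a Decimal but silently converts it to float first,
# losing the extra precision.
print(math.sqrt(Decimal(2)))  # 1.4142135623730951
```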
It would be nice to have functions in the decimal module that called the corresponding methods. This would unify the interface with other modules while keeping backwards compatibility and preserving the possibility to subclass Decimal. ---------- components: Library (Lib) messages: 373754 nosy: Jean Abou Samra, facundobatista, mark.dickinson, rhettinger, skrah priority: normal severity: normal status: open title: Add mathematical functions as wrapper to decimal.Decimal methods type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 16 14:37:23 2020 From: report at bugs.python.org (Artem Bulgakov) Date: Thu, 16 Jul 2020 18:37:23 +0000 Subject: [New-bugs-announce] [issue41316] tarfile: Do not write full path in FNAME field Message-ID: <1594924643.04.0.7239005729.issue41316@roundup.psfhosted.org> New submission from Artem Bulgakov : tarfile sets FNAME field to the path given by user: Lib/tarfile.py:424 It writes full path instead of just basename if user specified absolute path. Some archive viewer apps like 7-Zip may process file incorrectly. Also it creates security issue because anyone can know structure of directories on system and know username or other personal information. You can reproduce this by running below lines in Python interpreter. Tested on Windows and Linux. Python 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> import tarfile >>> open("somefile.txt", "w").write("sometext") 8 >>> tar = tarfile.open("/home/bulgakovas/file.tar.gz", "w|gz") >>> tar.add("somefile.txt") >>> tar.close() >>> open("file.tar.gz", "rb").read()[:50] b'\x1f\x8b\x08\x08cE\x10_\x02\xff/home/bulgakovas/file.tar\x00\xed\xd3M\n\xc20\x10\x86\xe1\xac=EO\x90' You can see full path to file.tar (/home/bulgakovas/file.tar) as FNAME field. If you will write just tarfile.open("file.tar.gz", "w|gz"), FNAME will be equal to file.tar. RFC1952 says about FNAME: This is the original name of the file being compressed, with any directory components removed. So tarfile must remove directory names from FNAME and write only basename of file. ---------- components: Library (Lib) messages: 373759 nosy: ArtemSBulgakov, lars.gustaebel priority: normal severity: normal status: open title: tarfile: Do not write full path in FNAME field type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 16 18:12:05 2020 From: report at bugs.python.org (=?utf-8?q?Alex_Gr=C3=B6nholm?=) Date: Thu, 16 Jul 2020 22:12:05 +0000 Subject: [New-bugs-announce] [issue41317] sock_accept() does not remove server socket reader on cancellation Message-ID: <1594937525.52.0.117066426287.issue41317@roundup.psfhosted.org> New submission from Alex Gr?nholm : Unlike with all the other sock_* functions, sock_accept() only removes the reader on the server socket when the socket becomes available for reading (ie. when there's an incoming connection). If the operation is cancelled instead, the reader is not removed. 
If the server socket is then closed and a new socket is created with the same file number and used for a socket operation, it will cause a FileNotFoundError, because the event loop thinks it has this fd registered but the epoll object does not agree, since all closed sockets are automatically removed from it. The attached script reproduces the problem on Fedora Linux 32 (all relevant Python versions), but not on Windows (on any tested Python versions from 3.6 to 3.8). ---------- components: asyncio files: bug.py messages: 373777 nosy: alex.gronholm, asvetlov, yselivanov priority: normal severity: normal status: open title: sock_accept() does not remove server socket reader on cancellation type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49319/bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 16 23:41:03 2020 From: report at bugs.python.org (Heyi Tang) Date: Fri, 17 Jul 2020 03:41:03 +0000 Subject: [New-bugs-announce] [issue41318] Better error message of "Cannot recover from stack overflow." Message-ID: <1594957263.52.0.457195944754.issue41318@roundup.psfhosted.org> New submission from Heyi Tang : Is it possible to add a more detailed message for the error "Cannot recover from stack overflow"? Something like "Cannot recover from stack overflow; it may be caused by catching a RecursionError but reaching the limit again before properly handling it." Maybe the detailed design in https://github.com/python/cpython/blob/master/Include/ceval.h#L48 could also be shown to the developer? It is hard to understand what happened from the message "Cannot recover from stack overflow" alone. I hit the error because I wrote code like the following: @some_logger def f(): return f() try: f() except RecursionError: print("Recursion Error is raised!") It took me a lot of time to figure out why RecursionError was not raised and the "Fatal Python error" was shown instead. Finally I realized that the problem was that the following code in "some_logger" (which is an internal library provided by others) caught the exception and set tstate->overflowed=1. def some_logger(func): @functools.wraps(func) def new_func(*args, **kwargs): try: # Unfortunately this code hit RecursionError and caught it logger.info(some_message) except Exception as e: pass # Avoid affecting user function return func(*args, **kwargs) return new_func So I think it would be better to tell the developer that "Cannot recover" means "a RecursionError was caught and the stack overflowed again", and to point the user at the design of _Py_EnterRecursiveCall. ---------- components: Interpreter Core messages: 373796 nosy: thyyyy priority: normal severity: normal status: open title: Better error message of "Cannot recover from stack overflow." type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 00:14:29 2020 From: report at bugs.python.org (=?utf-8?b?5a6L5ZiJ6IW+?=) Date: Fri, 17 Jul 2020 04:14:29 +0000 Subject: [New-bugs-announce] [issue41319] IDLE 3.8 can not save and run this file Message-ID: <1594959269.96.0.591290294869.issue41319@roundup.psfhosted.org> New submission from ??? : If I delete the Chinese text in this .py file, it can run. I can edit the file with Notepad, and it runs directly in cmd.
---------- assignee: terry.reedy components: IDLE files: tempconvert.py messages: 373797 nosy: terry.reedy, ??? priority: normal severity: normal status: open title: IDLE 3.8 can not save and run this file type: compile error versions: Python 3.8 Added file: https://bugs.python.org/file49320/tempconvert.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 03:38:23 2020 From: report at bugs.python.org (Kuang-che Wu) Date: Fri, 17 Jul 2020 07:38:23 +0000 Subject: [New-bugs-announce] [issue41320] async process closing after event loop closed Message-ID: <1594971503.41.0.525176892628.issue41320@roundup.psfhosted.org> New submission from Kuang-che Wu : (following code is attached as well) import asyncio import time workaround = False async def slow_proc(): proc = await asyncio.create_subprocess_exec('sleep', '10', stdout=asyncio.subprocess.PIPE) try: return await proc.stdout.read() except asyncio.CancelledError: if workaround: proc.terminate() time.sleep(0.1) # hope the machine is not too busy async def func(): try: return await asyncio.wait_for(slow_proc(), timeout=0.1) except asyncio.TimeoutError: return 'timeout' def main(): while True: print('test') asyncio.run(func()) main() ------------------------------------ Run above code, it may work without error message (expected behavior). However, depends on timing, it may show warning/error messages case 1. Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 1257652 is closed case 2. Exception ignored in: Traceback (most recent call last): File "/usr/lib/python3.8/asyncio/base_subprocess.py", line 126, in __del__ self.close() File "/usr/lib/python3.8/asyncio/base_subprocess.py", line 104, in close proto.pipe.close() File "/usr/lib/python3.8/asyncio/unix_events.py", line 536, in close self._close(None) File "/usr/lib/python3.8/asyncio/unix_events.py", line 560, in _close self._loop.call_soon(self._call_connection_lost, exc) File "/usr/lib/python3.8/asyncio/base_events.py", line 719, in call_soon self._check_closed() File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed ------------------------------------ Although running tasks will be cancelled when asyncio.run is finishing, subprocess' exit handler may be invoked after the event loop is closed. In above code, I provided a workaround. However it just mitigates the problem and not really fix the root cause. This is related to https://bugs.python.org/issue35539 p.s. My test environment is Debian 5.5.17 ---------- components: asyncio files: cancel_proc.py messages: 373799 nosy: asvetlov, kcwu, yselivanov priority: normal severity: normal status: open title: async process closing after event loop closed type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49321/cancel_proc.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 05:20:57 2020 From: report at bugs.python.org (dh4931) Date: Fri, 17 Jul 2020 09:20:57 +0000 Subject: [New-bugs-announce] [issue41321] Calculate timestamp is wrong in datetime.datetime Message-ID: <1594977657.17.0.556958376615.issue41321@roundup.psfhosted.org> New submission from dh4931 : like so datetime.datetime(1986, 5, 4, 7, 13, 22).timestamp() - datetime.datetime(1986, 5, 4, 0, 0, 0).timestamp() the result is 22402.0, the result is wrong. 
but on May 2nd datetime.datetime(1986, 5, 2, 7, 13, 22).timestamp() - datetime.datetime(1986, 5, 2, 0, 0, 0).timestamp() the result is 26002.0, the result is true ---------- components: Extension Modules messages: 373805 nosy: dh4931 priority: normal severity: normal status: open title: Calculate timestamp is wrong in datetime.datetime versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 05:46:57 2020 From: report at bugs.python.org (Alexander Hungenberg) Date: Fri, 17 Jul 2020 09:46:57 +0000 Subject: [New-bugs-announce] [issue41322] unittest: Generator test methods will always be marked as passed Message-ID: <1594979217.32.0.316534480608.issue41322@roundup.psfhosted.org> New submission from Alexander Hungenberg : The following testcase will always be marked as passed: from unittest import TestCase class BuggyTestCase(TestCase): def test_generator(self): self.assertTrue(False) yield None It happened to us that someone accidentally made the test method a generator function. That error was caught very late, because it always appears to have executed correctly. Maybe an additional check can be introduced in the unittest module, to check if a test_ method was executed correctly? ---------- components: Library (Lib) messages: 373807 nosy: defreng priority: normal severity: normal status: open title: unittest: Generator test methods will always be marked as passed type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 06:27:02 2020 From: report at bugs.python.org (Mark Shannon) Date: Fri, 17 Jul 2020 10:27:02 +0000 Subject: [New-bugs-announce] [issue41323] Perform "peephole" optimization directly on control-flow graph. Message-ID: <1594981622.83.0.602836344044.issue41323@roundup.psfhosted.org> New submission from Mark Shannon : Currently we perform various bytecode improvements as a pass on the code objects after generating the code object. This requires parsing the bytecode to find instructions, recreating the CFG, and rewriting the line number table. If we perform the optimizations directly on the CFG we can avoid all that additional work. This would save hundreds of lines of code and avoid coupling the optimization to the line number table format. ---------- assignee: Mark.Shannon messages: 373811 nosy: Mark.Shannon priority: normal severity: normal stage: needs patch status: open title: Perform "peephole" optimization directly on control-flow graph. 
type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 06:55:47 2020 From: report at bugs.python.org (Stefan Krah) Date: Fri, 17 Jul 2020 10:55:47 +0000 Subject: [New-bugs-announce] [issue41324] Add a minimal decimal capsule API Message-ID: <1594983347.35.0.123464083334.issue41324@roundup.psfhosted.org> Change by Stefan Krah : ---------- assignee: skrah components: Extension Modules nosy: skrah priority: normal severity: normal stage: needs patch status: open title: Add a minimal decimal capsule API versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 13:10:26 2020 From: report at bugs.python.org (Jordan Speicher) Date: Fri, 17 Jul 2020 17:10:26 +0000 Subject: [New-bugs-announce] [issue41325] Document addition of `mock.call_args.args` and `mock.call_args.kwargs` in 3.8 Message-ID: <1595005826.72.0.880461567186.issue41325@roundup.psfhosted.org> New submission from Jordan Speicher : `args` and `kwargs` were added to unittest `mock.call_args` in https://bugs.python.org/issue21269 however documentation was not updated to state that this was added in python 3.8 ---------- assignee: docs at python components: Documentation messages: 373839 nosy: docs at python, uspike priority: normal severity: normal status: open title: Document addition of `mock.call_args.args` and `mock.call_args.kwargs` in 3.8 versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 14:18:11 2020 From: report at bugs.python.org (Mariatta) Date: Fri, 17 Jul 2020 18:18:11 +0000 Subject: [New-bugs-announce] [issue41326] Build failure in blurb-it repo: "Failed building wheel for yarl" Message-ID: <1595009891.48.0.976644515939.issue41326@roundup.psfhosted.org> New submission from Mariatta : We're seeing travis CI build failure against the nightly Python build v3.10.0a0. Build log: https://travis-ci.org/github/python/blurb_it/jobs/707532881 Last few lines of the build log: In file included from /opt/python/3.10-dev/include/python3.10/unicodeobject.h:1026:0, from /opt/python/3.10-dev/include/python3.10/Python.h:97, from yarl/_quoting.c:4: /opt/python/3.10-dev/include/python3.10/cpython/unicodeobject.h:551:42: note: declared here Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode( ^ yarl/_quoting.c: In function ?__Pyx_decode_c_bytes?: yarl/_quoting.c:9996:9: warning: ?PyUnicode_FromUnicode? is deprecated [-Wdeprecated-declarations] return PyUnicode_FromUnicode(NULL, 0); ^ In file included from /opt/python/3.10-dev/include/python3.10/unicodeobject.h:1026:0, from /opt/python/3.10-dev/include/python3.10/Python.h:97, from yarl/_quoting.c:4: /opt/python/3.10-dev/include/python3.10/cpython/unicodeobject.h:551:42: note: declared here Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode( ^ error: command '/usr/bin/gcc' failed with exit code 1 ---------------------------------------- ERROR: Failed building wheel for yarl I'm not familiar with that part of codebase. If anyone has any insight, it would be appreciated. Thanks. 
---------- messages: 373843 nosy: Mariatta priority: normal severity: normal status: open title: Build failure in blurb-it repo: "Failed building wheel for yarl" type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 14:42:52 2020 From: report at bugs.python.org (Paul Moore) Date: Fri, 17 Jul 2020 18:42:52 +0000 Subject: [New-bugs-announce] [issue41327] Windows Store "stub" Python executables give confusing behaviour Message-ID: <1595011372.47.0.909809714315.issue41327@roundup.psfhosted.org> New submission from Paul Moore : First of all, I do know that this is an issue with the Windows Store distribution, rather than the python.org one. But (a) I don't know where to report a bug against the Store implementation except here, and (b) it's arguably a case of the Windows implementation "stealing" the Python command, so worth flagging against core Python. The problem is that Windows 10 installs `python` and `python3` executables by default, which open the Windows Store offering to install Python. That's not a bad thing - it's nice to have Python available with the OS! Although (see below) hijacking the `python` and `python3` commands for this purpose has some problems. But those stubs silently do nothing when run with command line arguments, and because the python.org distribution doesn't put Python on PATH by default, users end up with the stubs available even if they install python.org Python. We're getting a lot of bug reports from users as a result of this, when they follow standard instructions like "execute `python -m pip`". With the stub on the path, this silently does nothing, even though the user has installed Python. This is a very bad user experience, and not one that projects like pip can do anything to alleviate, other than having extremely convoluted "how to run pip" commands: """ To run pip, type `python -m pip` if you're on Unix, and `py -m pip` on Windows. Unless you're using the Windows Store version when you should do `python -m pip` on Windows too. Or on Unix when you might need to do `python3 -m pip`. Or... """ I don't have a good answer to this issue. Maybe the Windows Store stubs could detect core Python and do something better if it's installed? Maybe the stubs should print an explanatory message if run with command line arguments? Maybe Store Python should be available on Windows via some less error-prone mechanism? I'm pretty sure if a Linux distribution shipped a `python` command that didn't run Python the way users expected, there would be complaints. After all, we have PEP 394 that explicitly states how we expect Unix systems to work. Maybe we need something similar for Windows? 
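As an aside from the editor, here is a rough detection sketch based on the behaviour described above (the stub silently produces no output when run with arguments, while a real interpreter prints its version); this is a heuristic assumption, not an official mechanism:

```python
# Heuristic sketch: ask whichever "python" is first on PATH for its version.
# A real interpreter prints something like "Python 3.8.4"; the Store stub
# described above produces no output at all when given arguments.
import shutil
import subprocess

exe = shutil.which("python")
if exe is None:
    print("no python command found on PATH")
else:
    proc = subprocess.run([exe, "-V"], capture_output=True, text=True)
    banner = (proc.stdout or proc.stderr).strip()
    if banner:
        print(f"{exe}: {banner}")
    else:
        print(f"{exe} looks like the Windows Store stub, not a real interpreter")
```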
---------- assignee: steve.dower components: Windows messages: 373845 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows Store "stub" Python executables give confusing behaviour versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 16:28:36 2020 From: report at bugs.python.org (Dmytro Litvinov) Date: Fri, 17 Jul 2020 20:28:36 +0000 Subject: [New-bugs-announce] [issue41328] Hudson CI is not available anymore Message-ID: <1595017716.52.0.971991236143.issue41328@roundup.psfhosted.org> New submission from Dmytro Litvinov : In the documentation (https://docs.python.org/3.8/library/unittest.html) there is a mention of Hudson(http://hudson-ci.org/) as continuous integration system for tests. According to wikipedia(https://en.wikipedia.org/wiki/Hudson_(software)), Hudson "Having been replaced by Jenkins, Hudson is no longer maintained[9][10] and was announced as obsolete in February 2017.[11]" My recommendation for that is to change the mention of "Hudson" to "Jenkins". I am ready to prepare PR for that change. ---------- assignee: docs at python components: Documentation messages: 373852 nosy: DmytroLitvinov, docs at python priority: normal severity: normal status: open title: Hudson CI is not available anymore type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 17 16:30:19 2020 From: report at bugs.python.org (Nick.Lupariello) Date: Fri, 17 Jul 2020 20:30:19 +0000 Subject: [New-bugs-announce] [issue41329] IDLE not saving .py files Message-ID: <1595017819.2.0.254691783518.issue41329@roundup.psfhosted.org> New submission from Nick.Lupariello : IDLE will not save .py files. Neither Ctrl + S nor file -> Save file nor file -> Save file as function as intended and the .py file is not updated. This happens for both 32 bit and 64 bit distributions on multiple computers with 4 GB of ram running on a AMD A6-9220e RADEON R4. Tested installation in multiple locations. IDLE will not write to file anywhere in the file system including documents, desktop, and the installation folder itself. ---------- assignee: terry.reedy components: IDLE messages: 373854 nosy: nicklupe13, terry.reedy priority: normal severity: normal status: open title: IDLE not saving .py files type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 18 00:53:39 2020 From: report at bugs.python.org (Ma Lin) Date: Sat, 18 Jul 2020 04:53:39 +0000 Subject: [New-bugs-announce] [issue41330] Inefficient error-handle for CJK encodings Message-ID: <1595048019.57.0.348575052072.issue41330@roundup.psfhosted.org> New submission from Ma Lin : CJK encode/decode functions only have three error-handler fast-paths: replace ignore strict See the code: [1][2] If use other built-in error-handlers, need to get the error-handler object, and call it with an Unicode Exception argument. See the code: [3] But the error-handler object is not cached, it needs to be looked up from a dict every time, which is very inefficient. 
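An added Python-level illustration of what that lookup-and-call amounts to (the C code in [3] does the equivalent; the sample exception values below are made up):

```python
# codecs.lookup_error() maps a handler name to the registered callable; the
# callable is then invoked with the UnicodeError instance and returns a
# replacement string plus the position at which to resume.
import codecs

handler = codecs.lookup_error("xmlcharrefreplace")
err = UnicodeEncodeError("gbk", "abc\u20ac", 3, 4, "illegal multibyte sequence")
replacement, resume_at = handler(err)
print(replacement, resume_at)   # &#8364; 4
```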
Another possible optimization is to write fast-paths for common error-handlers. Python has these built-in error-handlers: strict replace ignore backslashreplace xmlcharrefreplace namereplace surrogateescape surrogatepass (only for the utf-8/utf-16/utf-32 family) For example, `xmlcharrefreplace` may be heavily used in Web applications; it could be implemented as a fast-path so that there is no need to call the error-handler object every time, just like the `xmlcharrefreplace` fast-path in `PyUnicode_EncodeCharmap` [4]. [1] encode function: https://github.com/python/cpython/blob/v3.9.0b4/Modules/cjkcodecs/multibytecodec.c#L192 [2] decode function: https://github.com/python/cpython/blob/v3.9.0b4/Modules/cjkcodecs/multibytecodec.c#L347 [3] `call_error_callback` function: https://github.com/python/cpython/blob/v3.9.0b4/Modules/cjkcodecs/multibytecodec.c#L82 [4] `xmlcharrefreplace` fast-path in `PyUnicode_EncodeCharmap`: https://github.com/python/cpython/blob/v3.9.0b4/Objects/unicodeobject.c#L8662 ---------- components: Unicode messages: 373871 nosy: ezio.melotti, malin, vstinner priority: normal severity: normal status: open title: Inefficient error-handle for CJK encodings type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 18 04:26:35 2020 From: report at bugs.python.org (Julien Palard) Date: Sat, 18 Jul 2020 08:26:35 +0000 Subject: [New-bugs-announce] [issue41331] Sphinx can't find asdl.py when not started from the Doc/ directory Message-ID: <1595060795.82.0.837754146284.issue41331@roundup.psfhosted.org> New submission from Julien Palard : When running the following command from the Doc/ directory: ./venv/bin/sphinx-build -Q -b gettext -D gettext_compact=0 . ../pot/ everything goes right, but when running the following from the cpython directory: ./Doc/venv/bin/sphinx-build -Q -b gettext -D gettext_compact=0 Doc pot we get: Extension error: Could not import extension asdl_highlight (exception: No module named 'asdl') This is because sys.path.append(os.path.abspath("../Parser/")) starts from the current directory: cpython/../Parser doesn't exist, while Doc/../Parser does. It could be fixed by starting from Path(__file__).resolve(); I will PR it. ---------- assignee: mdk components: Documentation messages: 373888 nosy: BTaskaya, mdk priority: normal severity: normal status: open title: Sphinx can't find asdl.py when not started from the Doc/ directory versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 18 10:18:13 2020 From: report at bugs.python.org (=?utf-8?q?Alex_Gr=C3=B6nholm?=) Date: Sat, 18 Jul 2020 14:18:13 +0000 Subject: [New-bugs-announce] [issue41332] connect_accepted_socket() missing from AbstractEventLoop Message-ID: <1595081893.19.0.549157566181.issue41332@roundup.psfhosted.org> New submission from Alex Grönholm : The connect_accepted_socket() method seems to be missing from the AbstractEventLoop ABC. I assume this was a simple mistake of omission. I will prepare a PR to add it.
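A rough sketch of the missing declaration (the parameter list mirrors the documented loop.connect_accepted_socket() call and is an assumption on my part, not the text of the eventual patch):

```python
# Sketch only: how the ABC might spell the missing coroutine method.
class AbstractEventLoop:
    async def connect_accepted_socket(self, protocol_factory, sock,
                                      *, ssl=None, ssl_handshake_timeout=None):
        """Wrap an already-accepted socket in a (transport, protocol) pair."""
        raise NotImplementedError
```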
---------- components: asyncio messages: 373904 nosy: alex.gronholm, asvetlov, yselivanov priority: normal severity: normal status: open title: connect_accepted_socket() missing from AbstractEventLoop type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 18 10:33:13 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 18 Jul 2020 14:33:13 +0000 Subject: [New-bugs-announce] [issue41333] Convert OrderedDict.pop() to Argument Clinic Message-ID: <1595082793.92.0.490500501555.issue41333@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR converts OrderedDict.pop() to Argument Clinic. It makes it 2 times faster. $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python -s "from collections import OrderedDict; od = OrderedDict()" "od.pop('x', None)" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 119 ns +- 2 ns -> [/home/serhiy/py/cpython-release/python] 56.3 ns +- 1.2 ns: 2.12x faster (-53%) It was not converted before because Argument Clinic generated incorrect signature for it. It still is not able to generate correct signature, but at least it does not generate incorrect signature. And we now have other reason for using Argument Clinic -- performance. ---------- components: Argument Clinic, Extension Modules messages: 373905 nosy: larry, rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Convert OrderedDict.pop() to Argument Clinic type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 18 10:58:51 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 18 Jul 2020 14:58:51 +0000 Subject: [New-bugs-announce] [issue41334] Convert str(), bytes() and bytearray() to Argument Clinic Message-ID: <1595084331.14.0.93182273048.issue41334@roundup.psfhosted.org> New submission from Serhiy Storchaka : Constructors str(), bytes() and bytearray() were not converted to Argument Clinic because it was not possible to generate correct signature for them. But now there is other reason of using Argument Clinic -- it generates more efficient code for parsing arguments. The proposed PR converts str(), bytes() and bytearray() to Argument Clinic but generated docstrings are not used. 
$ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "str()" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 81.9 ns +- 4.5 ns -> [/home/serhiy/py/cpython-release/python] 60.0 ns +- 1.9 ns: 1.36x faster (-27%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "str('abcdefgh')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 121 ns +- 3 ns -> [/home/serhiy/py/cpython-release/python] 87.6 ns +- 2.8 ns: 1.38x faster (-28%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "str(12345)" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 188 ns +- 8 ns -> [/home/serhiy/py/cpython-release/python] 149 ns +- 5 ns: 1.27x faster (-21%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "str(b'abcdefgh', 'ascii')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 210 ns +- 7 ns -> [/home/serhiy/py/cpython-release/python] 164 ns +- 6 ns: 1.28x faster (-22%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "str(b'abcdefgh', 'ascii', 'strict')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 222 ns +- 9 ns -> [/home/serhiy/py/cpython-release/python] 170 ns +- 6 ns: 1.30x faster (-23%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "str(b'abcdefgh', encoding='ascii', errors='strict')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 488 ns +- 20 ns -> [/home/serhiy/py/cpython-release/python] 306 ns +- 5 ns: 1.59x faster (-37%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytes()" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 85.1 ns +- 2.2 ns -> [/home/serhiy/py/cpython-release/python] 60.2 ns +- 2.3 ns: 1.41x faster (-29%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytes(8)" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 160 ns +- 5 ns -> [/home/serhiy/py/cpython-release/python] 115 ns +- 4 ns: 1.39x faster (-28%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytes(b'abcdefgh')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 131 ns +- 6 ns -> [/home/serhiy/py/cpython-release/python] 91.0 ns +- 2.8 ns: 1.43x faster (-30%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytes('abcdefgh', 'ascii')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 190 ns +- 6 ns -> [/home/serhiy/py/cpython-release/python] 144 ns +- 5 ns: 1.32x faster (-24%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytes('abcdefgh', 'ascii', 'strict')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 214 ns +- 5 ns -> [/home/serhiy/py/cpython-release/python] 156 ns +- 4 ns: 1.37x faster (-27%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytes('abcdefgh', encoding='ascii', errors='strict')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 442 ns +- 9 ns -> [/home/serhiy/py/cpython-release/python] 269 ns +- 8 ns: 1.64x faster (-39%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytearray()" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 93.5 ns +- 2.1 ns -> [/home/serhiy/py/cpython-release/python] 73.1 ns +- 1.8 ns: 1.28x faster (-22%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytearray(8)" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 154 ns +- 3 ns -> [/home/serhiy/py/cpython-release/python] 117 ns +- 3 ns: 1.32x faster (-24%) $ ./python -m pyperf 
timeit -q --compare-to=../cpython-release2/python "bytearray(b'abcdefgh')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 164 ns +- 5 ns -> [/home/serhiy/py/cpython-release/python] 131 ns +- 2 ns: 1.25x faster (-20%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytearray('abcdefgh', 'ascii')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 239 ns +- 7 ns -> [/home/serhiy/py/cpython-release/python] 193 ns +- 4 ns: 1.24x faster (-19%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytearray('abcdefgh', 'ascii', 'strict')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 260 ns +- 5 ns -> [/home/serhiy/py/cpython-release/python] 207 ns +- 10 ns: 1.26x faster (-21%) $ ./python -m pyperf timeit -q --compare-to=../cpython-release2/python "bytearray('abcdefgh', encoding='ascii', errors='strict')" Mean +- std dev: [/home/serhiy/py/cpython-release2/python] 505 ns +- 11 ns -> [/home/serhiy/py/cpython-release/python] 322 ns +- 7 ns: 1.57x faster (-36%) ---------- components: Argument Clinic, Installation, Unicode messages: 373906 nosy: ezio.melotti, larry, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Convert str(), bytes() and bytearray() to Argument Clinic type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 18 14:38:21 2020 From: report at bugs.python.org (Howard A. Landman) Date: Sat, 18 Jul 2020 18:38:21 +0000 Subject: [New-bugs-announce] [issue41335] free(): invalid pointer in list_ass_item() in Python 3.7.3 Message-ID: <1595097501.98.0.660179014456.issue41335@roundup.psfhosted.org> New submission from Howard A. Landman : I have a program qtd.py that reliably dies with free(): invalid pointer after about 13 hours of runtime (on a RPi3B+). This is hard to debug because (1) try:except: will not catch SIGABRT (2) signal.signal(signal.SIGABRT, sigabrt_handler) also fails to catch SIGABRT (3) python3-dbg does not work with libraries like RPi.GPIO and spidev() This happens under both Buster and Stretch, so I don't think it's OS-dependent. I managed to get a core dump and gdb back trace gives: warning: core file may not match specified executable file. [New LWP 10277] [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1". Core was generated by `python3 qtd.py'. Program terminated with signal SIGABRT, Aborted. #0 __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. (gdb) bt #0 __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50 #1 0x76c5e308 in __GI_abort () at abort.c:100 #2 0x76cae51c in __libc_message (action=action at entry=do_abort, fmt=) at ../sysdeps/posix/libc_fatal.c:181 #3 0x76cb5044 in malloc_printerr (str=) at malloc.c:5341 #4 0x76cb6d50 in _int_free (av=0x76d917d4 , p=0x43417c , have_lock=) at malloc.c:4165 #5 0x001b3bb4 in list_ass_item (a=, i=, v=, a=, i=, v=) at ../Objects/listobject.c:739 Backtrace stopped: Cannot access memory at address 0x17abeff8 So, as far as I can tell, it looks like list_ass_item() is where the error happens. I'm willing to help debug and maybe even fix this, but is there any way to remove any of the roadblocks (1-3) above? NOTE: qtd.py will exit unless there is a Texas Instruments TDC7201 chip attached to the RPI's SPI pins. 
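One added note from the editor (a suggestion, not something from the report): the stdlib faulthandler module installs low-level handlers for SIGABRT as well as SIGSEGV and can dump the Python traceback at the moment of the abort, without needing python3-dbg:

```python
# faulthandler handles SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL and dumps
# the Python traceback of all threads when one of them fires.
import faulthandler
import sys

faulthandler.enable(file=sys.stderr, all_threads=True)

# Equivalent without touching the script: run it as
#   python3 -X faulthandler qtd.py
```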
---------- components: Interpreter Core hgrepos: 389 messages: 373911 nosy: Howard_Landman priority: normal severity: normal status: open title: free(): invalid pointer in list_ass_item() in Python 3.7.3 type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 18 15:05:41 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 18 Jul 2020 19:05:41 +0000 Subject: [New-bugs-announce] [issue41336] Random segfaults during zoneinfo object creation stopped using Ctrl-C Message-ID: <1595099141.79.0.268322250341.issue41336@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : I get segfaults on random basis in the below program I wrote for issue41321 while trying to use Ctrl-C to stop the program. I used faulthandler but couldn't get to the exact case where it occurs. I tested it on Python 3.9 from deadsnakes ppa and it crashes. I tried using gdb to get a backtrace but using ctrl-C is caught by gdb. I tried using "handle SIGINT noprint pass" to ensure the ctrl-C is passed to the program by gdb and attached below is a backtrace during a sample segfault. I am wording it as during zoneinfo creation but isolated object creation under an infinite loop where ctrl-c is passed doesn't trigger this. Feel free to modify this as needed and also to add more since I am not good with debugging C issues. import datetime import zoneinfo timezones = zoneinfo.available_timezones() for year in range(1900, 2020): for tz in timezones: d1 = datetime.datetime(year, 5, 4, 7, 13, 22, tzinfo=zoneinfo.ZoneInfo(tz)).timestamp() d2 = datetime.datetime(year, 5, 4, 0, 0, 0, tzinfo=zoneinfo.ZoneInfo(tz)).timestamp() diff = d1 - d2 if diff != 26002: print(year, diff, tz) for tz in timezones: d1 = datetime.datetime(year, 5, 2, 7, 13, 22, tzinfo=zoneinfo.ZoneInfo(tz)).timestamp() d2 = datetime.datetime(year, 5, 2, 0, 0, 0, tzinfo=zoneinfo.ZoneInfo(tz)).timestamp() diff = d1 - d2 if diff != 26002: print("Diff using second day", year, diff, tz) $ python3.9 -X faulthandler ../backups/bpo41321.py ^CFatal Python error: Segmentation fault Current thread 0x00007f23bb2b9740 (most recent call first): File "/root/cpython/../backups/bpo41321.py", line 7 in [1] 1234 segmentation fault (core dumped) python3.9 -X faulthandler ../backups/bpo41321.py Debug build on master branch $ ./python -X faulthandler ../backups/bpo41321.py ^Cpython: Objects/typeobject.c:3244: _PyType_Lookup: Assertion `!PyErr_Occurred()' failed. Fatal Python error: Aborted Current thread 0x00007f28c2256080 (most recent call first): File "/root/cpython/../backups/bpo41321.py", line 14 in [1] 1386 abort (core dumped) ./python -X faulthandler ../backups/bpo41321.py (gdb) r Starting program: /root/cpython/python ../backups/bpo41321.py [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". ^C Program received signal SIGSEGV, Segmentation fault. 
zoneinfo_no_cache (cls=, args=, kwargs=) at /root/cpython/Modules/_zoneinfo.c:380 380 PyObject *out = zoneinfo_new_instance(cls, key); (gdb) bt #0 zoneinfo_no_cache (cls=, args=, kwargs=) at /root/cpython/Modules/_zoneinfo.c:380 #1 0x00007ffff65ff826 in zoneinfo_new ( type=0x7ffff6801480 , args=, kw=) at /root/cpython/Modules/_zoneinfo.c:274 #2 0x000055555562a635 in type_call ( type=type at entry=0x7ffff6801480 , args=args at entry=0x7ffff6fb7520, kwds=kwds at entry=0x0) at Objects/typeobject.c:1020 #3 0x00005555555c3018 in _PyObject_MakeTpCall ( tstate=tstate at entry=0x555555b2c350, callable=callable at entry=0x7ffff6801480 , args=args at entry=0x7ffff6f20df8, nargs=, keywords=0x0) at Objects/call.c:191 #4 0x00005555555ab48b in _PyObject_VectorcallTstate (kwnames=, nargsf=, args=, callable=, tstate=) at ./Include/cpython/abstract.h:112 #5 PyObject_Vectorcall (kwnames=, nargsf=, args=, callable=) at ./Include/cpython/abstract.h:123 #6 call_function (tstate=tstate at entry=0x555555b2c350, pp_stack=pp_stack at entry=0x7fffffffdf30, oparg=, kwnames=kwnames at entry=0x0) at Python/ceval.c:5121 #7 0x00005555555b17dd in _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at Python/ceval.c:3516 #8 0x00005555556864f5 in _PyEval_EvalFrame (throwflag=0, f=0x7ffff6f20c40, tstate=0x555555b2c350) at ./Include/internal/pycore_ceval.h:40 #9 _PyEval_EvalCode (qualname=0x7ffff6e860f0, name=0x7ffff6e860f0, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwstep=2, kwcount=0, kwargs=0x0, kwnames=0x0, argcount=0, args=0x0, locals=0x0, globals=0x555555b2c350, _co=0x7ffff6ee6d40, tstate=0x555555b2c350) at Python/ceval.c:4376 #10 _PyEval_EvalCodeWithName (qualname=0x0, name=0x0, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwstep=2, kwcount=0, kwargs=0x0, kwnames=0x0, argcount=0, args=0x0, locals=locals at entry=0x0, globals=globals at entry=0x555555b2c350, _co=0x7ffff6ee6d40, _co at entry=0x7ffff6f20da0) at Python/ceval.c:4408 #11 PyEval_EvalCodeEx (closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=0x0, argcount=0, args=0x0, locals=locals at entry=0x0, globals=globals at entry=0x555555b2c350, _co=0x7ffff6ee6d40, _co at entry=0x7ffff6f20da0) at Python/ceval.c:4424 ---Type to continue, or q to quit--- #12 PyEval_EvalCode (co=co at entry=0x7ffff6ee6d40, globals=globals at entry=0x7ffff6f4e440, locals=locals at entry=0x7ffff6f4e440) at Python/ceval.c:857 #13 0x00005555556d69c6 in run_eval_code_obj (locals=0x7ffff6f4e440, globals=0x7ffff6f4e440, co=0x7ffff6ee6d40, tstate=0x555555b2c350) at Python/pythonrun.c:1124 #14 run_mod (arena=0x7ffff6fa1910, flags=0x7fffffffe128, locals=0x7ffff6f4e440, globals=0x7ffff6f4e440, filename=0x7ffff6e815d0, mod=) at Python/pythonrun.c:1145 #15 PyRun_FileExFlags (fp=fp at entry=0x555555b89b40, filename_str=filename_str at entry=0x7ffff6f5a050 "/root/cpython/../backups/bpo41321.py", start=start at entry=257, globals=globals at entry=0x7ffff6f4e440, locals=locals at entry=0x7ffff6f4e440, closeit=closeit at entry=1, flags=0x7fffffffe128) at Python/pythonrun.c:1062 #16 0x00005555556d6b9d in PyRun_SimpleFileExFlags (fp=fp at entry=0x555555b89b40, filename=, closeit=closeit at entry=1, flags=flags at entry=0x7fffffffe128) at Python/pythonrun.c:397 #17 0x00005555556d70e3 in PyRun_AnyFileExFlags (fp=fp at entry=0x555555b89b40, filename=, closeit=closeit at entry=1, flags=flags at entry=0x7fffffffe128) at Python/pythonrun.c:80 #18 0x00005555555b52a0 in pymain_run_file (cf=0x7fffffffe128, config=0x555555b28db0) at Modules/main.c:369 #19 pymain_run_python (exitcode=0x7fffffffe11c) at 
Modules/main.c:594 #20 Py_RunMain () at Modules/main.c:673 #21 0x00005555555b57d6 in pymain_main (args=0x7fffffffe210) at Modules/main.c:703 #22 Py_BytesMain (argc=, argv=) at Modules/main.c:727 #23 0x00007ffff7041b97 in __libc_start_main (main=0x5555555aa2b0
, argc=2, argv=0x7fffffffe368, init=, fini=, rtld_fini=, stack_end=0x7fffffffe358) at ../csu/libc-start.c:310 #24 0x00005555555b446a in _start () ---------- messages: 373912 nosy: belopolsky, p-ganssle, serhiy.storchaka, xtreak priority: normal severity: normal status: open title: Random segfaults during zoneinfo object creation stopped using Ctrl-C type: crash versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 19 03:14:14 2020 From: report at bugs.python.org (Justin Hodder) Date: Sun, 19 Jul 2020 07:14:14 +0000 Subject: [New-bugs-announce] [issue41337] strangnedd with the parser Message-ID: <1595142854.93.0.625652406.issue41337@roundup.psfhosted.org> New submission from Justin Hodder : Here a colab that demostrates the bug https://colab.research.google.com/drive/1OWSEoV7Wx-EBA_2IprNZoASNvXIky3iC?usp=sharing ++++the following code gives"received an invalid combination of arguments - got (Tensor, Tensor), but expected one of:" import torch torch.nn.Linear( torch.as_strided(torch.ones((4,6*5*5*70*10)),(4,6*5*5,10),(4,2,-1)) ,torch.ones(10,6*5*6)) #(4,2,3),(2,3,1)) ++++The following code gives "TypeError: ones(): argument 'size' (position 1) must be tuple of ints, not str import torch" import torch #torch.nn.Linear( torch.as_strided(torch.ones((4,6*5*5*70*10)),(4,6*5*5,10),(4,2,-1)) ,torch.ones(10,6*5*6)#) #(4,2,3),(2,3,1)) ++++ ---------- components: Interpreter Core files: pythos_parser_bug.ipynb messages: 373937 nosy: Justin Hodder priority: normal severity: normal status: open title: strangnedd with the parser versions: Python 3.6 Added file: https://bugs.python.org/file49325/pythos_parser_bug.ipynb _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 19 03:41:32 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sun, 19 Jul 2020 07:41:32 +0000 Subject: [New-bugs-announce] [issue41338] test_decimal emits DeprecationWarning due to PyUnicode_FromUnicode(NULL, size) Message-ID: <1595144492.98.0.788138334745.issue41338@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : Probably related to issue36346 ./python -Wall -m test test_decimal 0:00:00 load avg: 3.07 Run tests sequentially 0:00:00 load avg: 3.07 [1/1] test_decimal /root/cpython/Lib/test/test_decimal.py:592: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead s = _testcapi.unicode_legacy_string('9.999999') /root/cpython/Lib/test/test_decimal.py:592: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead s = _testcapi.unicode_legacy_string('9.999999') /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) 
/root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2834: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead s = _testcapi.unicode_legacy_string('ROUND_\x00UP') /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2828: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead c.rounding = _testcapi.unicode_legacy_string(rnd) /root/cpython/Lib/test/test_decimal.py:2834: DeprecationWarning: PyUnicode_FromUnicode(NULL, size) is deprecated; use PyUnicode_New() instead s = _testcapi.unicode_legacy_string('ROUND_\x00UP') == Tests result: SUCCESS == 1 test OK. 
Total duration: 14.8 sec Tests result: SUCCESS ---------- components: Tests messages: 373939 nosy: inada.naoki, xtreak priority: normal severity: normal status: open title: test_decimal emits DeprecationWarning due to PyUnicode_FromUnicode(NULL, size) type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 19 06:03:55 2020 From: report at bugs.python.org (Alan Iwi) Date: Sun, 19 Jul 2020 10:03:55 +0000 Subject: [New-bugs-announce] [issue41339] make queue.Queue objects iterable Message-ID: <1595153035.37.0.668957068395.issue41339@roundup.psfhosted.org> New submission from Alan Iwi : It is possible to make `queue.Queue` and `queue.SimpleQueue` objects iterable by adding simple `__iter__` and `__next__` methods. This is to suggest adding these methods to the existing `Queue` and `SimpleQueue` so that they are iterable by default. ``` class Fifo(SimpleQueue): def __iter__(self): return self def __next__(self): if not self.empty(): return self.get() raise StopIteration ``` ---------- components: Library (Lib) messages: 373950 nosy: alaniwi priority: normal severity: normal status: open title: make queue.Queue objects iterable type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 19 06:36:04 2020 From: report at bugs.python.org (fj92f3jj923f923) Date: Sun, 19 Jul 2020 10:36:04 +0000 Subject: [New-bugs-announce] [issue41340] Not very good strcpy implementation in cpython/Python/strdup.c Message-ID: <1595154964.78.0.522638535626.issue41340@roundup.psfhosted.org> New submission from fj92f3jj923f923 : Hi, all! strdup implementation inside cpython/Python/strdup.c is not the best one. It calls strlen + strcpy, which is the same as calling strlen twice + memcpy. So I replaced it by the call of strlen + memcpy. It is easy to look any implementation in any library. 
Here for example: https://code.woboq.org/userspace/glibc/string/strdup.c.html So I fixed it here: https://github.com/python/cpython/pull/21544 ---------- components: C API messages: 373955 nosy: fj92f3jj923f923 priority: normal severity: normal status: open title: Not very good strcpy implementation in cpython/Python/strdup.c type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 19 07:28:27 2020 From: report at bugs.python.org (Joseph Perez) Date: Sun, 19 Jul 2020 11:28:27 +0000 Subject: [New-bugs-announce] [issue41341] Recursive evaluation of ForwardRef (and PEP 563) Message-ID: <1595158107.48.0.291386757001.issue41341@roundup.psfhosted.org> New submission from Joseph Perez : (This issue is already broached in https://bugs.python.org/issue38605, and in some way in https://bugs.python.org/issue35834, but only as a secondary subject; that's why I've opened a ticket on this particular issue.) A ForwardRef nested inside another ForwardRef is not currently evaluated by get_type_hints; only the first level is, as illustrated in these examples:

```python
from typing import ForwardRef, Optional, get_type_hints

def func(a: "Optional[\"int\"]"):
    pass

assert get_type_hints(func)["a"] == Optional[ForwardRef("int")]
# one would expect get_type_hints(func)["a"] == Optional[int]
```

```python
from __future__ import annotations
from typing import ForwardRef, Optional, get_type_hints

def func(a: Optional["int"]):
    pass

assert get_type_hints(func)["a"] == Optional[ForwardRef("int")]
# one would expect get_type_hints(func)["a"] == Optional[int]
# (which is the case without the import of __future__.annotations!)
```

On the one hand, I find this behavior quite counter-intuitive; I rather think of ForwardRef as kind of internal (and wonder why there is no leading underscore, like _GenericAlias where it is used), and I don't understand the purpose of exposing it as the result of the public API get_type_hints. By the way, even if ForwardRef can be obtained by retrieving annotations without get_type_hints, stringified annotations (especially since PEP 563) make get_type_hints kind of mandatory, and thus make ForwardRef disappear (only at the first level, though?). On the other hand, the second example shows that adoption of postponed annotations can change the result of get_type_hints; several libraries relying on get_type_hints could be broken. Another issue raised here is that, if these ForwardRefs are not evaluated by get_type_hints, how is the user supposed to evaluate them? It would require retrieving some globalns/localns? Too bad, that is exactly what get_type_hints already does. And if the ForwardRef is in a class field, the class globalns/localns will have to be kept somewhere while waiting to encounter these stray ForwardRefs; that is feasible, but really tedious. Agreeing with Guido van Rossum (https://bugs.python.org/msg370232), this behavior could be easily "fixed" in get_type_hints.
Actually, there would be only one line to change in ForwardRef._evaluate:

```python
# from
self.__forward_value__ = _type_check(
    eval(self.__forward_code__, globalns, localns),
    "Forward references must evaluate to types.",
    is_argument=self.__forward_is_argument__)
# to
self.__forward_value__ = _eval_type(
    _type_check(
        eval(self.__forward_code__, globalns, localns),
        "Forward references must evaluate to types.",
        is_argument=self.__forward_is_argument__,
    ),
    globalns,
    localns,
)
```

Besides solving the "double ForwardRef" issue mentioned in https://bugs.python.org/issue38605, this fix would also resolve https://bugs.python.org/issue35834 at the same time, raising NameError in case of an unknown ForwardRef with postponed annotations. ---------- messages: 373960 nosy: BTaskaya, eric.smith, gvanrossum, joperez, levkivskyi, lukasz.langa, vstinner priority: normal severity: normal status: open title: Recursive evaluation of ForwardRef (and PEP 563) type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 19 08:38:29 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 19 Jul 2020 12:38:29 +0000 Subject: [New-bugs-announce] [issue41342] Convert int.__round__ to Argument Clinic Message-ID: <1595162309.27.0.341065260309.issue41342@roundup.psfhosted.org> New submission from Serhiy Storchaka : int.__round__ was not converted to Argument Clinic because it is not possible to express a correct signature for it in Python. But now we can at least make Argument Clinic not produce an incorrect signature. And converting to Argument Clinic has a performance benefit.

$ ./python -m pyperf timeit -q --compare-to=../cpython-baseline/python "round(12345)"
Mean +- std dev: [/home/serhiy/py/cpython-baseline/python] 123 ns +- 6 ns -> [/home/serhiy/py/cpython-release/python] 94.4 ns +- 2.4 ns: 1.31x faster (-23%)
$ ./python -m pyperf timeit -q --compare-to=../cpython-baseline/python "round(12345, 0)"
Mean +- std dev: [/home/serhiy/py/cpython-baseline/python] 159 ns +- 4 ns -> [/home/serhiy/py/cpython-release/python] 98.6 ns +- 2.4 ns: 1.61x faster (-38%)
$ ./python -m pyperf timeit -q --compare-to=../cpython-baseline/python "round(12345, -2)"
Mean +- std dev: [/home/serhiy/py/cpython-baseline/python] 585 ns +- 9 ns -> [/home/serhiy/py/cpython-release/python] 534 ns +- 14 ns: 1.09x faster (-9%)

---------- components: Argument Clinic, Interpreter Core messages: 373963 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Convert int.__round__ to Argument Clinic type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 19 10:49:49 2020 From: report at bugs.python.org (Dong-hee Na) Date: Sun, 19 Jul 2020 14:49:49 +0000 Subject: [New-bugs-announce] [issue41343] Convert complex methods to Argument Clinic Message-ID: <1595170189.2.0.291274613507.issue41343@roundup.psfhosted.org> New submission from Dong-hee Na :

Mean +- std dev: [master_format] 3.54 us +- 0.06 us -> [clinic_format] 3.29 us +- 0.07 us: 1.08x faster (-7%)
Mean +- std dev: [master_conjugate] 3.56 us +- 0.09 us -> [clinic_conjugate] 3.32 us +- 0.06 us: 1.07x faster (-7%)
Mean +- std dev: [master__getnewargs__] 425 ns +- 18 ns -> [clinic__getnewargs__] 412 ns +- 7 ns: 1.03x faster (-3%)

---------- components: Argument Clinic messages: 373966 nosy: corona10, larry priority: normal severity: normal status: open title:
Convert complex methods to Argument Clinic type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 03:35:36 2020 From: report at bugs.python.org (Vinay Sharma) Date: Mon, 20 Jul 2020 07:35:36 +0000 Subject: [New-bugs-announce] [issue41344] SharedMemory crash when size is 0 Message-ID: <1595230536.74.0.865685098947.issue41344@roundup.psfhosted.org> New submission from Vinay Sharma : On running this: shm = SharedMemory(create=True, size=0) I get the following error: Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/multiprocessing/shared_memory.py", line 111, in __init__ self._mmap = mmap.mmap(self._fd, size) ValueError: cannot mmap an empty file Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in from apport.report import Report File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in import apport.fileutils File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in from apport.packaging_impl import impl as packaging File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 24, in import apt File "/usr/lib/python3/dist-packages/apt/__init__.py", line 23, in import apt_pkg ModuleNotFoundError: No module named 'apt_pkg' Original exception was: Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/multiprocessing/shared_memory.py", line 111, in __init__ self._mmap = mmap.mmap(self._fd, size) ValueError: cannot mmap an empty file >>> shm = SharedMemory(create=True, size=0) Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/multiprocessing/shared_memory.py", line 111, in __init__ self._mmap = mmap.mmap(self._fd, size) ValueError: cannot mmap an empty file Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in from apport.report import Report File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in import apport.fileutils File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in from apport.packaging_impl import impl as packaging File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 24, in import apt File "/usr/lib/python3/dist-packages/apt/__init__.py", line 23, in import apt_pkg ModuleNotFoundError: No module named 'apt_pkg' Original exception was: Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/multiprocessing/shared_memory.py", line 111, in __init__ self._mmap = mmap.mmap(self._fd, size) ValueError: cannot mmap an empty file This can be simply resolved by adding a check when size passed is 0, so that a shared memory segment is never created. Currently, the following is coded: if not size >= 0: raise ValueError("'size' must be a positive integer") I believe this should be changed to: if not size > 0: raise ValueError("'size' must be a positive integer") As zero is not a positive and integer and is causing problems. 
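A user-side guard capturing the check proposed above (a minimal sketch only; `create_shm` is a hypothetical helper name, not part of the multiprocessing.shared_memory API):

```python
from multiprocessing.shared_memory import SharedMemory

def create_shm(size: int) -> SharedMemory:
    # Mirror the suggested stdlib check (`if not size > 0`) so that a
    # zero size fails with a clear error instead of inside mmap.mmap().
    if size <= 0:
        raise ValueError("'size' must be a positive integer")
    return SharedMemory(create=True, size=size)

shm = create_shm(1024)   # works
shm.close()
shm.unlink()
# create_shm(0) raises ValueError("'size' must be a positive integer")
```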
---------- components: Library (Lib) messages: 373990 nosy: vinay0410 priority: normal severity: normal status: open title: SharedMemory crash when size is 0 type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 07:43:46 2020 From: report at bugs.python.org (Michal Arbet) Date: Mon, 20 Jul 2020 11:43:46 +0000 Subject: [New-bugs-announce] [issue41345] Remote end closed connection without response Message-ID: <1595245426.33.0.812684814237.issue41345@roundup.psfhosted.org> New submission from Michal Arbet : Hi, I'm not sure if this is really python bug, but I hope that you can check the issue. Issue is that from time to time i'm getting exception from python when sending request to server which has http keepalive option turned on. Requests send a request but in few miliseconds apache2 server is closing persistent connection by sending FIN packet which generate traceback. I can reproduce it by following simple script. #!/usr/bin/python3 import requests from time import sleep import logging logging.basicConfig(level=logging.DEBUG) s = requests.Session() s.verify = False # self-signed cert counter = 0 txt = "test" while True: counter = counter + 1 s.post('http://localhost', data={counter:txt}) sleep(5) Everything is working fine, but from time to time I get following traceback. When FIN is received right after request was sent. michalarbet at pixla:~/work$ ./request_test.py DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:80 DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped 
connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped 
connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 DEBUG:urllib3.connectionpool:Resetting dropped connection: localhost DEBUG:urllib3.connectionpool:http://localhost:80 "POST / HTTP/1.1" 200 0 Traceback (most recent call last): File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen httplib_response = self._make_request( File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 421, in _make_request six.raise_from(e, None) File "", line 3, in raise_from File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 416, in _make_request httplib_response = conn.getresponse() File "/usr/lib/python3.8/http/client.py", line 1332, in getresponse response.begin() File "/usr/lib/python3.8/http/client.py", line 303, in begin version, status, reason = self._read_status() File "/usr/lib/python3.8/http/client.py", line 272, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/michalarbet/.local/lib/python3.8/site-packages/requests/adapters.py", line 439, in send resp = conn.urlopen( File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 719, in urlopen retries = retries.increment( File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 400, in increment raise six.reraise(type(error), error, _stacktrace) File "/usr/lib/python3/dist-packages/six.py", line 702, in reraise raise value.with_traceback(tb) File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen httplib_response = self._make_request( File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 421, in _make_request six.raise_from(e, None) File "", line 3, in raise_from File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 416, in _make_request httplib_response = conn.getresponse() File "/usr/lib/python3.8/http/client.py", line 1332, in getresponse response.begin() File "/usr/lib/python3.8/http/client.py", line 303, in begin version, status, reason = self._read_status() File "/usr/lib/python3.8/http/client.py", line 272, in _read_status raise RemoteDisconnected("Remote end closed connection without" urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./request_test.py", line 26, in s.post('http://localhost', data={counter:txt}) File "/home/michalarbet/.local/lib/python3.8/site-packages/requests/sessions.py", line 578, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/home/michalarbet/.local/lib/python3.8/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/home/michalarbet/.local/lib/python3.8/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/home/michalarbet/.local/lib/python3.8/site-packages/requests/adapters.py", line 498, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) Apache using keepalive. 
/etc/apache2/apache2.conf:KeepAliveTimeout 5 /etc/apache2/apache2.conf:MaxKeepAliveRequests 100 /etc/apache2/apache2.conf:KeepAlive On Sending also pcap file from different reproduce test, but behaviour is same. ---------- files: mycap.pcap messages: 374000 nosy: Michal Arbet priority: normal severity: normal status: open title: Remote end closed connection without response versions: Python 3.8 Added file: https://bugs.python.org/file49326/mycap.pcap _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 08:56:10 2020 From: report at bugs.python.org (Felix Yan) Date: Mon, 20 Jul 2020 12:56:10 +0000 Subject: [New-bugs-announce] [issue41346] test_thousand and compileall hangs on riscv64 Message-ID: <1595249770.4.0.20860830279.issue41346@roundup.psfhosted.org> New submission from Felix Yan : In my riscv64 build, test_thousand (test.test_multiprocessing_forkserver.WithProcessesTestBarrier) always hangs on some locking thing, and the compileall part during installation hangs the same way. I am not sure if it's toolchain related or something else, though. Some relevant versions: Linux riscv64-unknown-linux-gnu Python 3.8.4 glibc 2.31 gcc 10.1.0 configure switches: ./configure --prefix=/usr \ --enable-shared \ --with-computed-gotos \ --enable-optimizations \ --with-lto \ --enable-ipv6 \ --with-system-expat \ --with-dbmliborder=gdbm:ndbm \ --with-system-ffi \ --with-system-libmpdec \ --enable-loadable-sqlite-extensions \ --without-ensurepip When Ctrl-C: test_thousand (test.test_multiprocessing_forkserver.WithProcessesTestBarrier) ... ^CProcess Process-1305: Traceback (most recent call last): File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/build/python/src/Python-3.8.4/Lib/test/_test_multiprocessing.py", line 1970, in _test_thousand_f barrier.wait() File "/build/python/src/Python-3.8.4/Lib/threading.py", line 610, in wait self._enter() # Block while the barrier drains. File "/build/python/src/Python-3.8.4/Lib/threading.py", line 631, in _enter self._cond.wait() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/synchronize.py", line 261, in wait return self._wait_semaphore.acquire(True, timeout) KeyboardInterrupt Process Process-1304: Traceback (most recent call last): File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/build/python/src/Python-3.8.4/Lib/test/_test_multiprocessing.py", line 1970, in _test_thousand_f barrier.wait() File "/build/python/src/Python-3.8.4/Lib/threading.py", line 610, in wait self._enter() # Block while the barrier drains. 
File "/build/python/src/Python-3.8.4/Lib/threading.py", line 631, in _enter self._cond.wait() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/synchronize.py", line 261, in wait return self._wait_semaphore.acquire(True, timeout) KeyboardInterrupt Process Process-1306: Traceback (most recent call last): File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/build/python/src/Python-3.8.4/Lib/test/_test_multiprocessing.py", line 1970, in _test_thousand_f barrier.wait() File "/build/python/src/Python-3.8.4/Lib/threading.py", line 610, in wait self._enter() # Block while the barrier drains. File "/build/python/src/Python-3.8.4/Lib/threading.py", line 631, in _enter self._cond.wait() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/synchronize.py", line 261, in wait return self._wait_semaphore.acquire(True, timeout) KeyboardInterrupt Process Process-1302: Traceback (most recent call last): File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/build/python/src/Python-3.8.4/Lib/test/_test_multiprocessing.py", line 1970, in _test_thousand_f barrier.wait() File "/build/python/src/Python-3.8.4/Lib/threading.py", line 610, in wait self._enter() # Block while the barrier drains. File "/build/python/src/Python-3.8.4/Lib/threading.py", line 631, in _enter self._cond.wait() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/synchronize.py", line 261, in wait return self._wait_semaphore.acquire(True, timeout) KeyboardInterrupt Process Process-1303: Traceback (most recent call last): File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/build/python/src/Python-3.8.4/Lib/test/_test_multiprocessing.py", line 1970, in _test_thousand_f barrier.wait() File "/build/python/src/Python-3.8.4/Lib/threading.py", line 610, in wait self._enter() # Block while the barrier drains. File "/build/python/src/Python-3.8.4/Lib/threading.py", line 631, in _enter self._cond.wait() File "/build/python/src/Python-3.8.4/Lib/multiprocessing/synchronize.py", line 261, in wait return self._wait_semaphore.acquire(True, timeout) KeyboardInterrupt Warning -- multiprocessing.process._dangling was modified by test_multiprocessing_forkserver Before: set() After: {, , , , } test_multiprocessing_fork passed in 1 min 46 sec == Tests result: FAILURE, INTERRUPTED == Test suite interrupted by signal SIGINT. 
---------- components: Library (Lib) files: configure.output messages: 374005 nosy: felixonmars priority: normal severity: normal status: open title: test_thousand and compileall hangs on riscv64 type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49327/configure.output _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 10:46:30 2020 From: report at bugs.python.org (Dong-hee Na) Date: Mon, 20 Jul 2020 14:46:30 +0000 Subject: [New-bugs-announce] [issue41347] collections.deque.count performance enhancement Message-ID: <1595256390.45.0.494229030052.issue41347@roundup.psfhosted.org> New submission from Dong-hee Na : Same situation as: https://bugs.python.org/issue39425 Mean +- std dev: [master_count] 946 ns +- 14 ns -> [ac_count] 427 ns +- 7 ns: 2.22x faster (-55%) ---------- assignee: corona10 components: Extension Modules messages: 374010 nosy: corona10 priority: normal severity: normal status: open title: collections.deque.count performance enhancement type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 11:21:21 2020 From: report at bugs.python.org (Unai Martinez) Date: Mon, 20 Jul 2020 15:21:21 +0000 Subject: [New-bugs-announce] [issue41348] Support replacing global function pointers in a shared library Message-ID: <1595258481.74.0.314238368417.issue41348@roundup.psfhosted.org> New submission from Unai Martinez : As discussed in https://stackoverflow.com/questions/62947076/replace-a-function-pointer-in-a-shared-library-with-ctypes, it seems currently not possible to replace an existing global variable in a shared library which contains a function pointer, with a callback defined in Python (through ctypes). However, it is possible to replace global variables of other types, such as `int`. ---------- components: ctypes messages: 374013 nosy: Unai Martinez priority: normal severity: normal status: open title: Support replacing global function pointers in a shared library type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 11:38:15 2020 From: report at bugs.python.org (Tim Z) Date: Mon, 20 Jul 2020 15:38:15 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue41349=5D_idle_not_going_fu?= =?utf-8?q?ll_screen_when_I_rotate_screen_90=C2=B0_on_mac?= Message-ID: <1595259495.1.0.547279913373.issue41349@roundup.psfhosted.org> New submission from Tim Z : It refuses to go full screen when I rotate screen 90? on mac ---------- components: macOS messages: 374014 nosy: Tim Z, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: idle not going full screen when I rotate screen 90? on mac type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 12:09:49 2020 From: report at bugs.python.org (Nick Henderson) Date: Mon, 20 Jul 2020 16:09:49 +0000 Subject: [New-bugs-announce] [issue41350] Use of zipfile.Path causes attempt to write after ZipFile is closed Message-ID: <1595261389.35.0.991424986164.issue41350@roundup.psfhosted.org> New submission from Nick Henderson : In both Python 3.8.3 and 3.9.0b3, using zipfile.Path to write a file in a context manager results in an attempt to write to the zip file after it is closed. 
In Python 3.9.0b3: import io from zipfile import ZipFile, Path def make_zip(): """Make zip file and return bytes.""" bytes_io = io.BytesIO() zip_file = ZipFile(bytes_io, mode="w") zip_path = Path(zip_file, "file-a") # use zipp.Path.open with zip_path.open(mode="wb") as fp: fp.write(b"contents of file-a") zip_file.close() data = bytes_io.getvalue() bytes_io.close() return data zip_data = make_zip() # Exception ignored in: # Traceback (most recent call last): # File "/Users/nick/.pyenv/versions/3.9.0b3/lib/python3.9/zipfile.py", line 1807, in __del__ # self.close() # File "/Users/nick/.pyenv/versions/3.9.0b3/lib/python3.9/zipfile.py", line 1824, in close # self.fp.seek(self.start_dir) # ValueError: I/O operation on closed file. In Python 3.8.3: import io from zipfile import ZipFile, Path def make_zip(): """Make zip file and return bytes.""" bytes_io = io.BytesIO() zip_file = ZipFile(bytes_io, mode="w") zip_path = Path(zip_file, "file-a") # use zipp.Path.open with zip_path.open(mode="w") as fp: fp.write(b"contents of file-a") zip_file.close() data = bytes_io.getvalue() bytes_io.close() return data zip_data = make_zip() # Exception ignored in: # Traceback (most recent call last): # File "/Users/nick/.pyenv/versions/3.8.3/lib/python3.8/zipfile.py", line 1820, in __del__ # self.close() # File "/Users/nick/.pyenv/versions/3.8.3/lib/python3.8/zipfile.py", line 1837, in close # self.fp.seek(self.start_dir) # ValueError: I/O operation on closed file. In the Python 3.8 example, mode="w" is used in the open method on zip_path. ---------- components: Library (Lib) files: zippath_bug_39.py messages: 374015 nosy: Nick Henderson priority: normal severity: normal status: open title: Use of zipfile.Path causes attempt to write after ZipFile is closed type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49329/zippath_bug_39.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 13:28:28 2020 From: report at bugs.python.org (Saumitra Verma) Date: Mon, 20 Jul 2020 17:28:28 +0000 Subject: [New-bugs-announce] [issue41351] IDLE does not close the brackets and does not insert the closing quotes Message-ID: <1595266108.62.0.449571815495.issue41351@roundup.psfhosted.org> New submission from Saumitra Verma : This feature must be added ---------- assignee: terry.reedy components: IDLE messages: 374021 nosy: Saumitra Verma, terry.reedy priority: normal severity: normal status: open title: IDLE does not close the brackets and does not insert the closing quotes type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 17:14:00 2020 From: report at bugs.python.org (Ziyi Wang) Date: Mon, 20 Jul 2020 21:14:00 +0000 Subject: [New-bugs-announce] [issue41352] FileIO.readall() should raise "UnsupportedOperation" when in "w" mode Message-ID: <1595279640.58.0.284092405556.issue41352@roundup.psfhosted.org> New submission from Ziyi Wang : Here are the two test cases: the one with FileIO.readall() fails def testReadWithWritingMode(self): r, w = os.pipe() w = os.fdopen(w, "w") w.write("hello") w.close() with io.FileIO(r, mode="w") as f: with self.assertRaises(_io.UnsupportedOperation): f.read() def testReadallWithWritingMode(self): r, w = os.pipe() w = os.fdopen(w, "w") w.write("hello") w.close() with io.FileIO(r, mode="w") as f: with 
self.assertRaises(_io.UnsupportedOperation): f.readall() Since FileIO.read() raises "UnsupportedOperation" in "w" mode, I expect FileIO.readall() to do the same. But in fact FileIO.readall() does not check whether the file is readable and does not raise "UnsupportedOperation" in "w" mode. I'm happy to write a pull request if you want. ---------- components: IO, Library (Lib) messages: 374027 nosy: Ziyi Wang priority: normal severity: normal status: open title: FileIO.readall() should raise "UnsupportedOperation" when in "w" mode type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 20 21:59:17 2020 From: report at bugs.python.org (Joannah Nanjekye) Date: Tue, 21 Jul 2020 01:59:17 +0000 Subject: [New-bugs-announce] [issue41353] Indicate supported sound header formats Message-ID: <1595296757.63.0.289277741355.issue41353@roundup.psfhosted.org> New submission from Joannah Nanjekye : The documentation for the sndhdr module does not list the supported file formats. Something like below could help:

+------------+------------------------------------+
| Value      | Sound header format                |
+============+====================================+
| ``'aifc'`` | Compressed Audio Interchange Files |
+------------+------------------------------------+
| ``'aiff'`` | Audio Interchange Files            |
+------------+------------------------------------+
| ``'au'``   | AU Files                           |
+------------+------------------------------------+
| ``'hcom'`` | HCOM Files                         |
+------------+------------------------------------+
| ``'sndr'`` | SNDR Files                         |
+------------+------------------------------------+
| ``'sndt'`` | SNDT Files                         |
+------------+------------------------------------+
| ``'voc'``  | VOC Files                          |
+------------+------------------------------------+
| ``'wav'``  | WAV Files                          |
+------------+------------------------------------+
| ``'8svx'`` | 8SVX Files                         |
+------------+------------------------------------+
| ``'sb'``   | SB Files                           |
+------------+------------------------------------+
| ``'ub'``   | UB Files                           |
+------------+------------------------------------+
| ``'ul'``   | uLAW Audio Files                   |
+------------+------------------------------------+

---------- messages: 374047 nosy: nanjekyejoannah priority: normal severity: normal status: open title: Indicate supported sound header formats _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 03:29:03 2020 From: report at bugs.python.org (Christof Hanke) Date: Tue, 21 Jul 2020 07:29:03 +0000 Subject: [New-bugs-announce] [issue41354] filecmp.cmp documentation does not match actual code Message-ID: <1595316543.83.0.558279011146.issue41354@roundup.psfhosted.org> New submission from Christof Hanke : help(filecmp.cmp) says: """ cmp(f1, f2, shallow=True) Compare two files. Arguments: f1 -- First file name f2 -- Second file name shallow -- Just check stat signature (do not read the files). defaults to True. Return value: True if the files are the same, False otherwise. This function uses a cache for past comparisons and the results, with cache entries invalidated if their stat information changes. The cache may be cleared by calling clear_cache().
""" However, looking at the code, the shallow-argument is taken only into account if the signatures are the same: """ s1 = _sig(os.stat(f1)) s2 = _sig(os.stat(f2)) if s1[0] != stat.S_IFREG or s2[0] != stat.S_IFREG: return False if shallow and s1 == s2: return True if s1[1] != s2[1]: return False outcome = _cache.get((f1, f2, s1, s2)) if outcome is None: outcome = _do_cmp(f1, f2) if len(_cache) > 100: # limit the maximum size of the cache clear_cache() _cache[f1, f2, s1, s2] = outcome return outcome """ Therefore, if I call cmp with shallow=True and the stat-signatures differ, cmp actually does a "deep" compare. This "deep" compare however does not check the stat-signatures. Thus I propose follwing patch: cmp always checks the "full" signature. return True if shallow and above test passed. It does not make sense to me that when doing a "deep" compare, that only the size is compared, but not the mtime. --- filecmp.py.orig 2020-07-16 12:00:57.000000000 +0200 +++ filecmp.py 2020-07-16 12:00:30.000000000 +0200 @@ -52,10 +52,10 @@ s2 = _sig(os.stat(f2)) if s1[0] != stat.S_IFREG or s2[0] != stat.S_IFREG: return False - if shallow and s1 == s2: - return True - if s1[1] != s2[1]: + if s1 != s2: return False + if shallow: + return True outcome = _cache.get((f1, f2, s1, s2)) if outcome is None: ---------- components: Library (Lib) messages: 374054 nosy: chanke priority: normal severity: normal status: open title: filecmp.cmp documentation does not match actual code type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 05:28:04 2020 From: report at bugs.python.org (Ronald Oussoren) Date: Tue, 21 Jul 2020 09:28:04 +0000 Subject: [New-bugs-announce] [issue41355] os.link(..., follow_symlinks=False) without linkat(3) Message-ID: <1595323684.61.0.723587728817.issue41355@roundup.psfhosted.org> New submission from Ronald Oussoren : The code for os.link() seems to ignore follow_symlinks when the linkat(2) function is not available on the platform, which results in unexpected behaviour when "follow_symlinks" is false. 
---------- components: Extension Modules messages: 374057 nosy: ronaldoussoren priority: normal severity: normal stage: test needed status: open title: os.link(..., follow_symlinks=False) without linkat(3) type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 06:00:36 2020 From: report at bugs.python.org (Dennis Sweeney) Date: Tue, 21 Jul 2020 10:00:36 +0000 Subject: [New-bugs-announce] [issue41356] Convert bool.__new__ to argument clinic Message-ID: <1595325636.21.0.753393963516.issue41356@roundup.psfhosted.org> New submission from Dennis Sweeney : Benchmarked on my machine (Windows 10): .\python.bat -m pyperf timeit -s "from collections import deque; x = [[], [1]] * 1_000_000" "deque(map(bool, x), maxlen=0)" --- Win32 build configuration --- Master: 105 ms +- 2 ms With this change: 88.2 ms +- 2.1 ms --- x64 build configuration --- Master: 80.2 ms +- 1.3 ms With this change: 74.2 ms +- 0.5 ms ---------- components: Argument Clinic messages: 374058 nosy: Dennis Sweeney, larry priority: normal severity: normal status: open title: Convert bool.__new__ to argument clinic type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 06:31:29 2020 From: report at bugs.python.org (Jendrik Weise) Date: Tue, 21 Jul 2020 10:31:29 +0000 Subject: [New-bugs-announce] [issue41357] pathlib.Path.resolve incorrect os.path equivalent Message-ID: <1595327489.48.0.566751491612.issue41357@roundup.psfhosted.org> New submission from Jendrik Weise : According to the table at the bottom of the pathlib documentation Path.resolve() is equivalent to os.path.abspath(path). In contrast to the latter Path.resolve() does follow symlinks though, making it more similar to os.path.realpath(path). ---------- assignee: docs at python components: Documentation messages: 374060 nosy: Jendrik Weise, docs at python priority: normal severity: normal status: open title: pathlib.Path.resolve incorrect os.path equivalent type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 06:39:57 2020 From: report at bugs.python.org (hap) Date: Tue, 21 Jul 2020 10:39:57 +0000 Subject: [New-bugs-announce] [issue41358] Unable to uninstall Python launcher using command line Message-ID: <1595327997.84.0.414162156618.issue41358@roundup.psfhosted.org> New submission from hap : Trying to create a command line keeping the following in mind, and my script is python-3.8.4.exe" /quiet InstallLauncherAllUsers=0 Include_pip=1 Include_tcltk=1 Include_test=1 PrependPath=1 1. Python installs in user profile 2. Install pip as a part of installation All good here. So, my script installs python and also installs Python Launcher. So far so good! The problem is, when uninstall python i use command "python-3.8.4.exe" /quiet /uninstall, it does uninstall python, but this does not uninstall Python Launcher. Therefore, i call msiexec.exe /x {339192BE-2520-4C34-89DF-81CF98EB7E6C} /qn+ and it does remove all components from %localappdata%\Package Cache folder, except that it still keeps launcher.msi (588kb) and also entry in appwiz.cpl (around 7.15 MB). What i am trying to achieve is, 1. 
If user wants to uninstall Python, it should uninstall python and python launcher because i don't want to leave away any installer files which an contribute to my system space. I have gone through many links with failed help. Any suggestion to save my 7.2 MB space is greatly appreciated. ---------- components: Installation messages: 374063 nosy: ahmedparvez321 at gmail.com priority: normal severity: normal status: open title: Unable to uninstall Python launcher using command line type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 07:46:47 2020 From: report at bugs.python.org (Krzysiek) Date: Tue, 21 Jul 2020 11:46:47 +0000 Subject: [New-bugs-announce] [issue41359] argparse mutually exclusive group does not exclude in some cases Message-ID: <1595332007.75.0.283126282834.issue41359@roundup.psfhosted.org> New submission from Krzysiek : The documentation for `ArgumentParser.add_mutually_exclusive_group` states: "argparse will make sure that only one of the arguments in the mutually exclusive group was present on the command line". This is not the case in certain circumstances: ```python import argparse parser = argparse.ArgumentParser() group = parser.add_mutually_exclusive_group() group.add_argument('-a') group.add_argument('-b', nargs='?') parser.parse_args('-a a -b'.split()) ``` The above code does not produce any error, even though both exclusive arguments are present. My guess is that the check for mutual exclusion is not done during processing of each command line argument, but rather afterwards. It seems the check only ensures at most one argument from group is not `None`. The issue exists at least on Python 2.7.13, 3.6, 3.7.5, 3.8, and 3.10. ---------- components: Library (Lib) messages: 374065 nosy: kkarbowiak priority: normal severity: normal status: open title: argparse mutually exclusive group does not exclude in some cases type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 09:58:01 2020 From: report at bugs.python.org (Alexey) Date: Tue, 21 Jul 2020 13:58:01 +0000 Subject: [New-bugs-announce] [issue41360] method _tunnel does not allow for a standard proxy authentication solution Message-ID: <1595339881.01.0.101864505877.issue41360@roundup.psfhosted.org> New submission from Alexey : The _tunnel method of the HTTPConnection class in the http.client module, if the CONNECT request is unsuccessful, closes the connection and raises an OSError without returning the response data. This behavior does not allow to implement the authentication procedure on the proxy server using methods similar to those used for authentication on servers (using hooks). And at the moment proxy authentication (Kerberos, Digest, NTLM - all other than Basic) is not supported by the urllib3 and accordingly requests, pip and many others. As a result, a large number of people cannot use Python to create scripts that involve working on the Internet (if they don't know workarounds). This problem has been mentioned many times here (Issue 7291, 24333, 24964). There are many Issues related to this task in requests, urllib3, pip and other. This problem is many years old (at least 5), but there is still no complete solution (as far as I can tell). 
There are several workarounds, but there is still no solution that could be used in urllib3, requests, pip and other (in many discussions of Issues _tunnel method is indicated as a reason preventing a proxy authentication solution from being implemented). Hopefully someone can finally solve this problem or explain why it can't be solved ---------- components: Library (Lib) messages: 374067 nosy: Namyotkin priority: normal severity: normal status: open title: method _tunnel does not allow for a standard proxy authentication solution type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 11:26:01 2020 From: report at bugs.python.org (Dong-hee Na) Date: Tue, 21 Jul 2020 15:26:01 +0000 Subject: [New-bugs-announce] [issue41361] Converting collections.deque methods to Argument Clinic Message-ID: <1595345161.63.0.950006640514.issue41361@roundup.psfhosted.org> New submission from Dong-hee Na : There was no performance regressin with this change. ---------- components: Argument Clinic messages: 374068 nosy: corona10, larry priority: normal severity: normal status: open title: Converting collections.deque methods to Argument Clinic type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 13:09:20 2020 From: report at bugs.python.org (Constant Marks) Date: Tue, 21 Jul 2020 17:09:20 +0000 Subject: [New-bugs-announce] [issue41362] Regenerating parser table fails (windows) Message-ID: <1595351360.26.0.0463175809492.issue41362@roundup.psfhosted.org> New submission from Constant Marks : When calling `build.bat -d --regen' the graminit.c and graminit.h files are not generated with the following warnings: Time Elapsed 00:02:49.06 Killing any running python_d.exe instances... Getting build info from "C:\Program Files\Git\cmd\git.exe" Building heads/3.9-dirty:9e84a2c424 3.9 pythoncore.vcxproj -> C:\Users\crm0376\source\repos\cpython\PCbuild\win32\python39_d.dll C:\Users\crm0376\source\repos\cpython\Parser\pgen\__main__.py:43: ResourceWarning: unclosed file <_io.TextIOWrapper name='C:\\Users\\crm0376\\source\\repos\\cpython\\PCbuild\\obj\\39win32_Debug\\regen\\graminit.h' mode='w' encoding='cp 1252'> main() ResourceWarning: Enable tracemalloc to get the object allocation traceback C:\Users\crm0376\source\repos\cpython\Parser\pgen\__main__.py:43: ResourceWarning: unclosed file <_io.TextIOWrapper name='C:\\Users\\crm0376\\source\\repos\\cpython\\PCbuild\\obj\\39win32_Debug\\regen\\graminit.c' mode='w' encoding='cp 1252'> main() ResourceWarning: Enable tracemalloc to get the object allocation traceback ---------- messages: 374074 nosy: constantmarks priority: normal severity: normal status: open title: Regenerating parser table fails (windows) versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 13:38:02 2020 From: report at bugs.python.org (=?utf-8?b?0JDQvdC00YDQtdC5INCf0LXRgNC10LLRkdGA0YLQutC40L0=?=) Date: Tue, 21 Jul 2020 17:38:02 +0000 Subject: [New-bugs-announce] [issue41363] python 3.8.5 folder not visible in https://www.python.org/ftp/python/ listing Message-ID: <1595353082.43.0.231243194909.issue41363@roundup.psfhosted.org> New submission from ?????? ??????????? 
: Open https://www.python.org/ftp/python/ and observe that there is no 3.8.5 folder. Open https://www.python.org/ftp/python/3.8.5/ and observe that all the expected files are present. Conclusion: https://www.python.org/ftp/python/ index needs an update. ---------- messages: 374076 nosy: ?????? ??????????? priority: normal severity: normal status: open title: python 3.8.5 folder not visible in https://www.python.org/ftp/python/ listing versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 16:41:32 2020 From: report at bugs.python.org (Steve Dower) Date: Tue, 21 Jul 2020 20:41:32 +0000 Subject: [New-bugs-announce] [issue41364] Optimise uuid platform detection Message-ID: <1595364092.15.0.501059723351.issue41364@roundup.psfhosted.org> New submission from Steve Dower : The uuid module calls platform.system() multiple times, even when the result is known from sys.platform. We can avoid some of these. ---------- assignee: steve.dower components: Library (Lib) messages: 374079 nosy: steve.dower priority: normal severity: normal stage: needs patch status: open title: Optimise uuid platform detection type: performance versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 23:28:48 2020 From: report at bugs.python.org (Chuck) Date: Wed, 22 Jul 2020 03:28:48 +0000 Subject: [New-bugs-announce] [issue41365] Python Launcher is sorry to say... No pyvenv.cfg file Message-ID: <1595388528.0.0.559799161472.issue41365@roundup.psfhosted.org> New submission from Chuck : on Windows 10 64 after install of Python 3.8.5 and after install of decrypt_bitcoinj_seed.pyw <--- and after double click, I get "Python Launcher is sorry to say... No pyvenv.cfg file". I have tried multiple reinstalls of Python to no avail. When a search is performed on the entire C:\ drive, no pyvenv.cfg file is present so c:\>python -m venv myenv c:\path\to\myenv does not create the pyvenv.cfg file. The Python Script requires PIP (Google Protobuf and pylibscrypt) and I understand it's included in 3.8.5. Please provide step by step instructions for how to either create or download the pyvenv.cfg file and the exact Python path to folder name it needs to go so I can overcome this error. Or if someone has a different solution, that's fine too... I just need the decrypt_bitcoinj_seed.pyw to work. ---------- components: Windows messages: 374083 nosy: Packhash, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python Launcher is sorry to say... No pyvenv.cfg file type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 21 23:49:41 2020 From: report at bugs.python.org (Henry Schreiner) Date: Wed, 22 Jul 2020 03:49:41 +0000 Subject: [New-bugs-announce] [issue41366] sign-conversion warning in 3.9 beta Message-ID: <1595389781.15.0.650471141646.issue41366@roundup.psfhosted.org> New submission from Henry Schreiner : The macro Py_UNICODE_COPY was replaced by a function in CPython 3.9, and that function cannot compile with full warnings enabled + warnings as errors under Clang. 
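For reference, pyvenv.cfg is not a file to download; it is generated at the root of an environment directory when the venv is created. A minimal sketch using the standard venv module (the path below is only an example, adjust as needed):

```python
import os
import venv

env_dir = r"C:\Users\example\myenv"   # example location, not a real path
venv.create(env_dir, with_pip=True)   # writes pyvenv.cfg inside env_dir

print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))   # expected: True
```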
The problematic line: Py_DEPRECATED(3.3) static inline void Py_UNICODE_COPY(Py_UNICODE *target, const Py_UNICODE *source, Py_ssize_t length) { memcpy(target, source, length * sizeof(Py_UNICODE)); } has an implicit conversion of Py_ssize_t into size_t, which creates: /Users/runner/hostedtoolcache/Python/3.9.0-beta.5/x64/include/python3.9/cpython/unicodeobject.h:55:: implicit conversion changes signedness: 'Py_ssize_t' (aka 'long') to 'unsigned long' [-Werror,-Wsign-conversion] error: implicit conversion changes signedness: 'Py_ssize_t' (aka 'long') to 'unsigned long' [-Werror,-Wsign-conversion] memcpy(target, source, length * sizeof(Py_UNICODE)); memcpy(target, source, length * sizeof(Py_UNICODE)); An explicit cast to size_t (is there a Py_size_t?) should be made. ---------- components: C API messages: 374084 nosy: Henry Schreiner priority: normal severity: normal status: open title: sign-conversion warning in 3.9 beta type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 22 04:42:59 2020 From: report at bugs.python.org (Joan Prat Rigol) Date: Wed, 22 Jul 2020 08:42:59 +0000 Subject: [New-bugs-announce] [issue41367] Popen Timeout raised on 3.6 but not on 3.8 Message-ID: <1595407379.06.0.696065311763.issue41367@roundup.psfhosted.org> New submission from Joan Prat Rigol : If I run this code in Python 3.8 I get the result as expected: Python 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import subprocess >>> process = subprocess.Popen("sudo -Si ls -l /home", shell=True, stdin=-1, stdout=-1, stderr=-1) >>> out, err = process.communicate(input="Pr4tR1g01J04n\n".encode(), timeout=2) >>> print(out) b'total 4\ndrwxr-xr-x 43 joan joan 4096 Jul 22 09:46 joan\n' >>> But If I run the code in Python 3.6 I am getting a timeout: Python 3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import subprocess >>> process = subprocess.Popen("sudo -Si ls -l /home", shell=True, stdin=-1, stdout=-1, stderr=-1) >>> out, err = process.communicate(input="Pr4tR1g01J04n\n".encode(), timeout=2) Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.6/subprocess.py", line 863, in communicate stdout, stderr = self._communicate(input, endtime, timeout) File "/usr/lib/python3.6/subprocess.py", line 1535, in _communicate self._check_timeout(endtime, orig_timeout) File "/usr/lib/python3.6/subprocess.py", line 891, in _check_timeout raise TimeoutExpired(self.args, orig_timeout) subprocess.TimeoutExpired: Command 'sudo -Si ls -l /home' timed out after 2 seconds >>> Is this a bug? ---------- messages: 374089 nosy: Joan Prat Rigol priority: normal severity: normal status: open title: Popen Timeout raised on 3.6 but not on 3.8 versions: Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 22 14:13:57 2020 From: report at bugs.python.org (William Pickard) Date: Wed, 22 Jul 2020 18:13:57 +0000 Subject: [New-bugs-announce] [issue41368] Allow compiling Python with llvm-clang on Windows. 
Message-ID: <1595441637.36.0.539445300031.issue41368@roundup.psfhosted.org> New submission from William Pickard : Since Visual Studio 2017, Microsoft has an optional C++ Desktop Development option for compiling C/C++ code with LLVM's Clang compiler. It's called: Clang with Microsoft CodeGen. While the code is parsed with LLVM's Clang parser, the code is generated with a MSVC compatible code generator (hence the "with Microsoft CodeGen" name suffix) Currently, Python 3.10 is uncompilable with Clang, but I would appreciate it if it can be supported. I have attached a build log from MSBuild, below is the commandline I used to run the build. Start-Process -FilePath $(Join-Path $PCBUILD -ChildPath 'build.bat' -Resolve) -WorkingDirectory $PCBuild -NoNewWindow -RedirectStandardOut $BUILD_LOG -RedirectStandardError $BUILDERROR_LOG -ArgumentList '-m', '-v', '-c', 'Debug', '-p', 'x64', '-t', 'Build', '"/p:PlatformToolset=ClangCL"' PS L:\GIT\cpython> Start-Process -FilePath $(Join-Path $PCBUILD -ChildPath 'build.bat' -Resolve) -WorkingDirectory $PCBuild -NoNewWindow -RedirectStandardOut $BUILD_LOG -RedirectStandardError $BUILDERROR_LOG -ArgumentList '-m', '-v', '-c', 'Debug', '-p', 'x64', '-t', 'Build', '"/p:PlatformToolset=ClangCL"' ---------- components: Cross-Build, Windows files: build.log messages: 374099 nosy: Alex.Willmer, William Pickard, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Allow compiling Python with llvm-clang on Windows. type: compile error versions: Python 3.10 Added file: https://bugs.python.org/file49331/build.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 22 15:27:21 2020 From: report at bugs.python.org (Stefan Krah) Date: Wed, 22 Jul 2020 19:27:21 +0000 Subject: [New-bugs-announce] [issue41369] Update to libmpdec-2.5.1 Message-ID: <1595446041.62.0.150992911256.issue41369@roundup.psfhosted.org> New submission from Stefan Krah : This issue tracks the update of the included libmpdec to version 2.5.1. The version includes features required for #41324. ---------- assignee: skrah components: Extension Modules messages: 374101 nosy: skrah priority: normal severity: normal status: open title: Update to libmpdec-2.5.1 type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 22 16:16:14 2020 From: report at bugs.python.org (Joseph Perez) Date: Wed, 22 Jul 2020 20:16:14 +0000 Subject: [New-bugs-announce] [issue41370] PEP 585 and ForwardRef Message-ID: <1595448974.26.0.905372860878.issue41370@roundup.psfhosted.org> New submission from Joseph Perez : PEP 585 current implementation (3.10.0a0) differs from current Generic implementation about ForwardRef, as illustrated bellow: ```python from dataclasses import dataclass, field from typing import get_type_hints, List, ForwardRef @dataclass class Node: children: list["Node"] = field(default_factory=list) children2: List["Node"] = field(default_factory=list) assert get_type_hints(Node) == {"children": list["Node"], "children2": List[Node]} assert List["Node"].__args__ == (ForwardRef("Node"),) assert list["Node"].__args__ == ("Node",) # No ForwardRef here, so no evaluation by get_type_hints ``` There is indeed no kind of ForwardRef for `list` arguments. As shown in the example, this affects the result of get_type_hints for recursive types handling. 
It could be "fixed" in 2 lines in `typing._eval_type` with something like this: ```python def _eval_type(t, globalns, localns, recursive_guard=frozenset()): if isinstance(t, str): t = ForwardRef(t) if isinstance(t, ForwardRef): ... ``` but it's kind of hacky/dirty. It's true that this issue will not concern legacy code, 3.9 still not being released. So developers of libraries using get_type_hints could add in their documentation that `from __future__ import annotations` is mandatory for recursive types with PEP 585 (I think I will do it). By the way, Guido has quickly given his opinion about it in PR 21553: "We probably will not ever support this: importing ForwardRef from the built-in generic alias code would be problematic, and once from __future__ import annotations is always on there's no need to quote the argument anyway." (So feel free to close this issue) ---------- messages: 374105 nosy: BTaskaya, eric.smith, gvanrossum, joperez, levkivskyi, lukasz.langa, vstinner priority: normal severity: normal status: open title: PEP 585 and ForwardRef type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 22 16:17:30 2020 From: report at bugs.python.org (doodspav) Date: Wed, 22 Jul 2020 20:17:30 +0000 Subject: [New-bugs-announce] [issue41371] test_zoneinfo fails Message-ID: <1595449050.37.0.742582022147.issue41371@roundup.psfhosted.org> New submission from doodspav : Issue: ====== `_lzma` is not built because the required libraries are not available on my machine. `test_zoneinfo` assumes it is always available, leading it to crash on my machine. How I built and ran the tests: ============================== git clone https://github.com/python/cpython.git (bpo-41364) cd cpython mkdir build && cd build ../configure make -j8 make test > test_output.txt Test traceback: =============== File "/home/doodspav/Jetbrains/CLionProjects/cpython/Lib/test/libregrtest/runtest.py", line 272, in _runtest_inner refleak = _runtest_inner2(ns, test_name) File "/home/doodspav/Jetbrains/CLionProjects/cpython/Lib/test/libregrtest/runtest.py", line 223, in _runtest_inner2 the_module = importlib.import_module(abstest) File "/home/doodspav/Jetbrains/CLionProjects/cpython/Lib/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1030, in _gcd_import File "", line 1007, in _find_and_load File "", line 986, in _find_and_load_unlocked File "", line 680, in _load_unlocked File "", line 790, in exec_module File "", line 228, in _call_with_frames_removed File "/home/doodspav/Jetbrains/CLionProjects/cpython/Lib/test/test_zoneinfo/__init__.py", line 1, in from .test_zoneinfo import * File "/home/doodspav/Jetbrains/CLionProjects/cpython/Lib/test/test_zoneinfo/test_zoneinfo.py", line 9, in import lzma File "/home/doodspav/Jetbrains/CLionProjects/cpython/Lib/lzma.py", line 27, in from _lzma import * ModuleNotFoundError: No module named '_lzma' ---------- components: Tests files: test_output.txt messages: 374106 nosy: doodspav priority: normal severity: normal status: open title: test_zoneinfo fails type: crash versions: Python 3.10 Added file: https://bugs.python.org/file49332/test_output.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 22 18:39:11 2020 From: report at bugs.python.org (Bar Harel) Date: Wed, 22 Jul 2020 22:39:11 +0000 Subject: [New-bugs-announce]
[issue41372] Log exception never retrieved in concurrent.futures Message-ID: <1595457551.15.0.0734023234178.issue41372@roundup.psfhosted.org> New submission from Bar Harel : Asyncio has an amazing mechanism showing "Task exception was never retrieved" or "Future exception was never retrieved" if the task threw an exception, and no one checked its `.result()` or `.exception()`. It does so by setting a private variable named `__log_traceback` to False upon retrieval, and logging at `__del__` if the exception wasn't retrieved. I believe it's a very important mechanism missing from concurrent.futures. It's very easy to implement. I wanted to see if there's any disagreement before I implement it though. It's small enough to not need python-ideas, yet important enough for inclusion, I believe. Regarding potential issues - I can only see issues with unlikely deadlocks at `__del__` (think of a handler taking a lock and this occurs during the handling of another log record). Asyncio however already took that bet, and although it's less planned for multi-threading, it's still a bet that was totally worth it. ---------- components: Library (Lib) messages: 374114 nosy: bar.harel priority: normal severity: normal status: open title: Log exception never retrieved in concurrent.futures type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 22 23:47:16 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Thu, 23 Jul 2020 03:47:16 +0000 Subject: [New-bugs-announce] [issue41373] IDLE: edit/save files created by Windows Explorer Message-ID: <1595476036.59.0.577119776706.issue41373@roundup.psfhosted.org> New submission from Terry J. Reedy : #41300 fixed one bug in the patch for #41158. A user reported another one on the idle-dev list. Create a file in Windows Explorer. Leave as .txt or rename to .py. Right click and Edit with IDLE 3.8 (.4/.5 or 3.9.0b4 or 5). Edit works, Save (or Run) does not. Open the same file from Command Prompt, change it, and trying to save produces Exception in Tkinter callback Traceback (most recent call last): File "C:\Programs\Python38\lib\tkinter\__init__.py", line 1883, in __call__ return self.func(*args) File "C:\Programs\Python38\lib\idlelib\multicall.py", line 176, in handler r = l[i](event) File "C:\Programs\Python38\lib\idlelib\iomenu.py", line 200, in save if self.writefile(self.filename): File "C:\Programs\Python38\lib\idlelib\iomenu.py", line 232, in writefile text = self.fixnewlines() File "C:\Programs\Python38\lib\idlelib\iomenu.py", line 252, in fixnewlines text = text.replace("\n", self.eol_convention) TypeError: replace() argument 2 must be str, not None The replacement line is guarded with if self.eol_convention != "\n": Either the condition should include 'and self.eol_convention is not None' or the setting of the attribute should be changed. I will look into how it was set before the #41158 patch.
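As a stand-alone illustration of the first option, here is a minimal sketch of the guarded conversion; the helper name and reduced signature are illustrative only and are not the actual idlelib/iomenu.py code:

```python
def fix_newlines(text, eol_convention):
    # Treat a missing (None) EOL convention like plain "\n" instead of
    # crashing inside str.replace() with a None argument.
    if eol_convention is not None and eol_convention != "\n":
        text = text.replace("\n", eol_convention)
    return text

assert fix_newlines("a\nb\n", None) == "a\nb\n"         # the unguarded code raised TypeError here
assert fix_newlines("a\nb\n", "\r\n") == "a\r\nb\r\n"   # normal Windows conversion
```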
---------- messages: 374119 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: edit/save files created by Windows Explorer type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 23 03:51:44 2020 From: report at bugs.python.org (Christoph Reiter) Date: Thu, 23 Jul 2020 07:51:44 +0000 Subject: [New-bugs-announce] [issue41374] socket.TCP_* no longer available with cygwin 3.1.6+ Message-ID: <1595490704.07.0.649120374256.issue41374@roundup.psfhosted.org> New submission from Christoph Reiter : The TCP macros are provided by netinet/tcp.h, which for some reason is skipped here: https://github.com/python/cpython/blob/592527f3ee59616eca2bd1da771f7c14cee808d5/Modules/socketmodule.h#L11 Until cygwin 3.1.6 these macros were also provided by sys/socket.h, but this got removed in https://cygwin.com/git/?p=newlib-cygwin.git;a=commit;h=e037192b505b4f233fca9a6deafc9797210f6693 This leads to socket.TCP_NODELAY for example not being available anymore. git blame leads me to https://github.com/python/cpython/commit/b5daaed30d7c54ba1f516289f3a7a30a864133af introducing this special case, which isn't very helpful. I'd suggest to just remove the cygwin check and always include it (which works fine on my machine) Downstream bug report for extra context: https://github.com/msys2/MSYS2-packages/issues/2050 ---------- components: Build messages: 374126 nosy: lazka priority: normal severity: normal status: open title: socket.TCP_* no longer available with cygwin 3.1.6+ type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 23 11:07:20 2020 From: report at bugs.python.org (YoSTEALTH) Date: Thu, 23 Jul 2020 15:07:20 +0000 Subject: [New-bugs-announce] [issue41375] `mode` security concern Message-ID: <1595516840.65.0.992254225297.issue41375@roundup.psfhosted.org> New submission from YoSTEALTH : import os import stat import os.path def problem(tmp_path): # result: # ------- # check: False # mode: 416 # create temp file fd = os.open(tmp_path, os.O_CREAT, 0o660) os.close(fd) # Directory is effected as well # os.mkdir(tmp_path, 0o660) def solution(tmp_path): # result: # ------- # check: True # mode: 432 old_umask = os.umask(0) # create temp file fd = os.open(tmp_path, os.O_CREAT, 0o660) os.close(fd) # create temp dir # os.mkdir(tmp_path, 0o660) os.umask(old_umask) def main(): tmp_path = '_testing-chmod' problem(tmp_path) # solution(tmp_path) try: s = os.stat(tmp_path) mode = stat.S_IMODE(s.st_mode) print('check:', mode == 0o660) print('mode:', mode) # this should be: 432 finally: # delete temp file try: os.unlink(tmp_path) except IsADirectoryError: os.rmdir(tmp_path) if __name__ == '__main__': main() This result is not same for all os and distro, on multiple linux system for example the results will be different. I think Python should account for such behavior by default as it can lead to file/dir creation with security issues. 
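For comparison, a sketch of another possible workaround (not taken from the report above, and Unix-only): set the exact permission bits after creation, since an explicit fchmod/chmod is not filtered through the process umask the way the mode argument of os.open() and os.mkdir() is:

```python
import os

fd = os.open("_testing-chmod", os.O_CREAT, 0o660)
try:
    os.fchmod(fd, 0o660)   # explicit chmod, unaffected by the umask
finally:
    os.close(fd)
print(oct(os.stat("_testing-chmod").st_mode & 0o777))   # 0o660 regardless of umask
os.unlink("_testing-chmod")
```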
---------- components: IO messages: 374138 nosy: YoSTEALTH priority: normal severity: normal status: open title: `mode` security concern _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 23 11:22:11 2020 From: report at bugs.python.org (Phil Elson) Date: Thu, 23 Jul 2020 15:22:11 +0000 Subject: [New-bugs-announce] [issue41376] site.getusersitepackages() incorrectly claims that PYTHONNOUSERSITE is respected Message-ID: <1595517731.78.0.301651868818.issue41376@roundup.psfhosted.org> New submission from Phil Elson : The documentation for site.getusersitepackages() states at https://docs.python.org/3.10/library/site.html#site.getusersitepackages: Return the path of the user-specific site-packages directory, USER_SITE. If it is not initialized yet, this function will also set it, respecting PYTHONNOUSERSITE and USER_BASE. Yet the implementation does not agree: ``` $ python -c "import site; print(site.getusersitepackages())" /home/user/.local/lib/python3.7/site-packages $ PYTHONNOUSERSITE=1 python -c "import site; print(site.getusersitepackages())" /home/user/.local/lib/python3.7/site-packages ``` (same result for -s and -I flags) ---------- assignee: docs at python components: Documentation messages: 374139 nosy: docs at python, pelson priority: normal severity: normal status: open title: site.getusersitepackages() incorrectly claims that PYTHONNOUSERSITE is respected versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 23 16:46:27 2020 From: report at bugs.python.org (jakirkham) Date: Thu, 23 Jul 2020 20:46:27 +0000 Subject: [New-bugs-announce] [issue41377] memoryview of str (unicode) Message-ID: <1595537187.85.0.858771420995.issue41377@roundup.psfhosted.org> New submission from jakirkham : When working with lower level C/C++ code, the Python Buffer Protocol[1] has been immensely useful as it allows common Python `bytes`-like objects to expose the underlying memory buffer in a pointer that C/C++ code can easily work with zero-copy. In fact `memoryview` objects can be quite handy when facilitating coercion of Python objects supporting the Python Buffer Protocol to something that Python and/or C/C++ code can use easily. This works with several Python objects, many Python APIs, and in is relied on heavily by many performance conscious 3rd party libraries. However one object that gets a lot of use in Python that doesn't support this API is the Python `str` (previously `unicode`) object (see code below). ```python In [1]: s = "Hello World!" In [2]: mv = memoryview(s) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in ----> 1 mv = memoryview(s) TypeError: memoryview: a bytes-like object is required, not 'str' ``` The canonical answer today is [to encode to `bytes` first]( https://stackoverflow.com/a/54449407 ) and decode to `str` later. While this is ok for a smallish piece of text, it can start to slowdown considerably for larger pieces of text. So being able to skip this encode/decode step can be quite impactful. ```python In [1]: s = "Hello World!" In [2]: %timeit s.encode(); 54.9 ns ? 0.0788 ns per loop (mean ? std. dev. of 7 runs, 10000000 loops each) In [3]: s = 100_000_000 * "Hello World!" In [4]: %timeit s.encode(); 729 ms ? 1.23 ms per loop (mean ? std. dev. 
of 7 runs, 1 loop each) ``` AIUI (though I could be misunderstanding things) `str` objects do use some kind of typed array of unicode characters (either 16-bit narrow or 32-bit wide). So it seems like it *should* be possible to expose this as a 1-D contiguous array that C/C++ code could use. Though I may be misunderstanding how `str`s actually work under-the-hood (if so apologies). It would be quite helpful to bypass this encoding/decoding step and instead work directly with the underlying buffer in these situations where C/C++ is involved to help performance critical code. [1]: https://docs.python.org/3/c-api/buffer.html ---------- components: Library (Lib) messages: 374147 nosy: jakirkham priority: normal severity: normal status: open title: memoryview of str (unicode) type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 23 20:42:59 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Fri, 24 Jul 2020 00:42:59 +0000 Subject: [New-bugs-announce] [issue41378] IDLE EOL convention not set on a empty file Message-ID: <1595551379.62.0.314572347741.issue41378@roundup.psfhosted.org> New submission from Raymond Hettinger : I think this is a new bug, present in Python 3.8.5 but not present in 3.8.3. Steps to reproduce: * $ touch xyzpdq.py # Creates new file, zero bytes in length * $ python3.8 -m idlelib.idle xyzdpq.py * Enter text: print('hello world') * Attempt to save the file with Cmd-S * This seems to be unrecoverable and results in losing all the code that was entered. ---------------- example session --------------- ~ $ cd tmp ~/tmp $ touch qwerty.py ~/tmp $ python3.8 -m idlelib.idle qwerty.py Exception in Tkinter callback Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/tkinter/__init__.py", line 1883, in __call__ return self.func(*args) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/idlelib/multicall.py", line 176, in handler r = l[i](event) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/idlelib/iomenu.py", line 200, in save if self.writefile(self.filename): File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/idlelib/iomenu.py", line 232, in writefile text = self.fixnewlines() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/idlelib/iomenu.py", line 252, in fixnewlines text = text.replace("\n", self.eol_convention) TypeError: replace() argument 2 must be str, not None ---------- assignee: terry.reedy components: IDLE keywords: 3.8regression, 3.9regression messages: 374155 nosy: rhettinger, terry.reedy priority: high severity: normal status: open title: IDLE EOL convention not set on a empty file type: crash versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 02:01:23 2020 From: report at bugs.python.org (Vignesh Rajendran) Date: Fri, 24 Jul 2020 06:01:23 +0000 Subject: [New-bugs-announce] [issue41379] Configparser is not reading Indended Sections as Mentioned in Docs Message-ID: <1595570483.9.0.509189951267.issue41379@roundup.psfhosted.org> New submission from Vignesh Rajendran : https://github.com/jaraco/configparser/issues/55 Please check the Bug is raised also in Github, Sections can be intended is specified in the documentation. 
When I read the indented section, its throwing section not found an error in python 3.8 as per Documentation https://docs.python.org/3/library/configparser.html [You can use comments] like this ; or this [Sections Can Be Indented] can_values_be_as_well = True does_that_mean_anything_special = False purpose = formatting for readability but this is not working. check this https://stackoverflow.com/questions/62833787/how-to-read-indentated-sections-with-python-configparser/62836972?noredirect=1#comment111176192_62836972 if i read an indented section, its showing section not found. my file: [section] a = 0.3 [subsection] b = 123 import configparser conf = configparser.ConfigParser() conf.read("./test.conf") a = conf['section']['a'] print(a) Output of a: 0.3 [subsection] b = 123 Expected a : 0.3 only But Section b is not found ---------- messages: 374160 nosy: Vignesh Rajendran priority: normal severity: normal status: open title: Configparser is not reading Indended Sections as Mentioned in Docs type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 03:13:31 2020 From: report at bugs.python.org (Ehsonjon Gadoev) Date: Fri, 24 Jul 2020 07:13:31 +0000 Subject: [New-bugs-announce] [issue41380] Create snake.py Message-ID: <1595574811.16.0.254153483734.issue41380@roundup.psfhosted.org> New submission from Ehsonjon Gadoev : Created simple and basic snake game using python turtle! We use the simplest algorithm to made it! It will be good example for beginners! ---------- components: Tkinter messages: 374163 nosy: Ehsonjon Gadoev priority: normal severity: normal status: open title: Create snake.py versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 05:46:40 2020 From: report at bugs.python.org (Anand Tripathi) Date: Fri, 24 Jul 2020 09:46:40 +0000 Subject: [New-bugs-announce] [issue41381] Google chat handler in Logging module Message-ID: <1595584000.4.0.416616508258.issue41381@roundup.psfhosted.org> Change by Anand Tripathi : ---------- nosy: anandtripathi5 priority: normal severity: normal status: open title: Google chat handler in Logging module type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 09:56:19 2020 From: report at bugs.python.org (n-io) Date: Fri, 24 Jul 2020 13:56:19 +0000 Subject: [New-bugs-announce] [issue41382] print() unpredictable behaviour Message-ID: <1595598979.89.0.398594885815.issue41382@roundup.psfhosted.org> New submission from n-io : There seems to be a behavioural issue with the print() function. Using python3.8 and the following line: >>> print("\t".join(['arith_int_512-cuda.sfeat', '__hipsyclkernel$wrapped_kernelname$MicroBenchArithmeticKernel_512_1', '578', '65', '5', '64', '4', '1025', '128', '1', '1', '512', '1'])) arith_int_512-cuda.sfeat __hipsyclkernel$wrapped_kernelname$MicroBenchArithmeticKernel_512_1 578 65 5 64 4 1025 128 512 1 Notice the missing numbers between 128 and 512. If I do random modifications the either of the first two string, some of the missing numbers may appear. 
For instance: >>> print("\t".join(['arith_int_512-cuda', '__hipsycl_kernel_$wrapped_kernel_name_$MicroBenchArithmeticKernel_512_1', '578', '65', '5', '64', '4', '1025', '128', '1', '1', '512', '1'])) arith_int_512-cuda __hipsycl_kernel_$wrapped_kernel_name_$MicroBenchArithmeticKernel_512_1 578 65 5 64 4 1025 128 1 512 1 Notice that one of the two missing numbers has appeared. There appears nothing wrong with the value used to invoke print. The error appears to be linked to joining on the "\t" character and does not appear to occur when joining on other whitespace characters such as " ".join(...) or "\n".join(...) ---------- components: IO messages: 374176 nosy: n-io priority: normal severity: normal status: open title: print() unpredictable behaviour type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 10:04:53 2020 From: report at bugs.python.org (Bernat Gabor) Date: Fri, 24 Jul 2020 14:04:53 +0000 Subject: [New-bugs-announce] [issue41383] Provide a limit arguments for __repr__ Message-ID: <1595599493.43.0.939554432548.issue41383@roundup.psfhosted.org> New submission from Bernat Gabor : Quoting Elizaveta Shashkova from EuroPython 2020 chat about how can we improve debug experience for users: I would like to have a lazy repr evaluation for the objects! Sometimes users have many really large objects, and when debugger is trying to show them in Variables View (=show their string representation) it can takes a lot of time. We do some tricks, but they not always work. It would be really-really cool to have parameter in repr, which defines max number of symbols we want to evaluate during repr for this object. What do people, would this optional str limit be a good thing? Does anyone has a better idea? ---------- components: Interpreter Core messages: 374179 nosy: Bernat Gabor priority: normal severity: normal status: open title: Provide a limit arguments for __repr__ versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 10:48:31 2020 From: report at bugs.python.org (Akuli) Date: Fri, 24 Jul 2020 14:48:31 +0000 Subject: [New-bugs-announce] [issue41384] tkinter raises TypeError when it's supposed to raise TclError Message-ID: <1595602111.15.0.773557179803.issue41384@roundup.psfhosted.org> New submission from Akuli : from Lib:tkinter/__init__.py: raise TclError('unknown option -'+kwargs.keys()[0]) This is no longer valid in Python 3. 
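A sketch of one Python 3 compatible spelling; the small wrapper function is only for illustration and is not the surrounding tkinter code:

```python
from tkinter import TclError

def check_options(**kwargs):
    # dict views are not indexable in Python 3, so next(iter(...)) stands in
    # for the old kwargs.keys()[0] when naming the first unknown option.
    if kwargs:
        raise TclError('unknown option -' + next(iter(kwargs)))

check_options()             # no unknown options, nothing raised
# check_options(bogus=1)    # would raise TclError: unknown option -bogus
```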
---------- components: Tkinter messages: 374188 nosy: Akuli priority: normal pull_requests: 20748 severity: normal status: open title: tkinter raises TypeError when it's supposed to raise TclError _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 10:50:50 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Fri, 24 Jul 2020 14:50:50 +0000 Subject: [New-bugs-announce] [issue41385] test_executable_without_cwd fails on appx test run in Azure pipelines Message-ID: <1595602250.31.0.334276668507.issue41385@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : https://dev.azure.com/Python/cpython/_build/results?buildId=66688&view=logs&j=0fcf9c9b-89fc-526f-8708-363e467e119e&t=fa5ef4ee-3911-591e-4444-19482ab189b7&l=740 ====================================================================== ERROR: test_executable_without_cwd (test.test_subprocess.ProcessTestCase) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\a\1\b\layout-appx-win32\lib\test\test_subprocess.py", line 470, in test_executable_without_cwd self._assert_cwd(os.getcwd(), "somethingyoudonthave", File "D:\a\1\b\layout-appx-win32\lib\test\test_subprocess.py", line 388, in _assert_cwd normcase(p.stdout.read().decode("utf-8"))) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe6 in position 40: unexpected end of data https://dev.azure.com/Python/cpython/_pipeline/analytics/stageawareoutcome?definitionId=4 for 30 days. 99.16% of pipeline failures are due to failures in task - appx Tests ---------- components: Tests, Windows messages: 374189 nosy: paul.moore, steve.dower, tim.golden, xtreak, zach.ware priority: normal severity: normal status: open title: test_executable_without_cwd fails on appx test run in Azure pipelines type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 11:54:41 2020 From: report at bugs.python.org (Nico) Date: Fri, 24 Jul 2020 15:54:41 +0000 Subject: [New-bugs-announce] [issue41386] Popen.wait does not account for negative return codes Message-ID: <1595606081.95.0.551964515363.issue41386@roundup.psfhosted.org> New submission from Nico : Following problem occurred. A C style program can have negative return codes. However this is not correctly implemented in the Windows API. Therefore it is being returned as unsigned long by the API hence it leads to ambiguity while comparing return codes. 
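To make the signedness ambiguity concrete, a small sketch of the reinterpretation involved (illustration only, not a proposed change):

```python
def as_signed_int32(code):
    # Reinterpret an unsigned 32-bit exit code as the signed C int the child returned.
    return code - 2**32 if code >= 2**31 else code

assert as_signed_int32(0xFFFFFFFF) == -1   # a C program that returned -1
assert as_signed_int32(1) == 1             # small positive codes are unchanged
```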
For reference regarding this topic see: https://docs.microsoft.com/en-us/cpp/cpp/main-function-command-line-args?redirectedfrom=MSDN&view=vs-2019 https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-getexitcodeprocess I suggest a ---------- components: Windows messages: 374194 nosy: MrTroble, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Popen.wait does not account for negative return codes type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 12:35:57 2020 From: report at bugs.python.org (Antonio Gutierrez) Date: Fri, 24 Jul 2020 16:35:57 +0000 Subject: [New-bugs-announce] [issue41387] Wrong example, need scpae \" Message-ID: <1595608557.6.0.839758343082.issue41387@roundup.psfhosted.org> New submission from Antonio Gutierrez : In the main documentation, the example: #!/usr/bin/env python3 import smtplib from email.message import EmailMessage from email.headerregistry import Address from email.utils import make_msgid # Create the base text message. msg = EmailMessage() msg['Subject'] = "Ayons asperges pour le déjeuner" msg['From'] = Address("Pepé Le Pew", "pepe", "example.com") msg['To'] = (Address("Penelope Pussycat", "penelope", "example.com"), Address("Fabrette Pussycat", "fabrette", "example.com")) msg.set_content("""\ Salut! Cela ressemble à un excellent recipie[1] déjeuner. [1] http://www.yummly.com/recipe/Roasted-Asparagus-Epicurious-203718 --Pepé """) # Add the html version. This converts the message into a multipart/alternative # container, with the original text message as the first part and the new html # message as the second part. asparagus_cid = make_msgid() msg.add_alternative("""\
<html>
  <head></head>
  <body>
    <p>Salut!</p>
    <p>Cela ressemble à un excellent
        <a href="http://www.yummly.com/recipe/Roasted-Asparagus-Epicurious-203718">
            recipie
        </a> déjeuner.
    </p>
    <img src="cid:{asparagus_cid}" />
  </body>
</html>
""".format(asparagus_cid=asparagus_cid[1:-1]), subtype='html') # note that we needed to peel the <> off the msgid for use in the html. # Now add the related image to the html part. with open("roasted-asparagus.jpg", 'rb') as img: msg.get_payload()[1].add_related(img.read(), 'image', 'jpeg', cid=asparagus_cid) # Make a local copy of what we are going to send. with open('outgoing.msg', 'wb') as f: f.write(bytes(msg)) # Send the message via local SMTP server. with smtplib.SMTP('localhost') as s: s.send_message(msg) in the line <img src="cid:{asparagus_cid}" /> the " has to be escaped, \", otherwise it won't read the image. ---------- messages: 374197 nosy: Antonio Gutierrez priority: normal severity: normal status: open title: Wrong example, need scpae \" _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 15:10:41 2020 From: report at bugs.python.org (Alexey Burdin) Date: Fri, 24 Jul 2020 19:10:41 +0000 Subject: [New-bugs-announce] [issue41388] IDLE fails to detect corresponding opening parenthesis Message-ID: <1595617841.76.0.15009470661.issue41388@roundup.psfhosted.org> New submission from Alexey Burdin : ``` answers_field_order=sorted( set(j for i in data['items'] for j in i), key=cmp_to_key(lambda x,y:( -1 if (x,y) in answer_order else (0 if x==y else 1))) ) ``` when the cursor is placed in line 5 col 31 (between `)` and `))` ) hitting Ctrl-0 (Show surrounding parens) produces an error sound, though there's no error, the script works fine. ``` $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.4 LTS Release: 18.04 Codename: bionic $ uname -a Linux odd-one 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux $ python3.8 Python 3.8.0 (default, Oct 28 2019, 16:14:01) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import importlib >>> importlib.util.find_spec('idlelib.pyshell') ModuleSpec(name='idlelib.pyshell', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f1ea8ee14c0>, origin='/usr/lib/python3.8/idlelib/pyshell.py') >>> exit() $ dpkg-query -S /usr/lib/python3.8/idlelib/pyshell.py idle-python3.8: /usr/lib/python3.8/idlelib/pyshell.py $ dpkg -l | grep idle-python3\.8 ii idle-python3.8 3.8.0-3~18.04 all IDE for Python (v3.8) using Tkinter ``` ---------- assignee: terry.reedy components: IDLE messages: 374205 nosy: Alexey Burdin, terry.reedy priority: normal severity: normal status: open title: IDLE fails to detect corresponding opening parenthesis type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 24 15:56:23 2020 From: report at bugs.python.org (Ian O'Shaughnessy) Date: Fri, 24 Jul 2020 19:56:23 +0000 Subject: [New-bugs-announce] [issue41389] Garbage Collector Ignoring Some (Not All) Circular References of Identical Type Message-ID: <1595620583.22.0.931401196723.issue41389@roundup.psfhosted.org> New submission from Ian O'Shaughnessy : Using a script that has two classes A and B which contain a circular reference variable, it is possible to cause a memory leak that is not captured by default gc collection. Only by running gc.collect() manually do the circular references get collected. Attached is a sample script that replicates the issue.
Output starts: Ram used: 152.17 MB - A: Active(125) / Total(2485) - B: Active(124) / Total(2484) Ram used: 148.17 MB - A: Active(121) / Total(12375) - B: Active(120) / Total(12374) Ram used: 65.88 MB - A: Active(23) / Total(22190) - B: Active(22) / Total(22189) Ram used: 77.92 MB - A: Active(35) / Total(31935) - B: Active(34) / Total(31934) After 1,000,000 cycles 1GB of ram is being consumed: Ram used: 1049.68 MB - A: Active(1019) / Total(975133) - B: Active(1018) / Total(975132) Ram used: 1037.64 MB - A: Active(1007) / Total(984859) - B: Active(1006) / Total(984858) Ram used: 952.34 MB - A: Active(922) / Total(994727) - B: Active(921) / Total(994726) Ram used: 970.41 MB - A: Active(940) / Total(1000000) - B: Active(940) / Total(1000000) ---------- files: gc.bug.py messages: 374210 nosy: ian_osh priority: normal severity: normal status: open title: Garbage Collector Ignoring Some (Not All) Circular References of Identical Type type: resource usage versions: Python 3.7 Added file: https://bugs.python.org/file49337/gc.bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 25 06:26:54 2020 From: report at bugs.python.org (Chih-Hsuan Yen) Date: Sat, 25 Jul 2020 10:26:54 +0000 Subject: [New-bugs-announce] [issue41391] Make test_unicodedata pass when running without network Message-ID: <1595672814.77.0.412111562602.issue41391@roundup.psfhosted.org> New submission from Chih-Hsuan Yen : I setup a buildbot worker to test Python 3.x on Android monthly. This month network in the Android emulator is broken and I got an additional test failure: 0:05:28 load avg: 1.21 [376/423/11] test_unicodedata failed test test_unicodedata failed -- Traceback (most recent call last): File "/data/local/tmp/lib/python3.10/urllib/request.py", line 1342, in do_open h.request(req.get_method(), req.selector, req.data, headers, socket.gaierror: [Errno 7] No address associated with hostname During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/data/local/tmp/lib/python3.10/test/test_unicodedata.py", line 329, in test_normalization testdata = open_urlresource(TESTDATAURL, encoding="utf-8", urllib.error.URLError: During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/data/local/tmp/lib/python3.10/test/test_unicodedata.py", line 335, in test_normalization self.fail(f"Could not retrieve {TESTDATAURL}") AssertionError: Could not retrieve http://www.pythontest.net/unicode/13.0.0/NormalizationTest.txt I propose wrapping the test in socket_helper.transient_internet() so that this test is skipped if the Internet is not available. ---------- components: Tests messages: 374249 nosy: yan12125 priority: normal severity: normal status: open title: Make test_unicodedata pass when running without network type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 25 07:58:35 2020 From: report at bugs.python.org (Philip R Brenan) Date: Sat, 25 Jul 2020 11:58:35 +0000 Subject: [New-bugs-announce] [issue41392] Syntax error rather than run time error Message-ID: <1595678315.6.0.422536039763.issue41392@roundup.psfhosted.org> New submission from Philip R Brenan : a = false # Traceback (most recent call last): # File "test.py", line 1, in # a = false # NameError: name 'false' is not defined # Compilation finished successfully. 
Please make this a syntax error rather than a run time error as the user intention is obvious and there cannot possibly be a variable called 'false' yet. ---------- components: Build messages: 374252 nosy: philiprbrenan at gmail.com priority: normal severity: normal status: open title: Syntax error rather than run time error type: compile error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 25 08:38:04 2020 From: report at bugs.python.org (wyz23x2) Date: Sat, 25 Jul 2020 12:38:04 +0000 Subject: [New-bugs-announce] [issue41393] Fix FAQ example to use __import__('functools').reduce Message-ID: <1595680684.82.0.451241324021.issue41393@roundup.psfhosted.org> New submission from wyz23x2 : https://docs.python.org/3/faq/programming.html#is-it-possible-to-write-obfuscated-one-liners-in-python https://github.com/python/cpython/blob/3.8/Doc/faq/programming.rst The 3rd raises a NameError because reduce was moved into functools. __import__('functools').reduce should fix this. ---------- assignee: docs at python components: Documentation messages: 374258 nosy: docs at python, wyz23x2 priority: normal severity: normal status: open title: Fix FAQ example to use __import__('functools').reduce versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 25 08:54:28 2020 From: report at bugs.python.org (wyz23x2) Date: Sat, 25 Jul 2020 12:54:28 +0000 Subject: [New-bugs-announce] [issue41394] Behiavior of '_' strange in shell Message-ID: <1595681668.98.0.691710937609.issue41394@roundup.psfhosted.org> New submission from wyz23x2 : >>> (None and True) >>> print(_) False >>> print((None and True)) # Not same?! None >>> This isn't right. P.S. What component should this be? IDLE? It's the shell, not just IDLE. Core? Not that deep! ---------- messages: 374260 nosy: wyz23x2 priority: normal severity: normal status: open title: Behiavior of '_' strange in shell type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 25 10:12:10 2020 From: report at bugs.python.org (Karthikeyan Singaravelan) Date: Sat, 25 Jul 2020 14:12:10 +0000 Subject: [New-bugs-announce] [issue41395] pickle and pickletools cli interface doesn't close input and output file. Message-ID: <1595686330.63.0.838691465271.issue41395@roundup.psfhosted.org> New submission from Karthikeyan Singaravelan : pickle and pickletools use argparse with FileType which is not automatically closed. Other cli interfaces like json [0], ast [1] use context manager to close filetype objects. 
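A sketch of the context-manager pattern referred to above, loosely modelled on the json/ast command-line interfaces; the argument name here is illustrative:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("infile", type=argparse.FileType("rb"))
args = parser.parse_args()
with args.infile as f:       # close the FileType-opened file explicitly
    data = f.read()
print(len(data))
```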
pickle : https://github.com/python/cpython/blob/af08db7bac3087aac313d052c1a6302bee7c9c89/Lib/pickle.py#L1799 mypickle >>> import pickle >>> with open("mypickle", "wb") as f: pickle.dump({"a": 1}, f) ./python -Wall -m pickle mypickle {'a': 1} sys:1: ResourceWarning: unclosed file <_io.BufferedReader name='mypickle'> pickletools : https://github.com/python/cpython/blob/af08db7bac3087aac313d052c1a6302bee7c9c89/Lib/pickletools.py#L2850-L2855 ./python -Wall -m pickletools mypickle -o mypickle.py sys:1: ResourceWarning: unclosed file <_io.BufferedReader name='mypickle'> sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name='mypickle.py' mode='w' encoding='UTF-8'> [0] https://github.com/python/cpython/blob/af08db7bac3087aac313d052c1a6302bee7c9c89/Lib/json/tool.py#L61 [1] https://github.com/python/cpython/blob/af08db7bac3087aac313d052c1a6302bee7c9c89/Lib/ast.py#L1510 ---------- components: Library (Lib) messages: 374269 nosy: alexandre.vassalotti, xtreak priority: normal severity: normal status: open title: pickle and pickletools cli interface doesn't close input and output file. type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jul 25 11:27:05 2020 From: report at bugs.python.org (Tomasz Pytel) Date: Sat, 25 Jul 2020 15:27:05 +0000 Subject: [New-bugs-announce] [issue41396] pystate.c:_PyCrossInterpreterData_Release() does not clear py exception on error Message-ID: <1595690825.61.0.00830808011082.issue41396@roundup.psfhosted.org> New submission from Tomasz Pytel : The call to _PyInterpreterState_LookUpID() may generate a Python exception but it is not explicitly cleared on error and no indicator is returned to signal failure. This can lead to a "a function returned a result with an error set" fatal error, and does in fact do so in a case I encountered in Modules/_xxsubinterpreters.c:channel_destroy(). ---------- components: Interpreter Core messages: 374270 nosy: Tomasz Pytel priority: normal severity: normal status: open title: pystate.c:_PyCrossInterpreterData_Release() does not clear py exception on error type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 04:25:52 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 26 Jul 2020 08:25:52 +0000 Subject: [New-bugs-announce] [issue41397] Restore default implementation of __ne__ in Counter Message-ID: <1595751952.59.0.424974344046.issue41397@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently collections.Counter implements both __eq__ and __ne__ methods. The problem is that if you subclass Counter and override its __eq__ method you will need to implement also the __ne__ method. Usually you do not need to implement __ne__ because the implementation inherited from the object class does the right thing in most cases (unless you implement NumPy or symbolic expressions). Also, the Python implementation of Counter.__ne__ is a tiny bit slower than the C implementation of object.__ne__. Counter.__ne__ was added because the implementation of __ne__ inherited from dict did not work correct for Counter. But we can just restore the default implementation: __ne__ = object.__ne__ Of all Python classes in the stdlib which implement __eq__ only Counter, WeakRef and some mock classes implement also __ne__. In case of Counter I think it is not necessary. 
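A toy illustration of the proposed pattern; MultiSet is a stand-in written for this sketch, not the real Counter source:

```python
class MultiSet(dict):
    def __eq__(self, other):
        if not isinstance(other, MultiSet):
            return NotImplemented
        return all(self.get(k, 0) == other.get(k, 0)
                   for k in self.keys() | other.keys())

    # Proposed approach: reuse the default negation, which delegates to
    # __eq__ and inverts the result, instead of hand-writing __ne__.
    __ne__ = object.__ne__

assert MultiSet(a=1) != MultiSet(a=2)
assert not (MultiSet(a=1) != MultiSet(a=1))
```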
---------- components: Library (Lib) messages: 374306 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Restore default implementation of __ne__ in Counter versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 05:58:23 2020 From: report at bugs.python.org (Magnus Johnsson) Date: Sun, 26 Jul 2020 09:58:23 +0000 Subject: [New-bugs-announce] [issue41398] cgi module, parse_multipart fails Message-ID: <1595757503.98.0.375306375596.issue41398@roundup.psfhosted.org> New submission from Magnus Johnsson : When using the cgi module, parse_multipart fails with the supplied file with the error: Invalid boundary in multipart form: b'' A sample program that demonstrates the error: import cgi f = open("60_Request.txt", "r") print(cgi.parse_multipart(f, {'boundary': b'BgTzK0jM20UH01naJdsmAWUj7sqqeoikGZvh3mo9', 'CONTENT-LENGTH': 3992})) This affects for instance Twisted, and all its dependencies. ---------- files: 60_Request.txt messages: 374307 nosy: Magnus Johnsson priority: normal severity: normal status: open title: cgi module, parse_multipart fails versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49340/60_Request.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 06:17:57 2020 From: report at bugs.python.org (wyz23x2) Date: Sun, 26 Jul 2020 10:17:57 +0000 Subject: [New-bugs-announce] [issue41399] Add stacklevel support for exceptions Message-ID: <1595758677.39.0.962051820395.issue41399@roundup.psfhosted.org> New submission from wyz23x2 : Now warnings.warn supports a stacklevel parameter. But many users want exceptions to support it too. Related: https://stackoverflow.com/questions/34175111/raise-an-exception-from-a-higher-level-a-la-warnings ---------- components: Interpreter Core messages: 374308 nosy: wyz23x2 priority: normal severity: normal status: open title: Add stacklevel support for exceptions type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 06:18:46 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 26 Jul 2020 10:18:46 +0000 Subject: [New-bugs-announce] [issue41400] Remove references to nonexisting __ne__ methods Message-ID: <1595758726.41.0.0930061303675.issue41400@roundup.psfhosted.org> New submission from Serhiy Storchaka : There is the documentation for method __ne__ implementations in classes Set, Mapping, Header, Charset, Binary, but all these implementations are removed now and the default implementation of object.__ne__ is used instead. 
---------- assignee: docs at python components: Documentation messages: 374309 nosy: barry, docs at python, maxking, r.david.murray, serhiy.storchaka priority: normal severity: normal status: open title: Remove references to nonexisting __ne__ methods type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 10:07:03 2020 From: report at bugs.python.org (Michael Felt) Date: Sun, 26 Jul 2020 14:07:03 +0000 Subject: [New-bugs-announce] [issue41401] Using non-ascii that require UTF-8 breaks AIX testing Message-ID: <1595772423.92.0.81107313537.issue41401@roundup.psfhosted.org> New submission from Michael Felt : issue41069 introduces tests for paths/files containing non-ascii characters. On AIX - since the merge of PR21035 and PR21156 - the bots have been broken, i.e., returning test failed. commit 700cfa8c90a90016638bac13c4efd03786b2b2a0 Author: Serhiy Storchaka Date: Thu Jun 25 17:56:31 2020 +0300 bpo-41069: Make TESTFN and the CWD for tests containing non-ascii characters. (GH-21035) commit f925407a19eeb9bf5f7640143979638adce2c677 Author: Serhiy Storchaka Date: Thu Jun 25 20:39:12 2020 +0300 [3.9] bpo-41069: Make TESTFN and the CWD for tests containing non-ascii characters. (GH-21035). (GH-21156) (cherry picked from commit 700cfa8c90a90016638bac13c4efd03786b2b2a0) ++++++++ Sadly, I cannot determine - exactly - where it is going wrong as the verbose results ends (says SUCCESS, but there is an ENV change, so bot says FAILED) as: ---------------------------------------------------------------------- Ran 614 tests in 59.122s OK (skipped=8) Warning -- files was modified by test_io Before: [] After: ['@test_23134518_tmp?'] test_io failed (env changed) in 59.7 sec == Tests result: SUCCESS == 1 test altered the execution environment: test_io Total duration: 59.8 sec ---------- components: IO, Tests messages: 374312 nosy: Michael.Felt priority: normal severity: normal status: open title: Using non-ascii that require UTF-8 breaks AIX testing type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 13:59:16 2020 From: report at bugs.python.org (Johannes Reiff) Date: Sun, 26 Jul 2020 17:59:16 +0000 Subject: [New-bugs-announce] [issue41402] email: ContentManager.set_content calls nonexistent method encode() on bytes Message-ID: <1595786356.19.0.0772009953039.issue41402@roundup.psfhosted.org> New submission from Johannes Reiff : If assigning binary content to an EmailMessage via set_content(), the function email.contentmanager.set_bytes_content() is called. This function fails when choosing the 7bit transfer encoding because of a call to data.decode('ascii'). 
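A hedged sketch of a call that reaches the code path described above (the payload is arbitrary, and whether it actually raises depends on the installed version):

```python
from email.message import EmailMessage

msg = EmailMessage()
# Bytes content plus an explicit 7bit CTE is routed through
# email.contentmanager.set_bytes_content().
msg.set_content(b"ascii payload", maintype="application",
                subtype="octet-stream", cte="7bit")
print(msg["Content-Transfer-Encoding"])
```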
---------- components: email messages: 374337 nosy: Johannes Reiff, barry, r.david.murray priority: normal severity: normal status: open title: email: ContentManager.set_content calls nonexistent method encode() on bytes versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 15:15:44 2020 From: report at bugs.python.org (webisteme) Date: Sun, 26 Jul 2020 19:15:44 +0000 Subject: [New-bugs-announce] [issue41403] Uncaught AttributeError in unittest.mock._get_target Message-ID: <1595790944.09.0.556229072962.issue41403@roundup.psfhosted.org> New submission from webisteme : When calling `mock.patch` incorrectly, as in the following example, an uncaught error is thrown: ```shell >>> from unittest import mock >>> class Foo: ... pass ... >>> mock.patch(Foo()) Traceback (most recent call last): File "", line 1, in File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/unittest/mock.py", line 1624, in patch getter, attribute = _get_target(target) File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/unittest/mock.py", line 1469, in _get_target target, attribute = target.rsplit('.', 1) AttributeError: 'Foo' object has no attribute 'rsplit' ``` This can happen when confusing `mock.patch` with `mock.patch.object`. However, the uncaught error is not informative, as it does not indicate that the wrong type of object was passed to `mock.patch`. ---------- components: Library (Lib) messages: 374339 nosy: webisteme priority: normal severity: normal status: open title: Uncaught AttributeError in unittest.mock._get_target type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 16:12:19 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 26 Jul 2020 20:12:19 +0000 Subject: [New-bugs-announce] [issue41404] IDLE: test iomenu Message-ID: <1595794339.44.0.389957783464.issue41404@roundup.psfhosted.org> New submission from Terry J. Reedy : Test parts of iomenu changed by #41158 and fixed by #41300 and #41373. ---------- assignee: terry.reedy components: IDLE messages: 374342 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: test iomenu type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jul 26 17:11:06 2020 From: report at bugs.python.org (YoSTEALTH) Date: Sun, 26 Jul 2020 21:11:06 +0000 Subject: [New-bugs-announce] [issue41405] python 3.9.0b5 test Message-ID: <1595797866.02.0.778171485398.issue41405@roundup.psfhosted.org> New submission from YoSTEALTH : >>> /opt/python/3.9.0/bin/python3 -m test -uall == CPython 3.9.0b5 (default, Jul 22 2020, 13:13:23) [GCC 10.1.0] == Linux-5.8.0-1-MANJARO-x86_64-with-glibc2.31 little-endian == cwd: /tmp/test_python_39605? 
== CPU count: 16 == encodings: locale=UTF-8, FS=utf-8 0:00:00 load avg: 1.24 Run tests sequentially 0:00:00 load avg: 1.24 [ 1/425] test_grammar 0:00:00 load avg: 1.24 [ 2/425] test_opcodes 0:00:00 load avg: 1.24 [ 3/425] test_dict 0:00:00 load avg: 1.24 [ 4/425] test_builtin 0:00:00 load avg: 1.24 [ 5/425] test_exceptions 0:00:02 load avg: 1.22 [ 6/425] test_types 0:00:02 load avg: 1.22 [ 7/425] test_unittest 0:00:05 load avg: 1.22 [ 8/425] test_doctest 0:00:06 load avg: 1.20 [ 9/425] test_doctest2 0:00:07 load avg: 1.20 [ 10/425] test_support 0:00:09 load avg: 1.20 [ 11/425] test___all__ 0:00:11 load avg: 1.20 [ 12/425] test___future__ 0:00:11 load avg: 1.27 [ 13/425] test__locale 0:00:11 load avg: 1.27 [ 14/425] test__opcode 0:00:12 load avg: 1.27 [ 15/425] test__osx_support 0:00:12 load avg: 1.27 [ 16/425] test__xxsubinterpreters 0:00:13 load avg: 1.27 [ 17/425] test_abc 0:00:14 load avg: 1.27 [ 18/425] test_abstract_numbers 0:00:14 load avg: 1.27 [ 19/425] test_aifc 0:00:15 load avg: 1.27 [ 20/425] test_argparse 0:00:18 load avg: 1.24 [ 21/425] test_array 0:00:21 load avg: 1.22 [ 22/425] test_asdl_parser test_asdl_parser skipped -- test irrelevant for an installed Python 0:00:21 load avg: 1.22 [ 23/425] test_ast -- test_asdl_parser skipped 0:00:24 load avg: 1.22 [ 24/425] test_asyncgen 0:00:25 load avg: 1.22 [ 25/425] test_asynchat 0:00:27 load avg: 1.13 [ 26/425] test_asyncio 0:02:10 load avg: 1.39 [ 27/425] test_asyncore -- test_asyncio passed in 1 min 43 sec 0:02:12 load avg: 1.28 [ 28/425] test_atexit 0:02:12 load avg: 1.28 [ 29/425] test_audioop 0:02:13 load avg: 1.28 [ 30/425] test_audit 0:02:14 load avg: 1.28 [ 31/425] test_augassign 0:02:15 load avg: 1.28 [ 32/425] test_base64 0:02:15 load avg: 1.28 [ 33/425] test_baseexception 0:02:15 load avg: 1.28 [ 34/425] test_bdb 0:02:16 load avg: 1.28 [ 35/425] test_bigaddrspace 0:02:16 load avg: 1.26 [ 36/425] test_bigmem 0:02:17 load avg: 1.26 [ 37/425] test_binascii 0:02:17 load avg: 1.26 [ 38/425] test_binhex 0:02:17 load avg: 1.26 [ 39/425] test_binop 0:02:18 load avg: 1.26 [ 40/425] test_bisect 0:02:18 load avg: 1.26 [ 41/425] test_bool 0:02:18 load avg: 1.26 [ 42/425] test_buffer 0:02:29 load avg: 1.30 [ 43/425] test_bufio 0:02:30 load avg: 1.30 [ 44/425] test_bytes 0:02:32 load avg: 1.35 [ 45/425] test_bz2 0:02:37 load avg: 1.33 [ 46/425] test_c_locale_coercion 0:02:40 load avg: 1.33 [ 47/425] test_calendar 0:02:44 load avg: 1.38 [ 48/425] test_call 0:02:44 load avg: 1.38 [ 49/425] test_capi 0:03:07 load avg: 1.23 [ 50/425] test_cgi 0:03:08 load avg: 1.23 [ 51/425] test_cgitb 0:03:08 load avg: 1.23 [ 52/425] test_charmapcodec 0:03:09 load avg: 1.23 [ 53/425] test_check_c_globals test_check_c_globals skipped -- c-analyzer directory could not be found 0:03:09 load avg: 1.23 [ 54/425] test_class -- test_check_c_globals skipped 0:03:09 load avg: 1.23 [ 55/425] test_clinic test_clinic skipped -- clinic directory could not be found 0:03:10 load avg: 1.23 [ 56/425] test_cmath -- test_clinic skipped 0:03:10 load avg: 1.23 [ 57/425] test_cmd 0:03:10 load avg: 1.23 [ 58/425] test_cmd_line 0:03:14 load avg: 1.21 [ 59/425] test_cmd_line_script 0:03:18 load avg: 1.27 [ 60/425] test_code 0:03:19 load avg: 1.27 [ 61/425] test_code_module 0:03:19 load avg: 1.27 [ 62/425] test_codeccallbacks 0:03:19 load avg: 1.27 [ 63/425] test_codecencodings_cn 0:03:20 load avg: 1.27 [ 64/425] test_codecencodings_hk 0:03:20 load avg: 1.27 [ 65/425] test_codecencodings_iso2022 0:03:21 load avg: 1.25 [ 66/425] test_codecencodings_jp 0:03:22 load avg: 1.25 [ 
67/425] test_codecencodings_kr 0:03:23 load avg: 1.25 [ 68/425] test_codecencodings_tw 0:03:23 load avg: 1.25 [ 69/425] test_codecmaps_cn 0:03:27 load avg: 1.23 [ 70/425] test_codecmaps_hk 0:03:28 load avg: 1.23 [ 71/425] test_codecmaps_jp 0:03:33 load avg: 1.13 [ 72/425] test_codecmaps_kr 0:03:37 load avg: 1.12 [ 73/425] test_codecmaps_tw 0:03:40 load avg: 1.12 [ 74/425] test_codecs 0:03:42 load avg: 1.83 [ 75/425] test_codeop 0:03:42 load avg: 1.83 [ 76/425] test_collections 0:03:43 load avg: 1.83 [ 77/425] test_colorsys 0:03:44 load avg: 1.83 [ 78/425] test_compare 0:03:44 load avg: 1.83 [ 79/425] test_compile 0:03:47 load avg: 1.85 [ 80/425] test_compileall 0:04:00 load avg: 1.71 [ 81/425] test_complex 0:04:01 load avg: 1.71 [ 82/425] test_concurrent_futures 0:06:35 load avg: 1.28 [ 83/425] test_configparser -- test_concurrent_futures passed in 2 min 34 sec 0:06:36 load avg: 1.26 [ 84/425] test_contains 0:06:37 load avg: 1.26 [ 85/425] test_context 0:06:39 load avg: 1.26 [ 86/425] test_contextlib 0:06:39 load avg: 1.26 [ 87/425] test_contextlib_async Task was destroyed but it is pending! task: ()>> Task was destroyed but it is pending! task: ()>> Task was destroyed but it is pending! task: ()>> 0:06:40 load avg: 1.26 [ 88/425] test_copy 0:06:40 load avg: 1.26 [ 89/425] test_copyreg 0:06:40 load avg: 1.26 [ 90/425] test_coroutines 0:06:42 load avg: 1.24 [ 91/425] test_cprofile 0:06:43 load avg: 1.24 [ 92/425] test_crashers 0:06:43 load avg: 1.24 [ 93/425] test_crypt 0:06:44 load avg: 1.24 [ 94/425] test_csv 0:06:45 load avg: 1.24 [ 95/425] test_ctypes 0:06:48 load avg: 1.38 [ 96/425] test_curses test test_curses failed -- Traceback (most recent call last): File "/opt/python/3.9.0/lib/python3.9/test/test_curses.py", line 289, in test_colors_funcs curses.pair_content(curses.COLOR_PAIRS - 1) OverflowError: signed short integer is greater than maximum 0:06:49 load avg: 1.38 [ 97/425/1] test_dataclasses -- test_curses failed 0:06:49 load avg: 1.38 [ 98/425/1] test_datetime 0:06:55 load avg: 1.35 [ 99/425/1] test_dbm 0:06:55 load avg: 1.35 [100/425/1] test_dbm_dumb 0:06:56 load avg: 1.35 [101/425/1] test_dbm_gnu 0:06:56 load avg: 1.56 [102/425/1] test_dbm_ndbm 0:06:57 load avg: 1.56 [103/425/1] test_decimal 0:07:11 load avg: 1.58 [104/425/1] test_decorators 0:07:12 load avg: 1.58 [105/425/1] test_defaultdict 0:07:12 load avg: 1.58 [106/425/1] test_deque 0:07:17 load avg: 1.54 [107/425/1] test_descr 0:07:20 load avg: 1.54 [108/425/1] test_descrtut 0:07:21 load avg: 1.54 [109/425/1] test_devpoll test_devpoll skipped -- test works only on Solaris OS family 0:07:21 load avg: 1.54 [110/425/1] test_dict_version -- test_devpoll skipped 0:07:22 load avg: 1.57 [111/425/1] test_dictcomps 0:07:22 load avg: 1.57 [112/425/1] test_dictviews 0:07:22 load avg: 1.57 [113/425/1] test_difflib 0:07:24 load avg: 1.57 [114/425/1] test_dis 0:07:25 load avg: 1.57 [115/425/1] test_distutils 0:07:30 load avg: 1.53 [116/425/1] test_docxmlrpc 0:07:34 load avg: 1.41 [117/425/1] test_dtrace 0:07:35 load avg: 1.41 [118/425/1] test_dynamic 0:07:35 load avg: 1.41 [119/425/1] test_dynamicclassattribute 0:07:36 load avg: 1.41 [120/425/1] test_eintr 0:07:43 load avg: 1.26 [121/425/1] test_email 0:07:50 load avg: 1.24 [122/425/1] test_embed 0:07:50 load avg: 1.24 [123/425/1] test_ensurepip 0:07:51 load avg: 1.24 [124/425/1] test_enum 0:07:51 load avg: 1.30 [125/425/1] test_enumerate 0:07:52 load avg: 1.30 [126/425/1] test_eof 0:07:53 load avg: 1.30 [127/425/1] test_epoll 0:07:55 load avg: 1.30 [128/425/1] test_errno 0:07:56 load 
avg: 1.30 [129/425/1] test_exception_hierarchy 0:07:56 load avg: 1.30 [130/425/1] test_exception_variations 0:07:57 load avg: 1.36 [131/425/1] test_extcall 0:07:57 load avg: 1.36 [132/425/1] test_faulthandler 0:08:23 load avg: 1.24 [133/425/1] test_fcntl 0:08:23 load avg: 1.24 [134/425/1] test_file 0:08:24 load avg: 1.24 [135/425/1] test_file_eintr 0:08:26 load avg: 1.22 [136/425/1] test_filecmp 0:08:27 load avg: 1.22 [137/425/1] test_fileinput 0:08:28 load avg: 1.22 [138/425/1] test_fileio 0:08:28 load avg: 1.22 [139/425/1] test_finalization 0:08:32 load avg: 1.20 [140/425/1] test_float 0:08:32 load avg: 1.20 [141/425/1] test_flufl 0:08:33 load avg: 1.20 [142/425/1] test_fnmatch 0:08:33 load avg: 1.20 [143/425/1] test_fork1 0:08:39 load avg: 1.19 [144/425/1] test_format 0:08:39 load avg: 1.19 [145/425/1] test_fractions 0:08:40 load avg: 1.19 [146/425/1] test_frame 0:08:41 load avg: 1.19 [147/425/1] test_frozen 0:08:41 load avg: 1.19 [148/425/1] test_fstring 0:08:42 load avg: 1.17 [149/425/1] test_ftplib 0:08:46 load avg: 1.17 [150/425/1] test_funcattrs 0:08:46 load avg: 1.24 [151/425/1] test_functools 0:08:47 load avg: 1.24 [152/425/1] test_future 0:08:48 load avg: 1.24 [153/425/1] test_future3 0:08:48 load avg: 1.24 [154/425/1] test_future4 0:08:49 load avg: 1.24 [155/425/1] test_future5 0:08:49 load avg: 1.24 [156/425/1] test_gc 0:08:56 load avg: 1.36 [157/425/1] test_gdb test_gdb skipped -- test_gdb only works on source builds at the moment. 0:08:58 load avg: 1.36 [158/425/1] test_generator_stop -- test_gdb skipped 0:08:58 load avg: 1.36 [159/425/1] test_generators 0:08:59 load avg: 1.36 [160/425/1] test_genericalias 0:09:00 load avg: 1.36 [161/425/1] test_genericclass 0:09:00 load avg: 1.36 [162/425/1] test_genericpath 0:09:01 load avg: 1.36 [163/425/1] test_genexps 0:09:01 load avg: 1.41 [164/425/1] test_getargs2 0:09:02 load avg: 1.41 [165/425/1] test_getopt 0:09:02 load avg: 1.41 [166/425/1] test_getpass 0:09:03 load avg: 1.41 [167/425/1] test_gettext 0:09:04 load avg: 1.41 [168/425/1] test_glob 0:09:05 load avg: 1.41 [169/425/1] test_global 0:09:05 load avg: 1.41 [170/425/1] test_graphlib 0:09:06 load avg: 1.41 [171/425/1] test_grp 0:09:06 load avg: 1.41 [172/425/1] test_gzip 0:09:07 load avg: 1.45 [173/425/1] test_hash 0:09:09 load avg: 1.45 [174/425/1] test_hashlib 0:09:13 load avg: 1.42 [175/425/1] test_heapq 0:09:14 load avg: 1.42 [176/425/1] test_hmac 0:09:15 load avg: 1.42 [177/425/1] test_html 0:09:16 load avg: 1.42 [178/425/1] test_htmlparser 0:09:16 load avg: 1.42 [179/425/1] test_http_cookiejar 0:09:17 load avg: 1.63 [180/425/1] test_http_cookies 0:09:18 load avg: 1.63 [181/425/1] test_httplib 0:09:20 load avg: 1.63 [182/425/1] test_httpservers 0:09:23 load avg: 1.58 [183/425/1] test_idle 0:09:33 load avg: 1.57 [184/425/1] test_imaplib 0:09:56 load avg: 1.27 [185/425/1] test_imghdr 0:09:57 load avg: 1.27 [186/425/1] test_imp 0:09:58 load avg: 1.27 [187/425/1] test_import 0:10:00 load avg: 1.27 [188/425/1] test_importlib 0:10:03 load avg: 1.40 [189/425/1] test_index 0:10:04 load avg: 1.40 [190/425/1] test_inspect 0:10:06 load avg: 1.40 [191/425/1] test_int 0:10:06 load avg: 1.45 [192/425/1] test_int_literal 0:10:07 load avg: 1.45 [193/425/1] test_io 0:11:00 load avg: 1.22 [194/425/1] test_ioctl -- test_io passed in 53.3 sec 0:11:01 load avg: 1.22 [195/425/1] test_ipaddress 0:11:02 load avg: 1.28 [196/425/1] test_isinstance 0:11:02 load avg: 1.28 [197/425/1] test_iter 0:11:03 load avg: 1.28 [198/425/1] test_iterlen 0:11:04 load avg: 1.28 [199/425/1] test_itertools 0:11:09 
load avg: 1.34 [200/425/1] test_json 0:11:13 load avg: 1.47 [201/425/1] test_keyword 0:11:14 load avg: 1.47 [202/425/1] test_keywordonlyarg 0:11:15 load avg: 1.47 [203/425/1] test_kqueue test_kqueue skipped -- test works only on BSD 0:11:15 load avg: 1.47 [204/425/1] test_largefile -- test_kqueue skipped 0:11:16 load avg: 1.47 [205/425/1] test_lib2to3 0:11:58 load avg: 1.38 [206/425/1] test_linecache -- test_lib2to3 passed in 42.0 sec 0:11:58 load avg: 1.38 [207/425/1] test_list 0:12:00 load avg: 1.38 [208/425/1] test_listcomps 0:12:00 load avg: 1.38 [209/425/1] test_lltrace 0:12:01 load avg: 1.38 [210/425/1] test_locale 0:12:02 load avg: 1.43 [211/425/1] test_logging 0:12:21 load avg: 1.31 [212/425/1] test_long 0:12:26 load avg: 1.31 [213/425/1] test_longexp 0:12:27 load avg: 1.28 [214/425/1] test_lzma 0:12:32 load avg: 1.26 [215/425/1] test_mailbox 0:12:34 load avg: 1.26 [216/425/1] test_mailcap 0:12:34 load avg: 1.26 [217/425/1] test_marshal 0:12:35 load avg: 1.26 [218/425/1] test_math 0:12:39 load avg: 1.24 [219/425/1] test_memoryio 0:12:40 load avg: 1.24 [220/425/1] test_memoryview 0:12:44 load avg: 1.22 [221/425/1] test_metaclass 0:12:44 load avg: 1.22 [222/425/1] test_mimetypes 0:12:45 load avg: 1.22 [223/425/1] test_minidom 0:12:46 load avg: 1.22 [224/425/1] test_mmap 0:12:47 load avg: 1.20 [225/425/1] test_module 0:12:48 load avg: 1.20 [226/425/1] test_modulefinder 0:12:49 load avg: 1.20 [227/425/1] test_msilib test_msilib skipped -- No module named 'msilib' 0:12:50 load avg: 1.20 [228/425/1] test_multibytecodec -- test_msilib skipped 0:12:52 load avg: 1.18 [229/425/1] test_multiprocessing_fork abc0:13:57 load avg: 1.30 [230/425/1] test_multiprocessing_forkserver -- test_multiprocessing_fork passed in 1 min 5 sec 0:15:05 load avg: 0.97 [231/425/1] test_multiprocessing_main_handling -- test_multiprocessing_forkserver passed in 1 min 7 sec 0:15:15 load avg: 1.46 [232/425/1] test_multiprocessing_spawn 0:16:39 load avg: 1.64 [233/425/1] test_named_expressions -- test_multiprocessing_spawn passed in 1 min 24 sec 0:16:39 load avg: 1.64 [234/425/1] test_netrc 0:16:40 load avg: 1.64 [235/425/1] test_nis 0:16:41 load avg: 1.64 [236/425/1] test_nntplib 0:17:10 load avg: 1.43 [237/425/1] test_ntpath 0:17:11 load avg: 1.43 [238/425/1] test_numeric_tower 0:17:12 load avg: 1.48 [239/425/1] test_openpty 0:17:13 load avg: 1.48 [240/425/1] test_operator 0:17:13 load avg: 1.48 [241/425/1] test_optparse 0:17:14 load avg: 1.48 [242/425/1] test_ordered_dict 0:17:20 load avg: 1.52 [243/425/1] test_os 0:17:25 load avg: 1.56 [244/425/1] test_ossaudiodev test_ossaudiodev skipped -- [Errno 2] No such file or directory: '/dev/dsp' 0:17:25 load avg: 1.56 [245/425/1] test_osx_env -- test_ossaudiodev skipped 0:17:26 load avg: 1.56 [246/425/1] test_parser 0:17:27 load avg: 1.60 [247/425/1] test_pathlib 0:17:29 load avg: 1.60 [248/425/1] test_pdb 0:17:31 load avg: 1.60 [249/425/1] test_peepholer 0:17:32 load avg: 1.87 [250/425/1] test_peg_generator 0:17:33 load avg: 1.87 [251/425/1] test_peg_parser 0:17:34 load avg: 1.87 [252/425/1] test_pickle 0:17:52 load avg: 1.70 [253/425/1] test_picklebuffer 0:17:53 load avg: 1.70 [254/425/1] test_pickletools 0:17:56 load avg: 1.70 [255/425/1] test_pipes 0:17:57 load avg: 1.72 [256/425/1] test_pkg 0:17:58 load avg: 1.72 [257/425/1] test_pkgutil 0:17:59 load avg: 1.72 [258/425/1] test_platform 0:18:00 load avg: 1.72 [259/425/1] test_plistlib 0:18:01 load avg: 1.72 [260/425/1] test_poll 0:18:13 load avg: 1.55 [261/425/1] test_popen 0:18:14 load avg: 1.55 [262/425/1] 
test_poplib 0:18:18 load avg: 1.51 [263/425/1] test_positional_only_arg 0:18:19 load avg: 1.51 [264/425/1] test_posix 0:18:21 load avg: 1.51 [265/425/1] test_posixpath 0:18:22 load avg: 1.55 [266/425/1] test_pow 0:18:23 load avg: 1.55 [267/425/1] test_pprint 0:18:24 load avg: 1.55 [268/425/1] test_print 0:18:25 load avg: 1.55 [269/425/1] test_profile 0:18:26 load avg: 1.55 [270/425/1] test_property 0:18:27 load avg: 1.50 [271/425/1] test_pstats 0:18:28 load avg: 1.50 [272/425/1] test_pty 0:18:29 load avg: 1.50 [273/425/1] test_pulldom 0:18:29 load avg: 1.50 [274/425/1] test_pwd test test_pwd failed -- Traceback (most recent call last): File "/opt/python/3.9.0/lib/python3.9/test/test_pwd.py", line 54, in test_values_extended self.assertIn(pwd.getpwuid(e.pw_uid), entriesbyuid[e.pw_uid]) AssertionError: pwd.struct_passwd(pw_name='nobody', pw_passwd='*', pw_uid=65534, pw_gid=65534, pw_gecos='User Nobody', pw_dir='/', pw_shell='/usr/bin/nologin') not found in [pwd.struct_passwd(pw_name='nobody', pw_passwd='x', pw_uid=65534, pw_gid=65534, pw_gecos='nobody', pw_dir='/', pw_shell='/usr/bin/nologin')] 0:18:30 load avg: 1.50 [275/425/2] test_py_compile -- test_pwd failed 0:18:31 load avg: 1.50 [276/425/2] test_pyclbr 0:18:34 load avg: 1.62 [277/425/2] test_pydoc 0:19:08 load avg: 1.59 [278/425/2] test_pyexpat -- test_pydoc passed in 33.2 sec 0:19:08 load avg: 1.59 [279/425/2] test_queue 0:19:18 load avg: 1.82 [280/425/2] test_quopri 0:19:19 load avg: 1.82 [281/425/2] test_raise 0:19:20 load avg: 1.82 [282/425/2] test_random 0:19:22 load avg: 1.91 [283/425/2] test_range 0:19:24 load avg: 1.91 [284/425/2] test_re 0:19:26 load avg: 1.91 [285/425/2] test_readline 0:19:27 load avg: 1.92 [286/425/2] test_regrtest 0:19:44 load avg: 1.93 [287/425/2] test_repl 0:19:45 load avg: 1.93 [288/425/2] test_reprlib 0:19:46 load avg: 1.93 [289/425/2] test_resource 0:19:47 load avg: 1.86 [290/425/2] test_richcmp 0:19:48 load avg: 1.86 [291/425/2] test_rlcompleter 0:19:49 load avg: 1.86 [292/425/2] test_robotparser 0:19:50 load avg: 1.86 [293/425/2] test_runpy 0:19:52 load avg: 1.95 [294/425/2] test_sax 0:19:53 load avg: 1.95 [295/425/2] test_sched 0:19:54 load avg: 1.95 [296/425/2] test_scope 0:19:55 load avg: 1.95 [297/425/2] test_script_helper 0:19:56 load avg: 1.95 [298/425/2] test_secrets 0:19:57 load avg: 1.87 [299/425/2] test_select 0:20:09 load avg: 1.73 [300/425/2] test_selectors 0:20:30 load avg: 1.65 [301/425/2] test_set 0:20:33 load avg: 1.60 [302/425/2] test_setcomps 0:20:34 load avg: 1.60 [303/425/2] test_shelve 0:20:35 load avg: 1.60 [304/425/2] test_shlex 0:20:36 load avg: 1.60 [305/425/2] test_shutil 0:20:37 load avg: 1.63 [306/425/2] test_signal 0:21:26 load avg: 1.31 [307/425/2] test_site -- test_signal passed in 48.4 sec 0:21:27 load avg: 1.77 [308/425/2] test_slice 0:21:28 load avg: 1.77 [309/425/2] test_smtpd 0:21:29 load avg: 1.77 [310/425/2] test_smtplib 0:21:31 load avg: 1.77 [311/425/2] test_smtpnet 0:21:34 load avg: 1.79 [312/425/2] test_sndhdr 0:21:35 load avg: 1.79 [313/425/2] test_socket 0:22:01 load avg: 1.30 [314/425/2] test_socketserver 0:22:03 load avg: 1.35 [315/425/2] test_sort 0:22:04 load avg: 1.35 [316/425/2] test_source_encoding 0:22:05 load avg: 1.35 [317/425/2] test_spwd 0:22:06 load avg: 1.35 [318/425/2] test_sqlite 0:22:08 load avg: 1.48 [319/425/2] test_ssl Resource 'ipv6.google.com' is not available 0:22:13 load avg: 1.37 [320/425/2] test_startfile test_startfile skipped -- object has no attribute 'startfile' 0:22:13 load avg: 1.37 [321/425/2] test_stat -- test_startfile 
skipped 0:22:14 load avg: 1.37 [322/425/2] test_statistics 0:22:24 load avg: 1.46 [323/425/2] test_strftime 0:22:25 load avg: 1.46 [324/425/2] test_string 0:22:26 load avg: 1.46 [325/425/2] test_string_literals 0:22:28 load avg: 1.51 [326/425/2] test_stringprep 0:22:28 load avg: 1.51 [327/425/2] test_strptime 0:22:29 load avg: 1.51 [328/425/2] test_strtod 0:22:31 load avg: 1.51 [329/425/2] test_struct 0:22:33 load avg: 1.55 [330/425/2] test_structmembers 0:22:34 load avg: 1.55 [331/425/2] test_structseq 0:22:34 load avg: 1.55 [332/425/2] test_subclassinit 0:22:35 load avg: 1.55 [333/425/2] test_subprocess 0:23:48 load avg: 1.45 [334/425/2] test_sunau -- test_subprocess passed in 1 min 12 sec 0:23:49 load avg: 1.45 [335/425/2] test_sundry 0:23:50 load avg: 1.45 [336/425/2] test_super 0:23:50 load avg: 1.45 [337/425/2] test_symbol 0:23:51 load avg: 1.45 [338/425/2] test_symtable 0:23:52 load avg: 1.50 [339/425/2] test_syntax 0:23:53 load avg: 1.50 [340/425/2] test_sys 0:23:57 load avg: 1.46 [341/425/2] test_sys_setprofile 0:23:58 load avg: 1.46 [342/425/2] test_sys_settrace 0:23:59 load avg: 1.46 [343/425/2] test_sysconfig 0:24:00 load avg: 1.46 [344/425/2] test_syslog 0:24:01 load avg: 1.46 [345/425/2] test_tabnanny 0:24:02 load avg: 1.42 [346/425/2] test_tarfile 0:24:14 load avg: 1.66 [347/425/2] test_tcl 0:24:20 load avg: 1.69 [348/425/2] test_telnetlib 0:24:21 load avg: 1.69 [349/425/2] test_tempfile 0:24:22 load avg: 1.71 [350/425/2] test_textwrap 0:24:23 load avg: 1.71 [351/425/2] test_thread 0:24:25 load avg: 1.71 [352/425/2] test_threadedtempfile 0:24:26 load avg: 1.71 [353/425/2] test_threading 0:24:39 load avg: 2.13 [354/425/2] test_threading_local 0:24:41 load avg: 2.13 [355/425/2] test_threadsignals 0:24:47 load avg: 2.03 [356/425/2] test_time 0:24:50 load avg: 2.03 [357/425/2] test_timeit 0:24:51 load avg: 2.03 [358/425/2] test_timeout 0:25:01 load avg: 1.87 [359/425/2] test_tix 0:25:02 load avg: 1.88 [360/425/2] test_tk test test_tk failed -- Traceback (most recent call last): File "/opt/python/3.9.0/lib/python3.9/tkinter/test/test_tkinter/test_widgets.py", line 939, in test_from self.checkFloatParam(widget, 'from', 100, 14.9, 15.1, conv=float_round) File "/opt/python/3.9.0/lib/python3.9/tkinter/test/widget_tests.py", line 106, in checkFloatParam self.checkParam(widget, name, value, conv=conv, **kwargs) File "/opt/python/3.9.0/lib/python3.9/tkinter/test/widget_tests.py", line 63, in checkParam self.assertEqual2(widget[name], expected, eq=eq) File "/opt/python/3.9.0/lib/python3.9/tkinter/test/widget_tests.py", line 47, in assertEqual2 self.assertEqual(actual, expected, msg) AssertionError: 14.9 != 15.0 0:25:15 load avg: 1.96 [361/425/3] test_tokenize -- test_tk failed 0:26:26 load avg: 1.30 [362/425/3] test_tools -- test_tokenize passed in 1 min 10 sec 0:26:27 load avg: 1.30 [363/425/3] test_trace 0:26:34 load avg: 1.41 [364/425/3] test_traceback 0:26:36 load avg: 1.41 [365/425/3] test_tracemalloc 0:26:39 load avg: 1.53 [366/425/3] test_ttk_guionly 0:26:53 load avg: 2.09 [367/425/3] test_ttk_textonly 0:26:54 load avg: 2.09 [368/425/3] test_tuple 0:27:09 load avg: 2.08 [369/425/3] test_turtle 0:27:10 load avg: 2.08 [370/425/3] test_type_comments 0:27:11 load avg: 2.08 [371/425/3] test_typechecks 0:27:12 load avg: 2.08 [372/425/3] test_typing 0:27:13 load avg: 1.99 [373/425/3] test_ucn 0:27:14 load avg: 1.99 [374/425/3] test_unary 0:27:15 load avg: 1.99 [375/425/3] test_unicode 0:27:21 load avg: 1.91 [376/425/3] test_unicode_file 0:27:22 load avg: 1.91 [377/425/3] 
test_unicode_file_functions 0:27:23 load avg: 1.84 [378/425/3] test_unicode_identifiers 0:27:24 load avg: 1.84 [379/425/3] test_unicodedata 0:27:45 load avg: 2.02 [380/425/3] test_univnewlines 0:27:46 load avg: 2.02 [381/425/3] test_unpack 0:27:47 load avg: 2.02 [382/425/3] test_unpack_ex 0:27:48 load avg: 2.01 [383/425/3] test_unparse 0:28:49 load avg: 2.04 [384/425/3] test_urllib -- test_unparse passed in 1 min 0:28:50 load avg: 2.04 [385/425/3] test_urllib2 0:28:52 load avg: 2.12 [386/425/3] test_urllib2_localnet 0:28:54 load avg: 2.12 [387/425/3] test_urllib2net 0:29:02 load avg: 1.95 [388/425/3] test_urllib_response 0:29:03 load avg: 1.95 [389/425/3] test_urllibnet 0:29:06 load avg: 1.95 [390/425/3] test_urlparse 0:29:08 load avg: 2.19 [391/425/3] test_userdict 0:29:09 load avg: 2.19 [392/425/3] test_userlist 0:29:10 load avg: 2.19 [393/425/3] test_userstring 0:29:13 load avg: 2.18 [394/425/3] test_utf8_mode 0:29:15 load avg: 2.18 [395/425/3] test_utf8source 0:29:16 load avg: 2.18 [396/425/3] test_uu 0:29:17 load avg: 2.08 [397/425/3] test_uuid 0:29:19 load avg: 2.08 [398/425/3] test_venv 0:29:33 load avg: 2.00 [399/425/3] test_wait3 0:29:39 load avg: 2.00 [400/425/3] test_wait4 0:29:45 load avg: 2.16 [401/425/3] test_warnings 0:29:47 load avg: 2.16 [402/425/3] test_wave 0:29:48 load avg: 2.23 [403/425/3] test_weakref 0:30:35 load avg: 1.86 [404/425/3] test_weakset -- test_weakref passed in 47.2 sec 0:30:37 load avg: 1.86 [405/425/3] test_webbrowser 0:30:39 load avg: 1.87 [406/425/3] test_winconsoleio test_winconsoleio skipped -- test only relevant on win32 0:30:39 load avg: 1.87 [407/425/3] test_winreg -- test_winconsoleio skipped test_winreg skipped -- No module named 'winreg' 0:30:39 load avg: 1.87 [408/425/3] test_winsound -- test_winreg skipped test_winsound skipped -- No module named 'winsound' 0:30:40 load avg: 1.87 [409/425/3] test_with -- test_winsound skipped 0:30:41 load avg: 1.87 [410/425/3] test_wsgiref 0:30:42 load avg: 1.87 [411/425/3] test_xdrlib 0:30:43 load avg: 1.88 [412/425/3] test_xml_dom_minicompat 0:30:44 load avg: 1.88 [413/425/3] test_xml_etree 0:30:47 load avg: 1.88 [414/425/3] test_xml_etree_c 0:30:51 load avg: 1.89 [415/425/3] test_xmlrpc 0:30:56 load avg: 1.90 [416/425/3] test_xmlrpc_net 0:30:57 load avg: 1.90 [417/425/3] test_xxtestfuzz 0:30:58 load avg: 1.99 [418/425/3] test_yield_from 0:30:59 load avg: 1.99 [419/425/3] test_zipapp 0:31:00 load avg: 1.99 [420/425/3] test_zipfile 0:31:11 load avg: 1.98 [421/425/3] test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run 0:31:12 load avg: 1.98 [422/425/3] test_zipimport -- test_zipfile64 skipped (resource denied) 0:31:13 load avg: 1.90 [423/425/3] test_zipimport_support 0:31:15 load avg: 1.90 [424/425/3] test_zlib 0:31:17 load avg: 1.90 [425/425/3] test_zoneinfo == Tests result: FAILURE == 409 tests OK. 
3 tests failed: test_curses test_pwd test_tk 13 tests skipped: test_asdl_parser test_check_c_globals test_clinic test_devpoll test_gdb test_kqueue test_msilib test_ossaudiodev test_startfile test_winconsoleio test_winreg test_winsound test_zipfile64 Total duration: 31 min 18 sec Tests result: FAILURE ---------- components: Tests messages: 374344 nosy: YoSTEALTH priority: normal severity: normal status: open title: python 3.9.0b5 test _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 04:26:25 2020 From: report at bugs.python.org (Frost Ming) Date: Mon, 27 Jul 2020 08:26:25 +0000 Subject: [New-bugs-announce] [issue41406] BufferedReader causes Popen.communicate losing the remaining output. Message-ID: <1595838385.02.0.156491511968.issue41406@roundup.psfhosted.org> New submission from Frost Ming : The following snippet behaves differently between Windows and POSIX. import subprocess import time p = subprocess.Popen("ls -l", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) print(p.stdout.read(1)) # read 1 byte print(p.communicate()) # Returns empty output It works fine on Windows and Python 2.x(communicate() returning the remaining output). So from the best guess it should be the expected behavior. The reason behind this is that Popen.stdout is a BufferedReader. It stores all output in the buffer when calling read(). However, communicate() and the lower API _communicate() use a lower level method os.read() to get the output, which does not respect the underlying buffer. When an empty output is retrieved the file object is closed then. First time to submit a bug report and pardon me if I am getting anything wrong. ---------- components: 2to3 (2.x to 3.x conversion tool), IO, Library (Lib) messages: 374366 nosy: Frost Ming, brett.cannon, vstinner priority: normal severity: normal status: open title: BufferedReader causes Popen.communicate losing the remaining output. versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 06:01:38 2020 From: report at bugs.python.org (DarrenDanielDay) Date: Mon, 27 Jul 2020 10:01:38 +0000 Subject: [New-bugs-announce] [issue41407] Tricky behavior of builtin-function map Message-ID: <1595844098.22.0.235301411882.issue41407@roundup.psfhosted.org> New submission from DarrenDanielDay : # The following is a tricky example: # This is an example function that possibly raises `StopIteration`. # In some cases, this function may raise `StopIteration` in the first iteration. def raise_stop_iteration(param): if param == 10: raise StopIteration # Suppose this function will do some simple calculation return -param # Then when we use builtin-function `map` and for-loop: print('example 1'.center(30, '=')) for item in map(raise_stop_iteration, range(5)): print(item) print('end of example 1'.center(30, '=')) # It works well. The output of example 1 is 0 to -4. # But the following can be triky: print('example 2'.center(30, '=')) for item in map(raise_stop_iteration, range(10, 20)): print(item) print('end of example 2'.center(30, '=')) # The output of exapmle 2 is just nothing, # and no errors are reported, # because `map` simply let the exception spread upward when executing `raise_stop_iteration(10)`. # However, the exception type is StopIteration, so the for-loop catches it and breaks the loop, # assuming this is the end of the loop. 
# When the exception raised from buggy mapping function is exactly `StopIteration`, # for example, functions implemented with builtin-function `next`, # it can be confusing to find the real issue. # I think it might be better improved in this way: class my_map: def __init__(self, mapping_function, *iterators): self.mapping = mapping_function self.iterators = iterators def __iter__(self): for parameter_tuple in zip(*self.iterators): try: yield self.mapping(*parameter_tuple) except BaseException as e: raise RuntimeError(*e.args) from e # It works like the map in most cases: print('example 3'.center(30, '=')) for item in my_map(raise_stop_iteration, range(5)): print(item) print('end of example 3'.center(30, '=')) # And then, the crash of the buggy mapping function will be reported: print('example 4'.center(30, '=')) for item in my_map(raise_stop_iteration, range(10, 20)): print(item) print('end of example 4'.center(30, '=')) ---------- components: Library (Lib) files: issue.py messages: 374371 nosy: DarrenDanielDay priority: normal severity: normal status: open title: Tricky behavior of builtin-function map type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file49342/issue.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 08:28:33 2020 From: report at bugs.python.org (Avinash Maddikonda) Date: Mon, 27 Jul 2020 12:28:33 +0000 Subject: [New-bugs-announce] [issue41408] Add a `clamp` function to the `math` module Message-ID: <1595852913.43.0.233033170193.issue41408@roundup.psfhosted.org> New submission from Avinash Maddikonda : Add a `clamp` function to the `math` module which does something like this: ```py def clamp(value=0.5, minimum=0, maximum=1): """Clamps the *value* between the *minimum* and *maximum* and returns it.. """ return max(minimum, min(value, maximum)) ``` Because even `C++` has built-in clamp function (`std::clamp`) (which can be used using `#include `) ---------- components: Library (Lib) messages: 374373 nosy: SFM61319 priority: normal severity: normal status: open title: Add a `clamp` function to the `math` module type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 10:18:55 2020 From: report at bugs.python.org (Akuli) Date: Mon, 27 Jul 2020 14:18:55 +0000 Subject: [New-bugs-announce] [issue41409] deque.pop(index) is not supported Message-ID: <1595859535.68.0.0423357008273.issue41409@roundup.psfhosted.org> New submission from Akuli : The pop method of collections.deque can't be used like deque.pop(index), even though `del deque[index]` or deque.pop() without an argument works. This breaks the Liskov substitution principle because collections.abc.MutableMapping supports the .pop(index) usage. Is this intentional? 
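For illustration, a minimal sketch of the asymmetry described above (assuming CPython 3.8; the exact TypeError wording may differ). Note that the ABC whose pop() accepts an index is collections.abc.MutableSequence, against which deque is registered:

from collections import deque

d = deque([1, 2, 3])
d.pop()          # works: removes and returns 3, but no index can be passed
del d[0]         # works: positional deletion is supported
try:
    d.pop(0)     # MutableSequence.pop(index=-1) would accept this
except TypeError as exc:
    print(exc)   # deque.pop() rejects the argument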
related typeshed issue: https://github.com/python/typeshed/issues/4364 ---------- components: Library (Lib) messages: 374378 nosy: Akuli priority: normal severity: normal status: open title: deque.pop(index) is not supported _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 11:21:25 2020 From: report at bugs.python.org (Bob Kline) Date: Mon, 27 Jul 2020 15:21:25 +0000 Subject: [New-bugs-announce] [issue41410] Opening a file in binary mode makes a difference on all platforms in Python 3 Message-ID: <1595863285.0.0.72970010693.issue41410@roundup.psfhosted.org> New submission from Bob Kline : The documentation for tempfile.mkstemp() says "If text is specified, it indicates whether to open the file in binary mode (the default) or text mode. On some platforms, this makes no difference." That might have been true for Python 2.x, but in Python 3, there are no platforms for which the choice whether to open a file in binary mode makes no difference. ---------- assignee: docs at python components: Documentation messages: 374385 nosy: bkline, docs at python priority: normal severity: normal status: open title: Opening a file in binary mode makes a difference on all platforms in Python 3 versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 12:31:33 2020 From: report at bugs.python.org (Ezio Melotti) Date: Mon, 27 Jul 2020 16:31:33 +0000 Subject: [New-bugs-announce] [issue41411] Improve and consolidate f-strings docs Message-ID: <1595867493.01.0.94068638385.issue41411@roundup.psfhosted.org> New submission from Ezio Melotti : [Creating a new issue from #41045] I was just just trying to link to someone the documentation for f-strings, but: 1) Searching "fstring" only returns two results about xdrlib[0]; 2) Searching "f-string" returns many unrelated results[1]; 3) The first (and closer) result (string -- Common string operations[2]) yields nothing while using ctrl+f with fstring, f-string, f', f"; 4) at the top of that page there are two links in a "see also": * Text Sequence Type ? str[3]: it mentions raw strings at the beginning, but also yields no results for fstring, f-string, f', f"; * String Methods[4]: that is another section of the previous page (so ctrl+f doesn't find anything), but has a link to "Format String Syntax"[5]; 5) The "Format String Syntax" page[5] has another link in the middle of a paragraph that points to "formatted string literals", that eventually brings us to the right page[6]; 6) the "right page"[6] has a wall of text with a block of code containing the grammar, luckily followed by a few examples. I think we should: 1) add index entries for "f-string" and "fstring" in the relevant sections of the docs, so that they appear in the search result; 2) in the Text Sequence Type ? str[3] section, have a bullet list for f-strings, raw-strings, and possibly u-strings; 3) possibly use the term "f-string" in addition (or instead) of "formatted string literal", to make the pages more ctrl+f-friendly throughout the docs; 4) possibly have another more newbie-friendly section on f-string (compared to the lexical analysis page) in the tutorial, in the stdtypes page[7] (e.g. before the printf-style String Formatting" section[8]), or in the string page[2] (e.g. 
after the "Format String Syntax" section[10]); 5) possibly reorganize and consolidate the different sections about strings, string methods, str.format(), the format mini-language, f-strings, raw/unicode-strings, %-style formatting in a single page or two (a page for the docs and one for the grammar), since it seems to me that over the years these sections got a bit scattered around as they were being added. [0]: https://docs.python.org/3/search.html?q=fstring [1]: https://docs.python.org/3/search.html?q=f-string [2]: https://docs.python.org/3/library/string.html [3]: https://docs.python.org/3/library/stdtypes.html#textseq [4]: https://docs.python.org/3/library/stdtypes.html#string-methods [5]: https://docs.python.org/3/library/string.html#formatstrings [6]: https://docs.python.org/3/reference/lexical_analysis.html#f-strings [7]: https://docs.python.org/3/library/stdtypes.html [8]: https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting ---------- assignee: docs at python components: Documentation messages: 374398 nosy: docs at python, eric.smith, ezio.melotti priority: normal severity: normal stage: needs patch status: open title: Improve and consolidate f-strings docs type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 13:07:35 2020 From: report at bugs.python.org (Martin Borus) Date: Mon, 27 Jul 2020 17:07:35 +0000 Subject: [New-bugs-announce] [issue41412] After installation on Windows7, 64bit Python 3.9.0b5 reports "api-ms-win-core-path-l1-1-0.dll" missing and doesn't start Message-ID: <1595869655.44.0.79039964549.issue41412@roundup.psfhosted.org> New submission from Martin Borus : I just installed Python 3.9.0b5 using the provided beta installer python-3.9.0b5-amd64 on a Windows7, 64bit machine. I did the installation without the Py Launcher update, into the folder "c:\python39" The installer finished without problem. Running "cmd.exe", navigating to "c:\python39" and then running python.exe results in an error message popup which tells me in German system language that the "api-ms-win-core-path-l1-1-0.dll" is missing on the computer. I googled for it and it seems that this dll is not a part of Windows 7. Message: --------------------------- python.exe - Systemfehler --------------------------- Das Programm kann nicht gestartet werden, da api-ms-win-core-path-l1-1-0.dll auf dem Computer fehlt. Installieren Sie das Programm erneut, um das Problem zu beheben. 
--------------------------- OK --------------------------- ---------- components: Windows messages: 374403 nosy: Martin Borus, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: After installation on Windows7, 64bit Python 3.9.0b5 reports "api-ms-win-core-path-l1-1-0.dll" missing and doesn't start type: resource usage versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 14:03:06 2020 From: report at bugs.python.org (Irv Kalb) Date: Mon, 27 Jul 2020 18:03:06 +0000 Subject: [New-bugs-announce] [issue41413] At prompt for input(), pressing Command q kills IDLE Message-ID: <1595872986.33.0.489785676605.issue41413@roundup.psfhosted.org> New submission from Irv Kalb : This is probably related to earlier problems with running IDLE on MacOS Catalina: Environment: MacOS Catalina: 10.15.5 Python: 3.7.3 IDLE: 3.7.3 Steps to reproduce: - Open IDLE, create a new file. (Can reopen this file again for subsequent tests.) - Code can consist of a single line: dontCare = input('While this prompt for input is up, press Command q: ') - Save and run - When you see the prompt, press Command q to quit. - See correct dialog box: Your program is still running! Do you want to kill it? - Press OK. Results: - Python program quits. Source file window closes. IDLE is left in an "unstable" state with only the IDLE menu option available. At this point, any attempt to do anything with IDLE crashes IDLE, and shows a Mac system dialog box: IDLE quit unexpectedly. Click Reopen to open the application. Click Report to ..... Note: This is not a huge deal because I have to restart IDLE anyway because of a bug 38946, which does not allow me to allow me to double click on a Python file if IDLE is already running. But I thought this new information might help track things down. I have run in to this because I am correcting many student's homework files, where I ask them to build a loop where they ask the user for information, do some processing, and generate output with that information. Then the loop goes around again and asks for input again. I want to quit the application at that point, but I always end up crashing IDLE. ---------- assignee: terry.reedy components: IDLE, macOS messages: 374407 nosy: IrvKalb, ned.deily, ronaldoussoren, terry.reedy priority: normal severity: normal status: open title: At prompt for input(), pressing Command q kills IDLE type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jul 27 17:36:15 2020 From: report at bugs.python.org (James Foster) Date: Mon, 27 Jul 2020 21:36:15 +0000 Subject: [New-bugs-announce] [issue41414] AST for arguments shows extra element Message-ID: <1595885775.67.0.745382264551.issue41414@roundup.psfhosted.org> New submission from James Foster : https://docs.python.org/3.8/library/ast.html shows seven elements: arguments = (arg* posonlyargs, arg* args, arg? vararg, arg* kwonlyargs, expr* kw_defaults, arg? kwarg, expr* defaults) https://docs.python.org/3.7/library/ast.html shows six elements: arguments = (arg* args, arg? vararg, arg* kwonlyargs, expr* kw_defaults, arg? kwarg, expr* defaults) based on ast.c:1479 I believe that six is the proper number and that the first element ("arg* posonlyargs, ") is a duplicate of the second element ("arg* args, ") and should be removed. 
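For reference, a quick (hedged) way to check which fields the running interpreter actually exposes on the arguments node; the output depends on the Python version being inspected:

import ast

# _fields lists the constructor fields of the AST node type, which is what
# the documented ASDL signature should match on this interpreter.
print(ast.arguments._fields)
print(ast.parse("def f(a, b): pass").body[0].args._fields)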
---------- assignee: docs at python components: Documentation messages: 374425 nosy: docs at python, jgfoster priority: normal severity: normal status: open title: AST for arguments shows extra element type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 00:39:59 2020 From: report at bugs.python.org (Sergey Fedoseev) Date: Tue, 28 Jul 2020 04:39:59 +0000 Subject: [New-bugs-announce] [issue41415] duplicated signature of dataclass in help() Message-ID: <1595911199.09.0.446200648486.issue41415@roundup.psfhosted.org> New submission from Sergey Fedoseev : In [191]: import dataclasses, pydoc In [192]: @dataclass ...: class C: ...: pass ...: In [193]: print(pydoc.render_doc(C)) Python Library Documentation: class C in module __main__ class C(builtins.object) | C() -> None | | C() | | Methods defined here: | .... It's duplicated because the dataclass __doc__ defaults to the signature: In [195]: C.__doc__ Out[195]: 'C()' ---------- components: Library (Lib) messages: 374461 nosy: sir-sigurd priority: normal severity: normal status: open title: duplicated signature of dataclass in help() type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 02:34:53 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 28 Jul 2020 06:34:53 +0000 Subject: [New-bugs-announce] [issue41416] Restore default implementation of __ne__ in mixins Set and Mapping Message-ID: <1595918093.33.0.049225231479.issue41416@roundup.psfhosted.org> New submission from Serhiy Storchaka : According to the documentation [1], the abstract classes collections.abc.Set and collections.abc.Mapping provide the mixin method __ne__. But the implementations of __ne__ in these classes were removed in 3.4 (see issue21408), so the default implementation is inherited from object. The reason is that it works better with classes which override __ne__ to return a non-boolean, e.g. NumPy, SymPy and sqlalchemy classes. Previously the != operator fell back to other.__eq__(); now it falls back to other.__ne__(). [1] https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes But now we have a discrepancy between the documentation and the code. According to the documentation, if we have a class which inherits from both Set and int, in that order, the __ne__ method should be inherited from class Set instead of class int. But currently it is inherited from int. >>> import collections.abc >>> class A(collections.abc.Set, int): pass ... >>> A.__ne__ One way to solve this is to remove __ne__ from the lists of mixin methods (see issue41400). The other way is to add __ne__ implementations which are identical to the default implementation but take precedence in the inheritance. I prefer the latter.
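For illustration only, a rough sketch (not the actual patch) of what the second option could look like: an explicit __ne__ on the mixin with the same semantics as the default implementation, so that it takes precedence over a later base class such as int:

import collections.abc

class A(collections.abc.Set, int):
    pass

print(A.__ne__)   # on current CPython this resolves to int's __ne__

class PatchedSet(collections.abc.Set):
    """Hedged sketch: Set with __ne__ restored, equivalent to the default."""
    def __ne__(self, other):
        # Mirror object.__ne__: negate __eq__ unless it is NotImplemented.
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

class B(PatchedSet, int):
    pass

print(B.__ne__)   # now found on the mixin, ahead of int in the MRO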
---------- components: Library (Lib) messages: 374470 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Restore default implementation of __ne__ in mixins Set and Mapping type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 03:59:37 2020 From: report at bugs.python.org (=?utf-8?b?SmFuIMSMZcWhcGl2bw==?=) Date: Tue, 28 Jul 2020 07:59:37 +0000 Subject: [New-bugs-announce] [issue41417] SyntaxError: assignment expression within assert Message-ID: <1595923177.97.0.806397891633.issue41417@roundup.psfhosted.org> New submission from Jan ?e?pivo : Hi, it should be useful if assignment expression works within assertion. For example (real use-case in tests): assert r := re.match(r"result is (\d+)", tested_text) assert int(r.group(1)) == expected_number I haven't found a mention about assertions in https://www.python.org/dev/peps/pep-0572/ so it isn't technically a bug but it might be omission (?). Thx! ---------- components: Interpreter Core messages: 374476 nosy: jan.cespivo priority: normal severity: normal status: open title: SyntaxError: assignment expression within assert type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 10:08:34 2020 From: report at bugs.python.org (Ehsonjon Gadoev) Date: Tue, 28 Jul 2020 14:08:34 +0000 Subject: [New-bugs-announce] [issue41418] Add snake game to tools/demo! Message-ID: <1595945314.49.0.985793426755.issue41418@roundup.psfhosted.org> New submission from Ehsonjon Gadoev : Its simple snake game for beginners! It based on simple algorithm! If you know! And it must be on python/cpython repo! Cause, I made python (snake) game with Python Programming Language xD (maybe not funny)! We love python as much as you do! Yea! ---------- components: Tkinter files: snake.py messages: 374492 nosy: Ehsonjon Gadoev priority: normal severity: normal status: open title: Add snake game to tools/demo! versions: Python 3.8 Added file: https://bugs.python.org/file49344/snake.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 10:54:47 2020 From: report at bugs.python.org (Christopher Harrison) Date: Tue, 28 Jul 2020 14:54:47 +0000 Subject: [New-bugs-announce] [issue41419] Path.mkdir and os.mkdir don't respect setgid if its parent is g-s Message-ID: <1595948087.76.0.888842556227.issue41419@roundup.psfhosted.org> New submission from Christopher Harrison : The setgid bit is not set when creating a directory, even when explicitly specified in the mode argument, when its containing directory doesn't have its own setgid bit set. When the parent does have the setgid bit, it works as expected. Steps to reproduce: 1. Outside of Python, create a working directory with mode 0770, such that: >>> from pathlib import Path >>> oct(Path().stat().st_mode) '0o40770' 2. Set the umask to 0, to be sure it's not a masking issue: >>> import os >>> _ = os.umask(0) 3. Create a subdirectory with mode ug+rwx,g+s (2770): >>> (test := Path("test")).mkdir(0o2770) >>> oct(test.stat().st_mode) '0o40770' Notice that setgid is not respected. 4. Set setgid to the working directory: >>> Path().chmod(0o2770) >>> oct(Path().stat().st_mode) '0o42770' This works as expected. 5. 
Create another subdirectory with mode ug+rwx,g+s: >>> (test2 := Path("test2")).mkdir(0o2770) >>> oct(test2.stat().st_mode) '0o42770' The setgid bit of the new directory is now correctly set. This also affects os.mkdir. I have only tested this under Python 3.8.2 and 3.8.3 on a POSIX filesystem. (I assume it's not relevant to non-POSIX filesystems.) ---------- components: Library (Lib) messages: 374496 nosy: Xophmeister priority: normal severity: normal status: open title: Path.mkdir and os.mkdir don't respect setgid if its parent is g-s versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 11:06:02 2020 From: report at bugs.python.org (Dmytro Litvinov) Date: Tue, 28 Jul 2020 15:06:02 +0000 Subject: [New-bugs-announce] [issue41420] Academic Free License v. 2.1 link is not found and is obsolete Message-ID: <1595948762.34.0.747967566864.issue41420@roundup.psfhosted.org> New submission from Dmytro Litvinov : Link to Acamedic Free License v. 2.1(https://www.samurajdata.se/opensource/mirror/licenses/afl-2.1.php) at https://www.python.org/psf/contrib/ is not found. Also, Guido mentioned that version 2.1 seems obsolete and we may need to review that with the lawyer (https://bugs.python.org/issue41328#msg374495). ---------- assignee: docs at python components: Documentation messages: 374501 nosy: DmytroLitvinov, docs at python priority: normal severity: normal status: open title: Academic Free License v. 2.1 link is not found and is obsolete type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 13:59:09 2020 From: report at bugs.python.org (David MacIver) Date: Tue, 28 Jul 2020 17:59:09 +0000 Subject: [New-bugs-announce] [issue41421] Random.paretovariate sometimes raises ZeroDivisionError for small alpha Message-ID: <1595959149.08.0.339696183777.issue41421@roundup.psfhosted.org> New submission from David MacIver : The following code raises a ZeroDivisionError: from random import Random Random(14064741636871487939).paretovariate(0.01) This raises: random.py, line 692, in paretovariate return 1.0 / u ** (1.0/alpha) ZeroDivisionError: float division by zero That specific stack trace is from 3.8.5 but I've tried this on 3.6 and 3.7 as well. It's a little hard to tell what the intended correct range of parameter values is for paretovariate, and this may just be lack of validation for the alpha < 1 case - perhaps that was never intended to be supported? Based on some very informal inspection, what seems to happen is that the probability of getting a ZeroDivisionError approaches one as you approach zero. They rarely occur at this level of alpha (0.01), but for alpha=0.001 they seem to occur just under half the time, and for alpha=0.0001 they occur more than 90% of the time. 
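For illustration, a hedged sketch of the underflow path behind the traceback above: paretovariate() computes 1.0 / u ** (1.0/alpha) with u drawn from (0, 1], so for small alpha the exponent is huge and u ** (1.0/alpha) can round down to 0.0:

alpha = 0.01
u = 1e-4                          # a possible value of 1.0 - random()
print(u ** (1.0 / alpha))         # 0.0 -- 1e-400 underflows to zero
try:
    1.0 / u ** (1.0 / alpha)      # the expression paretovariate evaluates
except ZeroDivisionError as exc:
    print("reproduced:", exc)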
(For the interested, this bug was found by Hypothesis as part of its own testing of our integration with the Random API) ---------- components: Library (Lib) messages: 374516 nosy: David MacIver priority: normal severity: normal status: open title: Random.paretovariate sometimes raises ZeroDivisionError for small alpha versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 14:26:29 2020 From: report at bugs.python.org (kale-smoothie) Date: Tue, 28 Jul 2020 18:26:29 +0000 Subject: [New-bugs-announce] [issue41422] C Unpickler memory leak via memo Message-ID: <1595960789.44.0.329453427634.issue41422@roundup.psfhosted.org> New submission from kale-smoothie : I'm not familiar with the workings of GC/pickle, but it looks like the traverse code in the C Unpickler omits a visit to the memo, potentially causing a memory leak? ---------- components: Library (Lib) files: leak_pickler.py messages: 374518 nosy: kale-smoothie priority: normal severity: normal status: open title: C Unpickler memory leak via memo type: resource usage versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49345/leak_pickler.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 14:34:04 2020 From: report at bugs.python.org (James Thistlewood) Date: Tue, 28 Jul 2020 18:34:04 +0000 Subject: [New-bugs-announce] [issue41423] `multiprocessing.Array` and `multiprocessing.managers.SyncManager.Array` APIs are similar but not the same Message-ID: <1595961244.31.0.535074566554.issue41423@roundup.psfhosted.org> New submission from James Thistlewood : I stumbled across this by trying to implement a call to the latter, while reading the docs for the former. I think this is quite confusing and unnecessary that the APIs between these two definitions should differ. The same goes for `multiprocessing.Value` and `multiprocessing.managers.SyncManager.Value`. This is especially exacerbated by the fact that the docs _are incorrect_. On this page [1], under 'Server process', a link to 'Array' is made. If it were correct, it would link to `multiprocessing.managers.SyncManager.Array`, since it's talking about a manager object - the kind returned by `Manager()`. But it links to `multiprocessing.Array`. Of course, the simple solution would be to change the link to link to the correct place, but I think this shows an unnecessary inconsistency in the API itself. I don't have a PR for this yet, nor have I fully investigated, but should it be feasible I would like to implement it myself. I'm interested to hear what people think. 
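As a hedged illustration of the API difference being described (argument shapes as documented; the variable names here are just examples):

import multiprocessing as mp

if __name__ == "__main__":
    # Shared-memory form: multiprocessing.Array(typecode_or_type, size_or_initializer)
    shared = mp.Array("i", 5)                    # five zero-initialised ints
    # Manager form: SyncManager.Array(typecode, sequence) -- it expects a
    # sequence rather than a size, one of the mismatches noted above.
    with mp.Manager() as manager:
        proxy = manager.Array("i", [0, 1, 2, 3, 4])
        print(shared[0], proxy[0], len(proxy))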
[1] https://docs.python.org/3/library/multiprocessing.html#sharing-state-between-processes ---------- components: Library (Lib) messages: 374519 nosy: jthistle priority: normal severity: normal status: open title: `multiprocessing.Array` and `multiprocessing.managers.SyncManager.Array` APIs are similar but not the same type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 15:00:07 2020 From: report at bugs.python.org (Brett Cannon) Date: Tue, 28 Jul 2020 19:00:07 +0000 Subject: [New-bugs-announce] [issue41424] [tkinter] Grammatical error in "Packer" docs Message-ID: <1595962807.42.0.795060125481.issue41424@roundup.psfhosted.org> New submission from Brett Cannon : "Geometry managers are used to specify the relative positioning of the positioning of widgets within their container". Remove "of the positioning". ---------- assignee: docs at python components: Documentation keywords: newcomer friendly messages: 374520 nosy: brett.cannon, docs at python priority: normal severity: normal status: open title: [tkinter] Grammatical error in "Packer" docs versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 15:08:10 2020 From: report at bugs.python.org (Brett Cannon) Date: Tue, 28 Jul 2020 19:08:10 +0000 Subject: [New-bugs-announce] [issue41425] [tkinter] "Coupling Widget Variables" example missing code Message-ID: <1595963290.36.0.00195793356515.issue41425@roundup.psfhosted.org> New submission from Brett Cannon : The example for https://docs.python.org/3.8/library/tkinter.html#coupling-widget-variables is missing: 1. Imports 2. 
Code to launch the example ---------- assignee: docs at python components: Documentation keywords: easy messages: 374522 nosy: brett.cannon, docs at python priority: normal severity: normal status: open title: [tkinter] "Coupling Widget Variables" example missing code versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 15:46:20 2020 From: report at bugs.python.org (Brett Cannon) Date: Tue, 28 Jul 2020 19:46:20 +0000 Subject: [New-bugs-announce] [issue41426] [curses] Grammar mistake in curses.getmouse() docs Message-ID: <1595965580.82.0.751899511239.issue41426@roundup.psfhosted.org> New submission from Brett Cannon : https://docs.python.org/3/library/curses.html#curses.getmouse "this method should be call to retrieve" "call" -> "called" ---------- assignee: docs at python components: Documentation keywords: newcomer friendly messages: 374524 nosy: brett.cannon, docs at python priority: normal severity: normal status: open title: [curses] Grammar mistake in curses.getmouse() docs versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 16:47:00 2020 From: report at bugs.python.org (Yonatan Goldschmidt) Date: Tue, 28 Jul 2020 20:47:00 +0000 Subject: [New-bugs-announce] [issue41427] howto/descriptor.rst unnecessarily mentions method.__class__ Message-ID: <1595969220.64.0.879377287441.issue41427@roundup.psfhosted.org> New submission from Yonatan Goldschmidt : In Doc/howto/descriptor.rst: # Internally, the bound method stores the underlying function, # the bound instance, and the class of the bound instance. >>> d.f.__func__ >>> d.f.__self__ <__main__.D object at 0x1012e1f98> >>> d.f.__class__ The bound method (PyMethodObject) does not store "the class of the bound instance" - it only stores the "function" and "self". d.f.__class__ is the class of the "method" type itself, not the class of d.f's instance (D from d = D()) I think this mention should be removed from the documentation? 
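A small, hedged check of the point made above (standard CPython; the class name is just an example):

import types

class D:
    def f(self, x):
        return x

d = D()
bound = d.f
print(bound.__func__ is D.f)                 # True: the underlying function
print(bound.__self__ is d)                   # True: the bound instance
print(bound.__class__ is types.MethodType)   # True: the generic 'method' type
print(bound.__class__ is D)                  # False: not the instance's class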
---------- assignee: docs at python components: Documentation messages: 374526 nosy: Yonatan Goldschmidt, docs at python priority: normal severity: normal status: open title: howto/descriptor.rst unnecessarily mentions method.__class__ type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 18:32:30 2020 From: report at bugs.python.org (Maggie Moss) Date: Tue, 28 Jul 2020 22:32:30 +0000 Subject: [New-bugs-announce] [issue41428] PEP 604 -- Allow writing union types as X | Y Message-ID: <1595975550.94.0.895278514403.issue41428@roundup.psfhosted.org> New submission from Maggie Moss : https://www.python.org/dev/peps/pep-0604/ ---------- messages: 374535 nosy: maggiemoss priority: normal pull_requests: 20811 severity: normal status: open title: PEP 604 -- Allow writing union types as X | Y type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 19:41:58 2020 From: report at bugs.python.org (=?utf-8?q?Andr=C3=A9s_Delfino?=) Date: Tue, 28 Jul 2020 23:41:58 +0000 Subject: [New-bugs-announce] [issue41429] Let fnmatch.filter accept a tuple of patterns Message-ID: <1595979718.87.0.129861821474.issue41429@roundup.psfhosted.org> New submission from Andr?s Delfino : I propose to let fnmatch.filter accept a tuple of patterns as its pat parameter, while still supporting a single string argument (just like str.endswith does). The code to do this manually and efficiently pretty much involves copying filter. ---------- components: Library (Lib) messages: 374540 nosy: adelfino priority: normal severity: normal status: open title: Let fnmatch.filter accept a tuple of patterns type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 20:00:19 2020 From: report at bugs.python.org (James Corbett) Date: Wed, 29 Jul 2020 00:00:19 +0000 Subject: [New-bugs-announce] [issue41430] Document C docstring behavior Message-ID: <1595980819.55.0.541837599725.issue41430@roundup.psfhosted.org> New submission from James Corbett : As described in https://stackoverflow.com/questions/25847035/what-are-signature-and-text-signature-used-for-in-python-3-4, https://bugs.python.org/issue20586, and https://stackoverflow.com/questions/50537407/add-a-signature-with-annotations-to-extension-methods, it is possible to embed a signature in docstrings for C functions, so that `help` and `inspect.signature` work properly on them. However, this functionality isn't documented anywhere. I think something should be added to the "extending and embedding the Python interpreter" tutorial. ---------- assignee: docs at python components: Documentation messages: 374547 nosy: docs at python, jameshcorbett priority: normal severity: normal status: open title: Document C docstring behavior type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 20:29:38 2020 From: report at bugs.python.org (Inada Naoki) Date: Wed, 29 Jul 2020 00:29:38 +0000 Subject: [New-bugs-announce] [issue41431] Optimize dict_merge for copy Message-ID: <1595982578.69.0.880267074289.issue41431@roundup.psfhosted.org> New submission from Inada Naoki : Although there are dict.copy() and PyDict_Copy(), dict_merge can be used for copying dict. 
* d={}; d.update(orig) * d=dict(orig) * d=orig.copy() # orig has many dummy keys. ---------- components: Interpreter Core messages: 374550 nosy: inada.naoki priority: normal severity: normal status: open title: Optimize dict_merge for copy versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jul 28 23:33:23 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Wed, 29 Jul 2020 03:33:23 +0000 Subject: [New-bugs-announce] [issue41432] IDLE: Handle bad highlight tab color config Message-ID: <1595993603.42.0.358377843337.issue41432@roundup.psfhosted.org> New submission from Terry J. Reedy : https://stackoverflow.com/questions/35397377/python-freezes-when-configuring-idle froze IDLE with a custom theme missing 'colours for the blinker and highlighting'. I reported some experiments there. Tracebacks might help, but Proposal 1: check theme before try to paint. All keys present? Replace missing with normal colors. https://stackoverflow.com/questions/63137659/idle-crashing-when-i-press-configure-idle froze IDLE with a custom theme with a bad color "#00224". File "C:\Users\...\idlelib\configdialog.py", line 1279, in paint_theme_sample self.highlight_sample.tag_config(element, **colors) ... _tkinter.TclError: invalid color name "#00224" Proposal 2: wrap line in try-except, if error text matches, extract bad color, replace any occurrence of bad color with "#000000" (black) or "#FFFFFF" (white), report to user with suggestion to edit. Maybe save first. Should do within loop in case more errors. ---------- assignee: terry.reedy components: IDLE messages: 374558 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: Handle bad highlight tab color config type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 29 01:57:24 2020 From: report at bugs.python.org (Tatsunosuke Shimada) Date: Wed, 29 Jul 2020 05:57:24 +0000 Subject: [New-bugs-announce] [issue41433] Logging libraries BufferingHandler flushed twice at shutdown Message-ID: <1596002244.65.0.208232248615.issue41433@roundup.psfhosted.org> New submission from Tatsunosuke Shimada : BufferingHandler I expected function flush of BufferingHandler called once but it is called twice. This is due to that shutdown calls flush before close function which calls flush function inside. (Faced this issue on my custom implementation of flush function.) Seems that there is no need of calling flush inside BufferingHandler. Following is the link to the code of function shutdown and Buffering Handler https://github.com/python/cpython/blob/0124f2b5e0f88ee7f5d40fca96dbf087cbe4764b/Lib/logging/handlers.py#L1286 https://github.com/python/cpython/blob/a74eea238f5baba15797e2e8b570d153bc8690a7/Lib/logging/__init__.py#L2151 This issue could be also related. 
https://bugs.python.org/issue26559 ---------- components: Library (Lib) messages: 374559 nosy: adamist521, vinay.sajip priority: normal severity: normal status: open title: Logging libraries BufferingHandler flushed twice at shutdown type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 29 02:54:57 2020 From: report at bugs.python.org (wyz23x2) Date: Wed, 29 Jul 2020 06:54:57 +0000 Subject: [New-bugs-announce] [issue41434] IDLE: Warn user on "Run Module" if file is not .py/.pyw Message-ID: <1596005697.8.0.893586453742.issue41434@roundup.psfhosted.org> New submission from wyz23x2 : It would be great if IDLE shows a note when a non-Python file is attempted to run. ---------- assignee: terry.reedy components: IDLE messages: 374561 nosy: terry.reedy, wyz23x2 priority: normal severity: normal status: open title: IDLE: Warn user on "Run Module" if file is not .py/.pyw type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 29 11:39:46 2020 From: report at bugs.python.org (Julien Danjou) Date: Wed, 29 Jul 2020 15:39:46 +0000 Subject: [New-bugs-announce] [issue41435] Allow to retrieve ongoing exception handled by every threads Message-ID: <1596037186.46.0.97878619969.issue41435@roundup.psfhosted.org> New submission from Julien Danjou : In order to do statistical profiling on raised exception, having the ability to poll all the running threads for their currently handled exception would be fantastic. There is an exposed function named `sys._current_frames()` that allows to list the current frame handled by CPython. Having an equivalent for `sys._current_exceptions()` that would return the content of `sys.exc_info()` for each running thread would solve the issue. ---------- components: Interpreter Core messages: 374575 nosy: jd priority: normal severity: normal status: open title: Allow to retrieve ongoing exception handled by every threads type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 29 12:30:45 2020 From: report at bugs.python.org (Samran) Date: Wed, 29 Jul 2020 16:30:45 +0000 Subject: [New-bugs-announce] [issue41436] BUG a simple "and" and "or" Message-ID: <1596040245.85.0.344075232599.issue41436@roundup.psfhosted.org> New submission from Samran : #this is the code from random import randint num = randint(1,10) print(type(num)) print(num) ch = None #tried changing this tried converting types guess = 0 print(type(guess)) print("Welcome to guessing game: ") while ch != 'n' or ch != 'N': #here is the bug this statement is not running. It works with ?and? while( guess != num): guess = int(input("Enter your guess? 
")) if guess == num: print("You Guessed Congratz") ch=input("Enter 'y' to play or 'n' to exit: ") if ch=='y' or ch == 'Y': guess= 0 num = randint(1,10) print("Thankyou for playing.") ---------- components: Tests files: Command Prompt - python test.py 29-Jul-20 9_16_22 PM (2).png messages: 374576 nosy: Samran priority: normal severity: normal status: open title: BUG a simple "and" and "or" type: compile error versions: Python 3.8 Added file: https://bugs.python.org/file49347/Command Prompt - python test.py 29-Jul-20 9_16_22 PM (2).png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 29 14:13:12 2020 From: report at bugs.python.org (Zhiming Wang) Date: Wed, 29 Jul 2020 18:13:12 +0000 Subject: [New-bugs-announce] [issue41437] SIGINT blocked by socket operations like recv on Windows Message-ID: <1596046392.19.0.70440136745.issue41437@roundup.psfhosted.org> New submission from Zhiming Wang : I noticed that on Windows, socket operations like recv appear to always block SIGINT until it's done, so if a recv hangs, Ctrl+C cannot interrupt the program. (I'm a *nix developer investigating a behavioral problem of my program on Windows, so please excuse my limited knowledge of Windows.) Consider the following example where I spawn a TCP server that stalls connections by 5 seconds in a separate thread, and use a client to connect to it on the main thread. I then try to interrupt the client with Ctrl+C. import socket import socketserver import time import threading interrupted = threading.Event() class HoneypotServer(socketserver.TCPServer): # Stall each connection for 5 seconds. def get_request(self): start = time.time() while time.time() - start < 5 and not interrupted.is_set(): time.sleep(0.1) return self.socket.accept() class EchoHandler(socketserver.BaseRequestHandler): def handle(self): data = self.request.recv(1024) self.request.sendall(data) class HoneypotServerThread(threading.Thread): def __init__(self): super().__init__() self.server = HoneypotServer(("127.0.0.1", 0), EchoHandler) def run(self): self.server.serve_forever(poll_interval=0.1) def main(): start = time.time() server_thread = HoneypotServerThread() server_thread.start() sock = socket.create_connection(server_thread.server.server_address) try: sock.sendall(b"hello") sock.recv(1024) except KeyboardInterrupt: print(f"processed SIGINT {time.time() - start:.3f}s into the program") interrupted.set() finally: sock.close() server_thread.server.shutdown() server_thread.join() if __name__ == "__main__": main() On *nix systems the KeyboardInterrupt is processed immediately. On Windows, the KeyboardInterrupt is always processed more than 5 seconds into the program, when the recv is finished. I suppose this is a fundamental limitation of Windows? Is there any workaround (other than going asyncio)? Btw, I learned about SIGBREAK, which when unhandled seems to kill the process immediately, but that means no chance of cleanup. I tried to handle SIGBREAK but whenever a signal handler is installed, the behavior reverts to that of SIGINT -- the handler is called only after 5 seconds have passed. (I'm attaching a socket_sigint_sigbreak.py which is a slightly expanded version of my sample program above, showing my attempt at handler SIGBREAK. Both python .\socket_sigint_sigbreak.py --sigbreak-handler interrupt and python .\socket_sigint_sigbreak.py --sigbreak-handler exit stall for 5 seconds.) 
---------- components: Windows files: socket_sigint_sigbreak.py messages: 374580 nosy: paul.moore, steve.dower, tim.golden, zach.ware, zmwangx priority: normal severity: normal status: open title: SIGINT blocked by socket operations like recv on Windows type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49348/socket_sigint_sigbreak.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jul 29 18:17:28 2020 From: report at bugs.python.org (Yerken Tussupbekov) Date: Wed, 29 Jul 2020 22:17:28 +0000 Subject: [New-bugs-announce] [issue41438] TimeoutError behavior changes on async.wait_for from python3.7 Message-ID: <1596061048.21.0.803910276646.issue41438@roundup.psfhosted.org> New submission from Yerken Tussupbekov :

```
import asyncio
import logging
from concurrent import futures

logging.basicConfig(
    level=logging.DEBUG,
    format="[%(asctime)s %(name)s %(levelname)s]: %(message)s",
    datefmt="%H:%M:%S %d-%m-%y",
)
logger: logging.Logger = logging.getLogger("ideahitme")


async def too_long() -> None:
    await asyncio.sleep(2)


async def run() -> None:
    try:
        await asyncio.wait_for(too_long(), 1)
    except futures.TimeoutError:
        logger.info("concurrent.futures.TimeoutError happened")
    except asyncio.TimeoutError:
        logger.info("asyncio.TimeoutError happened")


if __name__ == "__main__":
    asyncio.run(run())
```

In Python 3.8.4, running this script prints "asyncio.TimeoutError happened". In Python 3.7.5, it prints "concurrent.futures.TimeoutError happened". This is a breaking change that I did not see announced in the changelog. What is the expected behavior here? ---------- components: asyncio messages: 374589 nosy: Yerken Tussupbekov, asvetlov, yselivanov priority: normal severity: normal status: open title: TimeoutError behavior changes on async.wait_for from python3.7 type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 02:06:45 2020 From: report at bugs.python.org (Peixing Xin) Date: Thu, 30 Jul 2020 06:06:45 +0000 Subject: [New-bugs-announce] [issue41439] some test cases in test_uuid.py and test_ssl.py fail on some operating systems because of no os.fork support Message-ID: <1596089205.38.0.465991237875.issue41439@roundup.psfhosted.org> New submission from Peixing Xin : Some operating systems, for example VxWorks RTOS, don't support fork(). Test cases that depend on os.fork() will fail there; test_ssl.BasicSocketTests.test_random_fork and test_uuid.TestUUIDWithExtModule.testIssue8621 are two such cases. ---------- components: Tests messages: 374599 nosy: pxinwr priority: normal severity: normal status: open title: some test cases in test_uuid.py and test_ssl.py fail on some operating systems because of no os.fork support type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 03:07:44 2020 From: report at bugs.python.org (Peixing Xin) Date: Thu, 30 Jul 2020 07:07:44 +0000 Subject: [New-bugs-announce] [issue41440] os.cpu_count doesn't work on VxWorks RTOS Message-ID: <1596092864.05.0.11139024599.issue41440@roundup.psfhosted.org> New submission from Peixing Xin : Currently os.cpu_count() always returns None on VxWorks RTOS. It needs a VxWorks-specific fix.
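Until such a fix lands, code that sizes thread or process pools can defend against the documented case where os.cpu_count() returns None. The helper name below is purely illustrative:

```
import os


def effective_cpu_count(default: int = 1) -> int:
    """Return os.cpu_count(), falling back to `default` on platforms
    where the CPU count cannot be determined (os.cpu_count() -> None)."""
    return os.cpu_count() or default


# Example: pick a worker-pool size without crashing on such platforms.
workers = effective_cpu_count()
```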
---------- components: Library (Lib) messages: 374601 nosy: pxinwr priority: normal severity: normal status: open type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 04:42:51 2020 From: report at bugs.python.org (施文峰) Date: Thu, 30 Jul 2020 08:42:51 +0000 Subject: [New-bugs-announce] [issue41441] io reading ＆ updating fix Message-ID: <1596098571.51.0.839262162281.issue41441@roundup.psfhosted.org> Change by 施文峰 : ---------- components: IO nosy: ke265379ke priority: normal severity: normal status: open title: io reading ＆ updating fix type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 06:05:24 2020 From: report at bugs.python.org (Peixing Xin) Date: Thu, 30 Jul 2020 10:05:24 +0000 Subject: [New-bugs-announce] [issue41442] test_posix.PosixTester.test_getgroups fail on operating systems without supporting unix shell Message-ID: <1596103524.36.0.943373328991.issue41442@roundup.psfhosted.org> New submission from Peixing Xin : test_posix.PosixTester.test_getgroups requires a Unix shell on the tested platform. However, some operating systems, such as VxWorks, do not provide a Unix shell, so this test case fails on them. ---------- components: Tests messages: 374609 nosy: pxinwr priority: normal severity: normal status: open title: test_posix.PosixTester.test_getgroups fail on operating systems without supporting unix shell type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 06:49:02 2020 From: report at bugs.python.org (Peixing Xin) Date: Thu, 30 Jul 2020 10:49:02 +0000 Subject: [New-bugs-announce] [issue41443] some test cases in test_posix.py fail if some os attributes are not supported Message-ID: <1596106142.56.0.534494527465.issue41443@roundup.psfhosted.org> New submission from Peixing Xin : Some operating systems, such as VxWorks, do not support posix.chown, posix.mknod and posix.readlink. As a result, test_chown_dir_fd, test_mknod_dir_fd and test_readlink_dir_fd fail.
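The established pattern in the test suite for this situation is to guard each test with a skip decorator keyed on the presence of the os attribute, so the test is reported as skipped instead of failing. A minimal sketch; the class name and test body here are illustrative, not copied from test_posix.py:

```
import os
import unittest


class DirFdTests(unittest.TestCase):

    @unittest.skipUnless(hasattr(os, "chown"), "test needs os.chown()")
    def test_chown_dir_fd(self):
        # The real test exercises the dir_fd parameter; the point here is
        # only the guard, which turns a failure into a skip on platforms
        # that do not provide os.chown().
        self.assertTrue(callable(os.chown))


if __name__ == "__main__":
    unittest.main()
```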
---------- components: Tests messages: 374610 nosy: pxinwr priority: normal severity: normal status: open title: some test cases in test_posix.py fail if some os attributes are not supported type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 09:23:44 2020 From: report at bugs.python.org (Vladislav Mikhalin) Date: Thu, 30 Jul 2020 13:23:44 +0000 Subject: [New-bugs-announce] [issue41444] CPython 3.8.5 fails to build on Windows with -E option Message-ID: <1596115424.63.0.60743816933.issue41444@roundup.psfhosted.org> New submission from Vladislav Mikhalin : These errors are generated on a clean build:

C:\cpython-3.8.5\Modules\_ctypes\_ctypes.c(107,10): fatal error C1083: Cannot open include file: 'ffi.h': No such file or directory [C:\cpython-3.8.5\PCbuild\_ctypes.vcxproj]
C:\cpython-3.8.5\Modules\_ctypes\callbacks.c(4,10): fatal error C1083: Cannot open include file: 'ffi.h': No such file or directory [C:\cpython-3.8.5\PCbuild\_ctypes.vcxproj]
C:\cpython-3.8.5\Modules\_ctypes\callproc.c(71,10): fatal error C1083: Cannot open include file: 'ffi.h': No such file or directory [C:\cpython-3.8.5\PCbuild\_ctypes.vcxproj]
C:\cpython-3.8.5\Modules\_ctypes\cfield.c(3,10): fatal error C1083: Cannot open include file: 'ffi.h': No such file or directory [C:\cpython-3.8.5\PCbuild\_ctypes.vcxproj]
C:\cpython-3.8.5\Modules\_ctypes\malloc_closure.c(2,10): fatal error C1083: Cannot open include file: 'ffi.h': No such file or directory [C:\cpython-3.8.5\PCbuild\_ctypes.vcxproj]
C:\cpython-3.8.5\Modules\_ctypes\stgdict.c(2,10): fatal error C1083: Cannot open include file: 'ffi.h': No such file or directory [C:\cpython-3.8.5\PCbuild\_ctypes.vcxproj]

---------- components: Build messages: 374614 nosy: Vladislav Mikhalin priority: normal severity: normal status: open title: CPython 3.8.5 fails to build on Windows with -E option versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 12:23:51 2020 From: report at bugs.python.org (fj92f3jj923f923) Date: Thu, 30 Jul 2020 16:23:51 +0000 Subject: [New-bugs-announce] [issue41445] Adding configure temporary files to gitignore Message-ID: <1596126231.09.0.21986498647.issue41445@roundup.psfhosted.org> New submission from fj92f3jj923f923 : I noticed that a lot of temporary files are generated and removed while running configure. They alarmed me when I first saw them, so I decided to add them to .gitignore. ---------- components: Build messages: 374619 nosy: fj92f3jj923f923 priority: normal severity: normal status: open title: Adding configure temporary files to gitignore type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 13:18:47 2020 From: report at bugs.python.org (Juan Jimenez) Date: Thu, 30 Jul 2020 17:18:47 +0000 Subject: [New-bugs-announce] [issue41446] New demo, using a web api Message-ID: <1596129527.79.0.745234109519.issue41446@roundup.psfhosted.org> New submission from Juan Jimenez : As per a suggestion in https://bugs.python.org/issue41274, I would like to submit for consideration a new demo program, one that demonstrates how to use a web API to generate a pseudo-random number generator seed from high-resolution, high-cadence images of the Sun. I will be submitting a pull request.
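One way such a demo could work, and this is only a guess at the approach with a placeholder URL standing in for whatever solar-imagery API the pull request actually uses, is to download an image and hash its bytes into a seed:

```
import hashlib
import random
from urllib.request import urlopen

# Placeholder: the real demo would point at an actual solar-imagery web API.
IMAGE_URL = "https://example.org/latest-solar-image.jpg"


def seed_from_image(url: str = IMAGE_URL) -> int:
    """Download an image and turn its bytes into an integer seed."""
    with urlopen(url) as response:
        data = response.read()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")


random.seed(seed_from_image())
print(random.random())
```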
---------- components: Demos and Tools messages: 374620 nosy: flybd5 priority: normal severity: normal status: open title: New demo, using a web api type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jul 30 13:19:35 2020 From: report at bugs.python.org (Damian Barabonkov) Date: Thu, 30 Jul 2020 17:19:35 +0000 Subject: [New-bugs-announce] [issue41447] Resource Tracker in Multiprocessing Shared Memory not working correctly Message-ID: <1596129575.64.0.828449737414.issue41447@roundup.psfhosted.org> New submission from Damian Barabonkov : The way the resource tracker is used in /Lib/multiprocessing/shared_memory.py leads to it issuing warnings/errors, even when resources are cleaned up properly. Attached are two simple demo files copied from the documentation example (https://docs.python.org/3.9/library/multiprocessing.shared_memory.html) that illustrate the warnings below: proc1.py Traceback (most recent call last): File "proc1.py", line 19, in shm.unlink() # Free and release the shared memory block at the very end File "/home/damian/Documents/QuantCo/miniconda3/envs/pipeline-parallel/lib/python3.8/multiprocessing/shared_memory.py", line 244, in unlink _posixshmem.shm_unlink(self._name) FileNotFoundError: [Errno 2] No such file or directory: '/shmem' /home/damian/Documents/QuantCo/miniconda3/envs/pipeline-parallel/lib/python3.8/multiprocessing/resource_tracker.py:218: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' /home/damian/Documents/QuantCo/miniconda3/envs/pipeline-parallel/lib/python3.8/multiprocessing/resource_tracker.py:231: UserWarning: resource_tracker: '/shmem': [Errno 2] No such file or directory: '/shmem' warnings.warn('resource_tracker: %r: %s' % (name, e)) proc2.py python3.8/multiprocessing/resource_tracker.py:218: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' ---------- files: shmem_bug.py messages: 374621 nosy: damian.barabonkov priority: normal severity: normal status: open title: Resource Tracker in Multiprocessing Shared Memory not working correctly type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49351/shmem_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 31 05:28:44 2020 From: report at bugs.python.org (Mond Wan) Date: Fri, 31 Jul 2020 09:28:44 +0000 Subject: [New-bugs-announce] [issue41448] pathlib behave differ between OS Message-ID: <1596187724.05.0.638155501968.issue41448@roundup.psfhosted.org> New submission from Mond Wan : I have tried 2 functions with different behavior across platform # as_posix() In linux platform, as_posix() cannot process window path nicely * From linux + docker + python:3.8 ``` Python 3.8.0 (default, Oct 17 2019, 05:36:36) [GCC 8.3.0] on linux >>> winPath = r'\workspace\xxx\test_fixture\user-restore-success.zip' >>> posixPath = '/workspace/xxx/test_fixture/user-restore-success.zip' >>> pWIN = pathlib.PurePath(winPath) >>> pPOSIX = pathlib.PurePath(posixPath) >>> pWIN.as_posix() '\\workspace\\xxx\\test_fixture\\user-restore-success.zip' >>> pPOSIX.as_posix() '/workspace/xxx/test_fixture/user-restore-success.zip' ``` * From window + powershell + python3.6 ``` Python 3.6.8 
(tags/v3.6.8:3c6b436a57, Dec 24 2018, 00:16:47) [MSC v.1916 64 bit (AMD64)] on win32 >>> winPath = r'\workspace\xxx\test_fixture\user-restore-success.zip' >>> posixPath = '/workspace/xxx/test_fixture/user-restore-success.zip' >>> pWIN = pathlib.PurePath(winPath) >>> pPOSIX = pathlib.PurePath(posixPath) >>> pWIN.as_posix() '/workspace/xxx/test_fixture/user-restore-success.zip' >>> pPOSIX.as_posix() '/workspace/xxx/test_fixture/user-restore-success.zip' ``` * From MAC ``` >>> winPath = '\\workspace\\xxx\\test_fixture\\user-restore-success.zip' >>> posixPath = '/workspace/xxx/test_fixture/user-restore-success.zip' >>> pWIN = pathlib.PurePath(winPath) >>> pPOSIX = pathlib.PurePath(posixPath) >>> pWIN.as_posix() '\\workspace\\xxx\\test_fixture\\user-restore-success.zip' >>> pPOSIX.as_posix() '/workspace/xxx/test_fixture/user-restore-success.zip' ``` # resolve() In window platform, resolve() returns absolute path only if such file is able to locate. Otherwise, it returns relative path. In Linux platform, resolve() returns absolute path anyway * From linux ``` root at b4f03ed3003b:/var/run/lock# python Python 3.8.0 (default, Oct 17 2019, 05:36:36) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pathlib >>> p = pathlib.Path('no_exists') >>> p.resolve() PosixPath('/run/lock/no_exists') >>> str(p.resolve()) '/run/lock/no_exits' ``` * From window ``` Python 3.6.8 (tags/v3.6.8:3c6b436a57, Dec 24 2018, 00:16:47) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import pathlib >>> p = pathlib.Path('README.md') >>> p.resolve() WindowsPath('C:/Users/xxx/PycharmProjects/yyy/README.md') >>> str(p.resolve()) 'C:\\Users\\xxx\\PycharmProjects\\yyy\\README.md' >>> p = pathlib.Path('no_exists') >>> p.resolve() WindowsPath('no_exists') >>> str(p.resolve()) 'no_exists' ``` Also, I have spotted a ticket similar to this one https://bugs.python.org/issue41357 ---------- components: FreeBSD, Windows, macOS messages: 374632 nosy: Mond Wan, koobs, ned.deily, paul.moore, ronaldoussoren, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: pathlib behave differ between OS versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 31 06:10:32 2020 From: report at bugs.python.org (Anatoli Babenia) Date: Fri, 31 Jul 2020 10:10:32 +0000 Subject: [New-bugs-announce] [issue41449] An article on Python 3 stdout and stderr output buffering Message-ID: <1596190232.56.0.383059757324.issue41449@roundup.psfhosted.org> New submission from Anatoli Babenia : It is hard to find info why Python 3 buffers stdout/stderr. The buffering causes problems when debugging Python apps in Docker and Kubernetes, and it is unclear if it is Python 3 who starts to buffer stdout if no tty is attached, it is Docker, or it is Kubernetes. The only bit of info that could be searched is the description of -u option https://docs.python.org/3.8/using/cmdline.html?#cmdoption-u which is not linked to any article. The `-u` description also says. > Changed in version 3.7: The text layer of the stdout and stderr streams now is unbuffered. However, I don't understand what is the text layers of stdout. And there is no description of behaviour when the output is not attached, and when the output is redirected. 
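For what it is worth, the behavior in question can be demonstrated without Docker or Kubernetes: when sys.stdout is a pipe rather than a tty, Python 3 block-buffers it, and the usual remedies are the -u option, PYTHONUNBUFFERED=1, or flush=True. A small illustration (file name, message text and sleep duration are arbitrary):

```
# buffering_demo.py
import sys
import time

print("stdout is a tty:", sys.stdout.isatty())
print("before sleep")              # with a pipe, this sits in the block buffer
time.sleep(10)
print("after sleep", flush=True)   # flush=True forces the buffer out explicitly

# python buffering_demo.py | cat                      -> nothing appears until the end
# python -u buffering_demo.py | cat                   -> lines appear immediately
# PYTHONUNBUFFERED=1 python buffering_demo.py | cat   -> same as -u
```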
---------- assignee: docs at python components: Documentation messages: 374633 nosy: Anatoli Babenia, docs at python priority: normal severity: normal status: open title: An article on Python 3 stdout and stderr output buffering _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 31 10:36:27 2020 From: report at bugs.python.org (Alexander Sibiryakov) Date: Fri, 31 Jul 2020 14:36:27 +0000 Subject: [New-bugs-announce] [issue41450] OSError is not documented in ssl library, but still can be thrown Message-ID: <1596206187.31.0.0863049661733.issue41450@roundup.psfhosted.org> New submission from Alexander Sibiryakov : See stack trace [07/15/2020 08:51:14.799: ERROR/kafka.producer.sender] Uncaught error in kafka producer I/O thread Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/kafka/producer/sender.py", line 60, in run self.run_once() File "/usr/local/lib/python3.6/site-packages/kafka/producer/sender.py", line 160, in run_once self._client.poll(timeout_ms=poll_timeout_ms) File "/usr/local/lib/python3.6/site-packages/kafka/client_async.py", line 580, in poll self._maybe_connect(node_id) File "/usr/local/lib/python3.6/site-packages/kafka/client_async.py", line 390, in _maybe_connect conn.connect() File "/usr/local/lib/python3.6/site-packages/kafka/conn.py", line 426, in connect if self._try_handshake(): File "/usr/local/lib/python3.6/site-packages/kafka/conn.py", line 505, in _try_handshake self._sock.do_handshake() File "/usr/local/lib/python3.6/ssl.py", line 1077, in do_handshake self._sslobj.do_handshake() File "/usr/local/lib/python3.6/ssl.py", line 689, in do_handshake self._sslobj.do_handshake() OSError: [Errno 0] Error See docs https://docs.python.org/3.8/library/ssl.html and see source code: https://github.com/python/cpython/blob/3.8/Modules/_ssl.c Probably the best would be to proceed with using SSLError, but short term OSError could be documented. ---------- assignee: docs at python components: Documentation messages: 374644 nosy: docs at python, sibiryakov priority: normal severity: normal status: open title: OSError is not documented in ssl library, but still can be thrown type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 31 20:33:51 2020 From: report at bugs.python.org (Joshua Bronson) Date: Sat, 01 Aug 2020 00:33:51 +0000 Subject: [New-bugs-announce] [issue41451] Class with __weakref__ slot cannot inherit from multiple typing.Generic classes Message-ID: <1596242031.59.0.938089827478.issue41451@roundup.psfhosted.org> New submission from Joshua Bronson : This appears to be a bug in Python 3.6 that I hit while trying to add type hints to my bidirectional mapping library (https://bidict.rtfd.io). Pasting a working, minimal repro below for easier inline viewing, and also attaching for easier downloading and running. Please let me know if there is a workaround that would allow me to continue to support Python 3.6 after adding type hints without having to remove the use of slots and weak references. I spent a while trying to find one first but was then encouraged to report this by @ethanhs. Thanks in advance for any pointers you may have. 
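Before the full repro, it may help to see the plain, non-typing way to trigger the same error, since it clarifies what the TypeError means: a class may only declare a __weakref__ slot if no base class already provides one. The typing-specific part of this report is that on Python 3.6 the Generic machinery apparently ends up in an equivalent situation even though the user code declares __weakref__ only once; treat that reading as an assumption, not a diagnosis.

```
class Plain:  # no __slots__, so its instances already support weak references
    pass


try:
    class Slotted(Plain):
        __slots__ = ("__weakref__",)
except TypeError as exc:
    # __weakref__ slot disallowed: either we already got one, or __itemsize__ != 0
    print(exc)
```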
#!/usr/bin/env python3 """Repro for Python 3.6 slots + weakref + typing.Generic bug.""" from typing import Iterator, Mapping, MutableMapping, TypeVar from weakref import ref KT1 = TypeVar("KT1") KT2 = TypeVar("KT2") class Invertible(Mapping[KT1, KT2]): """A one-element mapping that is generic in two key types with a reference to its inverse. ...which in turn holds a (weak) reference back to it. >>> element = Invertible("H", 1) >>> element >>> element.inverse >>> element.inverse.inverse >>> element.inverse.inverse is element True >>> dict(element.items()) {'H': 1} >>> dict(element.inverse.items()) {1: 'H'} >>> list(element) ['H'] Uses the __slots__ optimization, and uses weakrefs for references in one direction to avoid strong reference cycles. And still manages to support pickling to boot! >>> from pickle import dumps, loads >>> pickled = dumps(element) >>> roundtripped = loads(pickled) >>> roundtripped """ # Each instance has (either a strong or a weak) reference to its # inverse instance, which has a (weak or strong) reference back. __slots__ = ("_inverse_strong", "_inverse_weak", "__weakref__", "key1", "key2") def __init__(self, key1: KT1, key2: KT2) -> None: self._inverse_weak = None self._inverse_strong = inverse = self.__class__.__new__(self.__class__) self.key1 = inverse.key2 = key1 self.key2 = inverse.key1 = key2 inverse._inverse_strong = None inverse._inverse_weak = ref(self) def __len__(self) -> int: return 1 def __iter__(self) -> Iterator[KT1]: yield self.key1 def __getitem__(self, key: KT1) -> KT2: if key == self.key1: return self.key2 raise KeyError(key) def __repr__(self) -> str: return f"<{self.__class__.__name__} key1={self.key1!r} key2={self.key2!r}>" @property def inverse(self) -> "Invertible[KT2, KT1]": """The inverse instance.""" if self._inverse_strong is not None: return self._inverse_strong inverse = self._inverse_weak() if inverse is not None: return inverse # Refcount of referent must have dropped to zero, # as in `Invertible().inverse.inverse`, so init a new one. self._inverse_weak = None self._inverse_strong = inverse = self.__class__.__new__(self.__class__) inverse.key2 = self.key1 inverse.key1 = self.key2 inverse._inverse_strong = None inverse._inverse_weak = ref(self) return inverse def __getstate__(self) -> dict: """Needed to enable pickling due to use of __slots__ and weakrefs.""" state = {} for cls in self.__class__.__mro__: slots = getattr(cls, '__slots__', ()) for slot in slots: if hasattr(self, slot): state[slot] = getattr(self, slot) # weakrefs can't be pickled. state.pop('_inverse_weak', None) # Added back in __setstate__ below. state.pop('__weakref__', None) # Not added back in __setstate__. Python manages this one. return state def __setstate__(self, state) -> None: """Needed because use of __slots__ would prevent unpickling otherwise.""" for slot, value in state.items(): setattr(self, slot, value) self._inverse_weak = None self._inverse_strong = inverse = self.__class__.__new__(self.__class__) inverse.key2 = self.key1 inverse.key1 = self.key2 inverse._inverse_strong = None inverse._inverse_weak = ref(self) # So far so good, but now let's make a mutable version. 
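# (Aside, flagged as an assumption rather than a confirmed diagnosis: Python
# 3.6's typing module still implements generics with the GenericMeta metaclass,
# under which a subscription such as Invertible[KT1, KT2] manufactures a new
# class object; Python 3.7 replaced that machinery per PEP 560. The 3.6-only
# failure below presumably comes from that class re-creation clashing with the
# __weakref__ entry in __slots__.)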
#
# The following class definition works on Python > 3.6, but fails on 3.6 with
# TypeError: __weakref__ slot disallowed: either we already got one, or __itemsize__ != 0
class MutableInvertible(Invertible[KT1, KT2], MutableMapping[KT1, KT2]):
    """Works on > 3.6, but we don't even get this far on Python 3.6:

    >>> MutableInvertible("H", 1)
    """

    __slots__ = ()

    def __setitem__(self, key1: KT1, key2: KT2) -> None:
        self.key1 = self.inverse.key2 = key1
        self.key2 = self.inverse.key1 = key2

    def __delitem__(self, key: KT1) -> None:
        raise KeyError(key)


if __name__ == "__main__":
    import doctest
    doctest.testmod()

---------- components: Library (Lib) files: repro.py messages: 374649 nosy: jab priority: normal severity: normal status: open title: Class with __weakref__ slot cannot inherit from multiple typing.Generic classes type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file49352/repro.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jul 31 23:34:14 2020 From: report at bugs.python.org (Ma Lin) Date: Sat, 01 Aug 2020 03:34:14 +0000 Subject: [New-bugs-announce] [issue41452] Inefficient BufferedReader.read(-1) Message-ID: <1596252854.24.0.567639060257.issue41452@roundup.psfhosted.org> New submission from Ma Lin : BufferedReader's constructor has a `buffer_size` parameter; it is the size of this buffer: when reading data from a BufferedReader object, a larger amount of data may be requested from the underlying raw stream and kept in an internal buffer (see the doc of BufferedReader [1]).

When calling the BufferedReader.read(size) function:

1. When `size` is a positive number, it reads `buffer_size` bytes from the underlying stream. This is the expected behavior.
2. When `size` is -1, it tries to call the underlying stream's readall() function [2]. In this case `buffer_size` is not respected.

The underlying stream may be a `RawIOBase`, whose readall() function reads only `DEFAULT_BUFFER_SIZE` bytes per read [3]. `DEFAULT_BUFFER_SIZE` is currently only 8 KiB, which makes BufferedReader.read(-1) very inefficient. If `buffer_size` bytes were read on each iteration instead, performance would be as expected. The attached file demonstrates this problem.

[1] doc of BufferedReader: https://docs.python.org/3/library/io.html#io.BufferedReader
[2] BufferedReader.read(-1) tries to call underlying stream's readall() function: https://github.com/python/cpython/blob/v3.9.0b5/Modules/_io/bufferedio.c#L1538-L1542
[3] RawIOBase.readall() reads DEFAULT_BUFFER_SIZE each time: https://github.com/python/cpython/blob/v3.9.0b5/Modules/_io/iobase.c#L968-L969

---------- components: IO files: demo.py messages: 374652 nosy: malin priority: normal severity: normal status: open title: Inefficient BufferedReader.read(-1) type: performance versions: Python 3.10 Added file: https://bugs.python.org/file49354/demo.py _______________________________________ Python tracker _______________________________________
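demo.py is attached rather than inlined above. An independent way to observe the behavior described in [2] and [3], written from scratch here rather than taken from the attachment, is to wrap a counting RawIOBase and see how many raw reads a single read(-1) issues despite a large buffer_size:

```
import io


class CountingRaw(io.RawIOBase):
    """Fake raw stream that serves `total` bytes and counts readinto() calls."""

    def __init__(self, total: int) -> None:
        self.remaining = total
        self.calls = 0

    def readable(self) -> bool:
        return True

    def readinto(self, b) -> int:
        self.calls += 1
        n = min(len(b), self.remaining)
        b[:n] = b"x" * n
        self.remaining -= n
        return n


raw = CountingRaw(8 * 1024 * 1024)                     # 8 MiB of data
buf = io.BufferedReader(raw, buffer_size=1024 * 1024)  # ask for a 1 MiB buffer
buf.read(-1)
print("raw readinto() calls:", raw.calls)
# If read(-1) goes through RawIOBase.readall() as described above, this prints
# on the order of a thousand calls (one ~8 KiB chunk each) rather than ~8.
```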