From report at bugs.python.org Mon Jun 1 07:41:44 2020 From: report at bugs.python.org (=?utf-8?q?Pekka_Kl=C3=A4rck?=) Date: Mon, 01 Jun 2020 11:41:44 +0000 Subject: [New-bugs-announce] [issue40838] inspect.getsourcefile documentation doesn't mention it can return None Message-ID: <1591011704.27.0.0385051429778.issue40838@roundup.psfhosted.org> New submission from Pekka Klärck : The docs of inspect.getsourcefile [1] mention the function can raise TypeError, but there's nothing about the function possibly returning None. This caused a bug in our project [2]. If I understand the code [3] correctly, None is returned if getsourcefile cannot determine the original source file of the file returned by getfile. That's understandable but should definitely be documented. Raising the TypeError that getfile itself may raise might be even better, but such a backwards incompatible API change is probably not worth the effort. While looking at the code, I also noticed there's getabsfile [4] that uses getfile if getsourcefile returns None. That looks handy but since the function isn't included in the inspect module documentation [5] using it feels pretty risky. [1] https://docs.python.org/3/library/inspect.html#inspect.getsourcefile [2] https://github.com/robotframework/robotframework/issues/3587 [3] https://github.com/python/cpython/blob/3.8/Lib/inspect.py#L692 [4] https://github.com/python/cpython/blob/3.8/Lib/inspect.py#L714 [5] https://bugs.python.org/issue12317 ---------- messages: 370547 nosy: pekka.klarck priority: normal severity: normal status: open title: inspect.getsourcefile documentation doesn't mention it can return None _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 1 14:46:37 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 01 Jun 2020 18:46:37 +0000 Subject: [New-bugs-announce] [issue40839] Disallow calling PyDict_GetItem() with the GIL released Message-ID: <1591037197.66.0.735914267677.issue40839@roundup.psfhosted.org> New submission from STINNER Victor : For historical reasons, it was allowed to call the PyDict_GetItem() function with the GIL released. I propose to change PyDict_GetItem() to fail with a fatal error if it's called with the GIL released. To help C extension module authors, I propose to keep a check at runtime even in release builds. Later, we may drop this check in release mode and only keep it in debug mode. In Python 3.8 and then 3.9, some functions started to crash when called without holding the GIL. It caused some bad surprises to C extension module authors. Example: gdb developers with bpo-40826. In my opinion, holding the GIL was always required even if it is not very explicit in the documentation of the C API (only the documentation of a few functions is explicit about the GIL). ---------- components: C API messages: 370572 nosy: vstinner priority: normal severity: normal status: open title: Disallow calling PyDict_GetItem() with the GIL released versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 1 20:28:11 2020 From: report at bugs.python.org (Jason R. Coombs) Date: Tue, 02 Jun 2020 00:28:11 +0000 Subject: [New-bugs-announce] [issue40840] lzma.h file not found building on macOS Message-ID: <1591057691.57.0.0922898078512.issue40840@roundup.psfhosted.org> New submission from Jason R.
Coombs : Attempting to build Python on macOS following [the instructions](https://devguide.python.org/setup/#macos-and-os-x) (for Homebrew). xz is installed: ``` $ brew --prefix xz /Users/jaraco/.local/homebrew/opt/xz ``` Yet, after running `./configure`, which makes no mention of "xz" or "lzma", "make" fails with: ``` gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -fvisibility=hidden -I./Include/internal -I./Include -I. -I/Users/jaraco/code/public/cpython/Include -I/Users/jaraco/code/public/cpython -c /Users/jaraco/code/public/cpython/Modules/_lzmamodule.c -o build/temp.macosx-10.15-x86_64-3.9/Users/jaraco/code/public/cpython/Modules/_lzmamodule.o /Users/jaraco/code/public/cpython/Modules/_lzmamodule.c:16:10: fatal error: 'lzma.h' file not found #include <lzma.h> ^~~~~~~~ 1 error generated. ``` Yet the file is there: ``` $ ls ~/.local/homebrew/opt/xz/include lzma lzma.h ``` What's missing from the instructions that I should be doing? ---------- messages: 370582 nosy: jaraco priority: normal severity: normal status: open title: lzma.h file not found building on macOS _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 02:42:45 2020 From: report at bugs.python.org (Dong-hee Na) Date: Tue, 02 Jun 2020 06:42:45 +0000 Subject: [New-bugs-announce] [issue40841] Implement mimetypes.sniff Message-ID: <1591080165.24.0.514984320107.issue40841@roundup.psfhosted.org> New submission from Dong-hee Na : The current mimetypes.guess_type API guesses file types based on file extensions. However, there is a more accurate method, which is called sniffing. Some languages like Go (https://golang.org/pkg/net/http/#DetectContentType) provide a mimesniff API, and the method is implemented based on a standard which is published at https://mimesniff.spec.whatwg.org/ I have a sample implementation of this: https://github.com/corona10/mimesniff/blob/master/mimesniff/mimesniff.py But the API interface would be changed to match the mimetypes API. So I would like to provide a mimetypes.sniff API rather than a new stdlib package like mimesniff.
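For illustration, a minimal sketch of what signature-based sniffing looks like; the tiny signature table and the function name sniff are assumptions for this example, not the proposed stdlib API (the WHATWG algorithm covers many more formats and rules):

```
_SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"\xff\xd8\xff", "image/jpeg"),
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
    (b"%PDF-", "application/pdf"),
]

def sniff(data, default="application/octet-stream"):
    # Guess a MIME type from the leading magic bytes of the payload.
    for magic, mime in _SIGNATURES:
        if data.startswith(magic):
            return mime
    return default

print(sniff(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # image/png
```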
---------- components: Library (Lib) messages: 370591 nosy: corona10, vstinner priority: normal severity: normal status: open title: Implement mimetypes.sniff type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 07:05:03 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Tue, 02 Jun 2020 11:05:03 +0000 Subject: [New-bugs-announce] [issue40842] _Pickler_CommitFrame() always returns 0 but its return code is checked Message-ID: <1591095903.49.0.200421685869.issue40842@roundup.psfhosted.org> New submission from Rémi Lapeyre : I'm currently investigating a SystemError one of our workers returned: returned NULL without setting an error While doing so I came across the _Pickler_CommitFrame() function: static int _Pickler_CommitFrame(PicklerObject *self) { size_t frame_len; char *qdata; if (!self->framing || self->frame_start == -1) return 0; frame_len = self->output_len - self->frame_start - FRAME_HEADER_SIZE; qdata = PyBytes_AS_STRING(self->output_buffer) + self->frame_start; if (frame_len >= FRAME_SIZE_MIN) { qdata[0] = FRAME; _write_size64(qdata + 1, frame_len); } else { memmove(qdata, qdata + FRAME_HEADER_SIZE, frame_len); self->output_len -= FRAME_HEADER_SIZE; } self->frame_start = -1; return 0; } Is there a reason for this function to return an int if it is always 0? I checked all call sites (_Pickler_GetString(), _Pickler_OpcodeBoundary(), _Pickler_write_bytes() and dump()) and they all check the return code but it seems useless. ---------- components: Library (Lib) messages: 370603 nosy: remi.lapeyre priority: normal severity: normal status: open title: _Pickler_CommitFrame() always returns 0 but its return code is checked type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 11:18:22 2020 From: report at bugs.python.org (mxmlnkn) Date: Tue, 02 Jun 2020 15:18:22 +0000 Subject: [New-bugs-announce] [issue40843] tarfile: ignore_zeros = True exceedingly slow on a sparse tar file Message-ID: <1591111102.48.0.378396019382.issue40843@roundup.psfhosted.org> New submission from mxmlnkn : Consider this example replicating a real use case where I was downloading the 1.191TiB ImageNet in sequential order for ~1GiB in order to preview it: echo "foo" > bar tar cf sparse.tar bar #!/usr/bin/env python3 # -*- coding: utf-8 -*- import os import tarfile import time t0 = time.time() for tarInfo in tarfile.open( 'sparse.tar', 'r:', ignore_zeros = True ): pass t1 = time.time() print( f"Small TAR took {t1 - t0}s to iterate over" ) f = open( 'sparse.tar', 'wb' ) f.truncate( 2*1024*1024*1024 ) f.close() t0 = time.time() for tarInfo in tarfile.open( 'sparse.tar', 'r:', ignore_zeros = True ): pass t1 = time.time() print( f"Small TAR with sparse tail took {t1 - t0}s to iterate over" ) Output: Small TAR took 0.00020813941955566406s to iterate over Small TAR with sparse tail took 6.999570846557617s to iterate over So, iterating over sparse holes takes tarfile ~300MiB/s. That sounds fast, but it is really slow for 1.2TiB, especially considering that tarfile is doing basically >nothing<. There should be better options, like using os.lseek with os.SEEK_DATA if available to skip those empty holes. An alternative would be an option to tell tarfile how many zeros it should skip at most.
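A rough sketch of the os.lseek/SEEK_DATA idea mentioned above; the helper name and the fallback behaviour are assumptions, not existing tarfile API:

```
import os

def next_data_offset(fileobj, offset):
    # Ask the OS for the next offset that contains data in a sparse file.
    # If SEEK_DATA is unavailable on this platform or filesystem, keep the
    # current offset and fall back to scanning forward block by block.
    try:
        return os.lseek(fileobj.fileno(), offset, os.SEEK_DATA)
    except (AttributeError, OSError):
        return offset
```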
Personally, I only use the ignore_zeros option to be able to work with concatenated TARs, which in my case only have up to 19*512 byte empty tar blocks to be skipped. Anything longer would indicate an invalid file. I'm aware that these maximum runs of zeros vary depending on the tar blocking factor, so it should be adjustable. ---------- messages: 370611 nosy: mxmlnkn priority: normal severity: normal status: open title: tarfile: ignore_zeros = True exceedingly slow on a sparse tar file versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 13:20:12 2020 From: report at bugs.python.org (Matthew Francis) Date: Tue, 02 Jun 2020 17:20:12 +0000 Subject: [New-bugs-announce] [issue40844] Alternate ways of running coroutines Message-ID: <1591118412.36.0.0833494361234.issue40844@roundup.psfhosted.org> New submission from Matthew Francis <4576francis at gmail.com>: Currently, using await inside a coroutine will block inside the coroutine. This behavior would usually be fine, but for some use cases a way to run coroutines without blocking and without creating a Task could be useful, because tasks don't allow for a callback. I'm suggesting a method on coroutines that runs them without blocking, and will run a callback when it's complete. ---------- components: asyncio messages: 370614 nosy: asvetlov, matthewfrancis, yselivanov priority: normal severity: normal status: open title: Alternate ways of running coroutines type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 13:53:00 2020 From: report at bugs.python.org (Roman Akopov) Date: Tue, 02 Jun 2020 17:53:00 +0000 Subject: [New-bugs-announce] [issue40845] idna encoding fails for Cherokee symbols Message-ID: <1591120380.37.0.839523789408.issue40845@roundup.psfhosted.org> New submission from Roman Akopov : For a specific Cherokee string of three symbols b'\\u13e3\\u13b3\\u13a9' generating the punycode representation fails. What steps will reproduce the problem? Execute 'ᏣᎳᎩ'.encode('idna') or, even more reliably, execute '\u13e3\u13b3\u13a9'.encode('idna') What is the expected result? 'xn--f9dt7l' What happens instead? 'xn--tz9ata7l' Version affected. Tested on Python 3.8.3 Windows and Python 3.6.8 CentOS. Other information. I was testing if our product supports internationalized domain names. So I had written a Python script which generated a DNS zone file with punycode-encoded names and a JavaScript file for a browser to send requests to URLs containing internationalized domain names. Strings were taken from the Common Locale Data Repository: 193 various URLs, one per language. When executed in Google Chrome, Mozilla Firefox and Microsoft EDGE, the domain name 'ᏣᎳᎩ.myhost.local' is converted to 'xn--f9dt7l.myhost.local', but we have 'xn--tz9ata7l.myhost.local' in the DNS zone file and this is how I had found the bug. For 192 other languages I have tested everything works just fine.
These are Afrikaans, Aghem, Akan, Amharic, Arabic, Assamese, Asu, Asturian, Azerbaijani, Basaa, Belarusian, Bemba, Bena, Bulgarian, Bambara, Bangla, Tibetan, Breton, Bodo, Bosnian, Catalan, Chakma, Chechen, Cebuano, Chiga, Czech, Church Slavic, Welsh, Danish, Taita, German, Zarma, Lower Sorbian, Duala, Jola-Fonyi, Dzongkha, Embu, Ewe, Greek, English, Esperanto, Spanish, Estonian, Basque, Ewondo, Persian, Fulah, Finnish, Filipino, Faroese, French, Friulian, Western Frisian, Irish, Scottish Gaelic, Galician, Swiss German, Gujarati, Gusii, Manx, Hausa, Hebrew, Hindi, Croatian, Upper Sorbian, Hungarian, Armenian, Interlingua, Indonesian, Sichuan Yi, Icelandic, Italian, Japanese, Ngomba, Machame, Javanese, Georgian, Kabyle, Kamba, Makonde, Kabuverdianu, Kikuyu, Kako, Kalaallisut, Kalenjin, Khmer, Kannada, Korean, Konkani, Kashmiri, Shambala, Bafia, Colognian, Kurdish, Cornish, Kyrgyz, Langi, Luxembourgish, Ganda, Lakota, Lingala, Lao, Lithuanian, Luba-Katanga, Luo, Luyia, Latvian, Maithili, Masai, Meru, Malagasy, Makhuwa-Meetto, Metaʼ, Maori, Macedonian, Malayalam, Mongolian, Manipuri, Marathi, Malay, Maltese, Mundang, Burmese, Mazanderani, Nama, North Ndebele, Low German, Nepali, Dutch, Kwasio, Norwegian Nynorsk, Nyankole, Oromo, Odia, Ossetic, Punjabi, Polish, Prussian, Pashto, Portuguese, Quechua, Romansh, Rundi, Romanian, Rombo, Russian, Kinyarwanda, Rwa, Samburu, Santali, Sangu, Sindhi, Northern Sami, Sena, Sango, Tachelhit, Sinhala, Slovak, Slovenian, Inari Sami, Shona, Somali, Albanian, Serbian, Swedish, Swahili, Tamil, Telugu, Teso, Tajik, Thai, Tigrinya, Turkish, Tatar, Uyghur, Ukrainian, Urdu, Uzbek, Vai, Volapük, Vunjo, Walser, Wolof, Xhosa, Soga, Yangben, Yiddish, Cantonese, Standard Moroccan Tamazight, Chinese, Traditional Chinese, Zulu. Somehow specifically Cherokee code points trigger the bug. On top of that, https://www.punycoder.com/ converts 'ᏣᎳᎩ' into 'xn--f9dt7l' and back. However 'xn--tz9ata7l' is reported as invalid punycode. ---------- components: Unicode messages: 370615 nosy: Roman Akopov, ezio.melotti, vstinner priority: normal severity: normal status: open title: idna encoding fails for Cherokee symbols type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 14:23:09 2020 From: report at bugs.python.org (J Arun Mani) Date: Tue, 02 Jun 2020 18:23:09 +0000 Subject: [New-bugs-announce] [issue40846] Misleading line in documentation Message-ID: <1591122189.99.0.387318782653.issue40846@roundup.psfhosted.org> New submission from J Arun Mani : Hi. In the docs: https://docs.python.org/3/faq/programming.html#faq-argument-vs-parameter it says "Parameters define what types of arguments a function can accept." This is not true. Python's functions do not impose any type checking or raise an error when the argument's type does not match its type hint. Please change the line to a better one. Maybe "Parameters define the names that will hold the supplied arguments."
Thanks ^^ ---------- assignee: docs at python components: Documentation messages: 370616 nosy: J Arun Mani, docs at python priority: normal severity: normal status: open title: Misleading line in documentation _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 15:03:34 2020 From: report at bugs.python.org (Adam Williamson) Date: Tue, 02 Jun 2020 19:03:34 +0000 Subject: [New-bugs-announce] [issue40847] New parser considers empty line following a backslash to be a syntax error, old parser didn't Message-ID: <1591124614.71.0.196427498662.issue40847@roundup.psfhosted.org> New submission from Adam Williamson : While debugging issues with the black test suite in Python 3.9, I found one which black upstream says is a CPython issue, so I'm filing it here. Reproduction is very easy. Just use this four-line tester (note the empty line after the backslash):

print("hello, world")
\

print("hello, world 2")

with that saved as `test.py`, check the results: sh-5.0# PYTHONOLDPARSER=1 python3 test.py hello, world hello, world 2 sh-5.0# python3 test.py File "/builddir/build/BUILD/black-19.10b0/test.py", line 3 ^ SyntaxError: invalid syntax The reason black has this test (well, a similar test - in black's test, the file *starts* with the backslash then the empty line, but the result is the same) is covered in https://github.com/psf/black/issues/922 and https://github.com/psf/black/pull/948 . ---------- components: Interpreter Core messages: 370618 nosy: adamwill priority: normal severity: normal status: open title: New parser considers empty line following a backslash to be a syntax error, old parser didn't type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 15:08:16 2020 From: report at bugs.python.org (Adam Williamson) Date: Tue, 02 Jun 2020 19:08:16 +0000 Subject: [New-bugs-announce] [issue40848] compile() can compile a bare starred expression with `PyCF_ONLY_AST` flag with the old parser, but not the new one Message-ID: <1591124896.73.0.969808465521.issue40848@roundup.psfhosted.org> New submission from Adam Williamson : Not 100% sure this would be considered a bug, but it seems at least worth filing to check. This is a behaviour difference between the new parser and the old one. It's very easy to reproduce: sh-5.0# PYTHONOLDPARSER=1 python3 Python 3.9.0b1 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from _ast import * >>> compile("(*starred)", "", "exec", flags=PyCF_ONLY_AST) >>> sh-5.0# python3 Python 3.9.0b1 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from _ast import * >>> compile("(*starred)", "", "exec", flags=PyCF_ONLY_AST) Traceback (most recent call last): File "", line 1, in File "", line 1 (*starred) ^ SyntaxError: invalid syntax That is, you can compile() the expression "(*starred)" with the PyCF_ONLY_AST flag set with the old parser, but not with the new one. Without PyCF_ONLY_AST you get a SyntaxError with both parsers, though with the old parser, the error message is "can't use starred expression here", not "invalid syntax".
---------- components: Interpreter Core messages: 370620 nosy: adamwill priority: normal severity: normal status: open title: compile() can compile a bare starred expression with `PyCF_ONLY_AST` flag with the old parser, but not the new one versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 15:20:28 2020 From: report at bugs.python.org (l0x) Date: Tue, 02 Jun 2020 19:20:28 +0000 Subject: [New-bugs-announce] [issue40849] Expose X509_V_FLAG_PARTIAL_CHAIN ssl flag Message-ID: <1591125628.93.0.87420657479.issue40849@roundup.psfhosted.org> New submission from l0x : This simple patch exposes OpenSSL's X509_V_FLAG_PARTIAL_CHAIN if it is defined. This lets us trust a certificate if it is signed by a certificate in the trust store, even if that CA is not a root CA. It makes it possible to trust an intermediate CA without trusting the root and all the other intermediate CAs it has signed. ---------- assignee: christian.heimes components: SSL messages: 370621 nosy: christian.heimes, l0x priority: normal pull_requests: 19828 severity: normal status: open title: Expose X509_V_FLAG_PARTIAL_CHAIN ssl flag type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 2 19:11:05 2020 From: report at bugs.python.org (Rainald Koch) Date: Tue, 02 Jun 2020 23:11:05 +0000 Subject: [New-bugs-announce] [issue40850] Programming FAQ - variables local to the lambdas Message-ID: <1591139465.93.0.232349952448.issue40850@roundup.psfhosted.org> New submission from Rainald Koch : In section [10] of the FAQ, I would add "(a function parameter with different default values)" after "variables local to the lambdas". Besides, I had to look up "memoizing" [2] and suspect that "memorizing" would be correct. [10] https://docs.python.org/3/faq/programming.html#why-do-lambdas-defined-in-a-loop-with-different-values-all-return-the-same-result [2] https://en.wikipedia.org/wiki/Memoization ---------- assignee: docs at python components: Documentation messages: 370637 nosy: Rainald Koch, docs at python priority: normal severity: normal status: open title: Programming FAQ - variables local to the lambdas type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 00:54:30 2020 From: report at bugs.python.org (akdor1154) Date: Wed, 03 Jun 2020 04:54:30 +0000 Subject: [New-bugs-announce] [issue40851] subprocess.Popen: impossible to show console window when shell=True Message-ID: <1591160070.69.0.321919779989.issue40851@roundup.psfhosted.org> New submission from akdor1154 : Hi all, It seems impossible to show a new console window when calling subprocess.Popen on Windows with shell=True. Attempt: si = subprocess.STARTUPINFO() si.dwFlags = subprocess.STARTF_USESHOWWINDOW si.wShowWindow = 5 proc = Popen( cmd, cwd=runFolder, creationflags=subprocess.CREATE_NEW_CONSOLE, shell=True, startupinfo=si ) In the current source, it looks like this is due to the block in https://github.com/python/cpython/blob/master/Lib/subprocess.py#L1405 , which unreservedly wipes wShowWindow even if I have provided it. Testing on Python 3.6 but I am assuming this affects all versions.
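An untested sketch of a possible workaround, based on the observation above that the wShowWindow reset only happens on the shell=True path: invoke the shell explicitly with shell=False so the supplied STARTUPINFO is honoured (the command string below is only a placeholder):

```
import os
import subprocess

si = subprocess.STARTUPINFO()
si.dwFlags = subprocess.STARTF_USESHOWWINDOW
si.wShowWindow = 5  # SW_SHOW

cmd = "echo hello"  # placeholder for the real command line
comspec = os.environ.get("COMSPEC", "cmd.exe")
proc = subprocess.Popen(
    '{} /c "{}"'.format(comspec, cmd),
    creationflags=subprocess.CREATE_NEW_CONSOLE,
    startupinfo=si,  # not overwritten, because shell=False here
)
```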
---------- components: Windows messages: 370639 nosy: akdor1154, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: subprocess.Popen: impossible to show console window when shell=True versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 04:14:48 2020 From: report at bugs.python.org (hung son luong) Date: Wed, 03 Jun 2020 08:14:48 +0000 Subject: [New-bugs-announce] [issue40852] Dictionary created with dict.fromkeys have issues (all explained in the file) Message-ID: <1591172088.76.0.193777337738.issue40852@roundup.psfhosted.org> Change by hung son luong : ---------- components: ctypes files: issue_of_python_dict.py nosy: hung son luong priority: normal severity: normal status: open title: Dictionary created with dict.fromkeys have issues (all explained in the file) type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file49214/issue_of_python_dict.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 06:26:55 2020 From: report at bugs.python.org (yesheng) Date: Wed, 03 Jun 2020 10:26:55 +0000 Subject: [New-bugs-announce] [issue40853] "set() in set()" should raise TypeError: unhashable type: 'set' Message-ID: <1591180015.28.0.813613755365.issue40853@roundup.psfhosted.org> New submission from yesheng : >>> set() in set() False # should raise TypeError >>> dict() in set() Traceback (most recent call last): File "", line 1, in TypeError: unhashable type: 'dict' >>> set() in dict() Traceback (most recent call last): File "", line 1, in TypeError: unhashable type: 'set' >>> dict() in dict() Traceback (most recent call last): File "", line 1, in TypeError: unhashable type: 'dict' >>> frozenset({1,2}) in {frozenset({1,2}), 3} True >>> {1,2} in {frozenset({1,2}), 3} True # should raise TypeError ---------- messages: 370652 nosy: yesheng priority: normal severity: normal status: open title: "set() in set()" should raise TypeError: unhashable type: 'set' type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 07:00:25 2020 From: report at bugs.python.org (Sandro Mani) Date: Wed, 03 Jun 2020 11:00:25 +0000 Subject: [New-bugs-announce] [issue40854] [Patch] Allow overriding sys.platlibdir Message-ID: <1591182025.1.0.356419479186.issue40854@roundup.psfhosted.org> New submission from Sandro Mani : You can currently point the python interpreter to a different install say via export PYTHONHOME= export PYTHONPATH=/lib/python3.9 python3 With the newly added platlibdir [1], if python was configured with platlibdir=lib64, this will break because i.e. the site-packages dir as returned by `sysconfig.get_paths()` will use lib64 and not lib as the other install may be using. This PR adds the possibility to override the platlibdir via environment variable. Full rationale: [2]. 
[1] https://github.com/python/cpython/pull/8068 [2] https://src.fedoraproject.org/rpms/python3.9/pull-request/10 ---------- components: Interpreter Core messages: 370655 nosy: smani priority: normal severity: normal status: open title: [Patch] Allow overriding sys.platlibdir versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 08:31:21 2020 From: report at bugs.python.org (Matti) Date: Wed, 03 Jun 2020 12:31:21 +0000 Subject: [New-bugs-announce] [issue40855] statistics.stdev ignores xbar argument Message-ID: <1591187481.58.0.119061539675.issue40855@roundup.psfhosted.org> New submission from Matti : statistics.variance also has the same problem. >>> import statistics >>> statistics.stdev([1,2]) 0.7071067811865476 >>> statistics.stdev([1,2], 3) 0.7071067811865476 >>> statistics.stdev([1,2], 1.5) 0.7071067811865476 should be 0.7071067811865476 2.23606797749979 0.5 ---------- messages: 370659 nosy: Folket priority: normal severity: normal status: open title: statistics.stdev ignores xbar argument type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 13:10:44 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Wed, 03 Jun 2020 17:10:44 +0000 Subject: [New-bugs-announce] [issue40856] IDLE line numbering should be light gray Message-ID: <1591204244.03.0.0281870328766.issue40856@roundup.psfhosted.org> New submission from Raymond Hettinger : In live code demos, the visual weight of the line numbers is heavier than the code next to it. Instead, it should be light gray, just dark enough to easily locate a line of interest, and light enough to tune out while reading the line and surrounding code. The current weight impairs readability. ---------- assignee: terry.reedy components: IDLE messages: 370681 nosy: rhettinger, terry.reedy priority: normal severity: normal status: open title: IDLE line numbering should be light gray type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 18:08:19 2020 From: report at bugs.python.org (Tim Reid) Date: Wed, 03 Jun 2020 22:08:19 +0000 Subject: [New-bugs-announce] [issue40857] tempfile.TemporaryDirectory() context manager can fail to propagate exceptions generated within its context Message-ID: <1591222099.53.0.802921276051.issue40857@roundup.psfhosted.org> New submission from Tim Reid : When an exception occurs within a tempfile.TemporaryDirectory() context and the directory cleanup fails, the _cleanup exception_ is propagated, not the original one. This effectively 'masks' the original exception, and makes it impossible to catch using a simple 'try'/'except' construct. ---------------------------------------------------------------------------- Code like this: import tempfile import os import sys try: with tempfile.TemporaryDirectory() as tempdir: print(tempdir) # some code happens here except ArithmeticError as exc: print('An arithmetic error occurred: {}'.format(exc)) print('Continuing...') is effective at catching any ArithmeticError which occurs in the code fragment but is not otherwise handled.
However if, in addition, an error occurs in cleaning up the temporary directory, the exception which occurred in the code is replaced by the cleanup exception, and is not propagated to be caught by the 'except' clause. For example: import tempfile import os import sys try: with tempfile.TemporaryDirectory() as tempdir: print(tempdir) n = 1 / 0 except ArithmeticError as exc: print('An arithmetic error occurred: {}'.format(exc)) print('Continuing...') produces this: /tmp/tmp_r2sxqgb An arithmetic error occurred: division by zero Continuing... but this: import tempfile import os import sys try: with tempfile.TemporaryDirectory() as tempdir: print(tempdir) os.rmdir(tempdir) # this new line is the only difference n = 1 / 0 except ArithmeticError as exc: print('An arithmetic error occurred: {}'.format(exc)) print('Continuing...') produces this: /tmp/tmp_yz6zyfs Traceback (most recent call last): File "tempfilebug.py", line 9, in <module> n = 1 / 0 ZeroDivisionError: division by zero During handling of the above exception, another exception occurred: Traceback (most recent call last): File "tempfilebug.py", line 9, in <module> n = 1 / 0 File "/usr/lib/python3.6/tempfile.py", line 948, in __exit__ self.cleanup() File "/usr/lib/python3.6/tempfile.py", line 952, in cleanup _rmtree(self.name) File "/usr/lib/python3.6/shutil.py", line 477, in rmtree onerror(os.lstat, path, sys.exc_info()) File "/usr/lib/python3.6/shutil.py", line 475, in rmtree orig_st = os.lstat(path) FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp_yz6zyfs' and the program exits with the top-level code having no chance to catch the ZeroDivisionError and continue execution. (To catch this exception, the top-level code would need to know to catch FileNotFoundError.) My view is that if an exception happens within a TemporaryDirectory context, *and* there is an exception generated as a result of the cleanup process, the original exception is likely to be more significant, and should be the exception which is propagated, not the one generated by the cleanup. ---------------------------------------------------------------------------- System info: $ python3 --version Python 3.6.9 $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.4 LTS Release: 18.04 Codename: bionic ---------- components: Extension Modules messages: 370689 nosy: granchester priority: normal severity: normal status: open title: tempfile.TemporaryDirectory() context manager can fail to propagate exceptions generated within its context type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 19:24:39 2020 From: report at bugs.python.org (Eryk Sun) Date: Wed, 03 Jun 2020 23:24:39 +0000 Subject: [New-bugs-announce] [issue40858] ntpath.realpath fails for broken symlinks with rooted target paths Message-ID: <1591226679.06.0.123805076951.issue40858@roundup.psfhosted.org> New submission from Eryk Sun : ntpath.realpath fails to resolve the non-strict path of a broken relative symlink if the target is a rooted path. For example: >>> os.readlink('symlink') '\\broken\\link' >>> os.path.realpath('symlink') '\\broken\\link' >>> os.path.abspath('symlink') 'C:\\Temp\\symlink' The problem is that relative paths have to be specially handled by ntpath._readlink_deep, but ntpath.isabs incorrectly classifies r"\broken\link" as an absolute path. It's actually relative to the current device or drive in the access context.
Other path libraries get this right, such as pathlib.Path.is_absolute and C++ path::is_absolute [1]. The documented behavior of ntpath.isabs (i.e. "begins with a (back)slash after chopping off a potential drive letter") isn't something that we can change. So ntpath._readlink_deep needs a private implementation of isabs. For example: def _isabs(s): s = os.fspath(s) seps = _get_bothseps(s) s = s[1:2] if (s and s[0] in seps) else splitdrive(s)[1] return bool(s) and s[0] in seps This classifies UNC paths as absolute; rooted paths without a drive as relative; and otherwise depends on splitdrive() to get the root path, if any. [1]: https://docs.microsoft.com/en-us/cpp/standard-library/path-class?view=vs-2019#is_absolute ---- Background The target of a relative symlink gets evaluated against the hard path that's used to access the link. A hard path contains directories and mountpoints, but no symlinks. In particular, a rooted symlink target such as r"\spam\eggs" is relative to the root device of the hard path that's used to access the link. This may or may not be the device on which the link resides. It depends on how it's accessed. For example, if the volume that contains the r"\spam\eggs" link is accessed via its DOS device name "V:", then it resolves to r"V:\spam\eggs". Similarly, if the r"\spam\eggs" link is accessed via r"C:\Mount\VolumeSymlink", where "VolumeSymlink" is a directory symlink to "V:\\", then it also resolves to r"V:\spam\eggs". On the other hand, if the r"\spam\eggs" link is accessed via the mountpoint r"C:\Mount\VolumeMountpoint", then it resolves to r"C:\spam\eggs". ---------- components: Library (Lib), Windows messages: 370690 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: ntpath.realpath fails for broken symlinks with rooted target paths type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 22:41:03 2020 From: report at bugs.python.org (Ma Lin) Date: Thu, 04 Jun 2020 02:41:03 +0000 Subject: [New-bugs-announce] [issue40859] Update Windows build to use xz-5.2.5 Message-ID: <1591238463.96.0.281158098158.issue40859@roundup.psfhosted.org> New submission from Ma Lin : The Windows build is using xz-5.2.2, it was released on 2015-09-29. xz-5.2.5 was released recently, maybe we can update this library. When preparing cpython-source-deps, don't forget to copy `xz-5.2.5\windows\vs2019\config.h` to `xz-5.2.5\windows\` folder. `\vs2019\config.h` and `\vs2017\config.h` are the same, except for the comment on the first line. I tested xz-5.2.5 on my local machine, it passed test_lzma.py. 
XZ Utils Release Notes: https://git.tukaani.org/?p=xz.git;a=blob;f=NEWS;hb=HEAD ---------- components: Windows messages: 370693 nosy: Ma Lin, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Update Windows build to use xz-5.2.5 versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 3 23:06:13 2020 From: report at bugs.python.org (Arkady M) Date: Thu, 04 Jun 2020 03:06:13 +0000 Subject: [New-bugs-announce] [issue40860] Exception in multiprocessing/context.py under load Message-ID: <1591239973.04.0.347860195364.issue40860@roundup.psfhosted.org> New submission from Arkady M : I am running an HTTP server (socketserver.ThreadingMixIn, http.server.HTTPServer) in a Docker container (FROM ubuntu:19.10). Occasionally I get an exception: Exception happened during processing of request from ('172.17.0.1', 35756) Traceback (most recent call last): File "/usr/lib/python3.7/socketserver.py", line 650, in process_request_thread self.finish_request(request, client_address) File "/usr/lib/python3.7/socketserver.py", line 360, in finish_request self.RequestHandlerClass(request, client_address, self) File "service.py", line 221, in __init__ super(UrlExtractorServer, self).__init__(*args, **kwargs) File "/usr/lib/python3.7/socketserver.py", line 720, in __init__ self.handle() File "/usr/lib/python3.7/http/server.py", line 426, in handle self.handle_one_request() File "/usr/lib/python3.7/http/server.py", line 414, in handle_one_request method() File "service.py", line 488, in do_POST self._post_extract(url) File "service.py", line 459, in _post_extract extracted_links, err_msg = self._extract_links(transaction_id, attachment_id, zip_password, data) File "service.py", line 403, in _extract_links error, results = call_timeout(process_deadline, extractor.extract_links_binary_multiprocess, args=data) File "service.py", line 175, in call_timeout manager = multiprocessing.Manager() File "/usr/lib/python3.7/multiprocessing/context.py", line 56, in Manager m.start() File "/usr/lib/python3.7/multiprocessing/managers.py", line 563, in start self._process.start() File "/usr/lib/python3.7/multiprocessing/process.py", line 111, in start _cleanup() File "/usr/lib/python3.7/multiprocessing/process.py", line 56, in _cleanup if p._popen.poll() is not None: AttributeError: 'NoneType' object has no attribute 'poll' I am in the process of preparing a reasonably simple piece of code demonstrating the problem. Meanwhile the following can be important. In the code below I am getting elapsed < timeout (20 times out of 70K).
In all cases psutil.Process() returned psutil.NoSuchProcess time_start = time.time() job = multiprocessing.Process(target=func, args=(args, results), kwargs=kwargs) job.start() job.join(timeout) elapsed = time.time()-time_start if job.is_alive(): try: process = psutil.Process(job.pid) process_error = f"pid {job.pid} status {process.status} {process}" except Exception as e: process_error = f"psutil.Process() failed {e}" if elapsed < timeout: print("elapsed < timeout") ---------- messages: 370695 nosy: Arkady M priority: normal severity: normal status: open title: Exception in multiprocessing/context.py under load type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 05:38:48 2020 From: report at bugs.python.org (Nikita Nemkin) Date: Thu, 04 Jun 2020 09:38:48 +0000 Subject: [New-bugs-announce] [issue40861] On Windows, liblzma is always built without optimization Message-ID: <1591263528.02.0.798205890082.issue40861@roundup.psfhosted.org> New submission from Nikita Nemkin : The Windows build system always builds liblzma with optimizations disabled (/Od), even in Release configuration. Compared to an optimized build (/O2), compression speed is 2-2.5x slower and module size is 30% larger. ---------- components: Build messages: 370702 nosy: nnemkin priority: normal severity: normal status: open title: On Windows, liblzma is always built without optimization type: performance versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 07:59:48 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Thu, 04 Jun 2020 11:59:48 +0000 Subject: [New-bugs-announce] [issue40862] argparse.BooleanOptionalAction accepts and silently ignores the const argument Message-ID: <1591271988.48.0.105166319.issue40862@roundup.psfhosted.org> New submission from Rémi Lapeyre : The action is used to store None, True or False when an argument like --foo or --no-foo is given on the CLI, so this action has no use for the const argument, but it is accepted without warning: Python 3.10.0a0 (heads/bpo-wip:6e23a9c82b, Jun 4 2020, 13:41:35) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import argparse >>> parser = argparse.ArgumentParser() >>> parser.add_argument('--foo', const='this_is_not_used', action=argparse.BooleanOptionalAction) BooleanOptionalAction(option_strings=['--foo', '--no-foo'], dest='foo', nargs=0, const=None, default=None, type=None, choices=None, help=None, metavar=None) >>> args = parser.parse_args() >>> args Namespace(foo=None) We could either always refuse this argument, or accept it and raise ValueError if it is different from None. The attached PR does the first.
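Not the attached PR, just a sketch of what "always refuse this argument" could look like, expressed as a hypothetical subclass:

```
import argparse

class StrictBooleanOptionalAction(argparse.BooleanOptionalAction):
    def __init__(self, *args, const=None, **kwargs):
        # Reject const up front instead of silently dropping it.
        if const is not None:
            raise ValueError("const is not supported by this action")
        super().__init__(*args, **kwargs)
```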
---------- components: Library (Lib) messages: 370703 nosy: remi.lapeyre priority: normal severity: normal status: open title: argparse.BooleanOptionalAction accepts and silently ignores the const argument versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 10:07:03 2020 From: report at bugs.python.org (Matthias Naegler) Date: Thu, 04 Jun 2020 14:07:03 +0000 Subject: [New-bugs-announce] [issue40863] bytes.decode changes/destroys line endings on windows Message-ID: <1591279623.17.0.0260015854078.issue40863@roundup.psfhosted.org> New submission from Matthias Naegler : ``` # 0x0D, 13 = \r # 0x0A, 10 = \n print('test "\\r\\n"') print('-------------') b = bytes([0x41, 0x0D, 0x0A]) print("bytes: %s" % b) print("string: %s" % b.decode('utf8'), end='') # expected string: "A\r\n" # result string: "A\r\r\n" print('test "\\n"') print('----------') b = bytes([0x41, 0x0A]) print("bytes: %s" % b) print("string: %s" % b.decode('utf8'), end='') # expected string: "A\n" # result string: "A\r\n" ``` It seems like bytes.decode always replaces "\n" with "\r\n". Tested with Windows 10 with Python: - 3.6.0 - 3.7.7 - 3.8.3-rc1 - 3.8.3 - 2.7.18 (works as expected) ---------- components: Unicode, Windows messages: 370711 nosy: Matthias Naegler, ezio.melotti, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: bytes.decode changes/destroys line endings on windows type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 10:28:47 2020 From: report at bugs.python.org (Evan Fagerberg) Date: Thu, 04 Jun 2020 14:28:47 +0000 Subject: [New-bugs-announce] [issue40864] spec_set/autospec/spec seems to not be reading attributes defined in class body Message-ID: <1591280927.55.0.854829069355.issue40864@roundup.psfhosted.org> New submission from Evan Fagerberg : Hello, I really like that this library allows for really strict mocking; however, one thing I have noticed is that it seems like using spec on a mock does not properly read the class body for attributes like some of the documentation claims. For example, this is a snippet of the Logger class in Python 3.6's `logging` module ```python class Logger(Filterer): name: str level: int parent: Union[Logger, PlaceHolder] propagate: bool handlers: List[Handler] disabled: int ``` Now I want to mock that class ensuring that propagate gets set to False, for example ```python from unittest import mock from logging import Logger logger = mock.Mock(spec_set=Logger) logger.propagate = False assert logger.propagate is False *** AttributeError: Mock object has no attribute 'propagate' ``` I have noticed this does work when the value is initialized in the class body, so for example ```python class Logger(Filterer): name: str level: int parent: Union[Logger, PlaceHolder] propagate: bool = False handlers: List[Handler] disabled: int ``` This would not fail with the test in question. Wondering if this is intended behavior or not or if I am misunderstanding something. I have tested this with Python 3.6.10, 3.8.2, all with the same result.
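A short illustration of the mechanism at play (the Example class below is made up): a bare annotation in a class body is only recorded in __annotations__ and does not create a class attribute, so dir()-based specs cannot see it.

```python
from unittest import mock

class Example:
    propagate: bool      # annotation only: no class attribute is created
    level: int = 0       # annotated *and* assigned: attribute exists

print(hasattr(Example, "propagate"))  # False
print(hasattr(Example, "level"))      # True

m = mock.Mock(spec_set=Example)
m.level = 5                           # allowed, 'level' is on the spec
try:
    m.propagate = False               # rejected, not found by dir(Example)
except AttributeError as exc:
    print(exc)                        # Mock object has no attribute 'propagate'
```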
---------- components: Tests messages: 370712 nosy: efagerberg priority: normal severity: normal status: open title: spec_set/autospec/spec seems to not be reading attributes defined in class body type: behavior versions: Python 3.6, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 15:11:01 2020 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Thu, 04 Jun 2020 19:11:01 +0000 Subject: [New-bugs-announce] [issue40865] Remove unused insint() macro in SHA512 module Message-ID: <1591297861.37.0.139238219668.issue40865@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : The insint() macro in line 741 is unused and can be removed. ---------- components: Library (Lib) messages: 370723 nosy: erlendaasland priority: normal severity: normal status: open title: Remove unused insint() macro in SHA512 module versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 16:30:32 2020 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Thu, 04 Jun 2020 20:30:32 +0000 Subject: [New-bugs-announce] [issue40866] Use PyModule_AddType() in posix module initialisation Message-ID: <1591302632.33.0.545094297594.issue40866@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : Use PyModule_AddType() iso. PyModule_AddObject() in posix module initialisation, and use PyTypeObject iso. PyObject for type objects. ---------- components: Library (Lib) messages: 370731 nosy: erlendaasland priority: normal severity: normal status: open title: Use PyModule_AddType() in posix module initialisation type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 16:56:57 2020 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Thu, 04 Jun 2020 20:56:57 +0000 Subject: [New-bugs-announce] [issue40867] Remove unused include in Module/_randommodule.c Message-ID: <1591304217.78.0.0159593220617.issue40867@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : _Py_bswap32() is no longer used in _randommodule.c (removed in commit 2d87577), so including pycore_byteswap.h is not necessary. ---------- components: Library (Lib) messages: 370732 nosy: erlendaasland priority: normal severity: normal status: open title: Remove unused include in Module/_randommodule.c type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 21:15:31 2020 From: report at bugs.python.org (Manuel Jacob) Date: Fri, 05 Jun 2020 01:15:31 +0000 Subject: [New-bugs-announce] [issue40868] io.TextIOBase.buffer is not necessarily a buffer Message-ID: <1591319731.34.0.581555363883.issue40868@roundup.psfhosted.org> New submission from Manuel Jacob : https://docs.python.org/dev/library/io.html#io.TextIOBase.buffer says: "The underlying binary buffer (a BufferedIOBase instance) that TextIOBase deals with. This is not part of the TextIOBase API and may not exist in some implementations." It is not necessarily a buffer (a BufferedIOBase instance), e.g. when the stdout and stderr streams are set to be unbuffered. 
Example: % python -u Python 3.8.3 (default, May 17 2020, 18:15:42) [GCC 10.1.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import io, sys >>> sys.stdout <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> >>> isinstance(sys.stdout, io.TextIOBase) True >>> sys.stdout.buffer <_io.FileIO name='<stdout>' mode='wb' closefd=False> >>> isinstance(sys.stdout.buffer, io.BufferedIOBase) False Therefore the name and the documentation are incorrect. I suggest to deprecate the attribute "buffer", introduce a new attribute with a correct name, and forward the old attribute to the new attribute and vice versa in the io.TextIOBase class. I think that "binary" would be a good attribute name for the underlying binary stream, as it would be consistent with io.BufferedIOBase.raw (for "the underlying raw stream"). ---------- assignee: docs at python components: Documentation, IO messages: 370744 nosy: docs at python, mjacob priority: normal severity: normal status: open title: io.TextIOBase.buffer is not necessarily a buffer _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 4 22:52:37 2020 From: report at bugs.python.org (YoSTEALTH) Date: Fri, 05 Jun 2020 02:52:37 +0000 Subject: [New-bugs-announce] [issue40869] errno missing descriptions Message-ID: <1591325557.2.0.303009626884.issue40869@roundup.psfhosted.org> New submission from YoSTEALTH : `errno` https://docs.python.org/3/library/errno.html is missing descriptions for symbols like `ECANCELED` https://www.gnu.org/software/libc/manual/html_node/Error-Codes.html There might be others missing descriptions as well; I haven't investigated further, but figured I'd start the bug report process. ---------- assignee: docs at python components: Documentation messages: 370749 nosy: YoSTEALTH, docs at python priority: normal severity: normal status: open title: errno missing descriptions _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 05:46:27 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Fri, 05 Jun 2020 09:46:27 +0000 Subject: [New-bugs-announce] [issue40870] Custom AST can crash Python (debug build) Message-ID: <1591350387.83.0.288817090794.issue40870@roundup.psfhosted.org> New submission from Batuhan Taskaya : import ast t = ast.fix_missing_locations(ast.Expression(ast.Name("True", ast.Load()))) compile(t, "", "eval") compilation of this AST can crash the interpreter for 3.8+ test_constant_as_name (test.test_ast.AST_Tests) ... python: Python/compile.c:3559: compiler_nameop: Assertion `!_PyUnicode_EqualToASCIIString(name, "None") && !_PyUnicode_EqualToASCIIString(name, "True") && !_PyUnicode_EqualToASCIIString(name, "False")' failed. Fatal Python error: Aborted I've encountered this while running the test suite of 'pytest' with the current master, so I guess there are some usages related to this out there. IMHO we should validate this at the PyAST_Validate step to prevent this kind of crash.
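A rough sketch of the kind of pre-compilation check being suggested, written at the Python level for illustration (this is not PyAST_Validate itself):

```
import ast

_RESERVED = {"True", "False", "None", "__debug__"}

def reject_reserved_names(tree):
    # Walk the custom AST and refuse reserved constants used as identifiers.
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id in _RESERVED:
            raise ValueError(f"cannot use {node.id!r} as an identifier")

tree = ast.fix_missing_locations(ast.Expression(ast.Name("True", ast.Load())))
reject_reserved_names(tree)  # raises ValueError instead of aborting in compile()
```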
---------- messages: 370753 nosy: BTaskaya priority: normal severity: normal status: open title: Custom AST can crash Python (debug build) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 09:42:51 2020 From: report at bugs.python.org (Jacob Kunnappally) Date: Fri, 05 Jun 2020 13:42:51 +0000 Subject: [New-bugs-announce] [issue40871] threading.Event.wait_unset() Message-ID: <1591364571.85.0.30119550937.issue40871@roundup.psfhosted.org> New submission from Jacob Kunnappally : Just requesting a threading.Event.wait_unset(timeout=None) function. I would request the same for multiprocessing. My use case: I've made my own class that adds a little bit of IPC plumbing to the base Process class (ChildProcess). Each ChildProcess has a status that it can update to let other threads/processes know what it's doing at the moment. There is a configurable period when that updated status can be considered "fresh". In some cases, I would like a listening process to be able to ignore the "freshness" and trigger some action only if the status updates while the listening process is waiting for it to update. To do this, I need to be able to know when the status goes unfresh so that waiting for the status to update can begin in earnest. Right now I am polling manually, and that can't be the right answer. Happy to clarify the above paragraphs. That's as best as I could think to describe it in text. ---------- components: Library (Lib) messages: 370761 nosy: Jacob Kunnappally priority: normal severity: normal status: open title: threading.Event.wait_unset() type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 09:46:27 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Fri, 05 Jun 2020 13:46:27 +0000 Subject: [New-bugs-announce] [issue40872] multiprocess.Lock is missing the locked() method Message-ID: <1591364787.91.0.202130651791.issue40872@roundup.psfhosted.org> New submission from Rémi Lapeyre : multiprocessing is supposed to be a drop-in replacement for the threading module and multiprocessing.Lock() is advertised as "a close analog of threading.Lock.", but it is missing the locked() method that returns the current status of the lock. ---------- components: Library (Lib) messages: 370762 nosy: remi.lapeyre priority: normal severity: normal status: open title: multiprocess.Lock is missing the locked() method versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 12:07:08 2020 From: report at bugs.python.org (=?utf-8?b?0JLQsNC70LXQvdGC0LjQvSBEcmV5aw==?=) Date: Fri, 05 Jun 2020 16:07:08 +0000 Subject: [New-bugs-announce] [issue40873] Something wrong with html.unescape() Message-ID: <1591373228.55.0.770453836865.issue40873@roundup.psfhosted.org> New submission from Валентин Dreyk : import html import xml.sax.saxutils as saxutils print(saxutils.unescape("&reghard")) # &reghard print(html.unescape("&reghard")) # ®hard html.unescape() replaces "&reg" with "®" even without ";" at the end. ---------- components: Library (Lib) messages: 370765 nosy: Валентин
Dreyk priority: normal severity: normal status: open title: Something wrong with html.unescape() type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 12:29:23 2020 From: report at bugs.python.org (Stefan Krah) Date: Fri, 05 Jun 2020 16:29:23 +0000 Subject: [New-bugs-announce] [issue40874] Update to libmpdec-2.5.0 Message-ID: <1591374563.33.0.374271364023.issue40874@roundup.psfhosted.org> New submission from Stefan Krah : Synopsis: There are no relevant new features for _decimal, but it would be too much work/error prone to have divergent code in libmpdec-2.5.0 and Python 3.9, especially for the Linux distributions. I'll release libmpdec-2.5.0/libmpdec++-2.5.0 in a month or so. The standalone lib needs the new versions of mpd_qsqrt() and mpd_qdiv(), because it allows identical result/input args. This is not needed for _decimal, but the distributions should have the correct version. In detail ========= - Use Google style guide for header guards and includes. - Update mpdecimal.h for C++11. - Use minimum set of includes. - Whitespace fixes. - Add annotations to suppress false positives from static analyzers. - Small rewrite in base conversion functions to suppress false positives from static analyzers. - MSVC: make libmpdec /W4 warning free and replace UNUSED with void casts. - MSVC: C++ fixes in vccompat.h. - Make a couple of quiet functions safe for being called with a dirty status (irrelevant for _decimal and not recommended anyway -- always set the status to 0 before calling a quiet function). - Add the sqrt/div versions that are already in the Python libmpdec but not in the upstream libmpdec. Also make them safe for identical result/operand arguments (irrelevant for _decimal, since Decimals are immutable). New functions for the upcoming libmpdec++ (unused in _decimal) ============================================================== - mpd_qset_string_exact() - mpd_qset_i64_exact() - mpd_qset_u64_exact() - mpd_qcopy_cxx() ---------- assignee: skrah messages: 370766 nosy: doko, lukasz.langa, skrah priority: normal severity: normal status: open title: Update to libmpdec-2.5.0 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 12:54:02 2020 From: report at bugs.python.org (Ram Rachum) Date: Fri, 05 Jun 2020 16:54:02 +0000 Subject: [New-bugs-announce] [issue40875] Implement __repr__ for classes in csv module Message-ID: <1591376042.94.0.47875557002.issue40875@roundup.psfhosted.org> New submission from Ram Rachum : Why see this: When you can see this: It could show columns=? if they weren't read yet. Any other info would be good too. This applies to all classes in the csv module. 
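One possible shape for such a __repr__, sketched as a subclass for illustration; note that _fieldnames is an internal DictReader attribute, used here only to avoid reading the header row as a side effect:

```
import csv, io

class ReprDictReader(csv.DictReader):
    def __repr__(self):
        # Show '?' until the header row has actually been read.
        cols = self._fieldnames if self._fieldnames is not None else "?"
        return f"<{type(self).__name__} columns={cols} line_num={self.line_num}>"

r = ReprDictReader(io.StringIO("a,b\n1,2\n"))
print(r)   # <ReprDictReader columns=? line_num=0>
next(r)
print(r)   # <ReprDictReader columns=['a', 'b'] line_num=2>
```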
---------- components: Library (Lib) messages: 370768 nosy: cool-RR priority: normal severity: normal status: open title: Implement __repr__ for classes in csv module type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 13:00:03 2020 From: report at bugs.python.org (Ram Rachum) Date: Fri, 05 Jun 2020 17:00:03 +0000 Subject: [New-bugs-announce] [issue40876] Clarify error message in csv module Message-ID: <1591376403.94.0.4038301694.issue40876@roundup.psfhosted.org> New submission from Ram Rachum : I was working with the csv module, and I vaguely remembered that you should open files in binary mode. So I did. Then I saw this error message: _csv.Error: iterator should return strings, not bytes (did you open the file in text mode?) I read the end and thought "I didn't open it in text mode, what does it want from me?!" It took a careful reading to figure out that I was *supposed to* open it in text mode. I'm going to open a PR to slightly change the text to: _csv.Error: iterator should return strings, not bytes (the file should be opened in text mode) ---------- components: Library (Lib) messages: 370769 nosy: cool-RR priority: normal severity: normal status: open title: Clarify error message in csv module type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 13:10:11 2020 From: report at bugs.python.org (Stefan Krah) Date: Fri, 05 Jun 2020 17:10:11 +0000 Subject: [New-bugs-announce] [issue40877] Code coverage is blocking a merge again Message-ID: <1591377011.07.0.156762797802.issue40877@roundup.psfhosted.org> New submission from Stefan Krah : Again code coverage prevents the merge button from going green: https://travis-ci.org/github/python/cpython/jobs/695095365 "The job exceeded the maximum time limit for jobs, and has been terminated." Why can this not run on buildbot.python.org? ---------- messages: 370770 nosy: skrah priority: normal severity: normal status: open title: Code coverage is blocking a merge again _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 15:34:15 2020 From: report at bugs.python.org (Stefan Krah) Date: Fri, 05 Jun 2020 19:34:15 +0000 Subject: [New-bugs-announce] [issue40878] Use c99 on the aixtools bot Message-ID: <1591385655.5.0.64244490428.issue40878@roundup.psfhosted.org> New submission from Stefan Krah : There appears to be an xlc buildbot with libmpdec failures. libmpdec uses C99 extern inline semantics. From the brief period that I had access to xlc I remember that xlc was quite picky about C99. Actually all of Python uses C99. So I think xlc_r needs to be invoked as c99_r. ---------- messages: 370779 nosy: Michael.Felt, skrah priority: normal severity: normal status: open title: Use c99 on the aixtools bot _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 16:49:23 2020 From: report at bugs.python.org (Matt Miller) Date: Fri, 05 Jun 2020 20:49:23 +0000 Subject: [New-bugs-announce] [issue40879] Strange regex cycle Message-ID: <1591390163.66.0.486591765961.issue40879@roundup.psfhosted.org> New submission from Matt Miller : I was evaluating a few regular expressions for parsing URL. 
One such expression (https://daringfireball.net/2010/07/improved_regex_for_matching_urls) causes the `re.Pattern` to exhibit some strange behavior (notice the stripped characters in the `repr`): ``` >>> STR_RE_URL = r"""(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:'".,<>???????]))""" >>> print(re.compile(STR_RE_URL)) re.compile('(?i)\\b((?:[a-z][\\w-]+:(?:/{1,3}|[a-z0-9%])|www\\d{0,3}[.]|[a-z0-9.\\-]+[.][a-z]{2,4}/)(?:[^\\s()<>]+|\\(([^\\s()<>]+|(\\([^\\s()<>]+\\)))*\\))+(?:\\(([^\\s()<>]+|(\\([^\\s()<>]+\\)))*\\)|[^\\s`!()\, re.IGNORECASE) ``` The reason I started looking at this was because the following string causes the same `re.Pattern` object's `.search()` method to loop forever for some reason: ``` >>> weird_str = """AY:OhQOhQNhQLdLAX78N'7M&6K%4K#4K#7N&9P(JcHOiQE^=8P'F_DJdLC\@9P&D\;IdKHbJ at Z8AY7@Y7AY7B[9E_Jc at Jc:F_1PjRRlSOiLKeAKeAGa=D^:F`=Ga=Fa>> url_pat.search(weird_str) ``` The `.search(weird_str)` will never exit. I assume the `.search()` taking forever is is an error in the expression but the fact that it causes the `repr` to strip some characters was something I thought should be looked into. I have not tested this on any other versions of Python. ---------- components: Library (Lib) messages: 370784 nosy: Matt Miller priority: normal severity: normal status: open title: Strange regex cycle versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 18:00:14 2020 From: report at bugs.python.org (Stefan Krah) Date: Fri, 05 Jun 2020 22:00:14 +0000 Subject: [New-bugs-announce] [issue40880] Invalid read in pegen.c Message-ID: <1591394414.92.0.125857803863.issue40880@roundup.psfhosted.org> New submission from Stefan Krah : >From test_decimal: test_xor (test.test_decimal.PyIBMTestCases) ... 
==17597== Invalid read of size 1 ==17597== at 0x64A7E2: newline_in_string (pegen.c:940) ==17597== by 0x64A84E: bad_single_statement (pegen.c:958) ==17597== by 0x64AD59: _PyPegen_run_parser (pegen.c:1101) ==17597== by 0x64B044: _PyPegen_run_parser_from_string (pegen.c:1194) ==17597== by 0x5C6D56: PyPegen_ASTFromStringObject (peg_api.c:27) ==17597== by 0x52A2A9: Py_CompileStringObject (pythonrun.c:1259) ==17597== by 0x63CBF6: builtin_compile_impl (bltinmodule.c:819) ==17597== by 0x63AF08: builtin_compile (bltinmodule.c.h:249) ==17597== by 0x5F9446: cfunction_vectorcall_FASTCALL_KEYWORDS (methodobject.c:440) ==17597== by 0x4D2642: _PyObject_VectorcallTstate (abstract.h:114) ==17597== by 0x4D26A1: PyObject_Vectorcall (abstract.h:123) ==17597== by 0x4E3F26: call_function (ceval.c:5111) ==17597== Address 0xadc82bf is 1 bytes before a block of size 22 alloc'd ==17597== at 0x4C3016F: realloc (vg_replace_malloc.c:826) ==17597== by 0x46A983: _PyMem_RawRealloc (obmalloc.c:121) ==17597== by 0x46B49E: PyMem_Realloc (obmalloc.c:623) ==17597== by 0x5C9565: translate_newlines (tokenizer.c:654) ==17597== by 0x5C98FE: PyTokenizer_FromUTF8 (tokenizer.c:751) ==17597== by 0x64AF7F: _PyPegen_run_parser_from_string (pegen.c:1169) ==17597== by 0x5C6D56: PyPegen_ASTFromStringObject (peg_api.c:27) ==17597== by 0x52A2A9: Py_CompileStringObject (pythonrun.c:1259) ==17597== by 0x63CBF6: builtin_compile_impl (bltinmodule.c:819) ==17597== by 0x63AF08: builtin_compile (bltinmodule.c.h:249) ==17597== by 0x5F9446: cfunction_vectorcall_FASTCALL_KEYWORDS (methodobject.c:440) ==17597== by 0x4D2642: _PyObject_VectorcallTstate (abstract.h:114) ==17597== *--cur dereferences one below p->tok->buf in the last iteration. ---------- components: Interpreter Core messages: 370791 nosy: lys.nikolaou, skrah priority: normal severity: normal stage: needs patch status: open title: Invalid read in pegen.c type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 18:38:42 2020 From: report at bugs.python.org (Stefan Krah) Date: Fri, 05 Jun 2020 22:38:42 +0000 Subject: [New-bugs-announce] [issue40881] --with-valgrind broken Message-ID: <1591396722.67.0.757074834651.issue40881@roundup.psfhosted.org> New submission from Stefan Krah : ./configure --with-valgrind: Objects/unicodeobject.c: In function ?unicode_release_interned?: Objects/unicodeobject.c:15672:26: error: lvalue required as left operand of assignment Py_REFCNT(s) += 1; ^ Objects/unicodeobject.c:15678:26: error: lvalue required as left operand of assignment Py_REFCNT(s) += 2; Well, Py_REFCNT(s) is no longer an lvalue. :-) ---------- messages: 370793 nosy: skrah, vstinner priority: normal severity: normal status: open title: --with-valgrind broken _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 18:47:05 2020 From: report at bugs.python.org (Eryk Sun) Date: Fri, 05 Jun 2020 22:47:05 +0000 Subject: [New-bugs-announce] [issue40882] memory leak in multiprocessing.shared_memory.SharedMemory in Windows Message-ID: <1591397225.82.0.932982465865.issue40882@roundup.psfhosted.org> New submission from Eryk Sun : mmap.mmap in Windows doesn't support an exist_ok parameter and doesn't correctly handle the combination fileno=-1, length=0, and tagname with an existing file mapping. SharedMemory has to work around these limitations. 
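For orientation, the user-level pattern that reaches the code path described here looks roughly like the following; the name and size are only illustrative:

```python
from multiprocessing import shared_memory

# Creating process
owner = shared_memory.SharedMemory(create=True, size=4096, name="psm_demo")

# Attaching process: this is the create=False path discussed above. Per the
# report, on Windows the attach maps a temporary view just to discover the
# size, and that view is never unmapped, so the memory stays mapped until
# the attaching process exits, even after close().
user = shared_memory.SharedMemory(name="psm_demo")
user.close()

owner.close()
owner.unlink()
```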
Part of the workaround for the create=False case requires mapping a view via MapViewOfFile in order to get the size from VirtualQuerySize, since mmap.mmap requires it (needlessly if implemented right) when fileno=-1. This mapped view never gets unmapped, which means the shared memory will never be freed until the termination of all processes that have opened it with create=False. Also, at least in a 32-bit process, this wastes precious address space. _winapi.UnmapViewOfFile needs to be implemented. Then the temporary view can be unmapped as follows: self._name = name h_map = _winapi.OpenFileMapping(_winapi.FILE_MAP_READ, False, name) try: p_buf = _winapi.MapViewOfFile(h_map, _winapi.FILE_MAP_READ, 0, 0, 0) finally: _winapi.CloseHandle(h_map) try: size = _winapi.VirtualQuerySize(p_buf) finally: _winapi.UnmapViewOfFile(p_buf) self._mmap = mmap.mmap(-1, size, tagname=name) [1]: https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-unmapviewoffile ---------- components: Library (Lib), Windows messages: 370794 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: memory leak in multiprocessing.shared_memory.SharedMemory in Windows type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 18:56:57 2020 From: report at bugs.python.org (Stefan Krah) Date: Fri, 05 Jun 2020 22:56:57 +0000 Subject: [New-bugs-announce] [issue40883] parse_string.c: free "str" Message-ID: <1591397817.32.0.494425935201.issue40883@roundup.psfhosted.org> New submission from Stefan Krah : Also in test_decimal, there's a small leak here: ==10040== 24 bytes in 1 blocks are definitely lost in loss record 549 of 5,095 ==10040== at 0x4C2DE56: malloc (vg_replace_malloc.c:299) ==10040== by 0x643B33: fstring_compile_expr (parse_string.c:594) ==10040== by 0x643B33: fstring_find_expr (parse_string.c:924) ==10040== by 0x643B33: fstring_find_literal_and_expr (parse_string.c:1076) ==10040== by 0x643B33: _PyPegen_FstringParser_ConcatFstring (parse_string.c:1293) ==10040== by 0x644569: fstring_parse (parse_string.c:1409) ==10040== by 0x644569: fstring_find_expr (parse_string.c:980) ==10040== by 0x644569: fstring_find_literal_and_expr (parse_string.c:1076) ==10040== by 0x644569: _PyPegen_FstringParser_ConcatFstring (parse_string.c:1293) ==10040== by 0x62CE94: _PyPegen_concatenate_strings (pegen.c:2003) ==10040== by 0x62EF52: strings_rule (parse.c:10834) ==10040== by 0x62EF52: atom_rule (parse.c:10674) ==10040== by 0x6389A2: t_primary_raw (parse.c:14042) ==10040== by 0x6389A2: t_primary_rule (parse.c:13839) ==10040== by 0x638D67: star_target_rule (parse.c:12684) ==10040== by 0x6392FC: star_targets_rule (parse.c:12501) ==10040== by 0x63BD7B: _tmp_135_rule (parse.c:23255) ==10040== by 0x63BD7B: _loop1_22_rule (parse.c:16468) ==10040== by 0x63BD7B: assignment_rule (parse.c:2116) ==10040== by 0x63BD7B: small_stmt_rule (parse.c:1508) ==10040== by 0x63DB44: simple_stmt_rule (parse.c:1406) ==10040== by 0x63F995: statement_rule (parse.c:1240) ==10040== by 0x63F995: _loop1_11_rule (parse.c:15835) ==10040== by 0x63F995: statements_rule (parse.c:1175) ==10040== by 0x63FB49: block_rule (parse.c:6127) ---------- messages: 370795 nosy: lys.nikolaou, pablogsal, skrah priority: normal severity: normal status: open title: parse_string.c: free "str" _______________________________________ Python tracker 
_______________________________________ From report at bugs.python.org Fri Jun 5 19:29:51 2020 From: report at bugs.python.org (Bar Harel) Date: Fri, 05 Jun 2020 23:29:51 +0000 Subject: [New-bugs-announce] [issue40884] Added defaults parameter for logging.Formatter Message-ID: <1591399791.89.0.0887642700869.issue40884@roundup.psfhosted.org> New submission from Bar Harel : TLDR; `logging.Formatter('%(ip)s %(message)s', defaults={"ip": None})` Python's logging.Formatter allows the placement of custom fields, e.g. `logging.Formatter("%(ip)s %(message)")`. If a handler has a formatter with a custom field, all log records that go through the handler must have the custom field set using `extra={}`. Failure to do so will result in exceptions thrown inside the logging library. Custom fields are common, and are even suggested by the Python logging cookbook, where they are attached to the root logger. There is, however, no way to specify default values for the custom fields. Quite a few issues arise from it. For example, if I've set a formatter on the root logger with the custom field "%(ip)s", all logging messages sent by the asyncio library, will cause exceptions to raise. Adding default values is possible using LoggerAdapter but will causes other issues as well as not solve the aforementioned problem. Adding default values is possible using Filters, but cause confusion, isn't simple, and permanently modify the record object itself, which can cause issues if more handlers or formatters are attached. >From a quick search, this feature was asked for many times in stackoverflow, and even spawned up a few libraries such as "logaugment" in order to solve it. I believe the solution offered, by using `defaults={}` is simple enough to not need discussion over python-ideas, yet common enough to justify the addition to the standard library. I've provided a reference PR. It does not cause backwards compatibility issues, complies with all formatter styles (%, {}, $), passes all tests and is simple enough to both use and understand. Not sure if 3.9 is feature-closed for small additions like this. ---------- components: Library (Lib) messages: 370796 nosy: bar.harel priority: normal severity: normal status: open title: Added defaults parameter for logging.Formatter type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 20:56:53 2020 From: report at bugs.python.org (Nehal Patel) Date: Sat, 06 Jun 2020 00:56:53 +0000 Subject: [New-bugs-announce] [issue40885] Cannot pipe GzipFile into subprocess Message-ID: <1591405013.03.0.805954404624.issue40885@roundup.psfhosted.org> New submission from Nehal Patel : The following code produces incorrect behavior: with gzip.open("foo.gz") as gz: res = subprocess.run("cat", stdin=gz, capture_output=True) the contents of res.stdout are identical to the contents of "foo.gz" It seems the subprocess somehow gets a hold of the underlying file descriptor pointing to the compressed file, and ends up being fed the compressed bytes. 
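A workaround sketch for anyone hitting the same behaviour (it reuses the reporter's "foo.gz"/"cat" example and assumes a POSIX system): decompress in Python and pass plain bytes to the child through a pipe, instead of handing the GzipFile object to stdin=, which only forwards the underlying file descriptor of the compressed file:

```python
import gzip
import subprocess

with gzip.open("foo.gz") as gz:
    # input= sends the already-decompressed bytes through a pipe
    res = subprocess.run("cat", input=gz.read(), capture_output=True)

print(res.stdout[:80])  # decompressed data, not the raw gzip stream
```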
---------- components: IO messages: 370804 nosy: Nehal Patel priority: normal severity: normal status: open title: Cannot pipe GzipFile into subprocess type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 5 21:11:59 2020 From: report at bugs.python.org (Bar Harel) Date: Sat, 06 Jun 2020 01:11:59 +0000 Subject: [New-bugs-announce] [issue40886] Add PYTHONLOGGING environment variable and -L cmdline argument Message-ID: <1591405919.07.0.877593512287.issue40886@roundup.psfhosted.org> New submission from Bar Harel : Per discussion on mailing list, I suggest adding a PYTHONLOGGING environment variable, and a matching -L cmdline argument. When set to a logging level of choice, they will initiate basicConfig with the appropriate level. For example, "py.exe -L info" will be equivalent to "logging.basicConfig(level='info')" on interpreter startup. Sames as setting env var "PYTHONLOGGING=info". This matches the current behavior of other settings, such as PYTHONWARNINGS and -W, allows to easily test programs without modifying them, and further completes the expected arguments available from the commandline. Discussion on mailing list for reference: https://mail.python.org/archives/list/python-ideas at python.org/thread/I74LVJWJLE2LUCCZGOF5A5JDSDHJ6WX2/ ---------- components: Library (Lib) messages: 370807 nosy: bar.harel priority: normal severity: normal status: open title: Add PYTHONLOGGING environment variable and -L cmdline argument type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 04:23:56 2020 From: report at bugs.python.org (Stefan Krah) Date: Sat, 06 Jun 2020 08:23:56 +0000 Subject: [New-bugs-announce] [issue40887] Leaks in new free lists Message-ID: <1591431836.65.0.780118303952.issue40887@roundup.psfhosted.org> New submission from Stefan Krah : I'm opening a separate issue to prevent #40521 from getting too big. 
Lists and tuples sometimes leak (starting 69ac6e58f and later): ==1445== 56 bytes in 1 blocks are definitely lost in loss record 1,542 of 4,898 ==1445== at 0x4C2DE56: malloc (vg_replace_malloc.c:299) ==1445== by 0x550487: _PyObject_GC_Alloc (gcmodule.c:2233) ==1445== by 0x550487: _PyObject_GC_Malloc (gcmodule.c:2260) ==1445== by 0x550487: _PyObject_GC_New (gcmodule.c:2272) ==1445== by 0x44CB04: PyList_New (listobject.c:144) ==1445== by 0x4E3DE1: init_filters (_warnings.c:88) ==1445== by 0x4E3DE1: warnings_init_state (_warnings.c:120) ==1445== by 0x4E3DE1: _PyWarnings_InitState (_warnings.c:1372) ==1445== by 0x521720: pycore_init_import_warnings (pylifecycle.c:687) ==1445== by 0x521720: pycore_interp_init (pylifecycle.c:735) ==1445== by 0x5246A0: pyinit_config (pylifecycle.c:763) ==1445== by 0x5246A0: pyinit_core (pylifecycle.c:924) ==1445== by 0x5246A0: Py_InitializeFromConfig (pylifecycle.c:1134) ==1445== by 0x4285DC: pymain_init (main.c:66) ==1445== by 0x4296A1: pymain_main (main.c:653) ==1445== by 0x4296A1: Py_BytesMain (main.c:686) ==1445== by 0x578882F: (below main) (libc-start.c:291) ==1445== 64 bytes in 1 blocks are definitely lost in loss record 2,259 of 4,898 ==1445== at 0x4C2DE56: malloc (vg_replace_malloc.c:299) ==1445== by 0x550611: _PyObject_GC_Alloc (gcmodule.c:2233) ==1445== by 0x550611: _PyObject_GC_Malloc (gcmodule.c:2260) ==1445== by 0x550611: _PyObject_GC_NewVar (gcmodule.c:2289) ==1445== by 0x48452C: tuple_alloc (tupleobject.c:76) ==1445== by 0x48452C: _PyTuple_FromArray (tupleobject.c:413) ==1445== by 0x435EE0: _PyObject_MakeTpCall (call.c:165) ==1445== by 0x436947: _PyObject_FastCallDictTstate (call.c:113) ==1445== by 0x436947: PyObject_VectorcallDict (call.c:142) ==1445== by 0x61DFC5: builtin___build_class__ (bltinmodule.c:232) ==1445== by 0x5E8A39: cfunction_vectorcall_FASTCALL_KEYWORDS (methodobject.c:440) ==1445== by 0x41F4D5: _PyObject_VectorcallTstate (abstract.h:114) ==1445== by 0x41F4D5: PyObject_Vectorcall (abstract.h:123) ==1445== by 0x41F4D5: call_function (ceval.c:5111) ==1445== by 0x42220E: _PyEval_EvalFrameDefault (ceval.c:3542) ==1445== by 0x4E6882: _PyEval_EvalFrame (pycore_ceval.h:40) ==1445== by 0x4E6882: _PyEval_EvalCode (ceval.c:4366) ==1445== by 0x4E6A65: _PyEval_EvalCodeWithName (ceval.c:4398) ---------- messages: 370813 nosy: skrah, vstinner priority: normal severity: normal status: open title: Leaks in new free lists _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 08:00:34 2020 From: report at bugs.python.org (=?utf-8?b?0JDQvdC00YDQtdC5INCa0LDQt9Cw0L3RhtC10LI=?=) Date: Sat, 06 Jun 2020 12:00:34 +0000 Subject: [New-bugs-announce] [issue40888] Add close method to queue Message-ID: <1591444834.07.0.323114052221.issue40888@roundup.psfhosted.org> New submission from ?????? ???????? : I have a problem with notifying all current subscribers and new subscribers about the closure of the queue and the reason. For example, I have a producer that reads messages from websocket or something else and send this to a queue, and several consumers (I do not know how many). If any exception occurred, then all current subscribers and subscribers which will be added later should know about this error. I tried to send an exception to a queue, but that did not help, because I have several consumers. Also, this will not protect new consumers. 
I propose to add a new close method with exc argument, which will throw an exception when calling the get method, and also throw an exception for all current _getters. ---------- components: asyncio messages: 370818 nosy: asvetlov, heckad, yselivanov priority: normal severity: normal status: open title: Add close method to queue _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 13:09:59 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 06 Jun 2020 17:09:59 +0000 Subject: [New-bugs-announce] [issue40889] Symmetric difference on dict_views is inefficient Message-ID: <1591463399.57.0.727941047538.issue40889@roundup.psfhosted.org> New submission from Raymond Hettinger : Running "d1.items() ^ d2.items()" will rehash every key and value in both dictionaries regardless of how much they overlap. By taking advantage of the known hashes, the analysis step could avoid making any calls to __hash__(). Only the result tuples would need to hashed. Currently the code below calls hash for every key and value on the left and for every key and value on the right: >>> left = {1: -1, 2: -2, 3: -3, 4: -4, 5: -5, 6: -6, 7: -7} >>> right = {1: -1, 2: -2, 3: -3, 4: -4, 5: -5, 8: -8, 9: -9} >>> left.items() ^ right.items() # Total work: 28 __hash__() calls {(6, -6), (7, -7), (8, -8), (9, -9)} Compare that with the workload for set symmetric difference which makes zero calls to __hash__(): >>> set(left) ^ set(right) {6, 7, 8, 9} FWIW, I do have an important use case where this matters. ---------- components: Interpreter Core messages: 370839 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: Symmetric difference on dict_views is inefficient type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 13:21:19 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 06 Jun 2020 17:21:19 +0000 Subject: [New-bugs-announce] [issue40890] Dict views should be introspectable Message-ID: <1591464079.86.0.471509652953.issue40890@roundup.psfhosted.org> New submission from Raymond Hettinger : Dict views wrap an underlying mapping but don't expose that mapping as an attribute. Traditionally, we do expose wrapped objects: property() exposes fget, partial() exposes func, bound methods expose __func__, ChainMap() exposes maps, etc. Exposing this attribute would help with introspection, making it possible to write efficient functions that operate on dict views. 
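As a sketch of what such introspection would enable, assume the wrapped dict were exposed as, say, a `mapping` attribute on the view (the attribute name here is hypothetical, not something the report specifies). A helper like the symmetric difference discussed in the preceding report (issue40889) could then hash only the tuples that end up in the result:

```python
def items_symmetric_difference(left_view, right_view):
    # `mapping` is the hypothetical attribute pointing back at the underlying dict.
    left, right = left_view.mapping, right_view.mapping
    out = set()
    for key in left.keys() | right.keys():
        if key in left and key in right and left[key] == right[key]:
            continue  # present and equal on both sides, contributes nothing
        if key in left:
            out.add((key, left[key]))   # only result tuples get hashed
        if key in right:
            out.add((key, right[key]))
    return out

# items_symmetric_difference(d1.items(), d2.items())
```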
---------- components: Interpreter Core keywords: easy (C) messages: 370841 nosy: rhettinger priority: normal severity: normal status: open title: Dict views should be introspectable type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 14:27:24 2020 From: report at bugs.python.org (hai shi) Date: Sat, 06 Jun 2020 18:27:24 +0000 Subject: [New-bugs-announce] [issue40891] Use pep573 in functools Message-ID: <1591468044.26.0.252061125305.issue40891@roundup.psfhosted.org> New submission from hai shi : petr have write a PR(adding a method: _PyType_GetModuleByDef) to supply pep573 in https://github.com/encukou/cpython/pull/4/commits/98dd889575cf7d1688495983ba791e14894a0bb8 So I try to use pep573 in functools again in: https://github.com/shihai1991/cpython/pull/5 >From the CI gate result, the only one question is a resource leak in functools(I am not find the leak reason now): https://github.com/shihai1991/cpython/pull/5/checks?check_run_id=743098116 some other discuss info in issue40137. ---------- components: Extension Modules messages: 370843 nosy: petr.viktorin, shihai1991, vstinner priority: normal severity: normal status: open title: Use pep573 in functools versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 15:23:27 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Sat, 06 Jun 2020 19:23:27 +0000 Subject: [New-bugs-announce] [issue40892] IDLE: use rlcompleter suffixed for completions Message-ID: <1591471407.52.0.255326225411.issue40892@roundup.psfhosted.org> New submission from Terry J. Reedy : Tab completions may be suffixed with ' ' (keywords), ':' (keywords)*, or '(' (callables) if one of those is required. Ex. 'import ', 'finally:', 'len('. Attributes may get '('. The possible downside is needing to remove the suffix if one does not want the completion for what it is but as a prefix to a longer word. Ex. 'imports','elsewhere', 'length'. But this should be much less common in code. * 'else ' should be 'else:' With keywords added (#37765) tab list is sorted(Completer().global_match('') + list(__main__.dict__.keys())). Whatever decide on, calculate first part once (if not already). list.sort used preexisting order. key are sorted. builtins might be. ---------- messages: 370845 nosy: terry.reedy priority: normal severity: normal status: open title: IDLE: use rlcompleter suffixed for completions _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 16:09:50 2020 From: report at bugs.python.org (E. Paine) Date: Sat, 06 Jun 2020 20:09:50 +0000 Subject: [New-bugs-announce] [issue40893] tkinter integrate TkDND support Message-ID: <1591474190.65.0.383574909158.issue40893@roundup.psfhosted.org> New submission from E. Paine : For years, the Python docs for the tkinter.dnd module (and prior Tkdnd module) have said that it will become deprecated once it has been replaced by TkDND bindings (I can find it back in the Python 2.2 docs ? https://docs.python.org/2.2/lib/node508.html). Despite this, I cannot find anywhere that the changes are actually proposed (have I missed something?!). Attached is a git diff of a draft of proposed changes for the tkinter library, tkinter docs and Windows installer. 
The changes to the tkinter library are designed so that TkDND is not required where tcl/tk is installed, and in particular I check before any call is made to the library that it has been successfully loaded so as to give the user a more helpful error message. I don't feel it is ready to be a PR yet, unless that would be helpful, as I have not been able to extensively test on MacOS (I think I have sufficiently tested on Windows and Linux, though someone is bound to find a problem!). I also don't feel it is ready for a PR because I would really appreciate it if someone could take a serious look at the changes to the installer. I have got it working but I don't really understand how the installer works and it is quite likely that the TkDND externals should have their own group and bindpath. On a similar note to the Windows installer, how do we declare to the Linux package maintainers that TkDND is now an optional dependency of Python? I cannot find a file where optional dependencies are mentioned, so is there some mechanism to notify them of the changes? I want to give credit where it is due and so it should be noted that the new docs are very nearly just a "translation" of the TkDND man page and that the changes in the tkinter library are (loosely) based on the TkinterDND bindings. I also have the relevant files for the cpython-source-deps and cpython-bin-deps repos, but would need someone with write permissions to create a "tkdnd" branch in each of those repos before I create the PRs for them. I am very new to contributing and so have a few questions on what is expected of me: 1. What is required of a news entry, and where would it go (I am assuming Misc\NEWS.d\next\Library)? 2. Could such a change, though not particularly large for code, be considered a "major new feature" (which would require a PEP)? If people don't generally want to add TkDND bindings to the tkinter library, I would instead propose that we remove the note that the tkinter.dnd module will be deprecated (but leave the note about it being experimental). ---------- components: Tkinter files: tkdnd.diff keywords: patch messages: 370852 nosy: epaine priority: normal severity: normal status: open title: tkinter integrate TkDND support type: enhancement versions: Python 3.10 Added file: https://bugs.python.org/file49217/tkdnd.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 18:13:51 2020 From: report at bugs.python.org (Timm Wagener) Date: Sat, 06 Jun 2020 22:13:51 +0000 Subject: [New-bugs-announce] [issue40894] asyncio.gather() cancelled() always False Message-ID: <1591481631.01.0.634103349075.issue40894@roundup.psfhosted.org> New submission from Timm Wagener : It seems like the future subclass returned by asyncio.gather() (_GatheringFuture) can never return True for future.cancelled() even after its cancel() has been invoked successfully (returning True) and an await on it actually raised a CancelledError. This is in contrast to the behavior of normal Futures and it seems generally to be classified as a minor bug by developers. * Stackoverflow Post: https://stackoverflow.com/questions/61942306/asyncio-gather-task-cancelled-is-false-after-task-cancel * Github snippet: https://gist.github.com/timmwagener/dfed038dc2081c8b5a770e175ba3756b I have created a fix and will create a PR. It seemed rather easy to fix and the asyncio test suite still succeeds. So maybe this is a minor bug, whose fix has no backward-compatibility consequences.
However, my understanding is that asyncio.gather() is scheduled for deprecation, so maybe changes in this area are on halt anyways!? ---- # excerpt from snippet async def main(): """Cancel a gather() future and child and return it.""" task_child = ensure_future(sleep(1.0)) future_gather = gather(task_child) future_gather.cancel() try: await future_gather except CancelledError: pass return future_gather, task_child # run future_gather, task_child = run(main()) # log gather state logger.info(future_gather.cancelled()) # False / UNEXPECTED / ASSUMED BUG logger.info(future_gather.done()) # True logger.info(future_gather.exception()) # CancelledError logger.info(future_gather._state) # FINISHED # log child state logger.info(task_child.cancelled()) # True logger.info(task_child.done()) # True # logger.info(task_child.exception()) Raises because _state is CANCELLED logger.info(task_child._state) # CANCELLED ---------- components: asyncio messages: 370855 nosy: asvetlov, timmwagener, yselivanov priority: normal severity: normal status: open title: asyncio.gather() cancelled() always False type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 19:23:07 2020 From: report at bugs.python.org (Daniel Fortunov) Date: Sat, 06 Jun 2020 23:23:07 +0000 Subject: [New-bugs-announce] [issue40895] weakref documentation contains cautions about dictionary mutation problems that have been solved in the implementation Message-ID: <1591485787.87.0.566129309531.issue40895@roundup.psfhosted.org> New submission from Daniel Fortunov : The doccumentation at https://docs.python.org/3.10/library/weakref.html cautions that the WeakKeyDictionary and WeakValueDictionary are susceptible to the problem of dictionary mutation during iteration. These notes present the user with a problem that has no easy solution. I dug into the implementation and found that fortunately, Antoine Pitrou already addressed this challenge (10 years ago!) by introducing an _IterationGuard context manager to the implementation, which delays mutation while an iteration is in progress. I asked for confirmation and Antoine agreed that these notes could be removed: https://github.com/python/cpython/commit/c1baa601e2b558deb690edfdf334fceee3b03327#commitcomment-39514438 ---------- assignee: docs at python components: Documentation messages: 370860 nosy: dfortunov, docs at python priority: normal severity: normal status: open title: weakref documentation contains cautions about dictionary mutation problems that have been solved in the implementation versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 20:20:06 2020 From: report at bugs.python.org (Edison Abahurire) Date: Sun, 07 Jun 2020 00:20:06 +0000 Subject: [New-bugs-announce] [issue40896] Missing links to Source Code in Documentation pages Message-ID: <1591489206.64.0.663736660712.issue40896@roundup.psfhosted.org> New submission from Edison Abahurire : Just below the Module heading, most library documenation pages like https://docs.python.org/3/library/datetime.html have a link to the source code in the form of `Source code: Lib/datetime.py` that links to the cpython github file of that module. Some modules like https://docs.python.org/3/library/time.html are missing this though and yet I think it would be quite useful for people who want to navigate the depths of the implementation. 
Now, the challenge is that most modules are not located in single code files but what if, instead of leaving it blank, we link to the folder that contains the code of that module? Wouldn't that be helpful? For example: for https://docs.python.org/3/library/wsgiref.html, we can add https://github.com/python/cpython/tree/master/Lib/wsgiref as the link to source code. ---------- assignee: docs at python components: Documentation messages: 370863 nosy: docs at python, edison.abahurire, eric.araujo, ezio.melotti, mdk, willingc priority: normal severity: normal status: open title: Missing links to Source Code in Documentation pages type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 6 23:37:38 2020 From: report at bugs.python.org (Edward Yang) Date: Sun, 07 Jun 2020 03:37:38 +0000 Subject: [New-bugs-announce] [issue40897] Inheriting from Generic causes inspect.signature to always return (*args, **kwargs) for constructor (and all subclasses) Message-ID: <1591501058.45.0.282415808867.issue40897@roundup.psfhosted.org> New submission from Edward Yang : Consider the following program:
```
import inspect
from typing import Generic, TypeVar

T = TypeVar('T')

class A(Generic[T]):
    def __init__(self) -> None:
        pass

print(inspect.signature(A))
```
I expect inspect.signature to return () as the signature of the constructor of this class. However, I get this:
```
$ python3 foo.py
(*args, **kwds)
```
Although it is true that one cannot generally rely on inspect.signature to always give the most accurate signature (because there may always be decorator or metaclass shenanigans getting in the way), in this particular case it seems especially undesirable because Python type annotations are supposed to be erased at runtime, and yet here inheriting from Generic (simply to add type annotations) causes a very clear change in runtime behavior. ---------- components: Library (Lib) messages: 370870 nosy: ezyang priority: normal severity: normal status: open title: Inheriting from Generic causes inspect.signature to always return (*args, **kwargs) for constructor (and all subclasses) type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 7 05:10:49 2020 From: report at bugs.python.org (hai shi) Date: Sun, 07 Jun 2020 09:10:49 +0000 Subject: [New-bugs-announce] [issue40898] Remove redundant if statements in tp_traverse Message-ID: <1591521049.7.0.876552225456.issue40898@roundup.psfhosted.org> New submission from hai shi : There are redundant if statements in the tp_traverse implementations of itertools, _functools and _io, so remove them.
---------- components: Extension Modules messages: 370883 nosy: shihai1991 priority: normal severity: normal status: open title: Remove redundant if statements in tp_traverse versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 7 07:08:14 2020 From: report at bugs.python.org (Johannes Buchner) Date: Sun, 07 Jun 2020 11:08:14 +0000 Subject: [New-bugs-announce] [issue40899] Ddcument exceptions raised by importlib.import Message-ID: <1591528094.81.0.814829435189.issue40899@roundup.psfhosted.org> New submission from Johannes Buchner : https://docs.python.org/3/library/functions.html#__import__ and https://docs.python.org/3/library/importlib.html#importlib.import_module do not list which Exceptions are raised in case the module cannot be imported. The two exceptions are listed here https://docs.python.org/3/library/exceptions.html#ImportError ModuleNotFoundError Could you add a half-sentence "and raises an ModuleNotFoundError if import was unsuccessful." to the function docs? ---------- assignee: docs at python components: Documentation messages: 370887 nosy: docs at python, j13r priority: normal severity: normal status: open title: Ddcument exceptions raised by importlib.import versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 7 08:55:47 2020 From: report at bugs.python.org (David CARLIER) Date: Sun, 07 Jun 2020 12:55:47 +0000 Subject: [New-bugs-announce] [issue40900] uuid module build fix on FreeBSD proposal Message-ID: <1591534547.12.0.261389892929.issue40900@roundup.psfhosted.org> Change by David CARLIER : ---------- components: FreeBSD nosy: devnexen, koobs priority: normal pull_requests: 19908 severity: normal status: open title: uuid module build fix on FreeBSD proposal type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 7 09:17:37 2020 From: report at bugs.python.org (Jakub Stasiak) Date: Sun, 07 Jun 2020 13:17:37 +0000 Subject: [New-bugs-announce] [issue40901] It's not clear what "interface name" means in socket if_nameindex/if_nametoindex/if_indextoname on Windows Message-ID: <1591535857.64.0.826872278568.issue40901@roundup.psfhosted.org> New submission from Jakub Stasiak : On Windows there are different names for the same interface in different contexts. ---------- components: Library (Lib), Windows messages: 370894 nosy: jstasiak, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: It's not clear what "interface name" means in socket if_nameindex/if_nametoindex/if_indextoname on Windows type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 7 18:06:49 2020 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sun, 07 Jun 2020 22:06:49 +0000 Subject: [New-bugs-announce] [issue40902] Speed up PEG parser by using operator precedence for binary operators Message-ID: <1591567609.56.0.0913110799919.issue40902@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Check https://github.com/we-like-parsers/cpython/issues/132 for context. 
---------- messages: 370915 nosy: pablogsal priority: normal severity: normal status: open title: Speed up PEG parser by using operator precedence for binary operators _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 7 18:09:09 2020 From: report at bugs.python.org (Steve Stagg) Date: Sun, 07 Jun 2020 22:09:09 +0000 Subject: [New-bugs-announce] [issue40903] Segfault in new PEG parser Message-ID: <1591567749.45.0.607406261903.issue40903@roundup.psfhosted.org> New submission from Steve Stagg : The input `p=p=` causes python 3.10 to crash. I bisected the change, and the behavior appears to have been introduced by 16ab07063cb564c1937714bd39d6915172f005b5 (bpo-40334: Correctly identify invalid target in assignment errors (GH-20076) ) Steps to reproduce: $ echo 'p=p=' | /path/to/python3.10 === SIGSEGV (Address boundary error) Analysis: This code is an invalid assignment, and the parser tries to generate a useful message for this case (invalid_assignment_rule). However, the `target` of the assignment is a Name node. The invalid_assignment_rule function tries to identify the target of the assignment, to create a useful description for the error menssage by calling `_PyPegen_get_invalid_target`, passing in the Name Node. `PyPegen_get_invalid_target` returns NULL if the type is a Name type (pegen.c:2114). The result of this call is then passed unconditionally to _PyPegen_get_expr_name, which is expecting a statement, not NULL. Error happens here: pegen.c:164 `_PyPegen_get_expr_name(expr_ty e)` is being called with `e = 0x0` ---------- components: Interpreter Core messages: 370916 nosy: stestagg priority: normal severity: normal status: open title: Segfault in new PEG parser type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 7 19:06:21 2020 From: report at bugs.python.org (Steve Stagg) Date: Sun, 07 Jun 2020 23:06:21 +0000 Subject: [New-bugs-announce] [issue40904] Segfault from new PEG parser handling yield withing f-strings Message-ID: <1591571181.13.0.747709905878.issue40904@roundup.psfhosted.org> New submission from Steve Stagg : The following command causes python to segfault: $ echo "f'{yield}'" | python/bin/python3 Bisect tracked this down to: c5fc15685202cda73f7c3f5c6f299b0945f58508 (bpo-40334: PEP 617 implementation: New PEG parser for CPython (GH-19503)) The illegal access is coming out of `fstring_shift_children_locations` as n->v.Yield.value is None. Correspondingly, the following produces the expected output: $ echo "f'{yield 1}'" | python/bin/python3 Suggesting there's a missing check for no yield value in this code. ---------- components: Interpreter Core messages: 370923 nosy: stestagg priority: normal severity: normal status: open title: Segfault from new PEG parser handling yield withing f-strings type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 7 19:51:44 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 07 Jun 2020 23:51:44 +0000 Subject: [New-bugs-announce] [issue40905] IDLE relabel Save on close Message-ID: <1591573904.32.0.951896364407.issue40905@roundup.psfhosted.org> New submission from Terry J. 
Reedy : Currently [Yes] [No] [Cancel] Proposed (#13504) [Save] [Skip save] [Cancel close] ---------- assignee: terry.reedy components: IDLE messages: 370934 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE relabel Save on close type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 01:50:44 2020 From: report at bugs.python.org (Saba Kauser) Date: Mon, 08 Jun 2020 05:50:44 +0000 Subject: [New-bugs-announce] [issue40906] Unable to import module due to python unable to resolve dependecies Message-ID: <1591595444.88.0.936758108882.issue40906@roundup.psfhosted.org> New submission from Saba Kauser : Hi, I am building python ibm_db C extension for Python 3.8 support. while the binary is generated successfully and installed to site-packages, I am unable to load the same. The error I get is: Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import ibm_db Traceback (most recent call last): File "", line 1, in ImportError: DLL load failed while importing ibm_db: The specified module could not be found. >>> quit() I have correctly set PATH and LIB to point to python installation. I have also updated LIB to point to correct runtime dependencies. Seeing through procmon, python is unable to resolve the runtime lib dependency path from PATH. The same steps work just fine for python 3.7 and other versions. This problem is only seen on windows. Linux and MAC works fine. I am attaching the process monitor snippets for python 3.8(failing case) as well as python (3.7) success case. Can you kindly look into it and share insights on what could lead to this problem and any possible resolution/s that are currently available. I tried with python 3.8.2 as same problem there as well. Attached are the procmon logs for python 3.7(success case) and python 3.8(failing case). 
python 3.7 : python3.7.log
python 3.8 : python3.8_1.log , python3.8_2.log
---------- components: Installation files: python3.8_1.png messages: 370964 nosy: sabakauser priority: normal severity: normal status: open title: Unable to import module due to python unable to resolve dependecies type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49220/python3.8_1.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 02:59:47 2020 From: report at bugs.python.org (r n) Date: Mon, 08 Jun 2020 06:59:47 +0000 Subject: [New-bugs-announce] [issue40907] cpython/queue.py: put() does not acquire not_empty lock before notifying on it Message-ID: <1591599587.02.0.0884269120154.issue40907@roundup.psfhosted.org> Change by r n : ---------- components: Library (Lib) nosy: r n2 priority: normal severity: normal status: open title: cpython/queue.py: put() does not acquire not_empty lock before notifying on it type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 06:00:33 2020 From: report at bugs.python.org (laurent chriqui) Date: Mon, 08 Jun 2020 10:00:33 +0000 Subject: [New-bugs-announce] [issue40908] datetime strftime with %Y and 2 digit years Message-ID: <1591610433.59.0.264285389277.issue40908@roundup.psfhosted.org> New submission from laurent chriqui : When you use strftime with a 2 digit year, i.e. '32', it outputs '32' instead of '0032'. This prevents parsing the string again with the same format through strptime. Example:

import datetime
datetime_format="%Y"
date=datetime.date(32,1,1)
date_str=datetime.datetime.strftime(date, datetime_format)
datetime.datetime.strptime(date_str, datetime_format)

Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/python3.8/_strptime.py", line 568, in _strptime_datetime
    tt, fraction, gmtoff_fraction = _strptime(data_string, format)
  File "/usr/lib/python3.8/_strptime.py", line 349, in _strptime
    raise ValueError("time data %r does not match format %r" %
ValueError: time data '32' does not match format '%Y'

---------- components: Library (Lib) messages: 370971 nosy: laurent chriqui priority: normal severity: normal status: open title: datetime strftime with %Y and 2 digit years type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 07:25:50 2020 From: report at bugs.python.org (Emil Bode) Date: Mon, 08 Jun 2020 11:25:50 +0000 Subject: [New-bugs-announce] [issue40909] unittest assertCountEqual doesn't filter on values in dict Message-ID: <1591615550.57.0.325345327913.issue40909@roundup.psfhosted.org> New submission from Emil Bode : Found as a comment on SO (https://stackoverflow.com/questions/12813633/how-to-assert-two-list-contain-the-same-elements-in-python#comment104082703_31832447): In unittest, `self.assertCountEqual({1: [1, 2, 3]}, {1: [5, 6, 7]})` succeeds, even though the two are different. In this simple case, using assertCountEqual is unnecessary, but there may be cases where a user wants to test for general equality regardless of order. Note that `self.assertCountEqual([{1: [1, 2, 3]}], [{1: [5, 6, 7]}])` (where both are lists with a single dict element) does fail. And comparing 2 dicts with different keys also fails as expected.
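For clarity, the surprising pass happens because assertCountEqual() simply iterates both arguments, and iterating a dict yields only its keys, so the values never take part in the comparison; assertEqual() does compare values. A short illustration:

```python
import unittest

class CountEqualDemo(unittest.TestCase):
    def test_dict_arguments(self):
        # Passes: both dicts iterate to the same single key, 1.
        self.assertCountEqual({1: [1, 2, 3]}, {1: [5, 6, 7]})
        # assertEqual compares keys *and* values, so the mismatch is caught.
        with self.assertRaises(AssertionError):
            self.assertEqual({1: [1, 2, 3]}, {1: [5, 6, 7]})

if __name__ == "__main__":
    unittest.main()
```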
---------- components: Library (Lib) messages: 370973 nosy: EmilBode priority: normal severity: normal status: open title: unittest assertCountEqual doesn't filter on values in dict type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 09:22:42 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 08 Jun 2020 13:22:42 +0000 Subject: [New-bugs-announce] [issue40910] Py_GetArgcArgv() is no longer exported by the C API Message-ID: <1591622562.08.0.803364056725.issue40910@roundup.psfhosted.org> New submission from STINNER Victor : Python 3.9 is now built with -fvisibility=hidden. The Py_GetArgcArgv() function is no longer exported. Previously, it was exported because all symbols were exported by default. I'm working on a PR to export it again. Fedora downstream issue, setproctitle is broken on Python 3.9: https://bugzilla.redhat.com/show_bug.cgi?id=1792059 ---------- components: C API messages: 370978 nosy: vstinner priority: normal severity: normal status: open title: Py_GetArgcArgv() is no longer exported by the C API versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 10:58:25 2020 From: report at bugs.python.org (Adam Cmiel) Date: Mon, 08 Jun 2020 14:58:25 +0000 Subject: [New-bugs-announce] [issue40911] Unexpected behaviour for += assignment to list inside tuple Message-ID: <1591628305.3.0.352312564341.issue40911@roundup.psfhosted.org> New submission from Adam Cmiel : Python version: Python 3.8.3 (default, May 15 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] on linux Description: When assigning to a tuple index using +=, if the element at that index is a list, the list is extended and a TypeError is raised. a = ([],) try: a[0] += [1] except TypeError: assert a != ([1],) # assertion fails else: assert a == ([1],) The expected behaviour is that only one of those things would happen (probably the list being extended with no error, given that a[0].extend([1]) works fine). ---------- components: Interpreter Core messages: 370990 nosy: Adam Cmiel priority: normal severity: normal status: open title: Unexpected behaviour for += assignment to list inside tuple type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 14:39:53 2020 From: report at bugs.python.org (Steve Dower) Date: Mon, 08 Jun 2020 18:39:53 +0000 Subject: [New-bugs-announce] [issue40912] _PyOS_SigintEvent is never closed on Windows Message-ID: <1591641593.68.0.465410581538.issue40912@roundup.psfhosted.org> New submission from Steve Dower : Currently, it's just stored as global state in signalmodule.c and forgotten about. It probably needs to become module state, and the signal module needs better initialization/shutdown. 
---------- components: Windows messages: 371036 nosy: eric.snow, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal stage: needs patch status: open title: _PyOS_SigintEvent is never closed on Windows type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 14:40:16 2020 From: report at bugs.python.org (Steve Dower) Date: Mon, 08 Jun 2020 18:40:16 +0000 Subject: [New-bugs-announce] [issue40913] time.sleep ignores errors on Windows Message-ID: <1591641616.51.0.015066290777.issue40913@roundup.psfhosted.org> New submission from Steve Dower : In time.sleep on Windows, if the WaitForSingleObject call fails, the call returns success. It essentially looks like the timeout was 0. Errors should be highly unlikely, as the event object is leaked (see issue40912), but if they _do_ occur, we should raise them. ---------- components: Windows messages: 371037 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: time.sleep ignores errors on Windows type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 17:15:00 2020 From: report at bugs.python.org (Michael Richardson) Date: Mon, 08 Jun 2020 21:15:00 +0000 Subject: [New-bugs-announce] [issue40914] tarfile creates output that appears to omit files Message-ID: <1591650900.31.0.37019490492.issue40914@roundup.psfhosted.org> New submission from Michael Richardson : The simplest tarcopy program seems to result in output that GNU tar, bsdtar, and even Emacs tar-mode are unable to correctly process. It appears that the resulting tar file is missing files, but examination of the raw output shows they might be there, but just corrupt. GNU tar actually complains while reading the file. https://github.com/mcr314/python3-tar-copy-failure has a test case. Here is the stupid code to reproduce it:

import tarfile

out = tarfile.open(name="./t2.tar", mode="w", format=tarfile.PAX_FORMAT)
with tarfile.open("./t1.tar") as tar:
    for file in tar.getmembers():
        print (file.name)
        out.addfile(file)
out.close()

This has been confirmed on python 3.6.9 (Ubuntu 18.04 LTS), and python 3.7.3 (Devuan Beowulf). It seems to omit different files on 32-bit and 64-bit systems. ---------- components: Library (Lib) messages: 371045 nosy: mcr314 priority: normal severity: normal status: open title: tarfile creates output that appears to omit files type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 19:08:03 2020 From: report at bugs.python.org (Eryk Sun) Date: Mon, 08 Jun 2020 23:08:03 +0000 Subject: [New-bugs-announce] [issue40915] multiple problems with mmap.resize() in Windows Message-ID: <1591657683.26.0.992231826575.issue40915@roundup.psfhosted.org> New submission from Eryk Sun : In Windows, mmap.resize() unmaps the view of the section [1], closes the section handle, resizes the file (without error checking), creates a new section (without checking for ERROR_ALREADY_EXISTS), and maps a new view. This code has several problems. Case 1 If it tries to shrink a file that has existing section references, SetEndOfFile fails with ERROR_USER_MAPPED_FILE.
Since the error isn't checked, the code proceeds to silently map a smaller view without changing the file size, which contradicts the documented behavior. For example: >>> f = open('C:/Temp/spam.txt', 'wb+') >>> f.truncate(8192) 8192 >>> m1 = mmap.mmap(f.fileno(), 0) >>> m2 = mmap.mmap(f.fileno(), 0) >>> m1.resize(4096) >>> os.fstat(f.fileno()).st_size 8192 In this case, it should set dwErrCode = ERROR_USER_MAPPED_FILE; set new_size = self->size; remap the file; and fail the resize() call. The mmap object should remain open. Case 2 Growing the file may succeed, but if it's a named section (i.e. self->tagname is set) with multiple handle references, then CreateFileMapping just reopens the existing section by name. Or there could be a race condition with another section that gets created with the same name in the window between closing the current section, resizing the file, and creating a new section. Generally mapping a new view will fail in this case. In particular, if the section isn't large enough, then the NtMapViewOfSection system call fails with STATUS_INVALID_VIEW_SIZE, which WinAPI MapViewOfFile translates to ERROR_ACCESS_DENIED. For example: >>> f = open('C:/Temp/spam.txt', 'wb+') >>> f.truncate(8192) 8192 >>> m1 = mmap.mmap(f.fileno(), 0, 'spam') >>> m2 = mmap.mmap(-1, 8192, 'spam') >>> m1.resize(16384) Traceback (most recent call last): File "", line 1, in PermissionError: [WinError 5] Access is denied >>> m1[0] Traceback (most recent call last): File "", line 1, in ValueError: mmap closed or invalid >>> os.fstat(f.fileno()).st_size 16384 If CreateFileMapping succeeds with the last error set to ERROR_ALREADY_EXISTS, then it either opened the previous section or there was a race condition in which another section was created with the same name. This case should be handled by closing the section and failing the call. It should not proceed to MapViewOfFile, which may 'succeed' unreliably. Additionally, there could be a LBYL check at the outset to avoid having to close the section in the case of a named section that's opened multiple times. If self->tagname is set, get the handle count of the section via NtQueryObject: ObjectBasicInformation [2]. If the handle count is not exactly 1, fail the resize() call, but don't close the mmap object. Case 3 resize() is broken for sections that are backed by the system paging file. For example: >>> m = mmap.mmap(-1, 4096) >>> m.resize(8192) Traceback (most recent call last): File "", line 1, in OSError: [WinError 87] The parameter is incorrect >>> m[0] Traceback (most recent call last): File "", line 1, in ValueError: mmap closed or invalid Since the handle value is INVALID_HANDLE_VALUE in this case, the calls to resize the 'file' via SetFilePointer and SetEndOfFile fail with ERROR_INVALID_HANDLE (due to an object type mismatch because handle value -1 is the current process). There's no error checking, so it proceeds to call CreateFileMapping with dwMaximumSizeHigh and dwMaximumSizeLow as 0, which fails immediately as an invalid parameter. Creating a section that's backed by the system paging file requires a non-zero size. The view of a section can shrink, but the section itself cannot grow. Generally resize() in this case would have to create a copy as follows: close the section; create another section with the new size; map the new view; copy the contents of the old view to the new view; and unmap the old view. 
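Expressed at the Python level, that copy-based sequence looks roughly like the following; the real change would have to be made in C inside mmapmodule.c, so this is only meant to illustrate the order of operations:

```python
import mmap

def grow_anonymous_map(old_map, new_size):
    new_map = mmap.mmap(-1, new_size)      # create the larger replacement mapping
    new_map[:len(old_map)] = old_map[:]    # copy the existing contents
    old_map.close()                        # drop the old mapping only after the copy
    return new_map

m = grow_anonymous_map(mmap.mmap(-1, 4096), 8192)
```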
Note that creating a new named section may open the previous section, or open an unrelated section if there's a race condition, so ERROR_ALREADY_EXISTS has to be handled. [1]: "section" is the native system name of a "file mapping". [2]: https://docs.microsoft.com/en-us/windows/win32/api/winternl/nf-winternl-ntqueryobject ---------- components: IO, Library (Lib), Windows messages: 371050 nosy: eryksun, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: muultiple problems with mmap.resize() in Windows type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 19:23:21 2020 From: report at bugs.python.org (Joshua Oreman) Date: Mon, 08 Jun 2020 23:23:21 +0000 Subject: [New-bugs-announce] [issue40916] Proposed tweak to allow for per-task async generator semantics Message-ID: <1591658601.25.0.694478043873.issue40916@roundup.psfhosted.org> New submission from Joshua Oreman : The current async generator finalization hooks are per-thread, but sometimes you want different async generator semantics in different async tasks in the same thread. This is currently challenging to implement using the thread-level hooks. I'm proposing a small backwards-compatible change to the existing async generator hook semantics in order to better support this use case. I'm seeking feedback on the proposal and also on how "major" it would be considered. Does it need a PEP? If not, does it need to wait for 3.10 or could it maybe still make 3.9 at this point? TL;DR: if the firstiter hook returns a callable, use that as the finalizer hook for this async generator instead of using the thread-level finalizer hook. == Why would you want this? == The use case that brought me here is trio-asyncio, a library that allows asyncio and Trio tasks to coexist in the same thread. Trio is working on adding async generator finalization support at the moment, which presents problems for trio-asyncio: it wouldn't work to finalize asyncio-flavored async generators as if they were Trio-flavored, or vice versa. It's easy to tell an async generator's flavor from the firstiter hook (just look at which flavor of task is running right now), but hard to ensure that the corresponding correct finalizer hook is called (more on this below). There are other possible uses as well. For example, one could imagine writing an async context manager that ensures all async generators firstiter'd within the context are aclose'd before exiting the context. This would be less verbose than guarding each individual use of an async generator, but still provide more deterministic cleanup behavior than leaving it up to GC. == Why is this challenging to implement currently? == Both of the above use cases want to provide a certain async generator firstiter/finalizer behavior, but only within a particular task or tasks. A task-local firstiter hook is easy: just install a thread-local hook that checks if you're in a task of interest, calls your custom logic if so or calls the previous hook if not. But a task-local finalizer hook is challenging, because finalization doesn't necessarily occur in the same context where the generator was being used. The firstiter hook would need to remember which finalizer hook to use for this async generator, but where could it store that information? 
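For reference, the per-thread machinery involved is sys.set_asyncgen_hooks(); the "easy" dispatching firstiter described above might look roughly like this, with the task-selection predicate left entirely to the caller (illustrative only, not the proposal itself):

```python
import asyncio
import sys

def install_dispatching_firstiter(is_task_of_interest, custom_firstiter):
    previous = sys.get_asyncgen_hooks()

    def firstiter(agen):
        try:
            task = asyncio.current_task()
        except RuntimeError:  # no running event loop
            task = None
        if task is not None and is_task_of_interest(task):
            custom_firstiter(agen)
        elif previous.firstiter is not None:
            previous.firstiter(agen)

    sys.set_asyncgen_hooks(firstiter=firstiter)
```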
Using the async generator iterator object as a key in a regular dictionary will prevent it from being finalized, and as a key in a WeakKeyDictionary will remove the information before the finalizer hook can look it up (because weakrefs are broken before finalizers are called). About the only solution I've found is to store it in the generator's f_locals dict, but that's not very appealing. == What's the proposed change? == My proposal is to allow the firstiter hook to return the finalizer hook that this async generator should use. If it does so, then when this async generator is finalized, it will call the returned finalizer hook instead of the thread-level one. If the firstiter hook returns None, then this async generator will use whatever the thread-level finalizer was just before firstiter was called, same as the current behavior. == How disruptive would this be? == Async generator objects already have an ag_finalizer field, so this would not change the object size. It's just providing a more flexible way to determine the value of ag_finalizer, which is currently not accessible from Python. There is a theoretical backwards compatibility concern if any existing firstiter hook returns a non-None value. There wouldn't be any reason to do so, though, and the number of different users of set_asyncgen_hooks() currently is likely extremely small. I searched all of Github and found only asyncio, curio, uvloop, async_generator, and my work-in-progress PR for Trio. All of these install either no firstiter hook or a firstiter hook that returns None. The implementation would likely only be a handful of lines change to genobject.c. ---------- components: asyncio messages: 371053 nosy: Joshua Oreman, asvetlov, njs, yselivanov priority: normal severity: normal status: open title: Proposed tweak to allow for per-task async generator semantics type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 19:46:10 2020 From: report at bugs.python.org (Toshio Kuratomi) Date: Mon, 08 Jun 2020 23:46:10 +0000 Subject: [New-bugs-announce] [issue40917] pickling exceptions with mandatory keyword args will traceback Message-ID: <1591659970.45.0.239988250333.issue40917@roundup.psfhosted.org> New submission from Toshio Kuratomi : I was trying to use multiprocessing (via a concurrent.futures.ProcessPoolExecutor) and encountered an error when pickling a custom Exception. On closer examination I was able to create a simple test case that only involves pickle: import pickle class StrRegexError(Exception): def __init__(self, *, pattern): self.pattern = pattern data = pickle.dumps(StrRegexError(pattern='test')) instance = pickle.loads(data) [pts/11 at peru /srv/ansible]$ python3.8 ~/p.py Traceback (most recent call last): File "/home/badger/p.py", line 7, in instance = pickle.loads(data) TypeError: __init__() missing 1 required keyword-only argument: 'pattern' pickle can handle mandatory keyword args in other classes derived from object; it's only classes derived from Exception that have issues. 
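For reference, a user-level workaround (not part of the report; the reconstructor name is made up) is to override __reduce__ so that unpickling re-supplies the keyword-only argument:

```python
import pickle

def _rebuild_str_regex_error(pattern):
    return StrRegexError(pattern=pattern)

class StrRegexError(Exception):
    def __init__(self, *, pattern):
        super().__init__(pattern)
        self.pattern = pattern

    def __reduce__(self):
        # The default BaseException pickling recreates the object as
        # cls(*self.args), which cannot pass a keyword-only argument,
        # so supply an explicit reconstructor instead.
        return (_rebuild_str_regex_error, (self.pattern,))

data = pickle.dumps(StrRegexError(pattern='test'))
instance = pickle.loads(data)
assert instance.pattern == 'test'
```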
---------- components: Library (Lib) messages: 371057 nosy: a.badger priority: normal severity: normal status: open title: pickling exceptions with mandatory keyword args will traceback versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 8 20:11:34 2020 From: report at bugs.python.org (=?utf-8?q?La=C3=ABl_Cellier?=) Date: Tue, 09 Jun 2020 00:11:34 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue40923=5D_Thread_safety?= =?utf-8?q?=C2=A0=3A_disable_intel=E2=80=99s_compiler_autopar_where_it?= =?utf-8?q?=E2=80=99s_being_relevant=2E?= Message-ID: <1591661494.52.0.310043483361.issue40923@roundup.psfhosted.org> New submission from La?l Cellier : As the bug tracker constantly crash over a continuation byte error while using latest Edge?????s Edge browser the description is posted here : https://pastebin.com/5AU9HuQk ---------- components: Build, C API, Interpreter Core messages: 371066 nosy: La?l Cellier priority: normal severity: normal status: open title: Thread safety?: disable intel?s compiler autopar where it?s being relevant. type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 01:59:48 2020 From: report at bugs.python.org (Ned Deily) Date: Tue, 09 Jun 2020 05:59:48 +0000 Subject: [New-bugs-announce] [issue40924] Recent importlib change breaks most recent certifi == 2020.4.5.2 Message-ID: <1591682388.65.0.686796854093.issue40924@roundup.psfhosted.org> New submission from Ned Deily : The very recent latest commits for Issue39791, "New `files()` api from importlib_resources", have broken the popular certifi package, a package which provides a basic set of Root Certificates for TLS secure network connection verification. Among other users of it, the python.org macOS installers encourage its users to run a provided script to install certifi. Alas, we discovered just after v3.9.2b2 was tagged that that script is broken because certifi.where() is returning a bogus path toe the pem file, what appears to be a path in a deleted temp directory. The culprit commits are 843c27765652e2322011fb3e5d88f4837de38c06 (master) and 9cf1be46e3692d565461afd3afa326d124d743dd (3.9). This is now a critical problem because in its most recent release, certifi 2020.4.5.2, certifi was changed to try to use importlib.resources.path(), if available, to find the path to the installed cacert.pem file. (The previous release, 2020.4.5.1, appears not to use path() and so is not affected by this bug.) https://github.com/certifi/python-certifi/commit/3fc8fec0466b0f12f10ad3e429b8d915bc5c26fb https://pypi.org/project/certifi/2020.4.5.2/ Without trying to debug the bug, I was able to bisect the branch and then reduce the problem seen in macOS installer testing to a fairly simple reproducible test case. The problem was reproduced on both Linux and macOS systems. The test case: # in a current cpython git repo, checkout the commit before the failing one git checkout 843c27765652e2322011fb3e5d88f4837de38c06^ git log HEAD^..HEAD git clean -fdxq ./configure --prefix=$PWD/root -q make -j4 ./python -m ensurepip ./python -m pip install --upgrade pip # not necessary to reproduce ./python -m pip install --force --no-binary certifi certifi==2020.4.5.2 ./python -c 'import certifi;print(certifi.where())' The output tail should be something like: [...] 
Successfully installed certifi-2020.4.5.2 /home/nad/cpython/root/lib/python3.10/site-packages/certifi/cacert.pem Now checkout the failing commit and repeat all the other steps: git checkout 843c27765652e2322011fb3e5d88f4837de38c06 git log HEAD^..HEAD [...] The output tail is now incorrect: [...] Successfully installed certifi-2020.4.5.2 /tmp/tmpqfjnbj5bcacert.pem The cacert.pem is installed to the expected (same) location in either case; its just the output from importlib.resources.path that is incorrect: ./python Python 3.10.0a0 (remotes/upstream/master:0a40849eb9, Jun 9 2020, 00:35:07) [GCC 8.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import importlib.resources, os, os.path, certifi >>> certifi.__file__ '/home/nad/cpython/root/lib/python3.10/site-packages/certifi/__init__.py' >>> os.listdir(os.path.dirname(certifi.__file__)) ['__pycache__', 'cacert.pem', '__init__.py', 'core.py', '__main__.py'] >>> with importlib.resources.path('certifi', 'cacert.pem') as f: print(f) ... /tmp/tmpxsrjxb8lcacert.pem >>> with importlib.resources.path('certifi', 'core.py') as f: print(f) ... /tmp/tmpjq8h3si5core.py No test suite failures were noted. Perhaps there should be a test case for this? Presumably any other downstream users of importlib.resources.path() are affected. ?ukasz as 3.9 release manager is aware there is an issue but was awaiting the tracking down of the problem before making a decision about what to do for 3.9.0b2. cc: Donald as the author of the certifi change. ---------- assignee: jaraco components: Library (Lib) messages: 371073 nosy: dstufft, jaraco, lukasz.langa, ned.deily priority: release blocker severity: normal stage: needs patch status: open title: Recent importlib change breaks most recent certifi == 2020.4.5.2 type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 06:13:40 2020 From: report at bugs.python.org (Mark Shannon) Date: Tue, 09 Jun 2020 10:13:40 +0000 Subject: [New-bugs-announce] [issue40925] Remove redundant macros used for stack manipulation in interpreter Message-ID: <1591697620.81.0.981270573583.issue40925@roundup.psfhosted.org> New submission from Mark Shannon : Currently, there are fourteen macros provided for stack manipulation. Along with the fundamental, POP(), PUSH() and PEEK(n) the following eleven are also provided: TOP() SECOND() THIRD() FOURTH() SET_TOP(v) SET_SECOND(v) SET_THIRD(v) SET_FOURTH(v) SET_VALUE(n, v) STACK_GROW(n) STACK_SHRINK(n) These are unnecessary as compilers have long understood the sequences of pointer increments and decrements that is generated by using POPs and PUSHes. See https://godbolt.org/z/Htw-2k which shows GCC generating exactly the same code from just POP, PUSH and PEEK as from direct access to items on the stack. Removing the redundant macros would make the code easier to reason about and analyze. TOP() and SECOND() are used quite heavily and are trivial to convert to PEEK by tools, so should probably be left alone. Notes: I'm ignoring the stack debugging macros here, they aren't a problem. The EMPTY() macro is only used twice, so might as well be replaced with STACK_LEVEL == 0. 
Sadly, there is no "maintainability/code quality" selection for "type" of issue, so I've chosen "performance" :( ---------- components: Interpreter Core keywords: easy (C) messages: 371087 nosy: Mark.Shannon priority: normal severity: normal stage: needs patch status: open title: Remove redundant macros used for stack manipulation in interpreter type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 06:20:39 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Tue, 09 Jun 2020 10:20:39 +0000 Subject: [New-bugs-announce] [issue40926] command line interface of symtable module is broken Message-ID: <1591698039.58.0.381389763759.issue40926@roundup.psfhosted.org> New submission from Batuhan Taskaya : (.venv) (Python 3.10.0a0) [ 1:11?S ] [ isidentical at x200:~ ] $ cat t.py import x a = 1 print(x) (.venv) (Python 3.10.0a0) [ 1:11?S ] [ isidentical at x200:~ ] $ python -m symtable t.py True False True False True False True False True False ... It can clearly seen that the initial argument [t.py] is completely ignored, and this script prints out the symtable.py itself. This is because the script uses argv[0] (itself) instead of argv[1] (the first argument). I also find this output quite poor since we don't know what these boolean values are; True False The fix I had in my mind is printing all properties instead of 2 boolean values $ ./cpython/cpython/python -m symtable t.py ==> {'local', 'imported', 'referenced'} ==> {'local', 'assigned'} ==> {'referenced', 'global'} ---------- components: Library (Lib) messages: 371088 nosy: BTaskaya priority: normal severity: normal stage: needs patch status: open title: command line interface of symtable module is broken type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 06:47:16 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Tue, 09 Jun 2020 10:47:16 +0000 Subject: [New-bugs-announce] [issue40927] ./python -m test test___all__ test_binhex fails Message-ID: <1591699636.51.0.837580045442.issue40927@roundup.psfhosted.org> New submission from R?mi Lapeyre : It looks like the warning registry does not get flushed properly: ./python -m test test___all__ test_binhex 0:00:00 load avg: 1.55 Run tests sequentially 0:00:00 load avg: 1.55 [1/2] test___all__ 0:00:01 load avg: 1.55 [2/2] test_binhex test test_binhex crashed -- Traceback (most recent call last): File "/Users/remi/src/cpython/Lib/test/libregrtest/runtest.py", line 270, in _runtest_inner refleak = _runtest_inner2(ns, test_name) File "/Users/remi/src/cpython/Lib/test/libregrtest/runtest.py", line 221, in _runtest_inner2 the_module = importlib.import_module(abstest) File "/Users/remi/src/cpython/Lib/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1030, in _gcd_import File "", line 1007, in _find_and_load File "", line 986, in _find_and_load_unlocked File "", line 680, in _load_unlocked File "", line 790, in exec_module File "", line 228, in _call_with_frames_removed File "/Users/remi/src/cpython/Lib/test/test_binhex.py", line 10, in import binhex File "/Users/remi/src/cpython/Lib/contextlib.py", line 124, in __exit__ next(self.gen) File "/Users/remi/src/cpython/Lib/test/support/__init__.py", line 1166, in _filterwarnings raise AssertionError("filter (%r, %s) did not catch any warning" 
% AssertionError: filter ('', DeprecationWarning) did not catch any warning test_binhex failed == Tests result: FAILURE == 1 test OK. 1 test failed: test_binhex Total duration: 2.0 sec Tests result: FAILURE It's not a very big issue, but it does appear when running refleaks on the whole stdlib. ---------- components: Tests messages: 371090 nosy: remi.lapeyre priority: normal severity: normal status: open title: ./python -m test test___all__ test_binhex fails versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 10:23:41 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Tue, 09 Jun 2020 14:23:41 +0000 Subject: [New-bugs-announce] [issue40928] test_decimal.CWhitebox.test_maxcontext_exact_arith() shows "malloc: can't allocate region" on MacOS Message-ID: <1591712621.72.0.447782390229.issue40928@roundup.psfhosted.org> New submission from Rémi Lapeyre : Here's the result of "./python -m test test_decimal -m test_maxcontext_exact_arith": 0:00:00 load avg: 1.33 Run tests sequentially 0:00:00 load avg: 1.33 [1/1] test_decimal python(7835,0x11a218dc0) malloc: can't allocate region :*** mach_vm_map(size=842105263157895168, flags: 100) failed (error code=3) python(7835,0x11a218dc0) malloc: *** set a breakpoint in malloc_error_break to debug python(7835,0x11a218dc0) malloc: can't allocate region :*** mach_vm_map(size=842105263157895168, flags: 100) failed (error code=3) python(7835,0x11a218dc0) malloc: *** set a breakpoint in malloc_error_break to debug python(7835,0x11a218dc0) malloc: can't allocate region :*** mach_vm_map(size=421052631578947584, flags: 100) failed (error code=3) python(7835,0x11a218dc0) malloc: *** set a breakpoint in malloc_error_break to debug python(7835,0x11a218dc0) malloc: can't allocate region :*** mach_vm_map(size=421052631578947584, flags: 100) failed (error code=3) python(7835,0x11a218dc0) malloc: *** set a breakpoint in malloc_error_break to debug == Tests result: SUCCESS == 1 test OK. Total duration: 553 ms Tests result: SUCCESS I spent quite some time finding where this error was coming from, and it's actually not an error but a dubious message from OSX. Others will surely lose time on this too. I think the best way to handle it is to use bigmemtest() with a small value on MacOS so that it's only run when '-M' is given on this platform, but always on the others. ---------- components: Tests messages: 371107 nosy: remi.lapeyre priority: normal severity: normal status: open title: test_decimal.CWhitebox.test_maxcontext_exact_arith() shows "malloc: can't allocate region" on MacOS versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 12:01:56 2020 From: report at bugs.python.org (cyberlis) Date: Tue, 09 Jun 2020 16:01:56 +0000 Subject: [New-bugs-announce] [issue40929] Multiprocessing. Instance of the custom class is missing in subprocesses Message-ID: <1591718516.0.0.742849238452.issue40929@roundup.psfhosted.org> New submission from cyberlis : When I use python3.7 everything works correctly and every subprocess has its own `Hello` instance. When I use python3.8 subprocesses do not have an instance of the `Hello` class. Is this behavior correct? 
My code to reproduce an issue ```python from concurrent.futures.process import ProcessPoolExecutor from concurrent.futures import as_completed class Hello: _instance = None def __init__(self): print("Creating instance of Hello ", self) Hello._instance = self def worker(): return Hello._instance def main(): hello = Hello() with ProcessPoolExecutor(max_workers=2) as pool: futures = [] for _ in range(4): futures.append(pool.submit(worker)) for future in as_completed(futures): print(future.result()) if __name__ == "__main__": main() ``` Output ```bash pyenv local 3.7.6 python main.py Creating instance of Hello <__main__.Hello object at 0x102f48310> <__main__.Hello object at 0x103587410> <__main__.Hello object at 0x1035874d0> <__main__.Hello object at 0x103587110> <__main__.Hello object at 0x1035871d0> pyenv local 3.8.1 python main.py Creating instance of Hello <__main__.Hello object at 0x102104d90> None None None None ``` ---------- files: main.py messages: 371117 nosy: cyberlis priority: normal severity: normal status: open title: Multiprocessing. Instance of the custom class is missing in subprocesses type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49223/main.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 16:09:18 2020 From: report at bugs.python.org (Paul Ganssle) Date: Tue, 09 Jun 2020 20:09:18 +0000 Subject: [New-bugs-announce] [issue40930] zoneinfo gives incorrect dst() in Pacific/Rarotonga between 1978 and 1991 Message-ID: <1591733358.04.0.308688404979.issue40930@roundup.psfhosted.org> New submission from Paul Ganssle : While developing a shim for deprecating pytz, I discovered this issue with the Pacific/Rarotonga zone: >>> from datetime import datetime, timedelta >>> from backports.zoneinfo import ZoneInfo >>> datetime(1991, 2, 1, tzinfo=ZoneInfo("Pacific/Rarotonga")).dst() / timedelta(hours=1) 1.0 This reports that the DST offset is 1 hour, but in fact it should be 30 minutes, because from 1978 to 1991, Pacific/Rarotonga alternated between -0930 and -10: $ zdump -V -c 1990,1993 'Pacific/Rarotonga' Pacific/Rarotonga Sun Mar 4 09:29:59 1990 UT = Sat Mar 3 23:59:59 1990 -0930 isdst=1 gmtoff=-34200 Pacific/Rarotonga Sun Mar 4 09:30:00 1990 UT = Sat Mar 3 23:30:00 1990 -10 isdst=0 gmtoff=-36000 Pacific/Rarotonga Sun Oct 28 09:59:59 1990 UT = Sat Oct 27 23:59:59 1990 -10 isdst=0 gmtoff=-36000 Pacific/Rarotonga Sun Oct 28 10:00:00 1990 UT = Sun Oct 28 00:30:00 1990 -0930 isdst=1 gmtoff=-34200 Pacific/Rarotonga Sun Mar 3 09:29:59 1991 UT = Sat Mar 2 23:59:59 1991 -0930 isdst=1 gmtoff=-34200 Pacific/Rarotonga Sun Mar 3 09:30:00 1991 UT = Sat Mar 2 23:30:00 1991 -10 isdst=0 gmtoff=-36000 I believe that the error comes from the fact that before 1978, they were on -1030 time, then they transitioned to -0930, then started alternating between -0930 and -10: $ zdump -V -c 1977,1980 'Pacific/Rarotonga' Pacific/Rarotonga Sun Nov 12 10:29:59 1978 UT = Sat Nov 11 23:59:59 1978 -1030 isdst=0 gmtoff=-37800 Pacific/Rarotonga Sun Nov 12 10:30:00 1978 UT = Sun Nov 12 01:00:00 1978 -0930 isdst=1 gmtoff=-34200 Pacific/Rarotonga Sun Mar 4 09:29:59 1979 UT = Sat Mar 3 23:59:59 1979 -0930 isdst=1 gmtoff=-34200 Pacific/Rarotonga Sun Mar 4 09:30:00 1979 UT = Sat Mar 3 23:30:00 1979 -10 isdst=0 gmtoff=-36000 Pacific/Rarotonga Sun Oct 28 09:59:59 1979 UT = Sat Oct 27 23:59:59 1979 -10 isdst=0 gmtoff=-36000 Pacific/Rarotonga Sun Oct 28 10:00:00 1979 UT = Sun Oct 28 00:30:00 1979 -0930 isdst=1 
gmtoff=-34200 This is not amazingly important, but it would be a good idea to make the result correct. Right now I think the heuristic looks for the first example of an STD → DST transition and decides that's the best option. It might be a good idea to change the heuristic to look at *all* examples of such transitions and choose the plurality value, and if there's no plurality value and one of the values is 1H, choose that one. ---------- assignee: p-ganssle components: Library (Lib) messages: 371136 nosy: p-ganssle priority: normal severity: normal stage: needs patch status: open title: zoneinfo gives incorrect dst() in Pacific/Rarotonga between 1978 and 1991 type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 16:21:28 2020 From: report at bugs.python.org (Paul Ganssle) Date: Tue, 09 Jun 2020 20:21:28 +0000 Subject: [New-bugs-announce] [issue40931] zoneinfo gives incorrect dst() in Europe/Madrid in 1938 Message-ID: <1591734088.53.0.352585166753.issue40931@roundup.psfhosted.org> New submission from Paul Ganssle : Apparently in 1938, Madrid had a "double daylight saving time", where they transitioned from WET (+0) → WEST (+1) → WEMT (+2) → WEST (+1) → WET (+0): $ zdump -V -c 1938,1940 'Europe/Madrid' Europe/Madrid Sat Apr 2 22:59:59 1938 UT = Sat Apr 2 22:59:59 1938 WET isdst=0 gmtoff=0 Europe/Madrid Sat Apr 2 23:00:00 1938 UT = Sun Apr 3 00:00:00 1938 WEST isdst=1 gmtoff=3600 Europe/Madrid Sat Apr 30 21:59:59 1938 UT = Sat Apr 30 22:59:59 1938 WEST isdst=1 gmtoff=3600 Europe/Madrid Sat Apr 30 22:00:00 1938 UT = Sun May 1 00:00:00 1938 WEMT isdst=1 gmtoff=7200 Europe/Madrid Sun Oct 2 21:59:59 1938 UT = Sun Oct 2 23:59:59 1938 WEMT isdst=1 gmtoff=7200 Europe/Madrid Sun Oct 2 22:00:00 1938 UT = Sun Oct 2 23:00:00 1938 WEST isdst=1 gmtoff=3600 Europe/Madrid Sat Oct 7 23:59:59 1939 UT = Sun Oct 8 00:59:59 1939 WEST isdst=1 gmtoff=3600 Europe/Madrid Sun Oct 8 00:00:00 1939 UT = Sun Oct 8 00:00:00 1939 WET isdst=0 gmtoff=0 However, zoneinfo reports `.dst()` during the "double daylight saving time" period as 1 hour: >>> from datetime import datetime, timedelta >>> from backports.zoneinfo import ZoneInfo >>> datetime(1938, 5, 5, tzinfo=ZoneInfo("Europe/Madrid")).dst() / timedelta(hours=1) 1.0 >>> datetime(1938, 5, 5, tzinfo=ZoneInfo("Europe/Madrid")).tzname() 'WEMT' I believe the issue is that the "WEMT" is bordered on both sides by DST offsets, and so the heuristic of "Look for the previous or next non-DST zone and calculate the difference" doesn't work. We can probably solve this with one of two heuristics: 1. Allow DST → DST transitions to be included in the calculation of the current DST, such that when going from x_dst → y_dst, y_dstoff = (y_utcoff - x_utcoff) + x_dstoff 2. Look more than 1 transition away to find the nearest STD zone in either direction, and calculate the offsets from there. Between this bug and bpo-40930, I suspect we may want to write a rudimentary parser for `tzdata.zi` to be used only for testing our heuristics, since the `tzdata.zi` file (shipped with the `tzdata` package), does actually have the information we want, without resorting to heuristics. 
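A rough sketch of heuristic 1 above, using hypothetical stand-ins for the internal transition data (not the actual zoneinfo structures):

```python
from datetime import timedelta

ZERO = timedelta(0)

def infer_dstoff(prev_tti, new_tti):
    # prev_tti and new_tti are assumed to have .isdst, .utcoff and .dstoff
    # attributes; they stand in for the C module's internal ttinfo records.
    if not new_tti.isdst:
        return ZERO
    if not prev_tti.isdst:
        # Ordinary STD -> DST transition: dstoff is the change in utcoff.
        return new_tti.utcoff - prev_tti.utcoff
    # DST -> DST transition (e.g. WEST -> WEMT in 1938):
    # y_dstoff = (y_utcoff - x_utcoff) + x_dstoff
    return (new_tti.utcoff - prev_tti.utcoff) + prev_tti.dstoff
```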
---------- assignee: p-ganssle components: Library (Lib) messages: 371137 nosy: p-ganssle priority: normal severity: normal status: open title: zoneinfo gives incorrect dst() in Europe/Madrid in 1938 versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 16:49:51 2020 From: report at bugs.python.org (Stephen Farris) Date: Tue, 09 Jun 2020 20:49:51 +0000 Subject: [New-bugs-announce] [issue40932] subprocess docs don't qualify the instruction to use shlex.quote by OS Message-ID: <1591735791.89.0.818565720729.issue40932@roundup.psfhosted.org> New submission from Stephen Farris : The subprocess docs state: "When using shell=True, the shlex.quote() function can be used to properly escape whitespace and shell metacharacters in strings that are going to be used to construct shell commands." While this is true on Unix, it is not true on Windows. On Windows it is easy to create scenarios where shell injection still exists despite using shlex.quote properly (e.g. subprocess.run(shlex.quote("'&calc '"), shell=True) launches the Windows calculator, which it wouldn't do if shlex.quote was able to prevent shell injection on Windows). While the shlex docs state that shlex is for Unix, the subprocess docs imply that shlex.quote will work on Windows too, possibly leading some developers to erroneously use shlex.quote on Windows to try to prevent shell injection. Recommend: 1) qualifying the above section in the subprocess docs to make it clear that this only works on Unix, and 2) updating the shlex docs with warnings that shlex.quote in particular is not for use on Windows. ---------- assignee: docs at python components: Documentation messages: 371140 nosy: Stephen Farris, docs at python priority: normal severity: normal status: open title: subprocess docs don't qualify the instruction to use shlex.quote by OS versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 16:57:31 2020 From: report at bugs.python.org (Paul Ganssle) Date: Tue, 09 Jun 2020 20:57:31 +0000 Subject: [New-bugs-announce] [issue40933] zoneinfo may give incorrect dst() in Europe/Minsk in 1942 Message-ID: <1591736251.74.0.950208419673.issue40933@roundup.psfhosted.org> New submission from Paul Ganssle : Related to bpo-40930 and bpo-40931, it *seems* that in 1942 only, `zoneinfo.ZoneInfo` returns -01:00 for DST in Europe/Minsk: >>> from datetime import datetime, timedelta >>> from backports.zoneinfo import ZoneInfo >>> datetime(1942, 1, 1, tzinfo=ZoneInfo("Europe/Minsk")).dst() // timedelta(hours=1) It looks like this occurs because they transitioned directly from MSK to CEST, jumping back 1 hour, then started switching between CEST and CET. 
$ zdump -V -c 1941,1944 'Europe/Minsk' Europe/Minsk Fri Jun 27 20:59:59 1941 UT = Fri Jun 27 23:59:59 1941 MSK isdst=0 gmtoff=10800 Europe/Minsk Fri Jun 27 21:00:00 1941 UT = Fri Jun 27 23:00:00 1941 CEST isdst=1 gmtoff=7200 Europe/Minsk Mon Nov 2 00:59:59 1942 UT = Mon Nov 2 02:59:59 1942 CEST isdst=1 gmtoff=7200 Europe/Minsk Mon Nov 2 01:00:00 1942 UT = Mon Nov 2 02:00:00 1942 CET isdst=0 gmtoff=3600 Europe/Minsk Mon Mar 29 00:59:59 1943 UT = Mon Mar 29 01:59:59 1943 CET isdst=0 gmtoff=3600 Europe/Minsk Mon Mar 29 01:00:00 1943 UT = Mon Mar 29 03:00:00 1943 CEST isdst=1 gmtoff=7200 Europe/Minsk Mon Oct 4 00:59:59 1943 UT = Mon Oct 4 02:59:59 1943 CEST isdst=1 gmtoff=7200 Europe/Minsk Mon Oct 4 01:00:00 1943 UT = Mon Oct 4 02:00:00 1943 CET isdst=0 gmtoff=3600 This might get fixed automatically if we do the "plurality" heuristic in bpo-40930, though we might also consider a heuristic that puts greater weight on a transition if the names associated with them different only by transforming a single letter, or insertion of a letter. I am somewhat puzzled as to why only 1943 is affected, since I would have thought that all the CEST offsets in that stretch would be considered the same ttinfo (and thus all would be assigned the same dstoff). ---------- assignee: p-ganssle messages: 371141 nosy: p-ganssle priority: normal severity: normal status: open title: zoneinfo may give incorrect dst() in Europe/Minsk in 1942 versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 20:09:42 2020 From: report at bugs.python.org (Misko Dzamba) Date: Wed, 10 Jun 2020 00:09:42 +0000 Subject: [New-bugs-announce] [issue40934] Default RLock does not work well for Manager Condition Message-ID: <1591747783.0.0.431376232538.issue40934@roundup.psfhosted.org> New submission from Misko Dzamba : I get unexpected behavior from Condition when using a Manager. I expected that if I call Condition.acquire() in one thread (or process) and then call .acquire() if another thread (or process) without first releasing the condition that it should block. However it seems like Condition.acquire() never blocks... from multiprocessing import Pool,Manager import time def f(x): cv,t=x cv.acquire() print(t,"Got cv") return if __name__ == '__main__': m=Manager() cv=m.Condition(m.RLock()) p = Pool(5) print(p.map(f, [ (cv,x) for x in range(10) ])) ---------- messages: 371146 nosy: Misko Dzamba priority: normal severity: normal status: open title: Default RLock does not work well for Manager Condition type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 9 22:41:30 2020 From: report at bugs.python.org (Edison Abahurire) Date: Wed, 10 Jun 2020 02:41:30 +0000 Subject: [New-bugs-announce] [issue40935] Links to Python3 docs for some libs return 404 Message-ID: <1591756890.13.0.690147132201.issue40935@roundup.psfhosted.org> New submission from Edison Abahurire : These links in the deprecation warning on some Python 2 stdlib libraries documentation pages pointing to Python 3 alternatives return 404s. The link behind the words "Python documentation for the current stable release." 
Examples: https://docs.python.org/2/library/cgihttpserver.html https://docs.python.org/2/library/basehttpserver.html https://docs.python.org/2/library/simplehttpserver.html https://docs.python.org/2/library/httplib.html https://docs.python.org/2/library/cookie.html The current methodology used is to replace the `2` with a `3` when making a new url and the challenge is that some libraries names changed. ---------- messages: 371154 nosy: edison.abahurire, eric.araujo, ezio.melotti, mdk, willingc priority: normal severity: normal status: open title: Links to Python3 docs for some libs return 404 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 05:17:33 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Wed, 10 Jun 2020 09:17:33 +0000 Subject: [New-bugs-announce] [issue40936] Remove deprecated functions from gettext Message-ID: <1591780653.48.0.661147089613.issue40936@roundup.psfhosted.org> New submission from R?mi Lapeyre : The codeset parameter and the following functions were marked for removal in Python 3.10: - bind_textdomain_codeset() - lgettext() - ldgettext() - lngettext() - ldngettext() - output_charset() - set_output_charset() ---------- components: Library (Lib) messages: 371174 nosy: barry, remi.lapeyre priority: normal severity: normal status: open title: Remove deprecated functions from gettext versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 06:43:59 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Wed, 10 Jun 2020 10:43:59 +0000 Subject: [New-bugs-announce] [issue40937] Remove deprecated functions marked for removal in 3.10 Message-ID: <1591785839.81.0.272297043685.issue40937@roundup.psfhosted.org> New submission from R?mi Lapeyre : The following modules have functions marked for deprecation in Python 3.10: - asyncio-queue - asyncio-subprocess - asyncio-sync - asyncio-task - collections - builtins - gettext - the old parser I already opened a PR to remove those in gettext and Pablo Galindo removed the old parser. I will open PRs to fix the remainding modules. Instead of creating a new issue to track those separately, I thought it would be easier to list them all here and open multiple PRs. ---------- components: Library (Lib) messages: 371178 nosy: remi.lapeyre priority: normal severity: normal status: open title: Remove deprecated functions marked for removal in 3.10 versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 07:18:48 2020 From: report at bugs.python.org (Open Close) Date: Wed, 10 Jun 2020 11:18:48 +0000 Subject: [New-bugs-announce] [issue40938] urllib.parse.urlunsplit makes relative path to absolute (http:g -> http:///g) Message-ID: <1591787928.34.0.800218743026.issue40938@roundup.psfhosted.org> New submission from Open Close : path 'g' in 'http:g' becomes '/g'. >>> urlsplit('http:g') SplitResult(scheme='http', netloc='', path='g', query='', fragment='') >>> urlunsplit(urlsplit('http:g')) 'http:///g' >>> urlsplit('http:///g') SplitResult(scheme='http', netloc='', path='/g', query='', fragment='') >>> urljoin('http://a/b/c/d', 'http:g') 'http://a/b/c/g' >>> urljoin('http://a/b/c/d', 'http:///g') 'http://a/g' The problematic part of the code is: def urlunsplit(components): [...] 
if netloc or (scheme and scheme in uses_netloc and url[:2] != '//'): ---> if url and url[:1] != '/': url = '/' + url url = '//' + (netloc or '') + url Note also that urllib has decided on the interpretation of 'http:g' (in test). def test_RFC3986(self): [...] #self.checkJoin(RFC3986_BASE, 'http:g','http:g') # strict parser self.checkJoin(RFC3986_BASE, 'http:g','http://a/b/c/g') #relaxed parser ---------- messages: 371179 nosy: op368 priority: normal severity: normal status: open title: urllib.parse.urlunsplit makes relative path to absolute (http:g -> http:///g) type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 07:25:17 2020 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Wed, 10 Jun 2020 11:25:17 +0000 Subject: [New-bugs-announce] [issue40939] Remove the old parser Message-ID: <1591788317.65.0.969058655213.issue40939@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : As stated in PEP 617, the old parser will be removed in Python 3.10 ---------- assignee: pablogsal components: Interpreter Core messages: 371182 nosy: pablogsal priority: normal severity: normal status: open title: Remove the old parser versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 08:26:43 2020 From: report at bugs.python.org (tillus) Date: Wed, 10 Jun 2020 12:26:43 +0000 Subject: [New-bugs-announce] [issue40940] How to update pip permanently for new venv? Message-ID: <1591792003.17.0.226692929121.issue40940@roundup.psfhosted.org> New submission from tillus : **Describe the bug** Every new environment has the old pip version (19.2.3) **Expected behavior** My system python has pip version 20 so I would expect direnv also to use the most recent pip version. But it installs the old pip version for every new environment **Environment** - OS: macOS Catalina 10.15.4 - Shell: zsh 5.7.1 - Python version: stable 3.7.7 (homebrew bottled) **To Reproduce** ```console ? ~ pip list Package Version ------------------ ------- appdirs 1.4.4 astroid 2.4.1 distlib 0.3.0 filelock 3.0.12 importlib-metadata 1.6.1 isort 4.3.21 lazy-object-proxy 1.4.3 mccabe 0.6.1 numpy 1.18.5 pip 20.1.1 pylint 2.5.2 python-dateutil 2.8.1 pytz 2020.1 setuptools 40.8.0 six 1.12.0 toml 0.10.1 typed-ast 1.4.1 virtualenv 20.0.21 wheel 0.33.1 wrapt 1.12.1 zipp 3.1.0 ? ~ python3 -m venv ./foobar ? ~ source ./foobar/bin/activate (foobar) ? ~ pip list Package Version ---------- ------- pip 19.2.3 setuptools 41.2.0 WARNING: You are using pip version 19.2.3, however version 20.1.1 is available. You should consider upgrading via the 'pip install --upgrade pip' command. ``` ---------- components: asyncio messages: 371187 nosy: asvetlov, tillus, yselivanov priority: normal severity: normal status: open title: How to update pip permanently for new venv? 
type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 09:33:12 2020 From: report at bugs.python.org (Mark Shannon) Date: Wed, 10 Jun 2020 13:33:12 +0000 Subject: [New-bugs-announce] [issue40941] Merge generator.gi_running and frame executing flag into single frame state Message-ID: <1591795992.13.0.468529987519.issue40941@roundup.psfhosted.org> New submission from Mark Shannon : Generators have a "gi_running" attribute (coroutines have an equivalent cr_running flag). Internally frame also has an executing flag. We use these, plus the test `f_stacktop pointer == NULL`, to determine what state a generator or frame is in. It would be much cleaner to maintain a single f_state field in the frame to track the state of a frame, reducing the change of ambiguity and error. The possible states of a frame are: CREATED -- Frame exists but has not been executed at all SUSPENDED -- Frame has been executed, and has been suspended by a yield EXECUTING -- Frame is being executed RETURNED -- Frame has completed by a RETURN_VALUE instruction RAISED -- Frame has completed as a result of an exception being raised CLEARED -- Frame has been cleared, either by explicit call to clear or by the GC. ---------- assignee: Mark.Shannon components: Interpreter Core messages: 371194 nosy: Mark.Shannon priority: normal severity: normal stage: needs patch status: open title: Merge generator.gi_running and frame executing flag into single frame state type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 11:54:57 2020 From: report at bugs.python.org (Mike Jarvis) Date: Wed, 10 Jun 2020 15:54:57 +0000 Subject: [New-bugs-announce] [issue40942] BaseManager cannot start with local manager Message-ID: <1591804497.09.0.165286994066.issue40942@roundup.psfhosted.org> New submission from Mike Jarvis : I had a function for making a logger proxy that could be safely passed to multiprocessing workers and log back to the main logger with essentially the following code: ``` import logging from multiprocessing.managers import BaseManager class SimpleGenerator: def __init__(self, obj): self._obj = obj def __call__(self): return self._obj def get_logger_proxy(logger): class LoggerManager(BaseManager): pass logger_generator = SimpleGenerator(logger) LoggerManager.register('logger', callable = logger_generator) logger_manager = LoggerManager() logger_manager.start() logger_proxy = logger_manager.logger() return logger_proxy logger = logging.getLogger('test') logger_proxy = get_logger_proxy(logger) ``` This worked great on python 2.7 through 3.7. I could pass the resulting logger_proxy to workers and they would log information, which would then be properly sent back to the main logger. 
However, on python 3.8.2 (and 3.8.0) I get the following: ``` Traceback (most recent call last): File "test_proxy.py", line 20, in logger_proxy = get_logger_proxy(logger) File "test_proxy.py", line 13, in get_logger_proxy logger_manager.start() File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/managers.py", line 579, in start self._process.start() File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/context.py", line 283, in _Popen return Popen(process_obj) File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'get_logger_proxy..LoggerManager' ``` So it seems that something changed about ForkingPickler that makes it unable to handle the closure in my get_logger_proxy function. I don't know if this is an intentional change in behavior or an unintentional regression. If the former, I would appreciate advice on how to modify the above code to work. Possibly relevant system details: ``` $ uname -a Darwin Fife 17.5.0 Darwin Kernel Version 17.5.0: Mon Mar 5 22:24:32 PST 2018; root:xnu-4570.51.1~1/RELEASE_X86_64 x86_64 $ python --version Python 3.8.2 $ which python /anaconda3/envs/py3.8/bin/python $ conda info active environment : py3.8 active env location : /anaconda3/envs/py3.8 shell level : 2 user config file : /Users/Mike/.condarc populated config files : /Users/Mike/.condarc conda version : 4.8.3 conda-build version : 3.18.5 python version : 3.6.5.final.0 virtual packages : __osx=10.13.4 base environment : /anaconda3 (writable) channel URLs : https://conda.anaconda.org/conda-forge/osx-64 https://conda.anaconda.org/conda-forge/noarch https://conda.anaconda.org/astropy/osx-64 https://conda.anaconda.org/astropy/noarch https://repo.anaconda.com/pkgs/main/osx-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/osx-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /anaconda3/pkgs /Users/Mike/.conda/pkgs envs directories : /anaconda3/envs /Users/Mike/.conda/envs platform : osx-64 user-agent : conda/4.8.3 requests/2.23.0 CPython/3.6.5 Darwin/17.5.0 OSX/10.13.4 UID:GID : 501:20 netrc file : /Users/Mike/.netrc offline mode : False ``` ---------- components: Library (Lib) messages: 371211 nosy: Mike Jarvis priority: normal severity: normal status: open title: BaseManager cannot start with local manager type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 12:09:12 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 10 Jun 2020 16:09:12 +0000 Subject: [New-bugs-announce] [issue40943] Drop support for PyArg_ParseTuple() format when PY_SSIZE_T_CLEAN is not defined Message-ID: <1591805352.1.0.263144015623.issue40943@roundup.psfhosted.org> New submission from STINNER Victor : Follow-up of bpo-36381: In Python 3.8, PyArg_ParseTuple() and Py_BuildValue() formats using "int" when PY_SSIZE_T_CLEAN is not defined, but 
Py_ssize_t when PY_SSIZE_T_CLEAN is defined, were deprecated by: commit d3c72a223a5f771f964fc34557c55eb5bfa0f5a0 Author: Inada Naoki Date: Sat Mar 23 21:04:40 2019 +0900 bpo-36381: warn when no PY_SSIZE_T_CLEAN defined (GH-12473) We will remove int support from 3.10 or 4.0. I propose to drop support for these formats in Python 3.10. It is a backward incompatible change on purpose, to ensure that all C extensions are compatible with objects larger than 2 GB, and that all C extensions behave the same. I'm not sure of the effects of this issue on bpo-27499 "PY_SSIZE_T_CLEAN conflicts with Py_LIMITED_API". ---------- components: C API messages: 371216 nosy: inada.naoki, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Drop support for PyArg_ParseTuple() format when PY_SSIZE_T_CLEAN is not defined versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 15:05:29 2020 From: report at bugs.python.org (Ivan Savin) Date: Wed, 10 Jun 2020 19:05:29 +0000 Subject: [New-bugs-announce] [issue40944] email.message.EmailMessage address parser fails to handle 'example@' Message-ID: <1591815929.0.0.387967575361.issue40944@roundup.psfhosted.org> New submission from Ivan Savin : How to reproduce: >>> import email.message >>> message = email.message.EmailMessage() >>> message['From'] = 'hey@' Traceback (most recent call last): File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1956, in get_address token, value = get_group(value) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1914, in get_group raise errors.HeaderParseError("expected ':' at end of group " email.errors.HeaderParseError: expected ':' at end of group display name but found '@' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1782, in get_mailbox token, value = get_name_addr(value) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1768, in get_name_addr token, value = get_angle_addr(value) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1693, in get_angle_addr raise errors.HeaderParseError( email.errors.HeaderParseError: expected angle-addr but found '@' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "", line 1, in File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/message.py", line 409, in __setitem__ self._headers.append(self.policy.header_store_parse(name, val)) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/policy.py", line 148, in header_store_parse return (name, self.header_factory(name, value)) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/headerregistry.py", line 596, in __call__ return self[name](name, value) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/headerregistry.py", line 191, in __new__ cls.parse(value, kwds) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/headerregistry.py", line 334, in parse kwds['parse_tree'] = address_list = cls.value_parser(value) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/headerregistry.py", line 325, in value_parser address_list, value = parser.get_address_list(value) File 
"/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1979, in get_address_list token, value = get_address(value) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1959, in get_address token, value = get_mailbox(value) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1785, in get_mailbox token, value = get_addr_spec(value) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1638, in get_addr_spec token, value = get_domain(value[1:]) File "/home/ivan/.pyenv/versions/3.9.0a5/lib/python3.9/email/_header_value_parser.py", line 1595, in get_domain if value[0] in CFWS_LEADER: IndexError: string index out of range ---------- components: email messages: 371234 nosy: Ivan Savin, barry, r.david.murray priority: normal severity: normal status: open title: email.message.EmailMessage address parser fails to handle 'example@' type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 10 16:04:10 2020 From: report at bugs.python.org (Ben Li-Sauerwine) Date: Wed, 10 Jun 2020 20:04:10 +0000 Subject: [New-bugs-announce] [issue40945] TKinter.Tk.geometry(Tk.winfo_geometry()) should be idempotent Message-ID: <1591819450.72.0.638106419632.issue40945@roundup.psfhosted.org> New submission from Ben Li-Sauerwine : One would expect that calling TKinter.Tk.geometry(tk.winfo_geometry()) would be idempotent. However, based on the system window frame, it is not and doing so moves the window down the screen. Running the example below reproduces the issue. ---------- components: Tkinter files: tkinter_ex.py messages: 371236 nosy: Ben Li-Sauerwine priority: normal severity: normal status: open title: TKinter.Tk.geometry(Tk.winfo_geometry()) should be idempotent versions: Python 3.6 Added file: https://bugs.python.org/file49225/tkinter_ex.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 11 02:08:46 2020 From: report at bugs.python.org (Eryk Sun) Date: Thu, 11 Jun 2020 06:08:46 +0000 Subject: [New-bugs-announce] [issue40946] SuppressCrashReport should set SEM_FAILCRITICALERRORS in Windows Message-ID: <1591855726.1.0.173183308432.issue40946@roundup.psfhosted.org> New submission from Eryk Sun : In test.support, suppress_msvcrt_asserts sets the process error mode to include SEM_FAILCRITICALERRORS, SEM_NOGPFAULTERRORBOX, SEM_NOALIGNMENTFAULTEXCEPT, and SEM_NOOPENFILEERRORBOX. In contrast, the SuppressCrashReport context manager in the same module only sets SEM_NOGPFAULTERRORBOX. It should also set SEM_FAILCRITICALERRORS (i.e. do not display the critical-error-handler message box). Including SEM_NOOPENFILEERRORBOX wouldn't hurt, but it's not of much value since it only affects the deprecated OpenFile function. SEM_NOALIGNMENTFAULTEXCEPT may not be appropriate in a context manager, since it's a one-time setting that can't be reverted, plus x86 and x64 processors aren't even configured by default to generate alignment exceptions; they do fixups in hardware. --- Discussion SEM_FAILCRITICALERRORS suppresses normal "hard error" reports sent by the NtRaiseHardError system call -- or by ExRaiseHardError or IoRaiseHardError in the kernel. If reporting a hard error isn't prevented by the error mode, the report gets sent to the ExceptionPort of the process. 
Normally this is the session's API port (e.g. "\Sessions\1\Windows\ApiPort"). A thread in the session server process (csrss.exe) handles requests sent to this port. In the case of a hard error, ultimately it creates a message box via user32!MessageBoxTimeoutW. For example: NtRaiseHardError = ctypes.windll.ntdll.NtRaiseHardError response = (ctypes.c_ulong * 1)() With the default process error mode, the following raises a hard error dialog for STATUS_UNSUCCESSFUL (0xC0000001), with abort/retry/ignore options: >>> NtRaiseHardError(0xC000_0001, 0, 0, None, 0, response) 0 >>> response[0] 2 The normal response value for the above call is limited to abort (2), retry (7), and ignore (4). The response is 0 if the process is set to fail critical errors: >>> msvcrt.SetErrorMode(msvcrt.SEM_FAILCRITICALERRORS) 0 >>> NtRaiseHardError(0xC000_0001, 0, 0, None, 0, response) 0 >>> response[0] 0 SEM_FAILCRITICALERRORS doesn't suppress all hard-error reporting. The system also checks for an override flag (0x1000_0000) in the status code. This flag is used in many cases, such as WinAPI FatalAppExitW. For example, the following will report a hard error regardless of the process error mode: >>> NtRaiseHardError(0xC000_0001 | 0x1000_0000, 0, 0, None, 0, response) 0 >>> response[0] 2 A common case that doesn't use the override flag is when the loader fails to initialize a process. For the release build of Python 3.10, for example, if "python310.dll" can't be found, the loader tries to raise a hard error with the status code STATUS_DLL_NOT_FOUND (0xC0000135). If the process error mode allows this, the NtRaiseHardError system call won't return until the user clicks on the "OK" button. >>> os.rename('amd64/python310.dll', 'amd64/python310.dll.bak') >>> # the following returns after clicking OK >>> hex(subprocess.call('python')) '0xc0000135' With SEM_FAILCRITICALERRORS set, which by default gets inherited by a child process, sending the hard error report is suppressed (i.e. NtRaiseHardError returns immediately with a response of 0), and the failed child process terminates immediately with the status code STATUS_DLL_NOT_FOUND. >>> msvcrt.SetErrorMode(msvcrt.SEM_FAILCRITICALERRORS) 0 >>> # the following returns immediately >>> hex(subprocess.call('python')) '0xc0000135' ---------- messages: 371257 nosy: eryksun priority: normal severity: normal stage: needs patch status: open title: SuppressCrashReport should set SEM_FAILCRITICALERRORS in Windows type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 11 06:06:46 2020 From: report at bugs.python.org (Sandro Mani) Date: Thu, 11 Jun 2020 10:06:46 +0000 Subject: [New-bugs-announce] [issue40947] Replace PLATLIBDIR macro with config->platlibdir Message-ID: <1591870006.36.0.733271971459.issue40947@roundup.psfhosted.org> New submission from Sandro Mani : Followup of bpo-40854, there is one remaining usage of PLATLIBDIR which should be replaced by config->platlibdir. ---------- components: Interpreter Core messages: 371260 nosy: smani priority: normal severity: normal status: open title: Replace PLATLIBDIR macro with config->platlibdir _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 11 08:04:15 2020 From: report at bugs.python.org (Terry J. 
Reedy) Date: Thu, 11 Jun 2020 12:04:15 +0000 Subject: [New-bugs-announce] [issue40948] Better identify Windows installer as installer only, not runner Message-ID: <1591877055.43.0.578200145114.issue40948@roundup.psfhosted.org> New submission from Terry J. Reedy : Some beginners on Windows think that python-3.8.3-amd64.exe, for instance, is for running python-3.8.3, leading to repeated and now tiresome questions on python-list and probably elsewhere. The latest example is "repair modify uninstall" with the core complaint that "after downloading and trying to launch it keeps saying repair modify uninstall". (Actually, 'modify' comes first.) In response, Grant Edwards suggested adding run instructions to the initial screen and asked, "Is the file name not clear that it's an installer?" For the naive, the answer, as is traditional, is 'no'. How about adding 'setup' or 'install', as I have seen occasionally. python-3.8.3-amd64-setup.exe The initial screen is different according to whether an installed binary is absent or present. For the latter, add something like Python 3.8.3 {n} bit is installed for {who}. To run it, {directions} To change it, click one of the buttons below. An optional add-on would be a button to open the doc page on using python on Windows. (I am not suggesting a button to actually run python.exe from the installer. Users should really learn how to start it properly according to platform and python-specific conventions and their particular needs.) Does the final screen after installation say anything about running the new install? (I cannot remember.) Ned: I don't think that this issue affects Mac newbies. Perhaps python-xyz.pkg is more clearly not for running. But something you might watch for. ---------- components: Windows messages: 371263 nosy: ned.deily, paul.moore, steve.dower, terry.reedy, tim.golden, zach.ware priority: normal severity: normal stage: needs patch status: open title: Better identify Windows installer as installer only, not runner versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 11 10:27:23 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 11 Jun 2020 14:27:23 +0000 Subject: [New-bugs-announce] [issue40949] test_socket: threading_cleanup() failed to cleanup 0 threads (count: 0, dangling: 5) Message-ID: <1591885643.11.0.227802495041.issue40949@roundup.psfhosted.org> New submission from STINNER Victor : aarch64 RHEL8 Refleaks 3.x: https://buildbot.python.org/all/#/builders/563/builds/127 The system load was quite high (9.95) for "CPU count: 8". 0:22:19 load avg: 9.95 [406/426/2] test_socket failed (env changed) (2 min 41 sec) -- running: test_pydoc (20 min 35 sec), test_asyncio (11 min 56 sec), test_faulthandler (39.2 sec), test_concurrent_futures (20 min 7 sec), test_venv (1 min 17 sec), test_peg_generator (15 min 51 sec), test_multiprocessing_spawn (54.4 sec), test_signal (9 min 26 sec) beginning 6 repetitions 123456 ....Warning -- threading_cleanup() failed to cleanup 0 threads (count: 0, dangling: 5) Warning -- Dangling thread: Warning -- Dangling thread: Warning -- Dangling thread: Warning -- Dangling thread: Warning -- Dangling thread: <_MainThread(MainThread, started 281473770473840)> .. It may be related to bpo-36750: "test_socket leaks file descriptors on macOS". 
---------- components: Tests messages: 371287 nosy: vstinner priority: normal severity: normal status: open title: test_socket: threading_cleanup() failed to cleanup 0 threads (count: 0, dangling: 5) versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 11 11:09:43 2020 From: report at bugs.python.org (Dong-hee Na) Date: Thu, 11 Jun 2020 15:09:43 +0000 Subject: [New-bugs-announce] [issue40950] PEP 3121 applied to nis module Message-ID: <1591888183.85.0.936951063464.issue40950@roundup.psfhosted.org> Change by Dong-hee Na : ---------- assignee: corona10 components: Extension Modules nosy: corona10 priority: normal severity: normal status: open title: PEP 3121 applied to nis module type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 11 18:28:38 2020 From: report at bugs.python.org (Rudolfo Munguia) Date: Thu, 11 Jun 2020 22:28:38 +0000 Subject: [New-bugs-announce] [issue40951] csv skipinitialspace no longer working Message-ID: <1591914518.12.0.392827187881.issue40951@roundup.psfhosted.org> New submission from Rudolfo Munguia : This was non-obvious at first. Code had written 1 month ago making use of csv.reader to strip leading spaces is failing when I ran it today. My company had updated my installation of python from 3.6 to 3.7 a few weeks back. I have skipinitialspace=True set, but all output still contains leading spaces. I have even checked around several places on the internet where they have "example" code to demonstrate this function -> but all of their examples are now also broken (?)! and contain the leading spaces as well :: https://www.w3resource.com/python-exercises/csv/python-csv-exercise-6.php ---------- components: Build messages: 371325 nosy: Rudolfo Munguia priority: normal severity: normal status: open title: csv skipinitialspace no longer working type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 03:14:45 2020 From: report at bugs.python.org (Christian Heimes) Date: Fri, 12 Jun 2020 07:14:45 +0000 Subject: [New-bugs-announce] [issue40952] GCC overflow warnings (format-overflow, stringop-overflow) Message-ID: <1591946085.85.0.775000698536.issue40952@roundup.psfhosted.org> New submission from Christian Heimes : I'm getting a couple of compiler warnings with gcc-10.1.1 (Fedora 32) with an asan and ubsan build: Parser/string_parser.c: In function ?decode_unicode_with_escapes?: Parser/string_parser.c:100:17: warning: null destination pointer [-Wformat-overflow=] 100 | sprintf(p, "\\U%08x", chr); | ^~~~~~~~~~~~~~~~~~~~~~~~~~ Parser/string_parser.c:100:17: warning: null destination pointer [-Wformat-overflow=] Parser/string_parser.c:100:17: warning: null destination pointer [-Wformat-overflow=] Objects/unicodeobject.c: In function ?xmlcharrefreplace?: Objects/unicodeobject.c:849:16: warning: null destination pointer [-Wformat-overflow=] 849 | str += sprintf(str, "&#%d;", PyUnicode_READ(kind, data, i)); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Objects/unicodeobject.c:849:16: warning: null destination pointer [-Wformat-overflow=] Objects/unicodeobject.c:849:16: warning: null destination pointer [-Wformat-overflow=] In function ?assemble_lnotab?, inlined from ?assemble_emit? at Python/compile.c:5697:25, inlined from ?assemble? 
at Python/compile.c:6036:18: Python/compile.c:5651:19: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=] 5651 | *lnotab++ = k; | ~~~~~~~~~~^~~ ---------- components: Build messages: 371335 nosy: christian.heimes priority: normal severity: normal status: open title: GCC overflow warnings (format-overflow, stringop-overflow) type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 03:30:34 2020 From: report at bugs.python.org (Christian Heimes) Date: Fri, 12 Jun 2020 07:30:34 +0000 Subject: [New-bugs-announce] [issue40953] _PyWideStringList_Copy breaks asan builds Message-ID: <1591947034.46.0.994029214013.issue40953@roundup.psfhosted.org> New submission from Christian Heimes : It's not possible to build Python with address sanitizer and memory leak checker asan. $ ./configure --with-address-sanitizer --with-undefined-behavior-sanitizer $ make ... ./python -E -S -m sysconfig --generate-posix-vars ;\ if test $? -ne 0 ; then \ echo "generate-posix-vars failed" ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi ================================================================= ==1585722==ERROR: LeakSanitizer: detected memory leaks Direct leak of 48 byte(s) in 1 object(s) allocated from: #0 0x7fa404ea9667 in __interceptor_malloc (/lib64/libasan.so.6+0xb0667) #1 0x9ab398 in _PyWideStringList_Copy Python/initconfig.c:315 #2 0x9b48bd in _PyConfig_Copy Python/initconfig.c:857 #3 0xa20b4c in _PyInterpreterState_SetConfig Python/pystate.c:1834 #4 0xa07a14 in pycore_create_interpreter Python/pylifecycle.c:554 #5 0xa07a14 in pyinit_config Python/pylifecycle.c:759 #6 0xa07a14 in pyinit_core Python/pylifecycle.c:926 #7 0xa09b17 in Py_InitializeFromConfig Python/pylifecycle.c:1136 #8 0x4766c2 in pymain_init Modules/main.c:66 #9 0x47bd12 in pymain_main Modules/main.c:653 #10 0x47bd12 in Py_BytesMain Modules/main.c:686 #11 0x7fa404173041 in __libc_start_main (/lib64/libc.so.6+0x27041) Direct leak of 48 byte(s) in 1 object(s) allocated from: #0 0x7fa404ea9667 in __interceptor_malloc (/lib64/libasan.so.6+0xb0667) #1 0x9ab398 in _PyWideStringList_Copy Python/initconfig.c:315 #2 0x9b8ee8 in PyConfig_Read Python/initconfig.c:2506 #3 0xa07282 in pyinit_core Python/pylifecycle.c:920 #4 0xa09b17 in Py_InitializeFromConfig Python/pylifecycle.c:1136 #5 0x4766c2 in pymain_init Modules/main.c:66 #6 0x47bd12 in pymain_main Modules/main.c:653 #7 0x47bd12 in Py_BytesMain Modules/main.c:686 #8 0x7fa404173041 in __libc_start_main (/lib64/libc.so.6+0x27041) Indirect leak of 200 byte(s) in 6 object(s) allocated from: #0 0x7fa404ea9667 in __interceptor_malloc (/lib64/libasan.so.6+0xb0667) #1 0x63351c in PyMem_RawMalloc Objects/obmalloc.c:572 #2 0x63351c in _PyMem_RawWcsdup Objects/obmalloc.c:644 #3 0x9ab457 in _PyWideStringList_Copy Python/initconfig.c:321 #4 0x9b48bd in _PyConfig_Copy Python/initconfig.c:857 #5 0xa20b4c in _PyInterpreterState_SetConfig Python/pystate.c:1834 #6 0xa07a14 in pycore_create_interpreter Python/pylifecycle.c:554 #7 0xa07a14 in pyinit_config Python/pylifecycle.c:759 #8 0xa07a14 in pyinit_core Python/pylifecycle.c:926 #9 0xa09b17 in Py_InitializeFromConfig Python/pylifecycle.c:1136 #10 0x4766c2 in pymain_init Modules/main.c:66 #11 0x47bd12 in pymain_main Modules/main.c:653 #12 0x47bd12 in Py_BytesMain Modules/main.c:686 #13 0x7fa404173041 in __libc_start_main (/lib64/libc.so.6+0x27041) Indirect leak of 200 byte(s) in 6 object(s) allocated from: #0 
0x7fa404ea9667 in __interceptor_malloc (/lib64/libasan.so.6+0xb0667) #1 0x63351c in PyMem_RawMalloc Objects/obmalloc.c:572 #2 0x63351c in _PyMem_RawWcsdup Objects/obmalloc.c:644 #3 0x9ab457 in _PyWideStringList_Copy Python/initconfig.c:321 #4 0x9b8ee8 in PyConfig_Read Python/initconfig.c:2506 #5 0xa07282 in pyinit_core Python/pylifecycle.c:920 #6 0xa09b17 in Py_InitializeFromConfig Python/pylifecycle.c:1136 #7 0x4766c2 in pymain_init Modules/main.c:66 #8 0x47bd12 in pymain_main Modules/main.c:653 #9 0x47bd12 in Py_BytesMain Modules/main.c:686 #10 0x7fa404173041 in __libc_start_main (/lib64/libc.so.6+0x27041) Workaround: $ cat Misc/asan-suppression.txt leak:_PyWideStringList_Copy $ LSAN_OPTIONS="suppressions=Misc/asan-suppression.txt" make ---------- components: Build messages: 371336 nosy: christian.heimes priority: normal severity: normal status: open title: _PyWideStringList_Copy breaks asan builds type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 04:37:31 2020 From: report at bugs.python.org (=?utf-8?b?5bCP5pys5YGl5Y+4?=) Date: Fri, 12 Jun 2020 08:37:31 +0000 Subject: [New-bugs-announce] [issue40954] freeze.py aborts on macOS Message-ID: <1591951051.78.0.0676119565993.issue40954@roundup.psfhosted.org> New submission from 小本健司 : freeze.py (in cpython/Tools/freeze) fails with the following error. ``` $ ~/.pyenv/versions/3.8.0/bin/python freeze.py Error: needed directory /Users/k-omoto/.pyenv/versions/3.8.0/lib/python3.8/config-3.8 not found Use ``freeze.py -h'' for help ``` ---------- components: Demos and Tools messages: 371342 nosy: 小本健司 priority: normal severity: normal status: open title: freeze.py aborts on macOS versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 05:54:50 2020 From: report at bugs.python.org (Christian Heimes) Date: Fri, 12 Jun 2020 09:54:50 +0000 Subject: [New-bugs-announce] [issue40955] subprocess_fork_exec leaks memory Message-ID: <1591955690.55.0.160305225118.issue40955@roundup.psfhosted.org> New submission from Christian Heimes : asan has detected a minor memory leak in subprocess_fork_exec: Direct leak of 8 byte(s) in 1 object(s) allocated from: #0 0x7f008bf19667 in __interceptor_malloc (/lib64/libasan.so.6+0xb0667) #1 0x7f007a0bee4a in subprocess_fork_exec /home/heimes/dev/python/cpython/Modules/_posixsubprocess.c:774 #2 0xe0305b in cfunction_call Objects/methodobject.c:546 #3 0x4bdd0c in _PyObject_MakeTpCall Objects/call.c:191 #4 0x462fd0 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112 #5 0x462fd0 in _PyObject_VectorcallTstate Include/cpython/abstract.h:99 #6 0x462fd0 in PyObject_Vectorcall Include/cpython/abstract.h:123 #7 0x462fd0 in call_function Python/ceval.c:5110 #8 0x462fd0 in _PyEval_EvalFrameDefault Python/ceval.c:3510 #9 0x8c1c1d in _PyEval_EvalFrame Include/internal/pycore_ceval.h:40 #10 0x8c1c1d in _PyEval_EvalCode Python/ceval.c:4365 ---------- components: FreeBSD messages: 371346 nosy: christian.heimes, gregory.p.smith, koobs priority: normal severity: normal status: open title: subprocess_fork_exec leaks memory type: resource usage versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 06:05:35 2020 From: report at bugs.python.org (Erlend Egeberg
Aasland) Date: Fri, 12 Jun 2020 10:05:35 +0000 Subject: [New-bugs-announce] [issue40956] Use Argument Clinic in sqlite3 Message-ID: <1591956335.72.0.553737405887.issue40956@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : Use Argument Clinic in sqlite3. ---------- components: Library (Lib) messages: 371347 nosy: erlendaasland priority: normal severity: normal status: open title: Use Argument Clinic in sqlite3 type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 06:20:16 2020 From: report at bugs.python.org (Christian Heimes) Date: Fri, 12 Jun 2020 10:20:16 +0000 Subject: [New-bugs-announce] [issue40957] _Py_fopen_obj leaks reference on audit error Message-ID: <1591957216.36.0.726588708522.issue40957@roundup.psfhosted.org> New submission from Christian Heimes : Direct leak of 50 byte(s) in 1 object(s) allocated from: #0 0x7f429c681667 in __interceptor_malloc (/lib64/libasan.so.6+0xb0667) #1 0x487496 in _PyBytes_FromSize Objects/bytesobject.c:81 #2 0x487496 in PyBytes_FromStringAndSize Objects/bytesobject.c:112 #3 0x487496 in PyBytes_FromStringAndSize Objects/bytesobject.c:97 #4 0x75f140 in unicode_encode_utf8 Objects/unicodeobject.c:5425 #5 0x7f2052 in PyUnicode_EncodeFSDefault Objects/unicodeobject.c:3660 #6 0x7f28a7 in PyUnicode_FSConverter Objects/unicodeobject.c:3947 #7 0xab48ab in _Py_fopen_obj Python/fileutils.c:1459 #8 0x7f428a713cc5 in _ssl__SSLContext_load_dh_params /home/heimes/dev/python/cpython/Modules/_ssl.c:4293 #9 0xe03e0c in cfunction_vectorcall_O Objects/methodobject.c:510 #10 0x4c166a in PyVectorcall_Call Objects/call.c:230 ---------- assignee: christian.heimes components: Interpreter Core messages: 371349 nosy: christian.heimes, steve.dower priority: normal severity: normal status: open title: _Py_fopen_obj leaks reference on audit error type: resource usage versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 06:42:31 2020 From: report at bugs.python.org (Christian Heimes) Date: Fri, 12 Jun 2020 10:42:31 +0000 Subject: [New-bugs-announce] [issue40958] ASAN/UBSAN: heap-buffer-overflow in pegen.c Message-ID: <1591958551.05.0.683053178898.issue40958@roundup.psfhosted.org> New submission from Christian Heimes : ASAN/UBSAN has detected a heap-buffer-overflow in pegen.c ==1625693==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x606000026b71 at pc 0x00000073574d bp 0x7fff297284f0 sp 0x7fff297284e0 READ of size 1 at 0x606000026b71 thread T0 #0 0x73574c in ascii_decode Objects/unicodeobject.c:4941 #1 0x82bd0f in unicode_decode_utf8 Objects/unicodeobject.c:4999 #2 0xf35859 in byte_offset_to_character_offset Parser/pegen.c:148 #3 0xf35859 in _PyPegen_raise_error_known_location Parser/pegen.c:412 #4 0xf36482 in _PyPegen_raise_error Parser/pegen.c:373 #5 0xf39e1d in tokenizer_error Parser/pegen.c:321 #6 0xf39e1d in _PyPegen_fill_token Parser/pegen.c:638 #7 0xf3ca0f in _PyPegen_expect_token Parser/pegen.c:753 #8 0xf4cc7a in _tmp_15_rule Parser/parser.c:16184 #9 0xf3c799 in _PyPegen_lookahead (/home/heimes/dev/python/cpython/python+0xf3c799) #10 0xfafb4a in compound_stmt_rule Parser/parser.c:1860 #11 0xfb7fc2 in statement_rule Parser/parser.c:1224 #12 0xfb7fc2 in _loop1_11_rule Parser/parser.c:15954 #13 0xfb7fc2 in statements_rule Parser/parser.c:1183 #14 0xfbbce7 in file_rule Parser/parser.c:716 #15 
0xfbbce7 in _PyPegen_parse Parser/parser.c:24401 #16 0xf3f868 in _PyPegen_run_parser Parser/pegen.c:1077 #17 0xf4044f in _PyPegen_run_parser_from_file_pointer Parser/pegen.c:1137 #18 0xa27f36 in PyRun_FileExFlags Python/pythonrun.c:1057 #19 0xa2826a in PyRun_SimpleFileExFlags Python/pythonrun.c:400 #20 0x479b1b in pymain_run_file Modules/main.c:369 #21 0x479b1b in pymain_run_python Modules/main.c:553 #22 0x47bd59 in Py_RunMain Modules/main.c:632 #23 0x47bd59 in pymain_main Modules/main.c:662 #24 0x47bd59 in Py_BytesMain Modules/main.c:686 #25 0x7f59aa5cd041 in __libc_start_main (/lib64/libc.so.6+0x27041) #26 0x47643d in _start (/home/heimes/dev/python/cpython/python+0x47643d) 0x606000026b71 is located 0 bytes to the right of 49-byte region [0x606000026b40,0x606000026b71) allocated by thread T0 here: #0 0x7f59ab303667 in __interceptor_malloc (/lib64/libasan.so.6+0xb0667) #1 0x749c7d in PyUnicode_New Objects/unicodeobject.c:1437 #2 0x872f15 in _PyUnicode_Init Objects/unicodeobject.c:15535 #3 0x9fe0ab in pycore_init_types Python/pylifecycle.c:599 #4 0x9fe0ab in pycore_interp_init Python/pylifecycle.c:724 #5 0xa07c69 in pyinit_config Python/pylifecycle.c:765 #6 0xa07c69 in pyinit_core Python/pylifecycle.c:926 #7 0xa09b17 in Py_InitializeFromConfig Python/pylifecycle.c:1136 #8 0x4766c2 in pymain_init Modules/main.c:66 #9 0x47bd12 in pymain_main Modules/main.c:653 #10 0x47bd12 in Py_BytesMain Modules/main.c:686 #11 0x7f59aa5cd041 in __libc_start_main (/lib64/libc.so.6+0x27041) SUMMARY: AddressSanitizer: heap-buffer-overflow Objects/unicodeobject.c:4941 in ascii_decode Shadow bytes around the buggy address: 0x0c0c7fffcd10: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa 0x0c0c7fffcd20: 00 00 00 00 00 00 00 07 fa fa fa fa 00 00 00 00 0x0c0c7fffcd30: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 05 0x0c0c7fffcd40: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa 0x0c0c7fffcd50: 00 00 00 00 00 00 00 00 fa fa fa fa 00 00 00 00 =>0x0c0c7fffcd60: 00 00 00 01 fa fa fa fa 00 00 00 00 00 00[01]fa 0x0c0c7fffcd70: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa 0x0c0c7fffcd80: 00 00 00 00 00 00 05 fa fa fa fa fa 00 00 00 00 0x0c0c7fffcd90: 00 00 00 fa fa fa fa fa 00 00 00 00 00 00 00 00 0x0c0c7fffcda0: fa fa fa fa fd fd fd fd fd fd fd fd fa fa fa fa 0x0c0c7fffcdb0: fd fd fd fd fd fd fd fd fa fa fa fa 00 00 00 00 Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==1625693==ABORTING ---------- components: Interpreter Core messages: 371351 nosy: christian.heimes, pablogsal priority: high severity: normal status: open title: ASAN/UBSAN: heap-buffer-overflow in pegen.c type: security versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 07:21:57 2020 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Fri, 12 Jun 2020 11:21:57 +0000 Subject: [New-bugs-announce] [issue40959] Remove unused and unneeded function declaration from sqlite3 header files Message-ID: <1591960917.09.0.191201794303.issue40959@roundup.psfhosted.org> New submission from Erlend 
Egeberg Aasland : The following function declarations can safely be removed because they're either unused or unneeded. In Modules/_sqlite/cache.h: pysqlite_node_init() // unused; no function definition pysqlite_node_dealloc() // unneeded; file scope pysqlite_cache_init() // unneeded; file scope pysqlite_cache_dealloc() // unneeded; file scope In Modules/_sqlite/connection.h: pysqlite_connection_alloc() // unused; no function definition pysqlite_connection_dealloc() // unneeded; file scope pysqlite_connection_cursor() // unneeded; file scope pysqlite_connection_close() // unneeded; file scope pysqlite_connection_rollback() // unneeded; file scope pysqlite_connection_new() // unused; no function definition pysqlite_connection_init() // unneeded; file scope In Modules/_sqlite/cursor.h: pysqlite_cursor_execute() // unneeded; file scope pysqlite_cursor_executemany() // unneeded; file scope pysqlite_cursor_getiter() // unused; no function definition pysqlite_cursor_iternext() // unneeded; file scope pysqlite_cursor_fetchone() // unneeded; file scope pysqlite_cursor_fetchmany() // unneeded; file scope pysqlite_cursor_fetchall() // unneeded; file scope pysqlite_noop() // unneeded; file scope pysqlite_cursor_close() // unneeded; file scope In Modules/_sqlite/prepare_protocol.h: pysqlite_prepare_protocol_init() // unneeded; file scope pysqlite_prepare_protocol_dealloc() // unneeded; file scope In Modules/_sqlite/statement.h: pysqlite_statement_dealloc() // unneeded; file scope ---------- components: Library (Lib) messages: 371353 nosy: erlendaasland priority: normal severity: normal status: open title: Remove unused and unneeded function declaration from sqlite3 header files type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 07:36:43 2020 From: report at bugs.python.org (Marcelo) Date: Fri, 12 Jun 2020 11:36:43 +0000 Subject: [New-bugs-announce] [issue40960] Fix the example in hashlib documentation Message-ID: <1591961803.74.0.363372588689.issue40960@roundup.psfhosted.org> New submission from Marcelo : The documentation found in https://docs.python.org/2/library/hashlib.html gives us the following example: ``` More condensed: >>> hashlib.sha224("Nobody inspects the spammish repetition").hexdigest() 'a4337bc45a8fc544c03f52dc550cd6e1e87021bc896588bd79e901e2' ``` However, we get the following error if we do not encode the string: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Unicode-objects must be encoded before hashing ``` So the example should be (string to bytes): `hashlib.sha224(b"Nobody inspects the spammish repetition").hexdigest()` ---------- assignee: docs at python components: Documentation messages: 371354 nosy: docs at python, marcelogrsp at gmail.com priority: normal severity: normal status: open title: Fix the example in hashlib documentation type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 10:36:10 2020 From: report at bugs.python.org (Daniel Martin) Date: Fri, 12 Jun 2020 14:36:10 +0000 Subject: [New-bugs-announce] [issue40961] os.putenv should be documented as not affecting os.getenv's return value Message-ID: <1591972570.31.0.396945912462.issue40961@roundup.psfhosted.org> New submission from Daniel Martin : I find this behavior extremely surprising: $ ABC=def python Python 3.7.4 (v3.7.4:e09359112e, Jul 8 2019, 14:54:52) [Clang 6.0
(clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.getenv('ABC') 'def' >>> os.putenv('ABC', 'xyz') >>> os.getenv('ABC') 'def' Although the documentation for the os module does say in the documentation for putenv that > "When putenv() is supported, assignments to items in os.environ > are automatically translated into corresponding calls to putenv(); > however, calls to putenv() don?t update os.environ, so it is > actually preferable to assign to items of os.environ" , there is absolutely nothing in the documentation to give the impression that os.putenv WILL NOT affect the result of os.getenv in the same process, as it is completely undocumented that os.getenv is a wrapper around os.environ.get(). The documentation for os.environ is careful to note that the copy of the environment stored in os.environ is read only once at startup, and has a note specifically about putenv; however, given this care the lack of any similar language in the os.getenv documentation gives the impression that os.getenv behaves differently. Ideally, the documentation of both os.getenv and os.putenv would be modified. The getenv documentation should note that it pulls from a mapping captured at first import and does not, as the name may imply, result in a call to the C function getenv. The putenv documentation should note that calls to putenv don?t update os.environ, and therefore do not affect the result of os.getenv. Possibly, given the thread safety issues, direct calls to os.getenv and os.putenv should be deprecated in favor of os.environ use. ---------- assignee: docs at python components: Documentation messages: 371377 nosy: Daniel Martin, docs at python priority: normal severity: normal status: open title: os.putenv should be documented as not affecting os.getenv's return value type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 11:42:17 2020 From: report at bugs.python.org (Eugene Huang) Date: Fri, 12 Jun 2020 15:42:17 +0000 Subject: [New-bugs-announce] [issue40962] Add documentation for asyncio._set_running_loop() Message-ID: <1591976537.41.0.345001271187.issue40962@roundup.psfhosted.org> New submission from Eugene Huang : In the pull request https://github.com/python/asyncio/pull/452#issue-92245081 the linked comment says that `asyncio._set_running_loop()` is part of the public asyncio API and will be documented, but I couldn't find any references to this function in https://docs.python.org/3/library/asyncio-eventloop.html or anywhere else (tried quick searching for it) in the docs. ---------- assignee: docs at python components: Documentation messages: 371387 nosy: docs at python, eugenhu priority: normal severity: normal status: open title: Add documentation for asyncio._set_running_loop() versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 13:30:20 2020 From: report at bugs.python.org (ghost43) Date: Fri, 12 Jun 2020 17:30:20 +0000 Subject: [New-bugs-announce] [issue40963] distutils make_zipfile uses random order Message-ID: <1591983020.96.0.753934283459.issue40963@roundup.psfhosted.org> New submission from ghost43 : I am trying to generate .zip sdists for a project in a reproducible manner, using setuptoools. 
The generated zips differ in the order of packed files. The root cause of the non-determinism is using os.walk() in make_zipfile here: https://github.com/python/cpython/blob/0d3350daa8123a3e16d4a534b6e873eb12c10d7c/Lib/distutils/archive_util.py#L174 For a potential fix, see https://github.com/pypa/setuptools/commit/29688821b381268a0d59c0d26317d88ad518f966 I guess https://bugs.python.org/issue30693 is sort of related. The change made there is necessary, and was sufficient to make the tars reproducible but not the zips. (sidenote: Is it acceptable to sign the PSF CLA with a pseudonym?) ---------- components: Distutils messages: 371400 nosy: dstufft, eric.araujo, ghost43 priority: normal severity: normal status: open title: distutils make_zipfile uses random order type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 14:43:06 2020 From: report at bugs.python.org (Christian Heimes) Date: Fri, 12 Jun 2020 18:43:06 +0000 Subject: [New-bugs-announce] [issue40964] Connection to IMAP server cyrus.andrew.cmu.edu hangs Message-ID: <1591987386.88.0.623611449873.issue40964@roundup.psfhosted.org> New submission from Christian Heimes : All buildbots and tests are currently failing because connections to cyrus.andrew.cmu.edu:143 (IMAP) and cyrus.andrew.cmu.edu:993 (IMAPS) are hanging. ---------- components: Tests messages: 371403 nosy: christian.heimes, lukasz.langa, ned.deily priority: release blocker severity: normal status: open title: Connection to IMAP server cyrus.andrew.cmu.edu hangs versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 12 22:29:55 2020 From: report at bugs.python.org (The Comet) Date: Sat, 13 Jun 2020 02:29:55 +0000 Subject: [New-bugs-announce] [issue40965] Segfault when importing unittest module Message-ID: <1592015395.95.0.70978961228.issue40965@roundup.psfhosted.org> New submission from The Comet : The following program will segfault the interpreter: #include <Python.h> int main() { Py_Initialize(); PyRun_SimpleString("import unittest"); Py_Finalize(); Py_Initialize(); PyRun_SimpleString("import unittest"); /* segfault here */ Py_Finalize(); } This only seems to happen with the unittest module. This is something that used to work but broke somewhere between python 3.7 and 3.8.
The code above can also be found on github as a cmake project for your convenience: https://github.com/TheComet/python3.8-unittest-broken ---------- messages: 371432 nosy: The Comet priority: normal severity: normal status: open title: Segfault when importing unittest module type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 13 01:35:23 2020 From: report at bugs.python.org (hai shi) Date: Sat, 13 Jun 2020 05:35:23 +0000 Subject: [New-bugs-announce] [issue40966] Remove redundant var in PyErr_NewException Message-ID: <1592026523.77.0.455360135659.issue40966@roundup.psfhosted.org> New submission from hai shi : Looks like `classname` in PyErr_NewException() is redundant: https://github.com/python/cpython/blob/master/Python/errors.c#L1082 ---------- components: Interpreter Core messages: 371438 nosy: shihai1991 priority: normal severity: normal status: open title: Remove redundant var in PyErr_NewException versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 13 06:48:02 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Sat, 13 Jun 2020 10:48:02 +0000 Subject: [New-bugs-announce] [issue40967] asyncio.Task.all_tasks() and asyncio.Task.current_task() must be removed in 3.9 Message-ID: <1592045282.09.0.91488246101.issue40967@roundup.psfhosted.org> New submission from Rémi Lapeyre : The documentation says: .. deprecated-removed:: 3.7 3.9 Do not call this as a task method. Use the :func:`asyncio.all_tasks` function instead. I don't know if it's still possible to merge this in 3.9 and if so, if it should be a release blocker. Anyway I'm working on a PR for this. ---------- components: asyncio messages: 371449 nosy: asvetlov, remi.lapeyre, yselivanov priority: normal severity: normal status: open title: asyncio.Task.all_tasks() and asyncio.Task.current_task() must be removed in 3.9 versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 13 07:10:14 2020 From: report at bugs.python.org (Paul Menzel) Date: Sat, 13 Jun 2020 11:10:14 +0000 Subject: [New-bugs-announce] [issue40968] urllib is unable to deal with TURN server infront Message-ID: <1592046614.41.0.0652918856783.issue40968@roundup.psfhosted.org> New submission from Paul Menzel : Having the TURN server Coturn [1] set up in a Jitsi Meet installation, Python's urllib requests fail, while it works with cURL and browsers.
``` $ curl -I https://jitsi.molgen.mpg.de HTTP/2 200 server: nginx/1.14.2 date: Sat, 13 Jun 2020 11:09:19 GMT content-type: text/html vary: Accept-Encoding strict-transport-security: max-age=63072000 ``` ``` >>> import urllib.request >>> response = urllib.request.urlopen('https://jitsi.molgen.mpg.de') Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/usr/lib/python3.8/urllib/request.py", line 525, in open response = self._open(req, data) File "/usr/lib/python3.8/urllib/request.py", line 542, in _open result = self._call_chain(self.handle_open, protocol, protocol + File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain result = func(*args) File "/usr/lib/python3.8/urllib/request.py", line 1393, in https_open return self.do_open(http.client.HTTPSConnection, req, File "/usr/lib/python3.8/urllib/request.py", line 1354, in do_open r = h.getresponse() File "/usr/lib/python3.8/http/client.py", line 1332, in getresponse response.begin() File "/usr/lib/python3.8/http/client.py", line 303, in begin version, status, reason = self._read_status() File "/usr/lib/python3.8/http/client.py", line 272, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response ``` [1]: https://github.com/coturn/coturn/ ---------- components: Library (Lib) messages: 371450 nosy: pmenzel priority: normal severity: normal status: open title: urllib is unable to deal with TURN server infront type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 13 12:53:22 2020 From: report at bugs.python.org (Keith Spitz) Date: Sat, 13 Jun 2020 16:53:22 +0000 Subject: [New-bugs-announce] [issue40969] Python 3.8.3 And Now For Something CompletelyDifferent Message-ID: <1592067202.36.0.876883862148.issue40969@roundup.psfhosted.org> New submission from Keith Spitz : For the totally unimportant list... On the Python 3.8.3 release page (https://www.python.org/downloads/release/python-383/), Mr. Anemone was actually played by Graham Chapman and not John Cleese (https://montypython.fandom.com/wiki/Flying_Lessons). ---------- assignee: docs at python components: Documentation messages: 371464 nosy: Keith Spitz, docs at python priority: normal severity: normal status: open title: Python 3.8.3 And Now For Something CompletelyDifferent versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 13 19:37:57 2020 From: report at bugs.python.org (Christopher Yeh) Date: Sat, 13 Jun 2020 23:37:57 +0000 Subject: [New-bugs-announce] [issue40970] Error in Python Datamodel Documentation Message-ID: <1592091477.81.0.850962820237.issue40970@roundup.psfhosted.org> New submission from Christopher Yeh : The documentation says the following: > A built-in function object is a wrapper around a C function. Examples of built-in functions are `len` and `math.sin` (`math` is a standard built-in module). However, `math` is not always a built-in module, as can be seen in on my own Python installation (Windows 10, WSL 1, Python 3.7.7 installed via conda). 
>>> import sys >>> sys.builtin_module_names ('_abc', '_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_symtable', '_thread', '_tracemalloc', '_warnings', '_weakref', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'posix', 'pwd', 'sys', 'time', 'xxsubtype', 'zipimport') Therefore, I have submitted a pull request to remove the statement "(`math` is a standard built-in module)" from the documentation. ---------- assignee: docs at python components: Documentation messages: 371473 nosy: chrisyeh, docs at python priority: normal pull_requests: 20055 severity: normal status: open title: Error in Python Datamodel Documentation type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 13 19:47:49 2020 From: report at bugs.python.org (Gordon P. Hemsley) Date: Sat, 13 Jun 2020 23:47:49 +0000 Subject: [New-bugs-announce] [issue40971] Documentation still mentions 'u' string formatting option Message-ID: <1592092069.64.0.724839327789.issue40971@roundup.psfhosted.org> New submission from Gordon P. Hemsley : https://docs.python.org/3/library/stdtypes.html#old-string-formatting still lists the 'u' string formatting option, described as "Obsolete type ? it is identical to 'd'." and linking to PEP 237. However, testing indicates that Python 3 does not support a 'u' option and my archaeology suggests that such support was removed in Python 2.4. It seems this has flown under the radar for quite some time. ---------- assignee: docs at python components: Documentation messages: 371474 nosy: alexandre.vassalotti, benjamin.peterson, berker.peksag, christian.heimes, docs at python, eli.bendersky, ezio.melotti, georg.brandl, gphemsley, martin.panter, ncoghlan, rhettinger priority: normal severity: normal status: open title: Documentation still mentions 'u' string formatting option type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 00:28:24 2020 From: report at bugs.python.org (Charles Machalow) Date: Sun, 14 Jun 2020 04:28:24 +0000 Subject: [New-bugs-announce] [issue40972] Add a recurse flag to Path.rmdir() Message-ID: <1592108904.89.0.703350475052.issue40972@roundup.psfhosted.org> New submission from Charles Machalow : I think it would make sense to add a recurse flag to the Path.rmdir() method. It would default to False (to allow for current behavior). If set to True, the method would act very similarly to shutil.rmtree() in that it would delete all files in the directory and then delete the directory itself. I understand that this behavior doesn't really line up with os.rmdir(), though I think it makes sense to allow this type of method to appear on the Path object for easy deletion of a directory with things still inside (while looking more object-oriented at the same time) If people think this makes sense, I may be able to provide a PR to just delegate to shutil.rmtree for the recurse=True operation. Thanks for thoughts. 
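A rough sketch of the behaviour being proposed, not the actual patch: a standalone helper that mimics what a Path.rmdir(recurse=True) could do by delegating to shutil.rmtree (the helper name and signature are illustrative only).

```python
import shutil
from pathlib import Path

def rmdir(path: Path, recurse: bool = False) -> None:
    """Remove a directory; with recurse=True, remove its contents first."""
    if recurse:
        shutil.rmtree(path)   # removes files and subdirectories, then the directory
    else:
        path.rmdir()          # current behaviour: fails unless the directory is empty

# Hypothetical usage:
# rmdir(Path("build"), recurse=True)
```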
---------- components: Library (Lib) messages: 371487 nosy: Charles Machalow priority: normal severity: normal status: open title: Add a recurse flag to Path.rmdir() type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 01:51:01 2020 From: report at bugs.python.org (Ben Du) Date: Sun, 14 Jun 2020 05:51:01 +0000 Subject: [New-bugs-announce] [issue40973] platform.platform() in Python 3.8 does not report detailed Linux platform information Message-ID: <1592113861.51.0.223662192335.issue40973@roundup.psfhosted.org> New submission from Ben Du : The function platform.platform() does not report detailed Linux platform information (Ubuntu, Debian, CentOS, etc.). This information is reported in Python 3.7 and earlier. ---------- components: Library (Lib) messages: 371488 nosy: legendu priority: normal severity: normal status: open title: platform.platform() in Python 3.8 does not report detailed Linux platform information type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 08:08:18 2020 From: report at bugs.python.org (SHRINK_STACK) Date: Sun, 14 Jun 2020 12:08:18 +0000 Subject: [New-bugs-announce] [issue40974] possible optimization: SHRINK_STACK(n) Message-ID: <1592136498.04.0.523354391069.issue40974@roundup.psfhosted.org> New submission from SHRINK_STACK : context managers and except blocks constantly generate multiple POP_TOPs, and there may be other cases which lead to the generation of multiple POP_TOPs. A SHRINK_STACK(n) opcode would make things better (improvement on pyc size, fewer opcodes = faster evaluation). A possible patch: (to peephole.c) + + case POP_TOP: + h = i + 1; + while (h < codelen && _Py_OPCODE(codestr[h]) == POP_TOP) { + h++; + } + if (h > i + 1) { + codestr[i] = PACKOPARG(SHRINK_STACK, h - i); + fill_nops(codestr, i + 1, h); + nexti = h; + } + break; (to ceval.c) + case TARGET(SHRINK_STACK): { + for (int i = 0; i < oparg; i++) { + PyObject *value = POP(); + Py_DECREF(value); + } + FAST_DISPATCH(); + } + and some other minor things for opcode.py and magic number ---------- messages: 371501 nosy: shrink_stack priority: normal severity: normal status: open title: possible optimization: SHRINK_STACK(n) _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 08:26:50 2020 From: report at bugs.python.org (Naglis Jonaitis) Date: Sun, 14 Jun 2020 12:26:50 +0000 Subject: [New-bugs-announce] [issue40975] contextlib.AsyncExitStack enter_async_context and aclose should be labeled as coroutine methods Message-ID: <1592137610.97.0.667816309601.issue40975@roundup.psfhosted.org> New submission from Naglis Jonaitis : enter_async_context[1] and aclose[2] are coroutine methods.
1: https://github.com/python/cpython/blob/8f04a84755babe516ebb5304904ea7c15b865c80/Lib/contextlib.py#L548 2: https://github.com/python/cpython/blob/8f04a84755babe516ebb5304904ea7c15b865c80/Lib/contextlib.py#L591 ---------- assignee: docs at python components: Documentation messages: 371503 nosy: docs at python, naglis priority: normal severity: normal status: open title: contextlib.AsyncExitStack enter_async_context and aclose should be labeled as coroutine methods versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 08:46:16 2020 From: report at bugs.python.org (Ram Rachum) Date: Sun, 14 Jun 2020 12:46:16 +0000 Subject: [New-bugs-announce] [issue40976] Clarify motivation for `chain.from_iterable` Message-ID: <1592138776.38.0.041174641716.issue40976@roundup.psfhosted.org> Change by Ram Rachum : ---------- components: Library (Lib) nosy: cool-RR priority: normal severity: normal status: open title: Clarify motivation for `chain.from_iterable` type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 09:25:14 2020 From: report at bugs.python.org (Ronald Oussoren) Date: Sun, 14 Jun 2020 13:25:14 +0000 Subject: [New-bugs-announce] [issue40977] asyncio.trsock.TransportSocket says some APIs will be prohibited in 3.9 Message-ID: <1592141114.24.0.475548647885.issue40977@roundup.psfhosted.org> New submission from Ronald Oussoren : The implementation for asyncio.trsock.TransportSocket says that a number of methods will be prohibited in 3.9 (https://github.com/python/cpython/blob/8f04a84755babe516ebb5304904ea7c15b865c80/Lib/asyncio/trsock.py#L19), but merely warns in both 3.9 and 3.10. It is too late to change this in 3.9, other than updating the warning message. ---------- components: asyncio messages: 371504 nosy: asvetlov, ronaldoussoren, yselivanov priority: normal severity: normal status: open title: asyncio.trsock.TransportSocket says some APIs will be prohibited in 3.9 type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 15:43:07 2020 From: report at bugs.python.org (ramalho) Date: Sun, 14 Jun 2020 19:43:07 +0000 Subject: [New-bugs-announce] [issue40978] Document that typing.SupportsXXX protocols are runtime checkable Message-ID: <1592163787.83.0.523481527574.issue40978@roundup.psfhosted.org> New submission from ramalho : The typing module documentation (https://docs.python.org/3/library/typing.html#typing.SupportsInt) does not mention that the protocols listed below are all decorated with `@runtime_checkable`. 
This should be mentioned in the entry for each protocol and also in the entry for `@runtime_checkable` * SupportsAbs * SupportsBytes * SupportsComplex * SupportsFloat * SupportsIndex * SupportsInt * SupportsRound ---------- assignee: docs at python components: Documentation messages: 371513 nosy: docs at python, ramalho priority: normal severity: normal status: open title: Document that typing.SupportsXXX protocols are runtime checkable versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 15:56:11 2020 From: report at bugs.python.org (ramalho) Date: Sun, 14 Jun 2020 19:56:11 +0000 Subject: [New-bugs-announce] [issue40979] typing module docs: keep text, add subsections Message-ID: <1592164571.1.0.235029301981.issue40979@roundup.psfhosted.org> New submission from ramalho : The typing module documentation page has a very long section "Classes, functions, and decorators" (https://docs.python.org/3/library/typing.html#classes-functions-and-decorators) that should be split into subsections. The ordering of the entries seems haphazard: it's not alphabetical. It's grouped according to invisible categories. The categories appear as comments in the source code of typing.py: the `__all__` global lists the API split into categories (see below). We should add these categories to the page as subsections of "Classes, functions, and decorators" - Super-special typing primitives. - ABCs (from collections.abc). - Structural checks, a.k.a. protocols. - Concrete collection types. ---------- assignee: docs at python components: Documentation messages: 371514 nosy: docs at python, ramalho priority: normal severity: normal status: open title: typing module docs: keep text, add subsections versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 14 17:03:34 2020 From: report at bugs.python.org (Quentin Wenger) Date: Sun, 14 Jun 2020 21:03:34 +0000 Subject: [New-bugs-announce] [issue40980] group names of bytes regexes are strings Message-ID: <1592168614.73.0.0415067769111.issue40980@roundup.psfhosted.org> New submission from Quentin Wenger : I noticed that match.groupdict() returns string keys, even for a bytes regex: ``` >>> import re >>> re.match(b"(?P<a>)", b"").groupdict() {'a': b''} ``` This seems somewhat strange, because string and bytes matching in re are kind of two separate parts, cf. doc: > Both patterns and strings to be searched can be Unicode strings (str) as well as 8-bit strings (bytes). However, Unicode strings and 8-bit strings cannot be mixed: that is, you cannot match a Unicode string with a byte pattern or vice-versa; similarly, when asking for a substitution, the replacement string must be of the same type as both the pattern and the search string.
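To make the reported behaviour concrete, here is a small runnable illustration, together with one possible workaround (re-encoding the keys); the workaround is only a sketch, not an API offered by re.

```python
import re

# Bytes pattern, bytes subject -- yet groupdict() keys come back as str.
m = re.match(b"(?P<a>.*)", b"payload")
print(m.groupdict())  # {'a': b'payload'}

# Possible workaround sketch: group names are plain identifiers, so the keys
# can be re-encoded if bytes keys are wanted.
bytes_keyed = {name.encode("ascii"): value for name, value in m.groupdict().items()}
print(bytes_keyed)  # {b'a': b'payload'}
```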
---------- components: Regular Expressions messages: 371516 nosy: ezio.melotti, matpi, mrabarnett priority: normal severity: normal status: open title: group names of bytes regexes are strings type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 01:36:56 2020 From: report at bugs.python.org (mike stern) Date: Mon, 15 Jun 2020 05:36:56 +0000 Subject: [New-bugs-announce] [issue40981] increment is wrong in 3.7 Message-ID: <1592199416.39.0.0193524951936.issue40981@roundup.psfhosted.org> New submission from mike stern : I noticed a big problem making a simple increment of .1 in python 3.7 using a while loop; the result is wrong i=0 while i < 1.2: i += 0.1 print(i) result == RESTART: C:/Users/icosf/AppData/Local/Programs/Python/Python37-32/bb.py == 0.1 0.2 0.30000000000000004 0.4 0.5 0.6 0.7 0.7999999999999999 0.8999999999999999 0.9999999999999999 1.0999999999999999 1.2 what the heck is going on, can someone explain to me ---------- assignee: terry.reedy components: IDLE messages: 371519 nosy: rskiredj at hotmail.com, terry.reedy priority: normal severity: normal status: open title: increment is wrong in 3.7 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 05:18:48 2020 From: report at bugs.python.org (=?utf-8?q?Alberto_Torres_Barr=C3=A1n?=) Date: Mon, 15 Jun 2020 09:18:48 +0000 Subject: [New-bugs-announce] [issue40982] copytree example in shutil Message-ID: <1592212728.34.0.101904144419.issue40982@roundup.psfhosted.org> New submission from Alberto Torres Barrán : The copytree example in https://docs.python.org/3/library/shutil.html#copytree-example does not match the source code, even removing docstrings. In particular, it is missing parameters and the exceptions are in the wrong order (Error will never be reachable since it is an instance of OSError). ---------- assignee: docs at python components: Documentation messages: 371535 nosy: Alberto Torres Barrán, docs at python priority: normal severity: normal status: open title: copytree example in shutil versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 07:23:42 2020 From: report at bugs.python.org (Manuel Jacob) Date: Mon, 15 Jun 2020 11:23:42 +0000 Subject: [New-bugs-announce] =?utf-8?b?W2lzc3VlNDA5ODNdIENhbuKAmXQgY29u?= =?utf-8?q?figure_encoding_used_by_urllib=2Erequest=2Eurl2pathname=28=29?= Message-ID: <1592220222.29.0.529443110739.issue40983@roundup.psfhosted.org> New submission from Manuel Jacob : On Python 2, it was possible to recover a percent-encoded byte: >>> from urllib import url2pathname >>> url2pathname('%ff') '\xff' On Python 3, the byte is decoded using the utf-8 encoding and the "replace" error handler (therefore there's no way to recover the byte): >>> from urllib.request import url2pathname >>> url2pathname('%ff') '?' For my use case (getting the pathname as bytes), it would be sufficient to specify a different encoding (e.g. latin-1) or a different error handler (e.g. surrogateescape) that makes it possible to recover the byte by encoding the result of url2pathname() such that it roundtrips with the encoding and error handler internally used by url2pathname() for percent-encoded bytes. I'm not simply sending a patch, because this might point to a deeper issue.
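As a concrete illustration of the surrogateescape round trip the report asks for (this only sketches the error handler's behaviour, not what url2pathname() currently does):

```python
# The surrogateescape error handler maps undecodable bytes to lone surrogates
# and back, so the original byte value survives a decode/encode round trip.
raw = b"/tmp/\xff"
as_str = raw.decode("utf-8", "surrogateescape")   # '/tmp/\udcff'
back = as_str.encode("utf-8", "surrogateescape")  # b'/tmp/\xff'
assert back == raw
print(repr(as_str), repr(back))
```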
Suppose there?s the following script: import sys from pathlib import Path from urllib.request import urlopen path = Path(sys.argv[1]) path.write_text('Hello, World!') with urlopen(path.as_uri()) as resp: print(resp.read()) If I call this script with b'/tmp/\xff' as the argument, it fails with the following traceback: Traceback (most recent call last): File "/usr/lib/python3.8/urllib/request.py", line 1507, in open_local_file stats = os.stat(localfile) FileNotFoundError: [Errno 2] No such file or directory: '/tmp/?' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "test_url2pathname.py", line 6, in with urlopen(path.as_uri()) as resp: File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/usr/lib/python3.8/urllib/request.py", line 525, in open response = self._open(req, data) File "/usr/lib/python3.8/urllib/request.py", line 542, in _open result = self._call_chain(self.handle_open, protocol, protocol + File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain result = func(*args) File "/usr/lib/python3.8/urllib/request.py", line 1485, in file_open return self.open_local_file(req) File "/usr/lib/python3.8/urllib/request.py", line 1524, in open_local_file raise URLError(exp) urllib.error.URLError: So maybe urllib.request.url2pathname() should use the same encoding and error handler as os.fsencode() / os.fsdecode(). ---------- components: Library (Lib) messages: 371537 nosy: mjacob priority: normal severity: normal status: open title: Can?t configure encoding used by urllib.request.url2pathname() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 09:29:51 2020 From: report at bugs.python.org (Quentin Wenger) Date: Mon, 15 Jun 2020 13:29:51 +0000 Subject: [New-bugs-announce] [issue40984] re.compile's repr truncates patterns at 199 characters Message-ID: <1592227791.99.0.0594724563571.issue40984@roundup.psfhosted.org> New submission from Quentin Wenger : This seems somewhat arbitrary and yields unusable results, going against the doc: > repr(object) > Return a string containing a printable representation of an object. For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to eval(), otherwise the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information often including the name and address of the object. A class can control what this function returns for its instances by defining a __repr__() method. The truncated representation neither "yields an object with the same value" (it raises a SyntaxError, of course, due to the missing quote and closing parenthesis), nor is "enclosed in angle brackets". 
``` >>> import re >>> re.compile("()"*99) re.compile('()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()') >>> re.compile("()"*100) re.compile('()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()() ``` ---------- components: Regular Expressions messages: 371541 nosy: ezio.melotti, matpi, mrabarnett priority: normal severity: normal status: open title: re.compile's repr truncates patterns at 199 characters type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 09:45:35 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Mon, 15 Jun 2020 13:45:35 +0000 Subject: [New-bugs-announce] [issue40985] PEG Parser: SyntaxError text empty when last line has a LINECONT Message-ID: <1592228734.99.0.539504567081.issue40985@roundup.psfhosted.org> New submission from Lysandros Nikolaou : While investigating bpo-40958, the following came up: When a file ends with a line that contains a line continuation character the text of the emitted SyntaxError is empty, contrary to the old parser, where the error text contained the text of the last line. Here is an example: cpython git:(master)$ cat t.py x = 6\% cpython git:(master)$ ./python t.py File "/home/lysnikolaou/repos/cpython/t.py", line 2 ^ SyntaxError: unexpected EOF while parsing cpython git:(master)$ python3.9 -X oldparser t.py File "/home/lysnikolaou/repos/cpython/t.py", line 2 x = 6\ ^ SyntaxError: unexpected EOF while parsing ---------- components: Interpreter Core messages: 371544 nosy: lys.nikolaou priority: normal severity: normal status: open title: PEG Parser: SyntaxError text empty when last line has a LINECONT type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 10:27:50 2020 From: report at bugs.python.org (Vytautas Liuolia) Date: Mon, 15 Jun 2020 14:27:50 +0000 Subject: [New-bugs-announce] [issue40986] Async generators are not garbage collected Message-ID: <1592231270.66.0.0853044039719.issue40986@roundup.psfhosted.org> New submission from Vytautas Liuolia : Hello! I am having issues with asynchronous generators not being garbage collected at least until the current loop has completed. In the attached test case (test.py), one starts iterating over an asynchronous generator, then breaks and returns the first element. After each call, gc.collect() is invoked for illustration purposes. It seems that no memory is freed until the whole test() coroutine is done. The for-loop could obviously be extended to more iterations, or swapped out to a while-loop to easily run out of available memory. I have then removed all async stuff, producing test_sync.py (also attached). In the sync case, everything is garbage-collected as I would expect. 
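The attachments (test.py, test_sync.py) are not reproduced in this digest; the following is only a rough reconstruction of the pattern described above (break out of an async generator after the first element, then call gc.collect()), not the author's exact files.

```python
import asyncio
import gc

async def agen():
    for _ in range(1_000_000):
        yield bytearray(1024)  # allocate something so retained memory is visible

async def first():
    async for item in agen():
        return item  # break out after the first element; the generator is abandoned

async def test():
    for _ in range(100):
        await first()
        gc.collect()  # per the report, memory is not released until test() returns

asyncio.run(test())
```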
---------- components: asyncio files: test.py messages: 371550 nosy: asvetlov, vytas, yselivanov priority: normal severity: normal status: open title: Async generators are not garbage collected type: resource usage versions: Python 3.8 Added file: https://bugs.python.org/file49231/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 12:06:29 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 15 Jun 2020 16:06:29 +0000 Subject: [New-bugs-announce] [issue40987] Add tests to test_interpreters to import C extension modules converted to PEP 489 multiphase initialization Message-ID: <1592237189.15.0.645985819824.issue40987@roundup.psfhosted.org> New submission from STINNER Victor : In bpo-1635741, many C extension modules were ported to the new PEP 489 multiphase initialization. Some of these changes suddenly make old bugs visible. I mean that bugs were there for years, but nobody noticed them previously. * _weakref in importlib: bpo-40050 (commit 83d46e0622d2efdf5f3bf8bf8904d0dcb55fc322) * leak in _testcapi: commit 310e2d25170a88ef03f6fd31efcc899fe062da2c (bpo-36854) * Reference leak in select: bpo-32604 (commit 18a90248fdd92b27098cc4db773686a2d10a4d24) * test_threading vs the garbage collector: bpo-40217 "The garbage collector doesn't take in account that objects of heap allocated types hold a strong reference to their type"... This issue has a controversial history... Read bpo-40217 discussion. * etc. The problem is that these issues are discovered afterwards, usually when "Refleaks" buildbots complete. I propose to add a new test case to test_interpreters which would run "import xxx" in _testcapi.run_in_subinterp() on the 34 C extensions already ported to PyModuleDef_Init(): $ grep -l PyModuleDef_Init Modules/*.c Modules/_abc.c Modules/arraymodule.c Modules/atexitmodule.c Modules/audioop.c Modules/binascii.c Modules/_bz2module.c Modules/_codecsmodule.c Modules/_collectionsmodule.c Modules/_contextvarsmodule.c Modules/_cryptmodule.c Modules/errnomodule.c Modules/fcntlmodule.c Modules/_functoolsmodule.c Modules/_heapqmodule.c Modules/itertoolsmodule.c Modules/_json.c Modules/_localemodule.c Modules/mathmodule.c Modules/mmapmodule.c Modules/nismodule.c Modules/_operator.c Modules/posixmodule.c Modules/resource.c Modules/_stat.c Modules/_statisticsmodule.c Modules/syslogmodule.c Modules/_testmultiphase.c Modules/timemodule.c Modules/_uuidmodule.c Modules/_weakref.c Modules/xxlimited.c Modules/xxmodule.c Modules/xxsubtype.c Modules/_zoneinfo.c The test case should explain the purpose of these tests: * ensure that the import doesn't crash * help to discover reference leaks when running "python -m test test_interpreters -R 3:3" Maybe later we could elaborate these tests to ensure that the sub interpreter module is unrelated to the module in the main interpreter. For example, add a specific attribute in the subinterpreter, and then check that it doesn't exist in the main interpreter. But I'm not sure if it's a good idea to import 34 modules in test_interpreters. 
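A rough sketch of the kind of test case being proposed, assuming _testcapi.run_in_subinterp() returns 0 when the code executes successfully in the subinterpreter; the module list here is only an illustrative subset of the 34 converted extensions, and the class name is made up for the sketch.

```python
import unittest
import _testcapi

# Illustrative subset of the C extensions already ported to multiphase init.
MULTIPHASE_MODULES = ["_abc", "array", "atexit"]

class SubinterpImportTests(unittest.TestCase):
    def test_import_in_subinterp(self):
        # Ensure the import does not crash; running under -R 3:3 would also
        # help surface reference leaks.
        for name in MULTIPHASE_MODULES:
            with self.subTest(module=name):
                ret = _testcapi.run_in_subinterp(f"import {name}")
                self.assertEqual(ret, 0)

if __name__ == "__main__":
    unittest.main()
```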
---------- components: Tests messages: 371568 nosy: corona10, vstinner priority: normal severity: normal status: open title: Add tests to test_interpreters to import C extension modules converted to PEP 489 multiphase initialization versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 16:14:13 2020 From: report at bugs.python.org (Federico Caselli) Date: Mon, 15 Jun 2020 20:14:13 +0000 Subject: [New-bugs-announce] [issue40988] singledispatchmethod significantly slower than singledispatch Message-ID: <1592252053.27.0.723113376898.issue40988@roundup.psfhosted.org> New submission from Federico Caselli : The implementation of singledispatchmethod is significantly slower (~4x) than the normal singledispatch version Using timeit to test this example case: from functools import singledispatch, singledispatchmethod import timeit class Test: @singledispatchmethod def go(self, item, arg): print('general') @go.register def _(self, item:int, arg): return item + arg @singledispatch def go(item, arg): print('general') @go.register def _(item:int, arg): return item + arg print(timeit.timeit('t.go(1, 1)', globals={'t': Test()})) print(timeit.timeit('go(1, 1)', globals={'go': go})) Prints on my system. 3.118346 0.713173 Looking at the singledispatchmethod implementation I believe that most of the difference is because a new function is generated every time the method is called. Maybe an implementation similar to cached_property could be used if the class has __dict__ attribute? Trying this simple patch diff --git a/Lib/functools.py b/Lib/functools.py index 5cab497..e42f485 100644 --- a/Lib/functools.py +++ b/Lib/functools.py @@ -900,6 +900,7 @@ class singledispatchmethod: self.dispatcher = singledispatch(func) self.func = func + self.attrname = None def register(self, cls, method=None): """generic_method.register(cls, func) -> func @@ -908,6 +909,10 @@ class singledispatchmethod: """ return self.dispatcher.register(cls, func=method) + def __set_name__(self, owner, name): + if self.attrname is None: + self.attrname = name + def __get__(self, obj, cls=None): def _method(*args, **kwargs): method = self.dispatcher.dispatch(args[0].__class__) @@ -916,6 +921,7 @@ class singledispatchmethod: _method.__isabstractmethod__ = self.__isabstractmethod__ _method.register = self.register update_wrapper(_method, self.func) + obj.__dict__[self.attrname] = _method return _method @property improves the performance noticeably 0.9720976 0.7269078 ---------- components: Library (Lib) messages: 371594 nosy: CaselIT priority: normal severity: normal status: open title: singledispatchmethod significantly slower than singledispatch type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 15 17:05:35 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 15 Jun 2020 21:05:35 +0000 Subject: [New-bugs-announce] [issue40989] [C API] Remove _Py_NewReference() and _Py_ForgetReference() from the public C API Message-ID: <1592255135.39.0.855912884014.issue40989@roundup.psfhosted.org> New submission from STINNER Victor : The _Py_NewReference() and _Py_ForgetReference() functions are tightly coupled to CPython internals. _Py_NewReference() is only exposed because it used by the PyObject_INIT() macro which is the fast inlined flavor of PyObject_Init(). 
If we make PyObject_INIT() an alias to PyObject_Init(), as already done for the limited C API, _Py_NewReference() can be removed from the public C API (moved to the internal C API). The _Py_ForgetReference() function is only defined if the Py_TRACE_REFS macro is defined. I propose to also remove it from the public C API (move it to the internal C API). In the CPython code base, _Py_NewReference() is used: * to implement the free list optimization * in _PyBytes_Resize() and unicode_resize() (resize_compact() to be precise) * by PyObject_CallFinalizerFromDealloc() to resurrect an object These are corner cases which can be avoided in third party C extension modules. ---------- components: C API messages: 371597 nosy: vstinner priority: normal severity: normal status: open title: [C API] Remove _Py_NewReference() and _Py_ForgetReference() from the public C API versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 16 08:51:47 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Tue, 16 Jun 2020 12:51:47 +0000 Subject: [New-bugs-announce] [issue40990] Make http.server support SSL Message-ID: <1592311907.03.0.0583719503418.issue40990@roundup.psfhosted.org> New submission from Rémi Lapeyre : It's a bit outside of its original scope, but with more and more applications requiring HTTPS it is sometimes needed even when doing simple tests or local development. It's quite easy to get an SSL certificate, either self-signed or using Let's Encrypt, but configuring Nginx or HAProxy just to serve a local directory on localhost is tedious. I think just wrapping the socket in SSLSocket and adding two flags on the command line would be enough? ---------- components: Library (Lib) messages: 371647 nosy: remi.lapeyre priority: normal severity: normal status: open title: Make http.server support SSL type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 16 10:06:27 2020 From: report at bugs.python.org (Brendan Steffens) Date: Tue, 16 Jun 2020 14:06:27 +0000 Subject: [New-bugs-announce] [issue40991] Can't open more than one .py file with IDLE Message-ID: <1592316387.5.0.39234682117.issue40991@roundup.psfhosted.org> New submission from Brendan Steffens : I am using Python 3.7.2, macOS Catalina 10.15.5. When I double-click a .py script from the Finder, it opens it, along with the IDLE shell. When I click a second .py script so I can view both at once, it won't open the second one. ---------- assignee: terry.reedy components: IDLE messages: 371658 nosy: BrendanSteffens, terry.reedy priority: normal severity: normal status: open title: Can't open more than one .py file with IDLE type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 16 10:14:39 2020 From: report at bugs.python.org (Alex Alex) Date: Tue, 16 Jun 2020 14:14:39 +0000 Subject: [New-bugs-announce] [issue40992] Wrong warning in asyncio debug mode Message-ID: <1592316879.27.0.621999678999.issue40992@roundup.psfhosted.org> New submission from Alex Alex : I run the code in the attached example and get a message like: Executing () created at /usr/lib/python3.7/asyncio/futures.py:288> took 2.000 seconds It says that the coroutine ran for 2 seconds, but it actually ran for 5 seconds.
Also if I comment part in qwe function after await I won't get any warning, but should. ---------- components: asyncio files: asyncio-wrong-warn.py messages: 371659 nosy: Alex Alex, asvetlov, yselivanov priority: normal severity: normal status: open title: Wrong warning in asyncio debug mode versions: Python 3.7 Added file: https://bugs.python.org/file49237/asyncio-wrong-warn.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 16 10:58:58 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 16 Jun 2020 14:58:58 +0000 Subject: [New-bugs-announce] [issue40993] Don't run Python and C coverage jobs of Travis CI on pull requests Message-ID: <1592319538.03.0.787243890388.issue40993@roundup.psfhosted.org> New submission from STINNER Victor : Currently, Travis CI runs C coverage and Python coverage jobs on all pull requests. This is a waste of resources: we should only run these jobs on branches like master. Attached PR skips these jobs on pull requests. Not only it's a waste of resources, but it seems like it's not longer possible to merge a PR as soon as Travis CI required jobs pass: a PR can only be merged when *all* Travis CI jobs complete. Problem: while a full test suite run takes around 20 min, coverage jobs take longer than 40 minutes. By the way, the C coverage job fails with timeout, but that's a different issue. ---------- components: Build messages: 371661 nosy: vstinner priority: normal severity: normal status: open title: Don't run Python and C coverage jobs of Travis CI on pull requests versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 16 15:14:55 2020 From: report at bugs.python.org (Sydney Pemberton) Date: Tue, 16 Jun 2020 19:14:55 +0000 Subject: [New-bugs-announce] [issue40994] Very confusing documenation for abc.Collections Message-ID: <1592334895.01.0.222155011801.issue40994@roundup.psfhosted.org> New submission from Sydney Pemberton : I was writing a Jupyter notebook at the time, which I think perfectly illustrated the blind alley this documentation bug led me down before beating me up and stealing my lunch money. I have come to this point in the documentation at least half a dozen times while learning Python and always left confused and with the sense that Python is more complicated than I had thought. Notebook attached. This documentation style violates two principles: - The implied structure of headings and content below it. - Many natural languages do not contain context-sensitive grammar and so using the "respectively" idiom can be very confusing for people who speak English as a second language. 
---------- assignee: docs at python components: Documentation files: Confusing docs.ipynb messages: 371690 nosy: Sydney Pemberton, docs at python priority: normal severity: normal status: open title: Very confusing documenation for abc.Collections versions: Python 3.8 Added file: https://bugs.python.org/file49238/Confusing docs.ipynb _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 16 19:00:12 2020 From: report at bugs.python.org (=?utf-8?q?R=C3=A9mi_Lapeyre?=) Date: Tue, 16 Jun 2020 23:00:12 +0000 Subject: [New-bugs-announce] [issue40995] reprlib.Repr.__init__ should accept arguments Message-ID: <1592348412.27.0.318563143111.issue40995@roundup.psfhosted.org> New submission from R?mi Lapeyre : reprlib.Repr does not accept arguments for the moment so setting its attributes must be done in multiple steps: import reprlib r = reprlib.Repr() r.maxstring = 10 r.maxset = 4 r.repr(...) It would be more user-friendly to be able to do: import reprlib r = reprlib.Repr(maxstring=10, maxset=4) r.repr(...) ---------- components: Library (Lib) messages: 371701 nosy: remi.lapeyre priority: normal severity: normal status: open title: reprlib.Repr.__init__ should accept arguments type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 16 20:19:10 2020 From: report at bugs.python.org (Manuel Jacob) Date: Wed, 17 Jun 2020 00:19:10 +0000 Subject: [New-bugs-announce] [issue40996] urllib should fsdecode percent-encoded parts of file URIs on Unix Message-ID: <1592353150.99.0.187851758639.issue40996@roundup.psfhosted.org> New submission from Manuel Jacob : On Unix, file names are bytes. Python mostly prefers to use unicode for file names. On the Python <-> system boundary, os.fsencode() / os.fsdecode() are used. In URIs, bytes can be percent-encoded. On Unix, most applications pass the percent-decoded bytes in file URIs to the file system unchanged. The remainder of this issue description is about Unix, except for the last paragraph. Pathlib fsencodes the path when making a file URI, roundtripping the bytes e.g. passed as an argument: % python3 -c 'import pathlib, sys; print(pathlib.Path(sys.argv[1]).as_uri())' /tmp/a$(echo -e '\xE4') file:///tmp/a%E4 Example with curl using this URL: % echo 'Hello, World!' > /tmp/a$(echo -e '\xE4') % curl file:///tmp/a%E4 Hello, World! Python 2?s urllib works the same: % python2 -c 'from urllib import urlopen; print(repr(urlopen("file:///tmp/a%E4").read()))' 'Hello, World!\n' However, Python 3?s urllib fails: % python3 -c 'from urllib.request import urlopen; print(repr(urlopen("file:///tmp/a%E4").read()))' Traceback (most recent call last): File "/usr/lib/python3.8/urllib/request.py", line 1507, in open_local_file stats = os.stat(localfile) FileNotFoundError: [Errno 2] No such file or directory: '/tmp/a?' 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen return opener.open(url, data, timeout) File "/usr/lib/python3.8/urllib/request.py", line 525, in open response = self._open(req, data) File "/usr/lib/python3.8/urllib/request.py", line 542, in _open result = self._call_chain(self.handle_open, protocol, protocol + File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain result = func(*args) File "/usr/lib/python3.8/urllib/request.py", line 1485, in file_open return self.open_local_file(req) File "/usr/lib/python3.8/urllib/request.py", line 1524, in open_local_file raise URLError(exp) urllib.error.URLError: urllib.request.url2pathname() is the function converting the path of the file URI to a file name. On Unix, it uses urllib.parse.unquote() with the default settings (UTF-8 encoding and the "replace" error handler). I think that on Unix, the settings from os.fsdecode() should be used, so that it roundtrips with pathlib.Path.as_uri() and so that the percent-decoded bytes are passed to the file system as-is. On Windows, I couldn?t do experiments, but using UTF-8 seems like the right thing (according to https://en.wikipedia.org/wiki/File_URI_scheme#Windows_2). I?m not sure that the "replace" error handler is a good idea. I prefer "errors should never pass silently" from the Zen of Python, but I don?t a have a strong opinion on this. ---------- components: Library (Lib), Unicode messages: 371702 nosy: ezio.melotti, mjacob, vstinner priority: normal severity: normal status: open title: urllib should fsdecode percent-encoded parts of file URIs on Unix type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 02:33:01 2020 From: report at bugs.python.org (Eddie Parker) Date: Wed, 17 Jun 2020 06:33:01 +0000 Subject: [New-bugs-announce] [issue40997] python 3.8.2: datetime.datetime(1969, 1, 1).timestamp() yields OSError Message-ID: <1592375581.16.0.231176585152.issue40997@roundup.psfhosted.org> New submission from Eddie Parker : Running the following yields an unexpected OSError: Invalid argument: Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import datetime >>> datetime.datetime(1000,1,1).timestamp() >>> datetime.datetime(1969,1,1).timestamp() Traceback (most recent call last): File "", line 1, in OSError: [Errno 22] Invalid argument I understand that the time can't yield a valid timestamp, but the exception doesn't really explain that and the documentation doesn't mention OSError as an exception to indicate an invalid date is specified. Ideally a better exception could be used (ValueError?) or the documentation could mention this possibility? Or even better, allow timestamp() to take a parameter for what to return in the case of an invalid timestamp (None?) I mention this because I hit this in some asyncio code which was a nuisance to debug and finding it excepting on a timestamp that had worked before with an OS error baffled me until I got it in a debugger. Thanks as always! 
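For reference, a workaround sketch: a timezone-aware datetime avoids the platform local-time conversion that naive datetimes go through, which (as far as I understand) is what fails on Windows for pre-1970 dates:
```python
from datetime import datetime, timezone

# Aware datetimes compute the timestamp arithmetically against the epoch
# instead of going through the platform's localtime()/mktime().
dt = datetime(1969, 1, 1, tzinfo=timezone.utc)
print(dt.timestamp())  # -31536000.0

# Equivalent manual computation against the Unix epoch:
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print((dt - epoch).total_seconds())  # -31536000.0
```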
---------- components: Library (Lib) messages: 371710 nosy: Eddie Parker priority: normal severity: normal status: open title: python 3.8.2: datetime.datetime(1969,1,1).timestamp() yields OSError type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 03:00:21 2020 From: report at bugs.python.org (Christian Heimes) Date: Wed, 17 Jun 2020 07:00:21 +0000 Subject: [New-bugs-announce] [issue40998] Compiler warnings in ubsan builds Message-ID: <1592377221.62.0.524170790208.issue40998@roundup.psfhosted.org> New submission from Christian Heimes : I'm seeing several compiler warnings in ubsan builds: $ ./configure --with-address-sanitizer --with-undefined-behavior-sanitizer $ make clean $ make Parser/string_parser.c: In function ?decode_unicode_with_escapes?: Parser/string_parser.c:98:17: warning: null destination pointer [-Wformat-overflow=] 98 | sprintf(p, "\\U%08x", chr); | ^~~~~~~~~~~~~~~~~~~~~~~~~~ Parser/string_parser.c:98:17: warning: null destination pointer [-Wformat-overflow=] Parser/string_parser.c:98:17: warning: null destination pointer [-Wformat-overflow=] In function ?assemble_lnotab?, inlined from ?assemble_emit? at Python/compile.c:5697:25, inlined from ?assemble? at Python/compile.c:6036:18: Python/compile.c:5651:19: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=] 5651 | *lnotab++ = k; | ~~~~~~~~~~^~~ Objects/unicodeobject.c: In function ?xmlcharrefreplace?: Objects/unicodeobject.c:849:16: warning: null destination pointer [-Wformat-overflow=] 849 | str += sprintf(str, "&#%d;", PyUnicode_READ(kind, data, i)); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Objects/unicodeobject.c:849:16: warning: null destination pointer [-Wformat-overflow=] Objects/unicodeobject.c:849:16: warning: null destination pointer [-Wformat-overflow=] Python/pylifecycle.c: In function ?Py_FinalizeEx?: Python/pylifecycle.c:1339:25: warning: unused variable ?interp? 
[-Wunused-variable] 1339 | PyInterpreterState *interp = tstate->interp; | ---------- components: Build messages: 371712 nosy: christian.heimes priority: normal severity: normal status: open title: Compiler warnings in ubsan builds type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 03:16:15 2020 From: report at bugs.python.org (Christian Heimes) Date: Wed, 17 Jun 2020 07:16:15 +0000 Subject: [New-bugs-announce] [issue40999] implicit-int-float-conversion warnings in time and math code Message-ID: <1592378175.99.0.301099402499.issue40999@roundup.psfhosted.org> New submission from Christian Heimes : clang 10 with asan and ubsan complains about implicit int to float conversion in time and math related code: $ CC=clang ./configure --with-address-sanitizer --with-undefined-behavior-sanitizer $ make clean $ make -s Python/pytime.c:154:10: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion] if (!_Py_InIntegralTypeRange(time_t, intpart)) { ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ./Include/pymath.h:228:82: note: expanded from macro '_Py_InIntegralTypeRange' #define _Py_InIntegralTypeRange(type, v) (_Py_IntegralTypeMin(type) <= v && v <= _Py_IntegralTypeMax(type)) ~~ ^~~~~~~~~~~~~~~~~~~~~~~~~ ./Include/pymath.h:221:124: note: expanded from macro '_Py_IntegralTypeMax' #define _Py_IntegralTypeMax(type) ((_Py_IntegralTypeSigned(type)) ? (((((type)1 << (sizeof(type)*CHAR_BIT - 2)) - 1) << 1) + 1) : ~(type)0) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~ Python/pytime.c:207:14: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion] if (!_Py_InIntegralTypeRange(time_t, intpart)) { ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ./Include/pymath.h:228:82: note: expanded from macro '_Py_InIntegralTypeRange' #define _Py_InIntegralTypeRange(type, v) (_Py_IntegralTypeMin(type) <= v && v <= _Py_IntegralTypeMax(type)) ~~ ^~~~~~~~~~~~~~~~~~~~~~~~~ ./Include/pymath.h:221:124: note: expanded from macro '_Py_IntegralTypeMax' #define _Py_IntegralTypeMax(type) ((_Py_IntegralTypeSigned(type)) ? (((((type)1 << (sizeof(type)*CHAR_BIT - 2)) - 1) << 1) + 1) : ~(type)0) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~ Python/pytime.c:392:10: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-int-float-conversion] if (!_Py_InIntegralTypeRange(_PyTime_t, d)) { ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ./Include/pymath.h:228:82: note: expanded from macro '_Py_InIntegralTypeRange' #define _Py_InIntegralTypeRange(type, v) (_Py_IntegralTypeMin(type) <= v && v <= _Py_IntegralTypeMax(type)) ~~ ^~~~~~~~~~~~~~~~~~~~~~~~~ ./Include/pymath.h:221:124: note: expanded from macro '_Py_IntegralTypeMax' #define _Py_IntegralTypeMax(type) ((_Py_IntegralTypeSigned(type)) ? (((((type)1 << (sizeof(type)*CHAR_BIT - 2)) - 1) << 1) + 1) : ~(type)0) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~ 3 warnings generated. ./Modules/timemodule.c:116:13: warning: code will never be executed [-Wunreachable-code] PyErr_SetString(PyExc_OverflowError, ^~~~~~~~~~~~~~~ 1 warning generated. 
./Modules/_threadmodule.c:1587:19: warning: implicit conversion from '_PyTime_t' (aka 'long') to 'double' changes value from 9223372036854775 to 9223372036854776 [-Wimplicit-int-float-conversion] timeout_max = (_PyTime_t)PY_TIMEOUT_MAX * 1e-6; ^~~~~~~~~~~~~~~~~~~~~~~~~ ~ 1 warning generated. ---------- messages: 371714 nosy: christian.heimes, mark.dickinson, p-ganssle priority: normal severity: normal status: open title: implicit-int-float-conversion warnings in time and math code type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 05:22:18 2020 From: report at bugs.python.org (E. Paine) Date: Wed, 17 Jun 2020 09:22:18 +0000 Subject: [New-bugs-announce] [issue41000] IDLE should only allow a single instance Message-ID: <1592385738.41.0.899337100123.issue41000@roundup.psfhosted.org> New submission from E. Paine : I propose that IDLE only allows a single instance, but behaves mostly like before (multiple shells, etc.). The main motivation for this issue is to (1) stop the same file being opened more than once and (2) make a tabbed interface easier to implement. Starting with point (1), I believe a file should not be allowed to be opened multiple times, but enforcing this currently in IDLE would be incredibly difficult. Instead, I propose that a socket-server sits on the main instance and any new instances send requests to the main instance (to open a file in a new 'instance'). There would be two layers of file-lists, and the current one still acts as an 'instance' file-list but we also create a master list which controls all of the 'instances': Instance flist Instance flist | | ---- Master flist ---- Secondly, point (2). I am currently in the planning phase of creating an IDLE tabbed interface (based loosely on the code currently found in #9262) but it requires both this issue and #40893 to be pulled before it can work effectively (dragging tabs between windows, etc.). I don't currently have any code to propose, but I don't think it should be *too* difficult to implement (but now I've said that!...). ---------- assignee: terry.reedy components: IDLE messages: 371722 nosy: epaine, taleinat, terry.reedy priority: normal severity: normal status: open title: IDLE should only allow a single instance versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 06:22:48 2020 From: report at bugs.python.org (Christian Heimes) Date: Wed, 17 Jun 2020 10:22:48 +0000 Subject: [New-bugs-announce] [issue41001] Provide wrapper for eventfd Message-ID: <1592389368.35.0.695966020763.issue41001@roundup.psfhosted.org> New submission from Christian Heimes : eventfd is a Linux syscall that returns a file descriptor for event/notify systems. I propose to add a simple eventfd(initval, flags) function that is a wrapper around glibc's eventfd abstraction layer. High-level notify and semaphore primitives can then be implemented in pure Python. See https://man7.org/linux/man-pages/man2/eventfd.2.html for more details. See https://bugs.python.org/issue40485 for a use case.
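Until such a wrapper lands, a rough ctypes sketch of the primitive (assuming Linux with glibc available as "libc.so.6"; the eventual os-level API may well look different):
```python
import ctypes
import os
import sys

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def eventfd(initval=0, flags=0):
    """Thin wrapper around the eventfd(2) syscall via glibc."""
    fd = libc.eventfd(ctypes.c_uint(initval), ctypes.c_int(flags))
    if fd == -1:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return fd

fd = eventfd(0)
os.write(fd, (1).to_bytes(8, sys.byteorder))           # signal: add 1 to the counter
print(int.from_bytes(os.read(fd, 8), sys.byteorder))   # 1: read and reset the counter
os.close(fd)
```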
---------- components: Extension Modules, Library (Lib) messages: 371727 nosy: christian.heimes priority: normal severity: normal status: open title: Provide wrapper for eventfd versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 08:08:16 2020 From: report at bugs.python.org (Bruce Merry) Date: Wed, 17 Jun 2020 12:08:16 +0000 Subject: [New-bugs-announce] [issue41002] HTTPResponse.read with amt is slow Message-ID: <1592395696.11.0.156333341881.issue41002@roundup.psfhosted.org> New submission from Bruce Merry : I've run into this on 3.8, but the code on Git master doesn't look significantly different so I assume it still applies. I'm happy to work on a PR for this. When http.client.HTTPResponse.read is called with a specific amount to read, it goes down this code path: ``` if amt is not None: # Amount is given, implement using readinto b = bytearray(amt) n = self.readinto(b) return memoryview(b)[:n].tobytes() ``` That's pretty inefficient, because - `bytearray(amt)` will first zero-fill some memory - `tobytes()` will make an extra copy of this memory - if amt is big enough, it'll cause the temporary memory to be allocated from the kernel, which will *also* zero-fill the pages for security. A better approach would be to use the read method of the underlying fp. I have a micro-benchmark (that I'll attach) showing that for a 1GB body and reading the whole body with or without the amount being explicit, performance is reduced from 3GB/s to 1GB/s. For some unknown reason the requests library likes to read the body in 10KB chunks even if the user has requested the entire body, so this will help here (although the gains probably won't be as big because 10KB is really too small to amortise all the accounting overhead). Output from my benchmark, run against a 1GB file on localhost: httpclient-read: 3019.0 ? 63.8 MB/s httpclient-read-length: 1050.3 ? 4.8 MB/s httpclient-read-raw: 3150.3 ? 5.3 MB/s socket-read: 3134.4 ? 7.9 MB/s ---------- components: Library (Lib) files: httpbench-simple.py messages: 371732 nosy: bmerry priority: normal severity: normal status: open title: HTTPResponse.read with amt is slow versions: Python 3.8 Added file: https://bugs.python.org/file49239/httpbench-simple.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 08:45:57 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 17 Jun 2020 12:45:57 +0000 Subject: [New-bugs-announce] [issue41003] test_copyreg: importing numpy changes warnings filters Message-ID: <1592397957.29.0.726353926705.issue41003@roundup.psfhosted.org> New submission from STINNER Victor : This issue is similar to bpo-40055: running test_copyreg fails with ENV_CHANGED if numpy is available. Example: $ ./python -m venv env $ env/bin/python -m pip install numpy $ env/bin/python -m test --fail-env-changed -v test_copyreg -m xxxx == CPython 3.9.0b3+ (heads/getpath_platlibdir39:11462e9847, Jun 11 2020, 17:49:13) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] (...) Warning -- warnings.filters was modified by test_copyreg Before: (140498488250848, [], []) After: (140498488250848, [], [('ignore', re.compile('numpy.ndarray size changed', re.IGNORECASE), , None, 0), ('ignore', re.compile('numpy.ufunc size changed', re.IGNORECASE), , None, 0), ('ignore', re.compile('numpy.dtype size changed', re.IGNORECASE), , None, 0), ('always', None, , None, 0)]) (...) 
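One way a test (or a helper in regrtest) could shield the environment from such imports is to snapshot and restore the filters around the import; a sketch:
```python
import warnings

def import_without_filter_changes(name):
    # warnings.catch_warnings() saves the filter list on entry and restores
    # it on exit, so whatever filterwarnings() calls the module makes during
    # import do not leak into the test environment.
    with warnings.catch_warnings():
        module = __import__(name)
    return module

numpy = import_without_filter_changes("numpy")
print(len(warnings.filters))  # unchanged by the import
```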
---------- components: Tests messages: 371736 nosy: vstinner priority: normal severity: normal status: open title: test_copyreg: importing numpy changes warnings filters versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 09:11:52 2020 From: report at bugs.python.org (martin w) Date: Wed, 17 Jun 2020 13:11:52 +0000 Subject: [New-bugs-announce] [issue41004] Hash collisions in IPv4Interface and IPv6Interface Message-ID: <1592399512.32.0.283218763722.issue41004@roundup.psfhosted.org> New submission from martin w : In the ipaddress library there exists two classes IPv4Interface, and IPv6Interface. These classes' hash functions will always return 32 and 64 respectively. If IPv4Interface or IPv6Interface objects then are put in a dictionary, on for example a server storing IPs, this will cause hash collisions, which in turn can lead to DOS. The root of this is on line 1421 and 2095. On both lines, self._ip and self.network.network_address will both be same, and when xor is applied they will cancel eachother out, leaving return self._prefixlen . Since self._prefixlen is a constant, 32 and 64 respectively, this will lead to a constant hash. The fix is trivial, on line 1421, change to: return hash((self._ip, self._prefixlen, int(self.network.network_address))) and on line 2095, change to: return hash((self._ip, self._prefixlen, int(self.network.network_address))) ---------- components: Library (Lib) messages: 371738 nosy: nnewram priority: normal severity: normal status: open title: Hash collisions in IPv4Interface and IPv6Interface versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 09:20:54 2020 From: report at bugs.python.org (SBC King) Date: Wed, 17 Jun 2020 13:20:54 +0000 Subject: [New-bugs-announce] [issue41005] Permission denied: 'xdg-settings' when executing 'jupyter notebook' from command line Message-ID: <1592400054.74.0.017392691783.issue41005@roundup.psfhosted.org> New submission from SBC King : error was caused by the permission problem with 'xdg-settings'. I checked that Google Chrome is my default browser. I tried the following command by changing the c.NotebookApp.allow_root to True in the jupyter_notebook_config.py file, and it can successfully open jupyter notebook in the browser Safari: sudo jupyter notebook ---------- messages: 371739 nosy: SBC King priority: normal severity: normal status: open title: Permission denied: 'xdg-settings' when executing 'jupyter notebook' from command line type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 09:58:21 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 17 Jun 2020 13:58:21 +0000 Subject: [New-bugs-announce] [issue41006] Reduce number of modules imported by runpy Message-ID: <1592402301.15.0.428746435956.issue41006@roundup.psfhosted.org> New submission from STINNER Victor : Currently, the runpy module imports many modules. runpy is used by "python3 -m module". I propose to attempt to reduce the number of imports to reduce Python startup time. 
With my local changes, I reduce Python startup time from 24 ms to 18 ms: Mean +- std dev: [ref] 24.3 ms +- 0.2 ms -> [patch] 18.0 ms +- 0.3 ms: 1.35x faster (-26%) Timing measured by: ./python -m venv env python -m pyperf command -v -o patch.json -- env/bin/python -m empty Currently, runpy imports +21 modules: * ./python mod.py: Total 33 * ./python -m mod: Total 54 (+21) Example with attached mod.py: $ ./python -m mod ['__main__', '_abc', '_codecs', '_collections', '_collections_abc', '_frozen_importlib', '_frozen_importlib_external', '_functools', '_heapq', '_imp', '_io', '_locale', '_operator', '_signal', '_sitebuiltins', '_sre', '_stat', '_thread', '_warnings', '_weakref', '_weakrefset', 'abc', 'builtins', 'codecs', 'collections', 'collections.abc', 'contextlib', 'copyreg', 'encodings', 'encodings.aliases', 'encodings.ascii', 'encodings.latin_1', 'encodings.utf_8', 'enum', 'functools', 'genericpath', 'heapq', 'importlib', 'importlib._bootstrap', 'importlib._bootstrap_external', 'importlib.abc', 'importlib.machinery', 'importlib.util', 'io', 'itertools', 'keyword', 'marshal', 'operator', 'os', 'os.path', 'pkgutil', 'posix', 'posixpath', 're', 'reprlib', 'runpy', 'site', 'sre_compile', 'sre_constants', 'sre_parse', 'stat', 'sys', 'time', 'types', 'typing', 'typing.io', 'typing.re', 'warnings', 'weakref', 'zipimport'] Total 70 ---------- components: Library (Lib) files: mod.py messages: 371741 nosy: vstinner priority: normal severity: normal status: open title: Reduce number of modules imported by runpy versions: Python 3.10 Added file: https://bugs.python.org/file49240/mod.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 10:33:20 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 17 Jun 2020 14:33:20 +0000 Subject: [New-bugs-announce] [issue41007] test_importlib logs ResourceWarning: test_path.CommonTests.test_importing_module_as_side_effect() Message-ID: <1592404400.93.0.826536668116.issue41007@roundup.psfhosted.org> New submission from STINNER Victor : It seems like the issue was introduced in bpo-39791 by: commit 843c27765652e2322011fb3e5d88f4837de38c06 Author: Jason R. Coombs Date: Sun Jun 7 21:00:51 2020 -0400 bpo-39791 native hooks for importlib.resources.files (GH-20576) The warning: $ ./python -X tracemalloc=20 -m test test_importlib -m test.test_importlib.test_path.CommonTests.test_importing_module_as_side_effect -v == CPython 3.10.0a0 (heads/importlib_typing:d1b0d052cf, Jun 17 2020, 16:09:52) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] == Linux-5.6.18-300.fc32.x86_64-x86_64-with-glibc2.31 little-endian == cwd: /home/vstinner/python/master/build/test_python_67972 == CPU count: 8 == encodings: locale=UTF-8, FS=utf-8 0:00:00 load avg: 0.44 Run tests sequentially 0:00:00 load avg: 0.44 [1/1] test_importlib test_importing_module_as_side_effect (test.test_importlib.test_path.CommonTests) ... 
/home/vstinner/python/master/Lib/contextlib.py:124: ResourceWarning: unclosed file <_io.BufferedReader name='/home/vstinner/python/master/Lib/test/test_importlib/data01/utf-8.file'> next(self.gen) Object allocated at (most recent call last): File "/home/vstinner/python/master/Lib/unittest/runner.py", lineno 176 test(result) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 84 return self.run(*args, **kwds) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 122 test(result) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 84 return self.run(*args, **kwds) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 122 test(result) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 84 return self.run(*args, **kwds) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 122 test(result) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 84 return self.run(*args, **kwds) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 122 test(result) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 84 return self.run(*args, **kwds) File "/home/vstinner/python/master/Lib/unittest/suite.py", lineno 122 test(result) File "/home/vstinner/python/master/Lib/unittest/case.py", lineno 653 return self.run(*args, **kwds) File "/home/vstinner/python/master/Lib/unittest/case.py", lineno 593 self._callTestMethod(testMethod) File "/home/vstinner/python/master/Lib/unittest/case.py", lineno 550 method() File "/home/vstinner/python/master/Lib/test/test_importlib/util.py", lineno 509 self.execute(data01.__name__, 'utf-8.file') File "/home/vstinner/python/master/Lib/test/test_importlib/test_path.py", lineno 10 with resources.path(package, path): File "/home/vstinner/python/master/Lib/contextlib.py", lineno 117 return next(self.gen) File "/home/vstinner/python/master/Lib/importlib/resources.py", lineno 118 opener_reader = reader.open_resource(norm_resource) File "/home/vstinner/python/master/Lib/importlib/abc.py", lineno 465 return self.files().joinpath(resource).open('rb') File "/home/vstinner/python/master/Lib/pathlib.py", lineno 1238 return io.open(self, mode, buffering, encoding, errors, newline, ok ---------------------------------------------------------------------- Ran 1 test in 0.134s OK == Tests result: SUCCESS == 1 test OK. 
Total duration: 6.2 sec Tests result: SUCCESS ---------- components: Tests messages: 371743 nosy: vstinner priority: normal severity: normal status: open title: test_importlib logs ResourceWarning: test_path.CommonTests.test_importing_module_as_side_effect() versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 10:59:25 2020 From: report at bugs.python.org (David Adam) Date: Wed, 17 Jun 2020 14:59:25 +0000 Subject: [New-bugs-announce] [issue41008] multiprocessing.Connection.poll raises BrokenPipeError on Windows Message-ID: <1592405965.75.0.513458950808.issue41008@roundup.psfhosted.org> New submission from David Adam : On Windows 10 (1909, build 18363.900) in 3.7.7 and 3.9.0b3, poll() on a multiprocessing.Connection object can produce an exception: -- import multiprocessing def run(output_socket): for i in range(10): output_socket.send(i) output_socket.close() def main(): recv, send = multiprocessing.Pipe(duplex=False) process = multiprocessing.Process(target=run, args=(send,)) process.start() send.close() while True: if not process._closed: if recv.poll(): try: print(recv.recv()) except EOFError: process.join() break if __name__ == "__main__": main() -- On Linux/macOS this prints 0-9 and exits successfully, but on Windows produces a backtrace as follows: File "mptest.py", line 17, in main if recv.poll(): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.179.0_x64__qbz5n2kfra8p0\lib\multiprocessing\connection.py", line 262, in poll return self._poll(timeout) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.179.0_x64__qbz5n2kfra8p0\lib\multiprocessing\connection.py", line 333, in _poll _winapi.PeekNamedPipe(self._handle)[0] != 0): BrokenPipeError: [WinError 109] The pipe has been ended ---------- messages: 371748 nosy: zanchey priority: normal severity: normal status: open title: multiprocessing.Connection.poll raises BrokenPipeError on Windows type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 11:20:43 2020 From: report at bugs.python.org (Christian Heimes) Date: Wed, 17 Jun 2020 15:20:43 +0000 Subject: [New-bugs-announce] [issue41009] @support.requires_*_version broken for classes Message-ID: <1592407243.92.0.831664912584.issue41009@roundup.psfhosted.org> New submission from Christian Heimes : The decorators requires_linux_version, requires_freebsd_version, and requires_mac_ver don't work as class decorators. Decorated classes are ignored completely and not used in tests. The problem affects a couple of tests in test_os and maybe more cases. ---------- components: Tests messages: 371752 nosy: christian.heimes priority: high severity: normal status: open title: @support.requires_*_version broken for classes versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 12:06:53 2020 From: report at bugs.python.org (patrick totzke) Date: Wed, 17 Jun 2020 16:06:53 +0000 Subject: [New-bugs-announce] [issue41010] email.message.EmailMessage.get_body Message-ID: <1592410013.12.0.0160903099449.issue41010@roundup.psfhosted.org> New submission from patrick totzke : I am trying to use EmailMessage.get_body() on the attached spam email. 
Although that message may be malformed, I believe that this method should fail gracefully. To reproduce ``` with open('msg', 'rb') as f: m = email.message_from_binary_file(f, _class=email.message.EmailMessage) m.get_body() --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) in ----> 1 m.get_body() /usr/lib/python3.8/email/message.py in get_body(self, preferencelist) 1016 best_prio = len(preferencelist) 1017 body = None -> 1018 for prio, part in self._find_body(self, preferencelist): 1019 if prio < best_prio: 1020 best_prio = prio /usr/lib/python3.8/email/message.py in _find_body(self, part, preferencelist) 987 if subtype != 'related': 988 for subpart in part.iter_parts(): --> 989 yield from self._find_body(subpart, preferencelist) 990 return 991 if 'related' in preferencelist: /usr/lib/python3.8/email/message.py in _find_body(self, part, preferencelist) 987 if subtype != 'related': 988 for subpart in part.iter_parts(): --> 989 yield from self._find_body(subpart, preferencelist) 990 return 991 if 'related' in preferencelist: /usr/lib/python3.8/email/message.py in _find_body(self, part, preferencelist) 976 977 def _find_body(self, part, preferencelist): --> 978 if part.is_attachment(): 979 return 980 maintype, subtype = part.get_content_type().split('/') AttributeError: 'str' object has no attribute 'is_attachment' ``` I am on Python 3.8.3 on debian testing. ---------- components: Library (Lib) files: msg messages: 371755 nosy: patrick totzke priority: normal severity: normal status: open title: email.message.EmailMessage.get_body type: crash versions: Python 3.8 Added file: https://bugs.python.org/file49241/msg _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 17:25:38 2020 From: report at bugs.python.org (Brett Cannon) Date: Wed, 17 Jun 2020 21:25:38 +0000 Subject: [New-bugs-announce] [issue41011] [venv] record which executable and command were used to create a virtual environment Message-ID: <1592429138.01.0.105154043549.issue41011@roundup.psfhosted.org> New submission from Brett Cannon : When a virtual environment is created, the resulting pyvenv.cfg specifies the directory which contained the Python executable and the version of Python (see https://github.com/python/cpython/blob/master/Lib/venv/__init__.py#L147). Unfortunately that may not be enough to work backwards to which binary was used to create the virtual environment. My idea is to add an `executable` and `command` key to pyvenv.cfg which record the Python executable name and the command used to construct the virtual environment, respectively. The former would disambiguate which exact Python interpreter was used, and the `command` key could be used by e.g. virtualenv to record what was used to construct the virtual environment. That potentially could be used to make recreating a broken virtual environment easier. 
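A hedged sketch of what the recorded values could look like; the `executable` and `command` keys are only the proposal here, not an existing format, and the surrounding keys are just the usual pyvenv.cfg content:
```python
import os
import shlex
import sys

# Values a hypothetical environment creator could record:
executable = sys.executable
command = " ".join(shlex.quote(arg) for arg in [sys.executable, *sys.argv])

pyvenv_cfg = "\n".join([
    f"home = {os.path.dirname(sys.executable)}",
    "include-system-site-packages = false",
    f"version = {sys.version.split()[0]}",
    f"executable = {executable}",   # proposed: disambiguates which binary was used
    f"command = {command}",         # proposed: e.g. '/usr/bin/python3 -m venv env'
])
print(pyvenv_cfg)
```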
---------- components: Library (Lib) messages: 371775 nosy: brett.cannon, vinay.sajip priority: normal severity: normal status: open title: [venv] record which executable and command were used to create a virtual environment type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 18:39:43 2020 From: report at bugs.python.org (Manuel Jacob) Date: Wed, 17 Jun 2020 22:39:43 +0000 Subject: [New-bugs-announce] [issue41012] Some code comments refer to removed initfsencoding() Message-ID: <1592433583.2.0.837325823711.issue41012@roundup.psfhosted.org> New submission from Manuel Jacob : Some code comments refer to initfsencoding(), which was however removed after Python 3.7. ---------- messages: 371779 nosy: mjacob priority: normal severity: normal status: open title: Some code comments refer to removed initfsencoding() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 18:52:12 2020 From: report at bugs.python.org (STINNER Victor) Date: Wed, 17 Jun 2020 22:52:12 +0000 Subject: [New-bugs-announce] [issue41013] test_os.test_memfd_create() fails on AMD64 FreeBSD Shared 3.x Message-ID: <1592434332.74.0.304039490213.issue41013@roundup.psfhosted.org> New submission from STINNER Victor : https://buildbot.python.org/all/#/builders/152/builds/1024 ... test_makedir (test.test_os.MakedirTests) ... ok test_mode (test.test_os.MakedirTests) ... ok Timeout (0:25:00)! Thread 0x0000000800b54000 (most recent call first): File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_os.py", line 3520 in test_memfd_create ... ---------- components: Tests messages: 371780 nosy: vstinner priority: normal severity: normal status: open title: test_os.test_memfd_create() fails on AMD64 FreeBSD Shared 3.x versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 19:26:07 2020 From: report at bugs.python.org (Sree Vaddi) Date: Wed, 17 Jun 2020 23:26:07 +0000 Subject: [New-bugs-announce] [issue41014] sqlite3_trace deprecated Message-ID: <1592436367.99.0.294210468626.issue41014@roundup.psfhosted.org> New submission from Sree Vaddi : details in the attached python.compile.log ---------- components: macOS files: python.compile.log messages: 371784 nosy: ned.deily, ronaldoussoren, svaddi priority: normal severity: normal status: open title: sqlite3_trace deprecated type: enhancement Added file: https://bugs.python.org/file49242/python.compile.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 19:27:44 2020 From: report at bugs.python.org (Sree Vaddi) Date: Wed, 17 Jun 2020 23:27:44 +0000 Subject: [New-bugs-announce] [issue41015] warning: 'sqlite3_enable_shared_cache' is deprecated Message-ID: <1592436464.21.0.919729812752.issue41015@roundup.psfhosted.org> New submission from Sree Vaddi : details in the attached python.compile.log ---------- components: macOS files: python.compile.log messages: 371785 nosy: ned.deily, ronaldoussoren, svaddi priority: normal severity: normal status: open title: warning: 'sqlite3_enable_shared_cache' is deprecated type: enhancement Added file: https://bugs.python.org/file49243/python.compile.log _______________________________________ Python tracker _______________________________________ From 
report at bugs.python.org Wed Jun 17 19:29:09 2020 From: report at bugs.python.org (Sree Vaddi) Date: Wed, 17 Jun 2020 23:29:09 +0000 Subject: [New-bugs-announce] [issue41016] warning: 'Tk_GetNumMainWindows' is deprecated Message-ID: <1592436549.98.0.251888900953.issue41016@roundup.psfhosted.org> Change by Sree Vaddi : ---------- components: macOS files: python.compile.log nosy: ned.deily, ronaldoussoren, svaddi priority: normal severity: normal status: open title: warning: 'Tk_GetNumMainWindows' is deprecated type: enhancement Added file: https://bugs.python.org/file49244/python.compile.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 19:29:34 2020 From: report at bugs.python.org (Sree Vaddi) Date: Wed, 17 Jun 2020 23:29:34 +0000 Subject: [New-bugs-announce] [issue41017] warning: 'Tk_Init' is deprecated: first deprecated Message-ID: <1592436574.64.0.825505741473.issue41017@roundup.psfhosted.org> Change by Sree Vaddi : ---------- components: macOS files: python.compile.log nosy: ned.deily, ronaldoussoren, svaddi priority: normal severity: normal status: open title: warning: 'Tk_Init' is deprecated: first deprecated Added file: https://bugs.python.org/file49245/python.compile.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 19:30:13 2020 From: report at bugs.python.org (Sree Vaddi) Date: Wed, 17 Jun 2020 23:30:13 +0000 Subject: [New-bugs-announce] [issue41018] warning: 'Tk_MainWindow' is deprecated: first deprecated Message-ID: <1592436613.04.0.537099789756.issue41018@roundup.psfhosted.org> Change by Sree Vaddi : ---------- components: macOS files: python.compile.log nosy: ned.deily, ronaldoussoren, svaddi priority: normal severity: normal status: open title: warning: 'Tk_MainWindow' is deprecated: first deprecated Added file: https://bugs.python.org/file49246/python.compile.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 19:31:30 2020 From: report at bugs.python.org (Sree Vaddi) Date: Wed, 17 Jun 2020 23:31:30 +0000 Subject: [New-bugs-announce] [issue41019] The necessary bits to build these optional modules were not found: Message-ID: <1592436690.4.0.900808103671.issue41019@roundup.psfhosted.org> Change by Sree Vaddi : ---------- components: macOS files: python.compile.log nosy: ned.deily, ronaldoussoren, svaddi priority: normal severity: normal status: open title: The necessary bits to build these optional modules were not found: Added file: https://bugs.python.org/file49247/python.compile.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 17 19:31:58 2020 From: report at bugs.python.org (Sree Vaddi) Date: Wed, 17 Jun 2020 23:31:58 +0000 Subject: [New-bugs-announce] [issue41020] Could not build the ssl module! Message-ID: <1592436718.73.0.150168220835.issue41020@roundup.psfhosted.org> Change by Sree Vaddi : ---------- components: macOS files: python.compile.log nosy: ned.deily, ronaldoussoren, svaddi priority: normal severity: normal status: open title: Could not build the ssl module! 
Added file: https://bugs.python.org/file49248/python.compile.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 18 05:26:16 2020 From: report at bugs.python.org (Aravindhan) Date: Thu, 18 Jun 2020 09:26:16 +0000 Subject: [New-bugs-announce] [issue41021] Ctype callback with Structures crashes on python 3.8 on windows. Message-ID: <1592472376.78.0.985098642017.issue41021@roundup.psfhosted.org> New submission from Aravindhan : Python Process crashes with unauthorised memory access when the C++ DLL callback has arguments as structures. Happens in python 3.8. python 3.7 works fine with same source code. Initial investigations revels that the structure is called back as a pointer instead of plain python object. Callbacks with Primitive types are successful. This might be something to do with vector calling in python 3.8? >>>Replicate: >>>A c++ DLL with "subscribe_cb" function and a callback "listen_cb"; where FILETIME is windows FILETIME structure (it can be any structure for that matter) extern "C" CBFUNC(void, listen_cb)(int value, FILETIME ts ); extern "C" EXFUNC(void) subscribe_cb(listen_cb); EXFUNC(void) subscribe_cb(listen_cb cb) { int i = 0; while (true) { FILETIME systime; GetSystemTimeAsFileTime(&systime); i++; Sleep(1000); cb(i, systime); } } >>>Python client for the dll class FT(Structure): _fields_ = [("dwLowDateTime", c_ulong, 32), ("dwHighDateTime", c_ulong, 32)] @WINFUNCTYPE(c_void_p, c_int, FT) def cb(val, ft): print(f"callback {val} {ft.dwLowDateTime} {ft.dwHighDateTime}") lib = WinDLL(r"C:\Temp\CBsimulate\CbClient\CbClient\Debug\DummyCb.dll") lib.subscribe_cb(cb) while 1: sleep(5) ---------- components: ctypes messages: 371796 nosy: amaury.forgeotdarc, belopolsky, itsgk92, meador.inge priority: normal severity: normal status: open title: Ctype callback with Structures crashes on python 3.8 on windows. type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 18 05:38:01 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Thu, 18 Jun 2020 09:38:01 +0000 Subject: [New-bugs-announce] [issue41022] AST sum types is unidentifiable after ast.Constant became a base class Message-ID: <1592473081.05.0.347184745832.issue41022@roundup.psfhosted.org> New submission from Batuhan Taskaya : Previously (before ast.Constant became a base class for deprecated node classes), Sum types were identifiable with this rule; len(node_class.__subclasses__()) > 0 Now, this is also valid for the ast.Constant itself because of ast.Str, ast.Num etc. bases it. The bad side is, it makes (complex) sum classes unidentifiable. 
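A small demonstration of the problem (the exact set of deprecated aliases may vary by version):
```python
import ast

def looks_like_sum_type(node_class):
    # Old heuristic: a node class with subclasses is a sum type (e.g. ast.expr).
    return len(node_class.__subclasses__()) > 0

print(looks_like_sum_type(ast.expr))      # True, as expected
print(looks_like_sum_type(ast.Constant))  # now also True: the deprecated
                                          # ast.Str, ast.Num, ... subclass it
print([cls.__name__ for cls in ast.Constant.__subclasses__()])
```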
---------- components: Library (Lib) messages: 371797 nosy: BTaskaya, pablogsal, serhiy.storchaka priority: normal severity: normal status: open title: AST sum types is unidentifiable after ast.Constant became a base class _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 18 05:45:34 2020 From: report at bugs.python.org (Jay Patel) Date: Thu, 18 Jun 2020 09:45:34 +0000 Subject: [New-bugs-announce] [issue41023] smtplib does not handle Unicode characters Message-ID: <1592473534.66.0.101889528587.issue41023@roundup.psfhosted.org> New submission from Jay Patel : According to the user requirements, I need to send an email, which is provided as a raw email, i.e., the contents of email are provided in form of headers. To accomplish this I am using the methods provided in the "send_rawemail_demo.py" file (attached below). The smtplib library works fine when providing only 'ascii' characters in the 'raw_email' variable. But, when I provide any Unicode characters either in the Subject or Body of the email, then the sendmail method of the smtplib library fails with the following message: UnicodeEncodeError 'ascii' codec can't encode characters in position 123-124: ordinal not in range(128) I tried providing the mail_options=["SMTPUTF-8"] in the sendmail method (On line no. 72 in the send_rawemail_demo.py file), but then it fails (even for the 'ascii' characters) with the exception as SMTPSenderRefused. I have faced the same issue on Python 3.6. The sendmail method of the SMTP class encodes the message using 'ascii' as: if isinstance(msg, str): msg = _fix_eols(msg).encode('ascii') The code works properly for Python 2 as the smtplib library for Python 2 does not have the above line and hence it allows Unicode characters in the Body and the Subject. ---------- components: email files: send_rawemail_demo.py messages: 371801 nosy: barry, jpatel, r.david.murray priority: normal severity: normal status: open title: smtplib does not handle Unicode characters type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file49249/send_rawemail_demo.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 18 10:02:27 2020 From: report at bugs.python.org (vincent-ferotin) Date: Thu, 18 Jun 2020 14:02:27 +0000 Subject: [New-bugs-announce] [issue41024] doc: Explicitly mention use of 'enum.Enum' as a valid container for 'choices' argument of 'argparse.ArgumentParser.add_argument' Message-ID: <1592488947.97.0.69594351568.issue41024@roundup.psfhosted.org> New submission from vincent-ferotin : It is currently not obvious, reading :mod:`argparse` documentation, that :meth:`argparse.ArgumentParser.add_argument` could accept as 'choices' parameter an :class:`enum.Enum`. However, it seems (at least to me) that this 'choices' parameter [1] is the perfect candidat of enums usage (see [2]). So I suggest here to explicitly mention and illustrate such usage in current Python documentation. As far I as could test, such usage works at least with Python 3.5, 3.7, and 3.8. A small patch on 'master' branches is in progress, and should shortly be available as a GitHub pull-request. 
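For illustration, the kind of example the documentation could show (a sketch; the __str__ override is only there to keep the generated help and usage strings readable):
```python
import argparse
import enum

class Fruit(enum.Enum):
    apple = "apple"
    banana = "banana"

    def __str__(self):
        return self.name

parser = argparse.ArgumentParser()
# type=Fruit converts the command-line string into a member (the member
# values are the accepted strings); choices=Fruit restricts the valid values.
parser.add_argument("--fruit", type=Fruit, choices=Fruit, default=Fruit.apple)

args = parser.parse_args(["--fruit", "banana"])
print(args.fruit)        # banana
print(type(args.fruit))  # <enum 'Fruit'>
```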
[1] https://docs.python.org/3/library/argparse.html#choices [2] PEP 435: https://www.python.org/dev/peps/pep-0435/#motivation ---------- assignee: docs at python components: Documentation messages: 371812 nosy: Vincent F?rotin, docs at python priority: normal severity: normal status: open title: doc: Explicitly mention use of 'enum.Enum' as a valid container for 'choices' argument of 'argparse.ArgumentParser.add_argument' type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 18 10:26:51 2020 From: report at bugs.python.org (Paul Ganssle) Date: Thu, 18 Jun 2020 14:26:51 +0000 Subject: [New-bugs-announce] [issue41025] C implementation of ZoneInfo cannot be subclassed Message-ID: <1592490411.18.0.433446521396.issue41025@roundup.psfhosted.org> New submission from Paul Ganssle : In the C implementation of zoneinfo.ZoneInfo, __init_subclass__ is not declared as a classmethod, which prevents it from being subclassed. This was not noticed because the tests for ZoneInfo subclasses in C are actually testing zoneinfo.ZoneInfo, not a subclass, due to a mistake in the inheritance tree: https://github.com/python/cpython/blob/8f192d12af82c4dc40730bf59814f6a68f68f950/Lib/test/test_zoneinfo/test_zoneinfo.py#L465-L487 Originally reported on the backport by S?bastien Eustace: https://github.com/pganssle/zoneinfo/issues/82 The fix in the backport is here: https://github.com/pganssle/zoneinfo/pull/83 ---------- assignee: p-ganssle components: Library (Lib) messages: 371817 nosy: p-ganssle priority: high severity: normal stage: needs patch status: open title: C implementation of ZoneInfo cannot be subclassed versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 18 12:45:48 2020 From: report at bugs.python.org (Laurence) Date: Thu, 18 Jun 2020 16:45:48 +0000 Subject: [New-bugs-announce] [issue41026] mailbox does not support new Path object Message-ID: <1592498748.35.0.391421735484.issue41026@roundup.psfhosted.org> New submission from Laurence : The mailbox library, in particular the Mailbox class I'm using, does not support the new Path object requiring a clumsy `mbx = Maildir(str(some_path_obj))` to use with a Path instance. It currently blows up if passed a Path directly (does not support startswith) - perhaps a simple solution is to coerce whatever is passed into a string inside `__init__`? Could this support be added? I feel that strings representing paths should be discouraged as a general principal now we have a truly portable object to represent paths, and supporting Path in all places it makes logical sense (without breaking backwards compatibility, if implemented as I suggest with coercion by `str(...)` inside the module) in the core library seems like a good thing to me. 
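A sketch of both the current workaround and the suggested coercion (os.fspath() is the usual way to accept path-like objects; whether mailbox would adopt exactly this is an open question):
```python
import mailbox
import os
import tempfile
from pathlib import Path

maildir_path = Path(tempfile.mkdtemp()) / "Maildir"

# Current workaround: coerce the Path explicitly at the call site.
mbx = mailbox.Maildir(str(maildir_path), create=True)

# The suggested fix would do this coercion inside Mailbox.__init__ itself,
# roughly like this (a sketch, not the current implementation):
def _coerce(path):
    return os.path.abspath(os.path.expanduser(os.fspath(path)))

print(_coerce(maildir_path))
```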
---------- components: Library (Lib) messages: 371823 nosy: LimaAlphaHotel priority: normal severity: normal status: open title: mailbox does not support new Path object type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 18 13:38:27 2020 From: report at bugs.python.org (mbarbera) Date: Thu, 18 Jun 2020 17:38:27 +0000 Subject: [New-bugs-announce] [issue41027] get_version() fails to return gcc version for gcc-7 Message-ID: <1592501907.65.0.64967213393.issue41027@roundup.psfhosted.org> New submission from mbarbera : In Lib/distutils/cygwinccompiler.py, get_version() tries to parse the output of shell commands to check for the gcc version. gcc-7 has changed the behavior of the -dumpversion flag to only show the major version number. This prevents the regex string RE_VERSION from parsing the version number from the shell output. The issue was encountered when trying to install a third-party module with python3.6 on Ubuntu 18.04. ---------- components: Distutils messages: 371830 nosy: dstufft, eric.araujo, mbarbera priority: normal severity: normal status: open title: get_version() fails to return gcc version for gcc-7 type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 18 17:35:33 2020 From: report at bugs.python.org (Julien Palard) Date: Thu, 18 Jun 2020 21:35:33 +0000 Subject: [New-bugs-announce] [issue41028] Move docs.python.org language and version switcher out of cpython Message-ID: <1592516133.76.0.0424010515361.issue41028@roundup.psfhosted.org> New submission from Julien Palard : This is to track [1] from the cpython point of view, mostly just to remove the switchers as they are now handled by docsbuild-scripts. See: https://mail.python.org/pipermail/doc-sig/2020-June/004200.html [1]: https://github.com/python/docsbuild-scripts/issues/90 ---------- assignee: mdk components: Documentation messages: 371839 nosy: mdk priority: normal severity: normal status: open title: Move docs.python.org language and version switcher out of cpython _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 01:05:21 2020 From: report at bugs.python.org (Suganya Vijayaraghavan) Date: Fri, 19 Jun 2020 05:05:21 +0000 Subject: [New-bugs-announce] [issue41029] parallel sftp from local to remote file creates empty directory in home path Message-ID: <1592543121.98.0.114702153703.issue41029@roundup.psfhosted.org> New submission from Suganya Vijayaraghavan : I was using ParallelSSHClient.copy_file. The code was like below: greenlets = client.copy_file('/tmp/logs', '/home/sugan/remote_copy/', recurse=True) From /tmp/logs, I'm copying to my home path (/home/sugan) under the name 'remote_copy'. The copy itself works as expected, but when the destination path starts with '/', an empty directory named after the first component of that path (in my case 'home') is created in my home path (/home/sugan). output: -rw-r--r-- 1 sugan sugan 420 Jun 19 00:15 sftp.py drwxr-xr-x 3 sugan sugan 21 Jun 19 00:17 home pwd: /home/sugan Now consider that my script is changed to: greenlets = client.copy_file('/tmp/logs', '/opt/user1/remote_copy/', recurse=True) My home path is /home/sugan but I'm copying to /opt/user1. When I executed the script, it created an empty directory 'opt' in my home path (/home/sugan).
output: -rw-r--r-- 1 sugan sugan 420 Jun 19 00:19 sftp.py drwxr-xr-x 3 sugan sugan 21 Jun 19 00:22 opt pwd: /home/sugan So whatever destination path is given, if the path starts with '/' then the first directory of that path gets created in the home path. If the same directory already exists in the home path then it is ignored. And if the destination path is a relative path from home then parallelSSHClient.copy_file works as expected. My concern is that we generally use absolute paths for everything... so each time an empty directory gets created in home. ---------- components: Library (Lib) messages: 371845 nosy: sugan19 priority: normal severity: normal status: open title: parallel sftp from local to remote file creates empty directory in home path type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 04:41:27 2020 From: report at bugs.python.org (=?utf-8?q?Julien_Edmond_Ren=C3=A9_Harbulot?=) Date: Fri, 19 Jun 2020 08:41:27 +0000 Subject: [New-bugs-announce] [issue41030] Provide toList() method on iterators (`list()` is a flow killer in REPL) Message-ID: <1592556087.66.0.973915032204.issue41030@roundup.psfhosted.org> New submission from Julien Edmond René Harbulot : I work with python in the REPL or jupyter notebooks for my data science work and often find myself needing to explore data structures or directories on disk. So I'll access these data structures in a linear "go-forward" fashion using the dot, for instance: ``` root_directory = pathlib.Path(...) root_directory.iterdir()[0].iterdir() ``` Which of course doesn't work because iterdir() provides an iterator instead of a list (I understand this is good for performance, but how annoying for interactive sessions!) The problem with the current API is that: 1. I have to go back to my code and edit in several places to convert the iterators to lists. With the current way (i.e. `list( ... )`) I have to edit before the iterator to add `list(` and then after to add `)`. When my one-liners become complex, this is very tedious because I have to perform an extensive search to find where to insert the `list(` part. Having a method `.toList()` would allow editing the expression in a *single* place instead, and is also much easier to read in my opinion: ``` root_directory.iterdir().toList()[0].iterdir().toList() ``` instead of: ``` list(list(root_directory.iterdir())[0].iterdir()) ``` 2. I want to think about my work, the business task at hand. Not about the particularities of python iterators. Having to specify `list(` first forces me to think about the language. This gets in my way. The possibility to use `.toList()` at the end of an expression allows the conversion to happen as an afterthought. In particular in the REPL or notebooks where I'll often write: ``` directory.iterdir() ``` And the repl will display ``` ``` How much easier it would be if I could just *append* to my code instead of surgically editing before and after. ---------- components: Demos and Tools messages: 371852 nosy: Julien Edmond René
Harbulot priority: normal severity: normal status: open title: Provide toList() method on iterators (`list()` is a flow killer in REPL) type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 04:54:18 2020 From: report at bugs.python.org (Michael Simacek) Date: Fri, 19 Jun 2020 08:54:18 +0000 Subject: [New-bugs-announce] [issue41031] Inconsistency in C and python traceback printers Message-ID: <1592556858.95.0.825167658939.issue41031@roundup.psfhosted.org> New submission from Michael Simacek : I belive the python traceback module was designed to produce the same output as the internal exception printer (sys.__excepthook__), but this is not the case when the exception's __str__ raises an exception. Given an exception of the following class: class E(Exception): def __str__(self): raise RuntimeError Internal printer output: Traceback (most recent call last): File "inconsistent.py", line 6, in raise E() __main__.E: traceback.print_exc output: Traceback (most recent call last): File "inconsistent.py", line 6, in raise E() E: ---------- components: Library (Lib) messages: 371855 nosy: msimacek priority: normal severity: normal status: open title: Inconsistency in C and python traceback printers type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 05:26:58 2020 From: report at bugs.python.org (MarcoBakera) Date: Fri, 19 Jun 2020 09:26:58 +0000 Subject: [New-bugs-announce] [issue41032] locale.setlocale example incorrect Message-ID: <1592558818.84.0.296845885403.issue41032@roundup.psfhosted.org> New submission from MarcoBakera : The example given results in an error. 
https://docs.python.org/3.8/library/locale.html?highlight=locale >>> locale.setlocale(locale.LC_ALL, 'de_DE') It could be improved with one of the following versions: >>> locale.setlocale(locale.LC_ALL, 'de_DE.UTF-8') >>> locale.setlocale(locale.LC_ALL, ('de_DE', 'UTF-8')) ---------- assignee: docs at python components: Documentation messages: 371860 nosy: docs at python, pintman priority: normal severity: normal status: open title: locale.setlocale example incorrect type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 05:43:54 2020 From: report at bugs.python.org (daniel hahler) Date: Fri, 19 Jun 2020 09:43:54 +0000 Subject: [New-bugs-announce] [issue41033] readline.c: SEGFAULT on SIGWINCH when loaded twice Message-ID: <1592559834.18.0.183747914407.issue41033@roundup.psfhosted.org> New submission from daniel hahler : The following will crash due to the signal handler calling itself recursively: ``` import os, readline, signal, sys del sys.modules["readline"] import readline os.kill(os.getpid(), signal.SIGWINCH) ``` This fixes it: ``` diff --git a/Modules/readline.c b/Modules/readline.c index 081657fb23..174e0117a9 100644 --- a/Modules/readline.c +++ b/Modules/readline.c @@ -967,6 +967,7 @@ readline_sigwinch_handler(int signum) { sigwinch_received = 1; if (sigwinch_ohandler && + sigwinch_ohandler != readline_sigwinch_handler && sigwinch_ohandler != SIG_IGN && sigwinch_ohandler != SIG_DFL) sigwinch_ohandler(signum); ``` It gets installed/saved in https://github.com/python/cpython/blob/01ece63d42b830df106948db0aefa6c1ba24416a/Modules/readline.c#L1111-L1112. Maybe it could also not save it in the first place if it is itself / has been installed already. Or, it could be uninstalled when the module is unloaded, if there is such a thing? I've seen the crash initially in a more complex setup, where it is not really clear why/how the readline module gets initialized twice, but the above appears to simulate the situation. (Hints on where to break in gdb to see where/when a module gets unloaded would be appreciated.) Added in https://github.com/python/cpython/commit/5dbbf1abba89ef1766759fbc9d5a5af02db49505 (3.5.2).
---------- components: Extension Modules messages: 371861 nosy: blueyed priority: normal severity: normal status: open title: readline.c: SEGFAULT on SIGWINCH when loaded twice type: crash versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 06:22:34 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 19 Jun 2020 10:22:34 +0000 Subject: [New-bugs-announce] [issue41034] test_builtin: PtyTests fail when run twice Message-ID: <1592562154.53.0.0111619252983.issue41034@roundup.psfhosted.org> New submission from STINNER Victor : I tried to check for reference leaks, but running test_builtin twice fails: $ ./python -m test -R 3:3 test_builtin 0:00:00 load avg: 2.18 Run tests sequentially 0:00:00 load avg: 2.18 [1/1] test_builtin beginning 6 repetitions 123456 .test test_builtin failed -- multiple errors occurred; run in verbose mode for details test_builtin failed == Tests result: FAILURE == 1 test failed: test_builtin Total duration: 1.3 sec Tests result: FAILURE The error comes from PtyTests, but the issue is only triggered when builtins.float.hex is also run: $ ./python -m test test_builtin test_builtin -m test.test_builtin.PtyTests.test_input_tty_non_ascii_unicode_errors -m builtins.float.hex -v == CPython 3.10.0a0 (heads/master:310f6aa7db, Jun 19 2020, 12:18:54) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)] == Linux-5.6.18-300.fc32.x86_64-x86_64-with-glibc2.31 little-endian == cwd: /home/vstinner/python/master/build/test_python_122646 == CPU count: 8 == encodings: locale=UTF-8, FS=utf-8 0:00:00 load avg: 0.60 Run tests sequentially 0:00:00 load avg: 0.60 [1/2] test_builtin test_input_tty_non_ascii_unicode_errors (test.test_builtin.PtyTests) ... ok hex (builtins.float) Doctest: builtins.float.hex ... ok ---------------------------------------------------------------------- Ran 2 tests in 0.009s OK 0:00:00 load avg: 0.63 [2/2] test_builtin test_input_tty_non_ascii_unicode_errors (test.test_builtin.PtyTests) ... FAIL hex (builtins.float) Doctest: builtins.float.hex ... ok ====================================================================== FAIL: test_input_tty_non_ascii_unicode_errors (test.test_builtin.PtyTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/python/master/Lib/test/test_builtin.py", line 2105, in test_input_tty_non_ascii_unicode_errors self.check_input_tty("prompt?", b"quux\xe9", "ascii") File "/home/vstinner/python/master/Lib/test/test_builtin.py", line 2092, in check_input_tty self.assertEqual(input_result, expected) AssertionError: 'quux' != 'quux\udce9' - quux + quux\udce9 ? + ---------------------------------------------------------------------- Ran 2 tests in 0.011s FAILED (failures=1) test test_builtin failed test_builtin failed == Tests result: FAILURE == 1 test OK. 
1 test failed: test_builtin Total duration: 762 ms Tests result: FAILURE ---------- components: Tests messages: 371871 nosy: vstinner priority: normal severity: normal status: open title: test_builtin: PtyTests fail when run twice versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 08:13:56 2020 From: report at bugs.python.org (sorrow) Date: Fri, 19 Jun 2020 12:13:56 +0000 Subject: [New-bugs-announce] [issue41035] zipfile.Path does not work properly with zip archives where paths start with / Message-ID: <1592568836.11.0.709805363189.issue41035@roundup.psfhosted.org> New submission from sorrow : I encountered errors when I had to work with a ZIP file where paths start with "/" ---------- components: Library (Lib) messages: 371880 nosy: sorrow priority: normal severity: normal status: open title: zipfile.Path does not work properly with zip archives where paths start with / type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 10:05:01 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 19 Jun 2020 14:05:01 +0000 Subject: [New-bugs-announce] [issue41036] Visit the type of instance of heap types if tp_traverse is not implemented Message-ID: <1592575501.27.0.818244173351.issue41036@roundup.psfhosted.org> New submission from STINNER Victor : While reviewing changes of bpo-40077 "Convert static types to PyType_FromSpec()", I noticed that some static types don't implement tp_traverse. The doc says: Heap-allocated types (...) hold a reference to their type. Their traversal function must therefore either visit Py_TYPE(self), or delegate this responsibility by calling tp_traverse of another heap-allocated type (such as a heap-allocated superclass). If they do not, the type object may not be garbage-collected. https://docs.python.org/dev/c-api/typeobj.html#c.PyTypeObject.tp_traverse Porting to 3.9 says: for types that have a custom tp_traverse function, ensure that all custom tp_traverse functions of heap-allocated types visit the object's type https://docs.python.org/dev/whatsnew/3.9.html#changes-in-the-c-api -- It seems like converting a static type to a heap allocated type requires *adding* a new tp_traverse function, if there wasn't one before. Maybe we can provide a base tp_traverse implementation in the base object type: visit the type if it's a heap type? See attached PR. See bpo-35810 and bpo-40217 for more information. ---------- components: C API, Interpreter Core messages: 371884 nosy: pablogsal, vstinner priority: normal severity: normal status: open title: Visit the type of instance of heap types if tp_traverse is not implemented versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 10:11:55 2020 From: report at bugs.python.org (Sebastian Berg) Date: Fri, 19 Jun 2020 14:11:55 +0000 Subject: [New-bugs-announce] [issue41037] Add (optional) threadstate to: PyOS_InterruptOccurred() Message-ID: <1592575915.61.0.789948755318.issue41037@roundup.psfhosted.org> New submission from Sebastian Berg : In https://bugs.python.org/issue40826 it was defined that `PyOS_InterruptOccurred()` can only be called with the GIL held. NumPy had a few places with very unsafe sigint handling (not thread-safe).
But generally, when we are in a situation where catching sigints would be nice as an enhancement, we do of course not hold the GIL. So I am wondering if we can find some kind of thread-safe solution, or even just a recipe. Briefly looking at the code, it seemed to me that adding a new function with an additional `tstate` in the signature: PyOS_InterruptOccurred(PyThreadState *tstate) could work, and be a simple way to allow this in the future? It is probably not high priority for us (NumPy had only one place where it tried to stop long running computations). But right now I am unsure if the function has much use if it requires the GIL to be held, and a `tstate=NULL` argument seemed reasonable (if it works), except that it adds a C-API name. ---------- components: C API messages: 371885 nosy: seberg, vstinner priority: normal severity: normal status: open title: Add (optional) threadstate to: PyOS_InterruptOccurred() type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 11:18:28 2020 From: report at bugs.python.org (Nikita Nemkin) Date: Fri, 19 Jun 2020 15:18:28 +0000 Subject: [New-bugs-announce] [issue41038] VersionInfo string is corrupted when building on Windows with DBCS or UTF-8 locale Message-ID: <1592579908.38.0.132085999949.issue41038@roundup.psfhosted.org> New submission from Nikita Nemkin : In the absence of an explicit declaration, the resource compiler uses the system codepage. When this codepage is DBCS or UTF-8, Python's copyright string is corrupted, because it contains the copyright sign encoded as \xA9. The fix is to explicitly declare codepage 1252. Another possible fix is to use codepage 65001, but that will require replacing \xA9 with an actual ©, because it seems impossible to escape unicode characters in VersionInfo strings. ---------- components: Windows messages: 371888 nosy: nnemkin, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: VersionInfo string is corrupted when building on Windows with DBCS or UTF-8 locale type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 12:27:16 2020 From: report at bugs.python.org (Nikita Nemkin) Date: Fri, 19 Jun 2020 16:27:16 +0000 Subject: [New-bugs-announce] [issue41039] Simplify python3.dll build Message-ID: <1592584036.23.0.347214520388.issue41039@roundup.psfhosted.org> New submission from Nikita Nemkin : The python3.dll build process can be simplified if we use a linker comment #pragma instead of .def files. Custom build targets become unnecessary and the hardcoded Python DLL name can be replaced with a macro. Also, python3.dll doesn't need DllMain and can be built with /NOENTRY. ---------- components: Windows messages: 371894 nosy: nnemkin, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Simplify python3.dll build type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 16:06:42 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 19 Jun 2020 20:06:42 +0000 Subject: [New-bugs-announce] [issue41040] Fix test_modulefinder Message-ID: <1592597202.88.0.995685925684.issue41040@roundup.psfhosted.org> New submission from Serhiy Storchaka : There is a bug in test_modulefinder.
The bytes string literal contains \u2090. 1. Since \u is not a recognized escape sequence in bytes literals, compiling the file emits a deprecation warning: /home/serhiy/py/cpython/Lib/test/test_modulefinder.py:281: DeprecationWarning: invalid escape sequence \u b"""\ 2. b"\u2090" is interpreted as b"\\u2090", but actually the test implies that it should be the bytes sequence b'\xe2\x82\x90', which is valid in UTF-8 but is not valid in CP1252. ---------- components: Tests messages: 371897 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Fix test_modulefinder type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 17:00:05 2020 From: report at bugs.python.org (Matthias Bussonnier) Date: Fri, 19 Jun 2020 21:00:05 +0000 Subject: [New-bugs-announce] [issue41041] Multiprocessing Pool broken on macOS REPL Message-ID: <1592600405.92.0.621654131969.issue41041@roundup.psfhosted.org> New submission from Matthias Bussonnier : $ python Python 3.8.2 | packaged by conda-forge | (default, Apr 24 2020, 07:56:27) [Clang 9.0.1 ] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from multiprocessing import Pool >>> >>> def f(x): ... return x*x ... >>> with Pool(5) as p: ... print(p.map(f, [1, 2, 3])) Process SpawnPoolWorker-1: Process SpawnPoolWorker-2: Process SpawnPoolWorker-3: Traceback (most recent call last): File "/Users/bussonniermatthias/miniconda3/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/Users/bussonniermatthias/miniconda3/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Users/bussonniermatthias/miniconda3/lib/python3.8/multiprocessing/pool.py", line 114, in worker task = get() File "/Users/bussonniermatthias/miniconda3/lib/python3.8/multiprocessing/queues.py", line 358, in get return _ForkingPickler.loads(res) AttributeError: Can't get attribute 'f' on Traceback (most recent call last): ... This is likely due to https://bugs.python.org/issue33725 (use spawn on MacOS), so we can't use `fork()`. ---------- messages: 371900 nosy: mbussonn priority: normal severity: normal status: open title: Multiprocessing Pool broken on macOS REPL _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 17:06:22 2020 From: report at bugs.python.org (Michael J.) Date: Fri, 19 Jun 2020 21:06:22 +0000 Subject: [New-bugs-announce] [issue41042] import searches for package even after file was found successfully Message-ID: <1592600782.72.0.658576039777.issue41042@roundup.psfhosted.org> New submission from Michael J. : Hello, Earlier today, I was developing a program and I wanted to check its variables after it finished running. Simply going into a terminal, entering my program's directory, and executing "python3 index.py" would return control to the command line before I would have a chance to examine the data, so I ran "python3", got to the shell, and then imported my file. My script executed, but after it finished, the interpreter raised a "ModuleNotFoundError: No module named 'index.py'; 'index' is not a package." Is this supposed to happen after the interpreter finishes importing a file?
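A minimal sketch of what most likely happened, assuming the interactive session imported the module using its file name with the extension (the exact statement typed is not shown in the report):

```python
# Assumed layout: the current directory contains a file named index.py.
import index      # imports the module and runs the script once

# import index.py
# This first imports "index" (so the script still runs), then tries to
# import a submodule named "py" from it. Since index is a plain module
# and not a package, this fails with:
# ModuleNotFoundError: No module named 'index.py'; 'index' is not a package
```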
Thanks, Michael ---------- components: Interpreter Core messages: 371902 nosy: MichaelSapphire priority: normal severity: normal status: open title: import searches for package even after file was found successfully type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 17:16:43 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 19 Jun 2020 21:16:43 +0000 Subject: [New-bugs-announce] [issue41043] Escape the literal part of the path for glob() Message-ID: <1592601403.75.0.126611663451.issue41043@roundup.psfhosted.org> New submission from Serhiy Storchaka : It is common to use glob() as glob.glob(os.path.join(basedir, pattern)) But it does not work correctly if the base directory contains special globbing characters ('*', '?', '['). It is an uncommon case, so in most cases the code works. But when move sources to the directory containing special characters, built it and run tests, some tests will fail: test test_tokenize failed -- Traceback (most recent call last): File "/home/serhiy/py/[cpython]/Lib/test/test_tokenize.py", line 1615, in test_random_files testfiles.remove(os.path.join(tempdir, "test_unicode_identifiers.py")) ValueError: list.remove(x): x not in list test test_multiprocessing_fork failed -- Traceback (most recent call last): File "/home/serhiy/py/[cpython]/Lib/test/_test_multiprocessing.py", line 4272, in test_import modules = self.get_module_names() File "/home/serhiy/py/[cpython]/Lib/test/_test_multiprocessing.py", line 4267, in get_module_names modules.remove('multiprocessing.__init__') ValueError: list.remove(x): x not in list test test_bz2 failed -- Traceback (most recent call last): File "/home/serhiy/py/[cpython]/Lib/test/test_bz2.py", line 740, in testDecompressorChunksMaxsize self.assertFalse(bzd.needs_input) AssertionError: True is not false test test_multiprocessing_forkserver failed -- Traceback (most recent call last): File "/home/serhiy/py/[cpython]/Lib/test/_test_multiprocessing.py", line 4272, in test_import modules = self.get_module_names() File "/home/serhiy/py/[cpython]/Lib/test/_test_multiprocessing.py", line 4267, in get_module_names modules.remove('multiprocessing.__init__') ValueError: list.remove(x): x not in list test test_multiprocessing_spawn failed -- Traceback (most recent call last): File "/home/serhiy/py/[cpython]/Lib/test/_test_multiprocessing.py", line 4272, in test_import modules = self.get_module_names() File "/home/serhiy/py/[cpython]/Lib/test/_test_multiprocessing.py", line 4267, in get_module_names modules.remove('multiprocessing.__init__') ValueError: list.remove(x): x not in list The proposed PR adds glob.escape() to the above code: glob.glob(os.path.join(glob.escape(basedir), pattern)) ---------- components: Library (Lib) messages: 371903 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Escape the literal part of the path for glob() type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 18:57:19 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Fri, 19 Jun 2020 22:57:19 +0000 Subject: [New-bugs-announce] [issue41044] Pegen: double trailing comma on optional+sequence rules at python generator Message-ID: <1592607439.16.0.781875892842.issue41044@roundup.psfhosted.org> New submission from Batuhan Taskaya : Python 
generator generates two trailing commas instead of one when both repeat0 (*) + optional ([]) qualifiers used. Example failing test (raises a SyntaxError, since the generated parser can't be parseable / executable) def test_opt_sequence(self) -> None: grammar = """ start: [NAME*] """ # This case was failing because of double trailing comma at the end # of the generated parser. See bpo- make_parser(grammar) ---------- messages: 371908 nosy: BTaskaya priority: normal severity: normal status: open title: Pegen: double trailing comma on optional+sequence rules at python generator _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 19:53:12 2020 From: report at bugs.python.org (Eric V. Smith) Date: Fri, 19 Jun 2020 23:53:12 +0000 Subject: [New-bugs-announce] [issue41045] f-string's "debug" feature is undocumented Message-ID: <1592610792.76.0.722931750761.issue41045@roundup.psfhosted.org> New submission from Eric V. Smith : The feature of f-strings using '=' for "debugging" formatting is not documented. >>> foo = 'bar' >>> f'{foo=}' "foo='bar'" I'm not sure where this should fit in to the documentation, but it needs to be in there somewhere. Basically the documentation needs to say: 1. The entire string from the opening brace { to the start of the expression appears in the output. This allows things like f'{foo = }' to evaluate to "foo = 'bar'" (notice the extra spaces). 2. If no format spec (like :20) is given, repr() is used on the expression. If a format spec is given, then str() is used on the expression. You can use repr() with a format spec by using the !r conversion, like: >>> f'{foo=:20}' 'foo=bar ' >>> f'{foo=!r:20}' "foo='bar' " ---------- assignee: docs at python components: Documentation messages: 371912 nosy: docs at python, eric.smith priority: normal severity: normal stage: needs patch status: open title: f-string's "debug" feature is undocumented versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 20:36:33 2020 From: report at bugs.python.org (James Corbett) Date: Sat, 20 Jun 2020 00:36:33 +0000 Subject: [New-bugs-announce] [issue41046] unittest: make skipTest a classmethod Message-ID: <1592613393.95.0.180040510992.issue41046@roundup.psfhosted.org> New submission from James Corbett : The `unittest.TestCase.skipTest` method, used to skip the current test, is currently an instance method. There's nothing to stop it from being a `classmethod` or a `staticmethod` though---it doesn't use its reference to `self` since it's just a wrapper around the `SkipTest` exception. Making it a `classmethod` or `staticmethod` would allow calling the method from `setUpClass`. 
Here's an example: ``` import unittest class MyTestCase(unittest.TestCase): @classmethod def ready_for_tests(cls): pass @classmethod def setUpClass(cls): if not cls.ready_for_tests(): cls.skipTest() ``` ---------- components: Library (Lib) messages: 371914 nosy: jameshcorbett priority: normal severity: normal status: open title: unittest: make skipTest a classmethod type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 19 22:15:23 2020 From: report at bugs.python.org (James Corbett) Date: Sat, 20 Jun 2020 02:15:23 +0000 Subject: [New-bugs-announce] [issue41047] argparse: misbehavior when combining positionals and choices Message-ID: <1592619323.37.0.243911882097.issue41047@roundup.psfhosted.org> New submission from James Corbett : The `argparse.ArgumentParser` sometimes rejects positional arguments with no arguments when `choices` is set and `nargs="*"`. When there are no arguments and `nargs` is `"*"`, the default value is chosen, or `[]` if there is no default value. This value is then checked against `choices` and an error message is printed if the value is not in `choices`. However, sometimes the value is intentionally not in `choices`, and this leads to problems. An example will explain this much better, and show that the issue only occurs with the particular combination of positionals, `nargs="*"`, and `choices`: ``` >>> import argparse >>> parser = argparse.ArgumentParser() >>> parser.add_argument("foo", choices=["a", "b", "c"], nargs="*") >>> parser.add_argument("--bar", choices=["d", "e", "f"], nargs="*") >>> parser.add_argument('--baz', type=int, choices=range(5, 10), default="20") >>> parser.parse_args("a --bar".split()) Namespace(foo=['a'], bar=[], baz=20) >>> parser.parse_args(["a"]) Namespace(foo=['a'], bar=None, baz=20) >>> parser.parse_args([]) usage: [-h] [--bar [{d,e,f} ...]] [--baz {5,6,7,8,9}] [{a,b,c} ...] : error: argument foo: invalid choice: [] (choose from 'a', 'b', 'c') ``` In this case I could have got around the last error by adding `[]` to choices, but that pollutes the help and usage messages. ---------- components: Library (Lib) messages: 371915 nosy: jameshcorbett priority: normal severity: normal status: open title: argparse: misbehavior when combining positionals and choices type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 05:11:00 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 20 Jun 2020 09:11:00 +0000 Subject: [New-bugs-announce] [issue41048] read_mime_types() should read the rule file using UTF-8, not the locale encoding Message-ID: <1592644260.23.0.766790238639.issue41048@roundup.psfhosted.org> New submission from Serhiy Storchaka : MimeTypes.read() read the rule file using UTF-8, but read_mime_types() uses the locale encoding. It is an easy issue. You need just repeat issue13025 for read_mime_types(). 
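A minimal sketch of the suggested change, assuming the body of read_mime_types() otherwise stays as it is today and simply passes an explicit encoding, mirroring MimeTypes.read():

```python
from mimetypes import MimeTypes

def read_mime_types(file):
    try:
        # Use UTF-8 explicitly instead of the locale encoding,
        # matching what MimeTypes.read() already does.
        f = open(file, encoding='utf-8')
    except OSError:
        return None
    with f:
        db = MimeTypes()
        db.readfp(f, True)
        return db.types_map[True]
```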
---------- components: Library (Lib) keywords: easy messages: 371925 nosy: serhiy.storchaka, vstinner priority: normal severity: normal stage: needs patch status: open title: read_mime_types() should read the rule file using UTF-8, not the locale encoding type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 05:20:57 2020 From: report at bugs.python.org (tamuhey) Date: Sat, 20 Jun 2020 09:20:57 +0000 Subject: [New-bugs-announce] [issue41049] PyObject_RichCompareBool(nan, nan, eq) can be True Message-ID: <1592644857.04.0.72570245151.issue41049@roundup.psfhosted.org> New submission from tamuhey : Applying PyObject_RichCompareBool to two `nan`s can be true if the two nans are the same object, i.e. ``` a = float("nan") PyObject_RichCompareBool(a, a, Py_EQ) // True ``` I read the document (https://docs.python.org/3/c-api/object.html?highlight=pyobject_richcomparebool#c.PyObject_RichCompareBool) and understood it is intended, but there should be a gentle note to tell users about this behaviour. ---------- components: C API messages: 371927 nosy: tamuhey priority: normal severity: normal status: open title: PyObject_RichCompareBool(nan, nan, eq) can be True type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 08:00:55 2020 From: report at bugs.python.org (Kernel Plevitsky) Date: Sat, 20 Jun 2020 12:00:55 +0000 Subject: [New-bugs-announce] [issue41050] class multiprocessing.Value calls set_start_method Message-ID: <1592654455.47.0.317268475244.issue41050@roundup.psfhosted.org> New submission from Kernel Plevitsky : I'm not sure if this is a bug or my carelessness. I cannot call set_start_method() if multiprocessing.Value() is declared in a class above the `if __name__ == "__main__"` guard. The documentation for Value does not describe this. I have attached code reproducing this. Sorry for my English ---------- assignee: docs at python components: Documentation files: test.py messages: 371933 nosy: Kernel Plevitsky, docs at python priority: normal severity: normal status: open title: class multiprocessing.Value calls set_start_method type: behavior versions: Python 3.5, Python 3.8 Added file: https://bugs.python.org/file49256/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 08:34:37 2020 From: report at bugs.python.org (Manuel Jacob) Date: Sat, 20 Jun 2020 12:34:37 +0000 Subject: [New-bugs-announce] [issue41051] Flush file after warning is written Message-ID: <1592656477.66.0.545359338323.issue41051@roundup.psfhosted.org> New submission from Manuel Jacob : Calling warnings.warn() will write to a file, but not flush it. On Python 3.9+, it won't usually be a problem because the file is most likely stderr, which is always line-buffered. However, on older Python versions or if a different file is used, the current behavior unnecessarily delays the output of the warning. This is especially problematic if the warning is about buffering behavior itself, as e.g. caused by `open('/tmp/test', 'wb', buffering=1)`.
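A minimal sketch of the delayed output being described, assuming stderr is redirected to a block-buffered file (the file name is illustrative):

```python
import sys
import warnings

# Stand-in for a redirected, block-buffered stderr.
log = open('/tmp/warn_demo.log', 'w', buffering=8192)
sys.stderr = log

warnings.warn("something looks off")
# The warning text is still sitting in the file object's buffer here;
# it only reaches the file on disk after an explicit flush() or close().
log.flush()
```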
---------- messages: 371934 nosy: mjacob priority: normal severity: normal status: open title: Flush file after warning is written _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 09:17:29 2020 From: report at bugs.python.org (Dong-hee Na) Date: Sat, 20 Jun 2020 13:17:29 +0000 Subject: [New-bugs-announce] [issue41052] Opt out serialization/deserialization for heap type Message-ID: <1592659049.98.0.596314769567.issue41052@roundup.psfhosted.org> New submission from Dong-hee Na : See https://bugs.python.org/issue40077#msg371813 We noticed that heap types have different behavior with respect to serialization/deserialization. Basically, this can cause regressions. Three things are needed: 1. opt out serialization/deserialization for converted modules. 2. Add unit tests to check whether their serialization is blocked. 3. If the module is already ported to 3.9, a backport patch is needed. Long term - add object.__reduce__() and/or update pickle to be smarter ---------- components: C API messages: 371937 nosy: corona10, vstinner priority: normal severity: normal status: open title: Opt out serialization/deserialization for heap type type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 11:14:53 2020 From: report at bugs.python.org (Kagami Sascha Rosylight) Date: Sat, 20 Jun 2020 15:14:53 +0000 Subject: [New-bugs-announce] [issue41053] open() fails to read app exec links Message-ID: <1592666093.57.0.217193955367.issue41053@roundup.psfhosted.org> New submission from Kagami Sascha Rosylight : After installing Python from Microsoft Store, this fails: ``` >>> open('C:\\Users\\Kagami\\AppData\\Local\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\python.exe') Traceback (most recent call last): File "", line 1, in OSError: [Errno 22] Invalid argument: 'C:\\Users\\Kagami\\AppData\\Local\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\python.exe' ``` This causes virtualenv to fail on it: ``` INFO: Traceback (most recent call last): INFO: File "C:/Users/Kagami/.cargo/git/checkouts/mozjs-fa11ffc7d4f1cc2d/9a6d8fc/mozjs\third_party\python\virtualenv\virtualenv.py", line 2349, in INFO: main() INFO: File "C:/Users/Kagami/.cargo/git/checkouts/mozjs-fa11ffc7d4f1cc2d/9a6d8fc/mozjs\third_party\python\virtualenv\virtualenv.py", line 703, in main INFO: create_environment(home_dir, INFO: File "C:/Users/Kagami/.cargo/git/checkouts/mozjs-fa11ffc7d4f1cc2d/9a6d8fc/mozjs\third_party\python\virtualenv\virtualenv.py", line 925, in create_environment INFO: py_executable = os.path.abspath(install_python( INFO: File "C:/Users/Kagami/.cargo/git/checkouts/mozjs-fa11ffc7d4f1cc2d/9a6d8fc/mozjs\third_party\python\virtualenv\virtualenv.py", line 1239, in install_python INFO: shutil.copyfile(executable, py_executable) INFO: File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.1008.0_x64__qbz5n2kfra8p0\lib\shutil.py", line 261, in copyfile INFO: with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst: INFO: OSError: [Errno 22] Invalid argument: 'C:\\Users\\Kagami\\AppData\\Local\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\python.exe' ``` ---------- components: Windows messages: 371939 nosy: paul.moore, saschanaz, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: open() fails to read app
exec links type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 12:25:11 2020 From: report at bugs.python.org (Nikita Nemkin) Date: Sat, 20 Jun 2020 16:25:11 +0000 Subject: [New-bugs-announce] [issue41054] Simplify resource compilation on Windows Message-ID: <1592670311.94.0.80143826005.issue41054@roundup.psfhosted.org> New submission from Nikita Nemkin : Every Windows project has a custom target (included from pyproject.props) that generates a header with definitions for resource files. Those definitions (PYTHON_DLL_NAME and FIELD3) can be passed directly to resource compiler. Another definition (MS_DLL_ID) doesn't need to be a resource at all. It was used in the past to initialize PyWin_DLLVersionString in dl_nt.c, but that code is now dead. ---------- components: Windows messages: 371941 nosy: nnemkin, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Simplify resource compilation on Windows type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 13:26:58 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 20 Jun 2020 17:26:58 +0000 Subject: [New-bugs-announce] [issue41055] Remove outdated tests for tp_print Message-ID: <1592674018.02.0.658996067478.issue41055@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are ancient tests for printing some basic types: str (actually a mix of Python 2 str and unicode), list, tuple, bool, set, deque, defaultdict. They are essentially tests for the tp_print slot. But since the tp_print slot is no longer used, they are virtually repeat tests for str or repr. These tests are outdated and no longer serve the initial purpose. To avoid confusion it is worth to remove them. ---------- components: Tests messages: 371944 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Remove outdated tests for tp_print type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 13:38:30 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Sat, 20 Jun 2020 17:38:30 +0000 Subject: [New-bugs-announce] [issue41056] minor NULL pointer and sign issues reported by Coverity Message-ID: <1592674710.76.0.0208418742118.issue41056@roundup.psfhosted.org> New submission from Gregory P. Smith : ________________________________________________________________________________________________________ *** CID 1464693: Null pointer dereferences (REVERSE_INULL) /Modules/_zoneinfo.c: 1625 in parse_abbr() 1619 ptr++; 1620 } 1621 str_end = ptr; 1622 } 1623 1624 *abbr = PyUnicode_FromStringAndSize(str_start, str_end - str_start); >>> CID 1464693: Null pointer dereferences (REVERSE_INULL) >>> Null-checking "abbr" suggests that it may be null, but it has already been dereferenced on all paths leading to the check. 1625 if (abbr == NULL) { 1626 return -1; 1627 } 1628 1629 return ptr - p; 1630 } ________________________________________________________________________________________________________ *** CID 1464687: Null pointer dereferences (FORWARD_NULL) /Modules/_ssl/debughelpers.c: 138 in _PySSL_keylog_callback() 132 * critical debug helper. 
133 */ 134 if (lock == NULL) { 135 lock = PyThread_allocate_lock(); 136 if (lock == NULL) { 137 PyErr_SetString(PyExc_MemoryError, "Unable to allocate lock"); >>> CID 1464687: Null pointer dereferences (FORWARD_NULL) >>> Passing null pointer "&ssl_obj->exc_type" to "PyErr_Fetch", which dereferences it. 138 PyErr_Fetch(&ssl_obj->exc_type, &ssl_obj->exc_value, 139 &ssl_obj->exc_tb); 140 return; 141 } 142 } 143 ________________________________________________________________________________________________________ *** CID 1464684: Integer handling issues (NEGATIVE_RETURNS) /Modules/clinic/posixmodule.c.h: 6813 in os_fpathconf() 6807 if (fd == -1 && PyErr_Occurred()) { 6808 goto exit; 6809 } 6810 if (!conv_path_confname(args[1], &name)) { 6811 goto exit; 6812 } >>> CID 1464684: Integer handling issues (NEGATIVE_RETURNS) >>> "fd" is passed to a parameter that cannot be negative. 6813 _return_value = os_fpathconf_impl(module, fd, name); 6814 if ((_return_value == -1) && PyErr_Occurred()) { 6815 goto exit; 6816 } 6817 return_value = PyLong_FromLong(_return_value); 6818 ---------- assignee: gregory.p.smith messages: 371946 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: minor NULL pointer and sign issues reported by Coverity type: crash versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 13:45:23 2020 From: report at bugs.python.org (Fenn Ehk) Date: Sat, 20 Jun 2020 17:45:23 +0000 Subject: [New-bugs-announce] [issue41057] Division error Message-ID: <1592675123.96.0.415795942046.issue41057@roundup.psfhosted.org> New submission from Fenn Ehk : When performing some basic calculations, the result is wrong. 0.4 + 8/100 Out[43]: 0.48000000000000004 0.3 + 8/100 Out[44]: 0.38 I thought it could be processor related and tried the same operation with R, but the result was correct. So I tried it on some online repls: https://repl.it/languages/python3 https://www.learnpython.org/en/Basic_Operators And the bug is there, it seems to exist in 3.7.6 and 3.8.3 (and probably all versions in between Other examples of the error: 0.3 + 8/100 Out[50]: 0.38 0.4 + 8/100 Out[51]: 0.48000000000000004 0.4 + a Out[52]: 0.48000000000000004 0.4 + 9/100 Out[53]: 0.49 0.7 + 9/100 Out[54]: 0.7899999999999999 0.7 + 10/100 Out[55]: 0.7999999999999999 0.7 + 10/100 Out[56]: 0.7999999999999999 0.7 + 11/100 Out[57]: 0.8099999999999999 0.7 + 12/100 Out[58]: 0.82 0.8 + 8/100 Out[59]: 0.88 0.8 + 9/100 Out[60]: 0.89 0.6 + 9/100 Out[61]: 0.69 0.7 + 9/100 Out[62]: 0.7899999999999999 ---------- components: Interpreter Core messages: 371948 nosy: Fenn Ehk priority: normal severity: normal status: open title: Division error type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 13:54:13 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 20 Jun 2020 17:54:13 +0000 Subject: [New-bugs-announce] [issue41058] pdb reads source files using the locale encoding Message-ID: <1592675653.2.0.61878894167.issue41058@roundup.psfhosted.org> New submission from Serhiy Storchaka : find_function() in pdb uses the locale encoding for reading source files. It should use the encoding specified by the coding cookie or UTF-8 if it is not specified. 
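A minimal sketch of the suggested change, assuming find_function() switches from the default open() (locale encoding) to tokenize.open(); the surrounding logic mirrors the existing helper, but the exact patch may differ:

```python
import re
import tokenize

def find_function(funcname, filename):
    cre = re.compile(r'def\s+%s\s*[(]' % re.escape(funcname))
    # tokenize.open() honours the PEP 263 coding cookie / BOM and
    # falls back to UTF-8 instead of the locale encoding.
    with tokenize.open(filename) as fp:
        for lineno, line in enumerate(fp, start=1):
            if cre.match(line):
                return funcname, filename, lineno
    return None
```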
---------- components: Library (Lib) messages: 371950 nosy: serhiy.storchaka priority: normal severity: normal status: open title: pdb reads source files using the locale encoding type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 14:56:31 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Sat, 20 Jun 2020 18:56:31 +0000 Subject: [New-bugs-announce] [issue41059] Large number of Coverity reports for parser.c Message-ID: <1592679391.13.0.913540149945.issue41059@roundup.psfhosted.org> New submission from Gregory P. Smith : Here's an example: *** CID 1464688: Control flow issues (DEADCODE) /Parser/parser.c: 24243 in _tmp_147_rule() 24237 && 24238 (z = disjunction_rule(p)) // disjunction 24239 ) 24240 { 24241 D(fprintf(stderr, "%*c+ _tmp_147[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'if' disjunction")); 24242 _res = z; >>> CID 1464688: Control flow issues (DEADCODE) >>> Execution cannot reach the expression "PyErr_Occurred()" inside this statement: "if (_res == NULL && PyErr_O...". 24243 if (_res == NULL && PyErr_Occurred()) { 24244 p->error_indicator = 1; 24245 D(p->level--); 24246 return NULL; 24247 } 24248 goto done; A lot of them are of that form, which seems harmless if they are true - it means a compiler may deduce the same thing and omit code generation for an impossible to trigger error block. OTOH this could just be a weakness in the scanner. (i don't know how to silence it via markers in the code, but i assume it is possible) You'll need to login to Coverity to see the full report. https://scan.coverity.com/projects/python?tab=overview. (it has been ages since i've logged in, they appear to support Github logins now. yay.) As the parser.c code is new for 3.9, I'm marking this as deferred blocker. We should pay closer attention to the reports and update the parser generator code to generate code that passes analysis cleanly before we exit the beta phase. ---------- assignee: gvanrossum messages: 371956 nosy: gregory.p.smith, gvanrossum, lys.nikolaou, p-ganssle priority: deferred blocker severity: normal stage: needs patch status: open title: Large number of Coverity reports for parser.c versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 20 17:36:11 2020 From: report at bugs.python.org (Steve Stagg) Date: Sat, 20 Jun 2020 21:36:11 +0000 Subject: [New-bugs-announce] [issue41060] `with a as b` segfault in new peg parser Message-ID: <1592688971.14.0.300854552931.issue41060@roundup.psfhosted.org> New submission from Steve Stagg : Hi Fuzzing found the following: $ ./python/bin/python3 Python 3.10.0a0 (heads/master:eb0d5c38de, Jun 20 2020, 21:35:36) [Clang 10.0.0 ] on linux Type "help", "copyright", "credits" or "license" for more information. >>> with a as b fish: ?./python/bin/python3? 
terminated by signal SIGSEGV (Address boundary error) with stacktrace: * thread #1, name = 'run', stop reason = signal SIGSEGV: invalid address (fault address: 0x20) * frame #0: 0x0000555555a08feb run`with_item_rule at parser.c:15382:20 frame #1: 0x0000555555a08e96 run`with_item_rule(p=0x00007ffff78b9e40) at parser.c:4330 frame #2: 0x00005555559d22e9 run`compound_stmt_rule at parser.c:17930:21 frame #3: 0x00005555559d227c run`compound_stmt_rule at parser.c:4139 frame #4: 0x00005555559d1a64 run`compound_stmt_rule(p=) at parser.c:1931 frame #5: 0x00005555559d016c run`statements_rule at parser.c:1230:18 frame #6: 0x00005555559d00fb run`statements_rule at parser.c:16156 frame #7: 0x00005555559cff4d run`statements_rule(p=) at parser.c:1189 frame #8: 0x00005555559cb2bc run`_PyPegen_parse at parser.c:722:18 frame #9: 0x00005555559cb28d run`_PyPegen_parse(p=0x00007ffff78b9e40) at parser.c:24688 frame #10: 0x00005555559c5349 run`_PyPegen_run_parser(p=0x00007ffff78b9e40) at pegen.c:1083:17 frame #11: 0x00005555559c6458 run`_PyPegen_run_parser_from_string(str=, start_rule=, filename_ob=0x00007ffff788db30, flags=, arena=) at pegen.c:1201:14 frame #12: 0x00005555555eea84 run`PyPegen_ASTFromStringObject(str="with'lZ'', globals=0x00007ffff788d940, locals=0x00007ffff788d940, flags=0x0000000000000000) at pythonrun.c:1029:11 frame #14: 0x00005555555a8202 run`PyRun_SimpleStringFlags(command="with'lZ'', argv=) at run.c:19:3 frame #16: 0x00007ffff7c35002 libc.so.6`__libc_start_main + 242 frame #17: 0x000055555559568e run`_start + 46 This appears to be similar to: https://bugs.python.org/issue40903, where GET_INVALID_TARGET is being called with an Attribute Node, which returns None, and this result is passed, unchecked into `PyPegen_get_expr_name` ---------- components: Interpreter Core messages: 371964 nosy: stestagg priority: normal severity: normal status: open title: `with a as b` segfault in new peg parser type: crash versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 04:52:57 2020 From: report at bugs.python.org (Christian Heimes) Date: Sun, 21 Jun 2020 08:52:57 +0000 Subject: [New-bugs-announce] [issue41061] Incorrect expressions / assert with side effect in hashtable Message-ID: <1592729577.35.0.278051056824.issue41061@roundup.psfhosted.org> New submission from Christian Heimes : Coverity has found four issues in hashtable implementation and tests Py_hashtable_get_entry_generic(_Py_hashtable_t *ht, const void *key) CID 1464680 (#1 of 1): Evaluation order violation (EVALUATION_ORDER)write_write_typo: In entry = entry = (_Py_hashtable_entry_t *)((_Py_slist_t *)&ht->buckets[index])->head, entry is written twice with the same value. _Py_hashtable_get_entry_ptr(_Py_hashtable_t *ht, const void *key) CID 1464602 (#1 of 1): Evaluation order violation (EVALUATION_ORDER)write_write_typo: In entry = entry = (_Py_hashtable_entry_t *)((_Py_slist_t *)&ht->buckets[index])->head, entry is written twice with the same value. test_hashtable(PyObject *self, PyObject *Py_UNUSED(args)) CID 1464668 (#1 of 1): Side effect in assertion (ASSERT_SIDE_EFFECT)assignment_where_comparison_intended: Assignment entry->key = (void *)(uintptr_t)key has a side effect. This code will work differently in a non-debug build. CID 1464664 (#1 of 1): Side effect in assertion (ASSERT_SIDE_EFFECT)assignment_where_comparison_intended: Assignment entry->value = (void *)(uintptr_t)(1 + ((int)key - 97)) has a side effect. 
This code will work differently in a non-debug build. ---------- assignee: christian.heimes components: Interpreter Core messages: 371987 nosy: christian.heimes priority: normal severity: normal status: open title: Incorrect expressions / assert with side effect in hashtable type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 04:53:48 2020 From: report at bugs.python.org (Andrei Pashkin) Date: Sun, 21 Jun 2020 08:53:48 +0000 Subject: [New-bugs-announce] [issue41062] Advanced Debugger Support C-API is useless without HEAD_LOCK()/HEAD_UNLOCK() Message-ID: <1592729628.06.0.446212297946.issue41062@roundup.psfhosted.org> New submission from Andrei Pashkin : To me it seems like the Advanced Debugger Support C-API doesn't make sense without HEAD_LOCK() and HEAD_UNLOCK(), which are private right now. When researching how the C-API works I've found this comment in the source code: https://github.com/python/cpython/blob/e838a9324c1719bb917ca81ede8d766b5cb551f4/Python/pystate.c#L1176 It says that the lists of interpreter-state and thread-state objects (that the Adv. Debugger Support API operates on) could be mutated even when the GIL is held, so there is a need to acquire the head mutex when accessing them. But there is no way to acquire the head mutex using the public C-API. Am I right? If yes - it seems like HEAD_(UN)LOCK() should be made public. ---------- components: C API messages: 371988 nosy: pashkin priority: normal severity: normal status: open title: Advanced Debugger Support C-API is useless without HEAD_LOCK()/HEAD_UNLOCK() type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 07:04:15 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 21 Jun 2020 11:04:15 +0000 Subject: [New-bugs-announce] [issue41063] Avoid using the locale encoding for open() in tests Message-ID: <1592737455.62.0.0102457706835.issue41063@roundup.psfhosted.org> New submission from Serhiy Storchaka : Many tests use open() with the locale encoding for writing or reading files. They pass because the written and read data are ASCII, and the file paths are ASCII. But they do not test the case of non-ASCII data and file paths. In general, most of these uses of the locale encoding should be changed (a short sketch of a few of these patterns follows the list below). 1. In some cases it is enough to open the file in binary mode. For example, when creating an empty file, or when using just the fileno of the opened file. 2. In some cases the file should be opened in binary mode. For example, when compiling the content of the file or parsing it as XML, because the correct encoding is determined by the content (BOM, encoding cookie, XML declaration). 3. tokenize.open() or tokenize.detect_encoding() should be used when we read a Python source as text. 4. os.fsdecode() and os.fsencode() may be used if the test file contains file paths and is read by bash or another external program. 5. encoding='ascii' should be specified if the test data is always ASCII-only. 6. encoding='utf-8' should be specified if the test data can contain arbitrary Unicode characters. 7. An encoding different from 'ascii', 'latin1' and 'utf-8' should be used if arbitrary encodings should be supported. 8. Implicit locale encoding should only be used if the purpose of the test is to test the implicit encoding.
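A minimal sketch of a few of the patterns listed above (file names and data are illustrative):

```python
import os
import tokenize

# 3. Reading Python source: honour the coding cookie / BOM.
with tokenize.open('some_module.py') as f:
    source = f.read()

# 6. Arbitrary Unicode test data: spell out UTF-8 explicitly.
with open('data.txt', 'w', encoding='utf-8') as f:
    f.write('non-ASCII data: \u0439\u0446\u0443\u043a\u0435\u043d')

# 4. A file of file paths consumed by an external program: use the
# filesystem encoding rather than the locale text encoding.
with open('paths.txt', 'wb') as f:
    f.write(os.fsencode('/tmp/caf\u00e9') + b'\n')
```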
It is preferable to add non-ASCII characters to the test data. I am working on a large patch for this (>50% is ready). Some parts of it may be extracted as separate PRs, and the rest will be exposed as a large PR. If changes are required not only in tests, separate issues will be opened. ---------- components: Tests messages: 371994 nosy: inada.naoki, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Avoid using the locale encoding for open() in tests type: enhancement versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 08:32:38 2020 From: report at bugs.python.org (JNCressey) Date: Sun, 21 Jun 2020 12:32:38 +0000 Subject: [New-bugs-announce] [issue41064] IDLE highlights wrong place when code has syntax error of ** unpacking dict in f-string Message-ID: <1592742758.57.0.680300455155.issue41064@roundup.psfhosted.org> New submission from JNCressey : Compare f"{*my_tuple}" with f"{**my_dict}". Both are syntax errors since you can't use unpacking here, but the bug is as follows: - For the tuple, IDLE highlights the asterisk and has the helpful message 'SyntaxError: can't use starred expression here', - But for the dictionary, the first few characters of your code are highlighted, regardless of where the syntax error is located, and the message only says 'SyntaxError: invalid syntax'. Bug occurs in both 3.8.3 and 3.7.7, I haven't tested it in 3.6 nor in-development versions. ---------- assignee: terry.reedy components: IDLE messages: 371995 nosy: JNCressey, terry.reedy priority: normal severity: normal status: open title: IDLE highlights wrong place when code has syntax error of ** unpacking dict in f-string type: compile error versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 10:05:37 2020 From: report at bugs.python.org (Ram Rachum) Date: Sun, 21 Jun 2020 14:05:37 +0000 Subject: [New-bugs-announce] [issue41065] Use zip-strict in zoneinfo Message-ID: <1592748337.97.0.906987974024.issue41065@roundup.psfhosted.org> New submission from Ram Rachum : Writing the PR now. ---------- components: Library (Lib) messages: 371997 nosy: brandtbucher, cool-RR priority: normal severity: normal status: open title: Use zip-strict in zoneinfo type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 12:22:19 2020 From: report at bugs.python.org (=?utf-8?b?U3Jpbml2YXMgIFJlZGR5IFRoYXRpcGFydGh5KOCwtuCxjeCwsOCxgOCwqA==?= =?utf-8?b?4LC/4LC14LC+4LC44LGNIOCwsOCxhuCwoeCxjeCwoeCwvyDgsKTgsL7gsJ8=?= =?utf-8?b?4LC/4LCq4LCw4LGN4LCk4LC/KQ==?=) Date: Sun, 21 Jun 2020 16:22:19 +0000 Subject: [New-bugs-announce] [issue41066] Update documentation for Pathlib Message-ID: <1592756539.0.0.672795473082.issue41066@roundup.psfhosted.org> New submission from Srinivas Reddy Thatiparthy(శ్రీనివాస్ రెడ్డి తాటిపర్తి) : The correspondence section between os, os.path vs pathlib is missing two entries. https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module 1. os.link vs path.link_to 2. os.listdir vs path.iterdir I think adding them would be a good addition.
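A minimal sketch of the two correspondences in question (file names are illustrative):

```python
import os
from pathlib import Path

Path('source.txt').write_text('demo')

# os.listdir() vs Path.iterdir()
names = os.listdir('.')               # list of plain str names
entries = list(Path('.').iterdir())   # iterdir() lazily yields Path objects

# os.link() vs Path.link_to(): Path.link_to(target) creates *target* as a
# hard link to the path it is called on (note the reversed argument order
# compared to os.link(src, dst)).
os.link('source.txt', 'hardlink1.txt')
Path('source.txt').link_to('hardlink2.txt')
```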
---------- assignee: docs at python components: Documentation messages: 372001 nosy: docs at python, thatiparthy priority: normal severity: normal status: open title: Update documentation for Pathlib type: enhancement versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 13:35:06 2020 From: report at bugs.python.org (David CARLIER) Date: Sun, 21 Jun 2020 17:35:06 +0000 Subject: [New-bugs-announce] [issue41067] Haiku build fix - posix module Message-ID: <1592760906.5.0.377880149054.issue41067@roundup.psfhosted.org> Change by David CARLIER : ---------- components: Extension Modules nosy: devnexen priority: normal pull_requests: 20203 severity: normal status: open title: Haiku build fix - posix module type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 16:08:04 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 21 Jun 2020 20:08:04 +0000 Subject: [New-bugs-announce] [issue41068] zipfile: read after write fails for non-ascii files Message-ID: <1592770084.24.0.0586983888963.issue41068@roundup.psfhosted.org> New submission from Serhiy Storchaka : When you open a ZIP archive, write a file with a non-ASCII name in it, and, without closing the archive, read that file back, it fails: >>> import zipfile >>> with zipfile.ZipFile('test.zip', 'w') as zf: ... zf.writestr('йцукен', '') ... zf.read('йцукен') ... Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/usr/lib/python3.8/zipfile.py", line 1440, in read with self.open(name, "r", pwd) as fp: File "/usr/lib/python3.8/zipfile.py", line 1521, in open raise BadZipFile( zipfile.BadZipFile: File name in directory 'йцукен' and header b'\xd0\xb9\xd1\x86\xd1\x83\xd0\xba\xd0\xb5\xd0\xbd' differ. ---------- components: Library (Lib) messages: 372018 nosy: serhiy.storchaka priority: normal severity: normal status: open title: zipfile: read after write fails for non-ascii files type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 21 17:18:39 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 21 Jun 2020 21:18:39 +0000 Subject: [New-bugs-announce] [issue41069] Use non-ascii file names in tests by default Message-ID: <1592774319.03.0.379358214671.issue41069@roundup.psfhosted.org> New submission from Serhiy Storchaka : The following PR increases coverage of tests by making some paths non-ascii: 1. test.support.TESTFN now contains non-ascii characters if possible. 2. The temporary directory used as current working directory in tests also contains non-ascii characters if possible. This helped to catch and fix one bug in zipfile and two bugs in handling PYTHONSTARTUP.
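The general shape of such a TESTFN could be something like the sketch below (the names and the fallback logic are illustrative, not the actual test.support code):

```python
import os
import sys

TESTFN_ASCII = "@test_{}_tmp".format(os.getpid())
TESTFN_NONASCII = TESTFN_ASCII + "-\u00e0\u00e9\u043f\u4e2d"

def pick_testfn():
    # Fall back to the ASCII name if the filesystem encoding cannot
    # represent the non-ASCII one (e.g. an ASCII-only locale).
    try:
        TESTFN_NONASCII.encode(sys.getfilesystemencoding())
    except UnicodeEncodeError:
        return TESTFN_ASCII
    return TESTFN_NONASCII

TESTFN = pick_testfn()
```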
---------- components: Tests messages: 372022 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Use non-ascii file names in tests by default type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 02:00:23 2020 From: report at bugs.python.org (Nikita Nemkin) Date: Mon, 22 Jun 2020 06:00:23 +0000 Subject: [New-bugs-announce] [issue41070] Simplify pyshellext.dll build Message-ID: <1592805623.25.0.00409293845235.issue41070@roundup.psfhosted.org> New submission from Nikita Nemkin : pyshellext uses MIDL to generate a header, whose only purpose is to define a class GUID. MIDL generation step can be replaced with a simple #define. This doesn't really matter for VS, but other build systems (CMake, probably Meson too) will benefit. pyshellext has separate .def files for debug and release builds. One .def file is sufficient, because LIBRARY statement is optional. Using __declspec(dllexport) isn't an option, because Windows headers misdeclare DllCanUnloadNow and DllGetClassObject... ---------- components: Windows messages: 372032 nosy: nnemkin, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Simplify pyshellext.dll build type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 03:01:18 2020 From: report at bugs.python.org (mike stern) Date: Mon, 22 Jun 2020 07:01:18 +0000 Subject: [New-bugs-announce] [issue41071] from an int to a float , why Message-ID: <1592809278.7.0.712153884721.issue41071@roundup.psfhosted.org> New submission from mike stern : please I would like to know why python changes an integer result in a division to a float even in the result is even like print(2 / 2) gives 2.0 instead of 2 or a = 2 / 2 print(a) ---------- components: Interpreter Core messages: 372033 nosy: rskiredj at hotmail.com priority: normal severity: normal status: open title: from an int to a float , why type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 03:32:29 2020 From: report at bugs.python.org (xcl) Date: Mon, 22 Jun 2020 07:32:29 +0000 Subject: [New-bugs-announce] [issue41072] Python Message-ID: <1592811149.94.0.354270357022.issue41072@roundup.psfhosted.org> Change by xcl <1318683902 at qq.com>: ---------- nosy: xcl priority: normal severity: normal status: open title: Python _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 04:38:14 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 22 Jun 2020 08:38:14 +0000 Subject: [New-bugs-announce] [issue41073] [C API] PyType_GetSlot() should accept static types Message-ID: <1592815094.33.0.264745776024.issue41073@roundup.psfhosted.org> New submission from STINNER Victor : To fix bpo-40170, I would like to modify Py_TRASHCAN_BEGIN() macro to use PyType_GetSlot() to get the deallocator function, rather than accessing directly the PyTypeObject.tp_dealloc member. The problem is that currently PyType_GetSlot() only works on heap allocated types. Would it be possible to add support for statically allocated types to PyType_GetSlot()? 
Py_TRASHCAN_BEGIN() is currently defined as: #define Py_TRASHCAN_BEGIN(op, dealloc) \ Py_TRASHCAN_BEGIN_CONDITION(op, \ Py_TYPE(op)->tp_dealloc == (destructor)(dealloc)) ---------- components: C API messages: 372049 nosy: vstinner priority: normal severity: normal status: open title: [C API] PyType_GetSlot() should accept static types versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 05:49:24 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 22 Jun 2020 09:49:24 +0000 Subject: [New-bugs-announce] [issue41074] msilib does not work correctly with non-ASCII names Message-ID: <1592819364.77.0.204133335843.issue41074@roundup.psfhosted.org> New submission from Serhiy Storchaka : It encodes input string arguments with UTF-8 and passes the encoded strings to an 8-bit API which expects them to be encoded using the locale encoding. It may pass tests, create and read files, but these files will just have wrong names. ---------- components: Extension Modules, Windows messages: 372083 nosy: paul.moore, serhiy.storchaka, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: msilib does not work correctly with non-ASCII names type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 07:16:27 2020 From: report at bugs.python.org (wyz23x2) Date: Mon, 22 Jun 2020 11:16:27 +0000 Subject: [New-bugs-announce] [issue41075] Add support of navigating through prev. commands in IDLE Message-ID: <1592824587.6.0.226335506835.issue41075@roundup.psfhosted.org> New submission from wyz23x2 : Terminals like CMD have support of navigating through commands with ↑/↓. While directly implementing the arrows is not good in IDLE (the use for jumping to the prev. line in the GUI is needed), there should be a good way. Some ways: 1. Alt+↑/↓. The current behavior is exactly like ↑/↓. 2. A "Previous input" option in the right-click menu. 3. A "Navigate" option in the right-click menu. A GUI like this will pop up:
+--------------------------+
|         Navigate         |
|                          |
|    The [4]th command     |
|  O [1] command before    |
|         _______          |
|         |Paste|          |
|         -------          |
+--------------------------+
It would be better if 2&3 are together. ---------- assignee: terry.reedy components: IDLE messages: 372085 nosy: terry.reedy, wyz23x2 priority: normal severity: normal status: open title: Add support of navigating through prev. commands in IDLE type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 08:54:04 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Mon, 22 Jun 2020 12:54:04 +0000 Subject: [New-bugs-announce] [issue41076] Pre-feed the parser with the f-string location Message-ID: <1592830444.28.0.715795634223.issue41076@roundup.psfhosted.org> New submission from Lysandros Nikolaou : Inspired by bpo-41064, I sat down to try and find problems with f-string locations in the new parser. I was able to come up with a way to compute the locations of the f-string expressions that I think is more consistent and allows us to delete all the code that was fixing the expression locations after the actual parsing, which accounted for about 1/6 of string_parser.c.
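As a quick way to see the node locations being discussed, the ast module exposes them directly; this small illustration (not part of the patch) uses the same example as below:

```python
import ast

tree = ast.parse("a = 0; f'irrelevant {a}'")
fstring = tree.body[1].value          # the JoinedStr node for the f-string
formatted = fstring.values[1]         # the FormattedValue for {a}
name = formatted.value                # the Name node for `a`
print(name.lineno, name.col_offset)   # the location this issue changes
```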
A high-level explanation of the change: Before this change we were pre-feeding the parser with the location of the f-string itself. The parser was then parsing the expression and was computing the locations of all the nodes based on the offset of the f-string. After the parsing was done, we were identifying the offset and the lineno of the expression *within* the fstring and were fixing the node locations accordingly. For example, for an f-string like `a = 0; f'irrelevant {a}'` we were doing the following: - Pre-feed the parser with lineno=0 and col_offset=7 (the offset of the f-string itself in the current line). - Parse the expression (adding 7 to the col_offset of each parsed node, lineno remains the same since it's 0). - Fix the node locations by shifting the Name node by 14, which is the number of characters in the f-string (counting the `f` and the opening quote) before the start of the expression. With this change we now pre-feed the parser with the exact lineno and offset of the expression itself, not the f-string. This allows us to completely skip the third step of shifting the node locations. ---------- assignee: lys.nikolaou components: Interpreter Core messages: 372086 nosy: eric.smith, gvanrossum, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Pre-feed the parser with the f-string location type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 09:21:59 2020 From: report at bugs.python.org (=?utf-8?b?U3Jpbml2YXMgIFJlZGR5IFRoYXRpcGFydGh5KOCwtuCxjeCwsOCxgOCwqA==?= =?utf-8?b?4LC/4LC14LC+4LC44LGNIOCwsOCxhuCwoeCxjeCwoeCwvyDgsKTgsL7gsJ8=?= =?utf-8?b?4LC/4LCq4LCw4LGN4LCk4LC/KQ==?=) Date: Mon, 22 Jun 2020 13:21:59 +0000 Subject: [New-bugs-announce] [issue41077] Make Cookiejar a bit more pythonic Message-ID: <1592832119.29.0.934428532499.issue41077@roundup.psfhosted.org> New submission from Srinivas Reddy Thatiparthy(?????????? ?????? ?????????) : Title says it all. ---------- messages: 372087 nosy: thatiparthy priority: normal severity: normal status: open title: Make Cookiejar a bit more pythonic _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 10:50:13 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 22 Jun 2020 14:50:13 +0000 Subject: [New-bugs-announce] [issue41078] [C API] Convert PyTuple_GET_ITEM() macro to a static inline function Message-ID: <1592837413.67.0.409413280161.issue41078@roundup.psfhosted.org> New submission from STINNER Victor : PyTuple_GET_ITEM() can be abused to access directly the PyTupleObject.ob_item member: PyObject **items = &PyTuple_GET_ITEM(0); Giving a direct access to an array or PyObject* (PyObject**) is causing issues with other Python implementations, like PyPy, which don't use PyObject internally. I propose to convert the PyTuple_GET_ITEM() and PyList_GET_ITEM() macros to static inline functions to disallow that. 
---------- components: C API messages: 372091 nosy: vstinner priority: normal severity: normal status: open title: [C API] Convert PyTuple_GET_ITEM() macro to a static inline function versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 12:19:31 2020 From: report at bugs.python.org (Tomasz Pytel) Date: Mon, 22 Jun 2020 16:19:31 +0000 Subject: [New-bugs-announce] [issue41079] _PyAsyncGenWrappedValue_Type is never Readied Message-ID: <1592842771.21.0.286644744652.issue41079@roundup.psfhosted.org> New submission from Tomasz Pytel : A call is never made to PyType_Ready(&_PyAsyncGenWrappedValue_Type) on initialization unlike for all other Python type objects I can see. Does not seem to have any negative effects at the moment except to mess up my Python type instrumentation. May turn into a bug in the future if all types are assumed readied, is this intended behavior? ---------- components: Interpreter Core messages: 372099 nosy: Tomasz Pytel priority: normal severity: normal status: open title: _PyAsyncGenWrappedValue_Type is never Readied type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 13:28:11 2020 From: report at bugs.python.org (Ryan Westlund) Date: Mon, 22 Jun 2020 17:28:11 +0000 Subject: [New-bugs-announce] [issue41080] re.sub treats * incorrectly? Message-ID: <1592846891.21.0.0506467079294.issue41080@roundup.psfhosted.org> New submission from Ryan Westlund : ``` >>> re.sub('a*', '-', 'a') '--' >>> re.sub('a*', '-', 'aa') '--' >>> re.sub('a*', '-', 'aaa') '--' ``` Shouldn't it be returning one dash, not two, since the greedy quantifier will match all the a's? I understand why substituting on 'b' returns '-a-', but shouldn't this constitute only one match? In Python 2.7, it behaves as I expect: ``` >>> re.sub('a*', '-', 'a') '-' >>> re.sub('a*', '-', 'aa') '-' >>> re.sub('a*', '-', 'aaa') '-' ``` The original case that led me to this was trying to normalize a path to end in one slash. I used `re.sub('/*$', '/', path)`, but a nonzero number of slashes came out as two. ---------- components: Regular Expressions messages: 372104 nosy: Yujiri, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: re.sub treats * incorrectly? type: behavior versions: Python 3.10, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 15:39:42 2020 From: report at bugs.python.org (Jakub Stasiak) Date: Mon, 22 Jun 2020 19:39:42 +0000 Subject: [New-bugs-announce] [issue41081] Exclude __pycache__ directories from backups using CACHEDIR.TAG Message-ID: <1592854782.97.0.179658439261.issue41081@roundup.psfhosted.org> New submission from Jakub Stasiak : It'd be nice of __pycache__ directories didn't pollute backups. Granted, one can add __pycache__ directory to their backup-tool-of-choice exclusion list, but those lists are ever growing and maybe it'd be good to help the tools and the users. 
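One concrete shape the idea could take is sketched below; the tag-file signature line comes from the Cache Directory Tagging Specification described next, and whether CPython itself should write the tag is exactly what this issue raises:

```python
import os

# Sketch only: drop a CACHEDIR.TAG file into a __pycache__ directory so that
# spec-aware backup tools skip it.
CACHEDIR_TAG = "Signature: 8a477f597d28d172789f06886806bc55\n"

def tag_pycache(pycache_dir):
    tag_path = os.path.join(pycache_dir, "CACHEDIR.TAG")
    if not os.path.exists(tag_path):
        with open(tag_path, "w", encoding="ascii") as f:
            f.write(CACHEDIR_TAG)
```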
There's a Cache Directory Tagging Specification[1] which some backup tools like Borg, restic, GNU Tar and attic use out of the box (well, with a switch) and other tools (like rsync, Bacula, rdiff-backup and I imagine others) can be made to use it with a generic exclude-directories-with-this-file-present option (partially, just the existence of the tag file is used, not its content). I wasn't sure what to select in Components, so I went with Library. [1] https://bford.info/cachedir/ ---------- components: Library (Lib) messages: 372109 nosy: jstasiak priority: normal severity: normal status: open title: Exclude __pycache__ directories from backups using CACHEDIR.TAG type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 17:13:41 2020 From: report at bugs.python.org (Tim Hoffmann) Date: Mon, 22 Jun 2020 21:13:41 +0000 Subject: [New-bugs-announce] [issue41082] Error handling and documentation of Path.home() Message-ID: <1592860421.93.0.490449762565.issue41082@roundup.psfhosted.org> New submission from Tim Hoffmann : Path.home() may fail (https://github.com/matplotlib/matplotlib/issues/17707#issuecomment-647180252). 1. I think the raised KeyError is too low-level, and it should be something else; what exactly t.b.d. 2. The documentation (https://docs.python.org/3/library/pathlib.html#pathlib.Path.home) should specify what happens in case of failure. 3. The documentation links to https://docs.python.org/3/library/os.path.html#os.path.expanduser, but that's not correct. _PosixFlavor.gethomedir() implements its own user lookup, which is slightly different from os.path.expanduser(). ---------- components: Library (Lib) messages: 372116 nosy: timhoffm priority: normal severity: normal status: open title: Error handling and documentation of Path.home() type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 18:42:45 2020 From: report at bugs.python.org (Michael Shields) Date: Mon, 22 Jun 2020 22:42:45 +0000 Subject: [New-bugs-announce] [issue41083] plistlib can't decode date from year 0 Message-ID: <1592865765.42.0.949299240899.issue41083@roundup.psfhosted.org> New submission from Michael Shields : On macOS 10.5.5: /tmp $ defaults export com.apple.security.KCN - absentCircleWithNoReason applicationDate 0000-12-30T00:00:00Z lastCircleStatus -1 lastWritten 2019-10-15T17:23:33Z pendingApplicationReminder 4001-01-01T00:00:00Z pendingApplicationReminderInterval 86400 /tmp $ cat plist_date_reduction.py #!/usr/bin/env python3 import plistlib import subprocess if __name__ == "__main__": plist = subprocess.check_output(["defaults", "export", "com.apple.security.KCN", "-"]) print(plistlib.loads(plist, fmt=plistlib.FMT_XML)) /tmp $ python3.8 plist_date_reduction.py Traceback (most recent call last): File "plist_date_reduction.py", line 8, in <module> print(plistlib.loads(plist, fmt=plistlib.FMT_XML)) File "/usr/local/Cellar/python at 3.8/3.8.3/Frameworks/Python.framework/Versions/3.8/lib/python3.8/plistlib.py", line 1000, in loads return load( File "/usr/local/Cellar/python at 3.8/3.8.3/Frameworks/Python.framework/Versions/3.8/lib/python3.8/plistlib.py", line 992, in load return p.parse(fp) File "/usr/local/Cellar/python at 3.8/3.8.3/Frameworks/Python.framework/Versions/3.8/lib/python3.8/plistlib.py", line 288, in parse self.parser.ParseFile(fileobj) File "/private/tmp/python at
3.8-20200527-50093-16hak5w/Python-3.8.3/Modules/pyexpat.c", line 461, in EndElement File "/usr/local/Cellar/python at 3.8/3.8.3/Frameworks/Python.framework/Versions/3.8/lib/python3.8/plistlib.py", line 300, in handle_end_element handler() File "/usr/local/Cellar/python at 3.8/3.8.3/Frameworks/Python.framework/Versions/3.8/lib/python3.8/plistlib.py", line 376, in end_date self.add_object(_date_from_string(self.get_data())) File "/usr/local/Cellar/python at 3.8/3.8.3/Frameworks/Python.framework/Versions/3.8/lib/python3.8/plistlib.py", line 254, in _date_from_string return datetime.datetime(*lst) ValueError: year 0 is out of range ---------- components: Library (Lib) messages: 372131 nosy: shields-fn priority: normal severity: normal status: open title: plistlib can't decode date from year 0 type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 19:53:42 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Mon, 22 Jun 2020 23:53:42 +0000 Subject: [New-bugs-announce] [issue41084] Signify that a SyntaxError comes from an fstring in the error message Message-ID: <1592870022.32.0.609883413471.issue41084@roundup.psfhosted.org> New submission from Lysandros Nikolaou : It's relatively easy to identify if a SyntaxError occurs when parsing an fstring expression or not. The idea is to slightly change the error message to start with "f-string: " when it does (in the same way in which SyntaxError messages are produced by the hand-written fstring parser). I have a working PR that produces the following: $ cat a.py f'{a $ b}' $ ./python.exe a.py File "/Users/lysnikolaou/Repositories/cpython/a.py", line 1 (a $ b) ^ SyntaxError: f-string: invalid syntax Thoughts? 
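The message prefix can be checked programmatically; a small illustration (the exact text printed depends on the interpreter version, and with the proposed change it would start with "f-string: "):

```python
try:
    compile("f'{a $ b}'", "<example>", "exec")
except SyntaxError as exc:
    print(exc.msg)
```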
---------- components: Interpreter Core messages: 372134 nosy: eric.smith, gvanrossum, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Signify that a SyntaxError comes from an fstring in the error message type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 20:44:44 2020 From: report at bugs.python.org (William Pickard) Date: Tue, 23 Jun 2020 00:44:44 +0000 Subject: [New-bugs-announce] [issue41085] Array regression test fails Message-ID: <1592873084.36.0.333815567836.issue41085@roundup.psfhosted.org> New submission from William Pickard : Here's the verbose stack trace of the failing test: ====================================================================== FAIL: test_index (test.test_array.LargeArrayTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "L:\GIT\cpython\lib\test\support\__init__.py", line 837, in wrapper return f(self, maxsize) File "L:\GIT\cpython\lib\test\test_array.py", line 1460, in test_index self.assertEqual(example.index(11), size+3) AssertionError: -2147483645 != 2147483651 ---------- components: Tests files: test_output.log messages: 372135 nosy: WildCard65 priority: normal severity: normal status: open title: Array regression test fails type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file49257/test_output.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 22 21:28:41 2020 From: report at bugs.python.org (Brian Faherty) Date: Tue, 23 Jun 2020 01:28:41 +0000 Subject: [New-bugs-announce] [issue41086] Exception for uninstantiated interpolation (configparser) Message-ID: <1592875721.54.0.0419083877118.issue41086@roundup.psfhosted.org> New submission from Brian Faherty : The ConfigParser in Lib has a parameter called `interpolation`, that expects an instance of a subclass of Interpolation. However, when ConfigParser is given an argument of an uninstantiated subclass of Interpolation, the __init__ function of ConfigParser accepts it and continues on. This results in a later receiving an error message along the lines of `TypeError: before_set() missing 1 required positional argument: 'value'` when functions are later called on the ConfigParser instance. This delay between the feedback and the original mistake has led to a few bugs open on the issue tracker (https://bugs.python.org/issue26831 and https://bugs.python.org/issue26469. Both of which were closed after a quick and simple explanation, which can be easily implemented in the library itself. I've created a PR for this work and will attach it shortly. Please let me know if there is a better name for the exception other than `InterpolationIsNotInstantiatedError`. It seems long, but also in line with the other Errors already in configparser. 
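For illustration, the mistake that the proposed exception would catch early, next to the correct usage (the error text is the one quoted above):

```python
import configparser

# Wrong: passing the Interpolation *class* instead of an instance;
# today this is accepted silently by ConfigParser.__init__.
parser = configparser.ConfigParser(
    interpolation=configparser.BasicInterpolation)
try:
    parser["section"] = {"key": "value"}   # fails only at this later point
except TypeError as exc:
    print(exc)  # before_set() missing 1 required positional argument: 'value'

# Correct: pass an instance.
parser = configparser.ConfigParser(
    interpolation=configparser.BasicInterpolation())
```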
---------- components: Library (Lib) messages: 372137 nosy: Brian Faherty priority: normal severity: normal status: open title: Exception for uninstantiated interpolation (configparser) type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 04:42:21 2020 From: report at bugs.python.org (Bryan) Date: Tue, 23 Jun 2020 08:42:21 +0000 Subject: [New-bugs-announce] [issue41087] Argparse int / float default Message-ID: <1592901741.64.0.424311777369.issue41087@roundup.psfhosted.org> New submission from Bryan : parser.add_argument('-e', '--Edge', type = int, default = 0.005, metavar = 'Edge') Runs fine. Script uses default of 0.005 even when int specified. But if user tries to change, not an int.... ---------- messages: 372143 nosy: Bryan priority: normal severity: normal status: open title: Argparse int / float default versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 06:44:57 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Tue, 23 Jun 2020 10:44:57 +0000 Subject: [New-bugs-announce] [issue41088] Extend the AST Validator to validate all identifiers Message-ID: <1592909097.69.0.0792029823665.issue41088@roundup.psfhosted.org> New submission from Batuhan Taskaya : These identifiers include; - 'name' of FunctionDef/ClassDef/AsyncFunctionDef/ExceptHandler - 'name' and 'asname' of import aliases within ImportFrom and Import nodes. Any of these cases will crash the interpreter (abort) when used with a constant (such as True). This is a follow-up issue on 40870 ---------- messages: 372154 nosy: BTaskaya priority: normal severity: normal status: open title: Extend the AST Validator to validate all identifiers _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 07:09:26 2020 From: report at bugs.python.org (Nikita Nemkin) Date: Tue, 23 Jun 2020 11:09:26 +0000 Subject: [New-bugs-announce] [issue41089] Filters and other issues in Visual Studio projects Message-ID: <1592910566.56.0.465564486619.issue41089@roundup.psfhosted.org> New submission from Nikita Nemkin : Visual Studio projects need a bit of grooming. * File filters don't reflect recent file movements and additions. * Solution file is missing _zoneinfo project entries, resulting in dirty repo on every save. * bdist_wininst project hasn't been updated with the new zlib location. ---------- components: Windows messages: 372158 nosy: nnemkin, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Filters and other issues in Visual Studio projects type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 07:53:05 2020 From: report at bugs.python.org (Ronald Oussoren) Date: Tue, 23 Jun 2020 11:53:05 +0000 Subject: [New-bugs-announce] [issue41090] Support for "Apple Silicon" Message-ID: <1592913185.49.0.15166324196.issue41090@roundup.psfhosted.org> New submission from Ronald Oussoren : The attached patch implements "universal2" as an option for "--with-univeral-archs" to enable building "Universal 2" binaries on macOS (with x86_64 and arm64 code). 
This is an extension of the already existing support for other flavours of fat binaries on macOS. NOTE: I've attached a patch instead of creating a PR because I'm not logged on to GitHub in the macOS 11 VM where I created this patch. I'll obviously create a PR later. ---------- assignee: ronaldoussoren components: macOS files: universal2.patch keywords: patch messages: 372160 nosy: ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Support for "Apple Silicon" versions: Python 3.10, Python 3.9 Added file: https://bugs.python.org/file49258/universal2.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 11:26:57 2020 From: report at bugs.python.org (Manuel Jacob) Date: Tue, 23 Jun 2020 15:26:57 +0000 Subject: [New-bugs-announce] [issue41091] Remove recommendation in curses module documentation to initialize LC_ALL and encode strings Message-ID: <1592926017.53.0.619449435613.issue41091@roundup.psfhosted.org> New submission from Manuel Jacob : The documentation for the curses module (https://docs.python.org/3.9/library/curses.html) has the following note: > Since version 5.4, the ncurses library decides how to interpret non-ASCII data using the nl_langinfo function. That means that you have to call locale.setlocale() in the application and encode Unicode strings using one of the system?s available encodings. This example uses the system?s default encoding: > > import locale > locale.setlocale(locale.LC_ALL, '') > code = locale.getpreferredencoding() > > Then use code as the encoding for str.encode() calls. The recommendation to call `locale.setlocale(locale.LC_ALL, '')` is problematic as it initializes all locale categories to the user settings, which might be unintended and is not necessary for curses to work correctly. Initializing LC_CTYPE is sufficient for nl_langinfo() to return the correct encoding. Current versions of Python (*) initialize LC_CTYPE at interpreter startup. Therefore calling locale.setlocale() should not be necessary at all. The curses module automatically encodes strings. Therefore the recommendation to manually encode strings is outdated. (*) It seems to be the case since 177d921c8c03d30daa32994362023f777624b10d. Why was is not previously done on Python 2 and on Python 3 on Windows? ---------- assignee: docs at python components: Documentation messages: 372178 nosy: docs at python, mjacob priority: normal severity: normal status: open title: Remove recommendation in curses module documentation to initialize LC_ALL and encode strings _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 12:44:46 2020 From: report at bugs.python.org (Stephen Finucane) Date: Tue, 23 Jun 2020 16:44:46 +0000 Subject: [New-bugs-announce] [issue41092] Report actual size from 'os.path.getsize' Message-ID: <1592930686.73.0.679149711266.issue41092@roundup.psfhosted.org> New submission from Stephen Finucane : The 'os.path.getsize' API returns the apparent size of the file at *path*, that is, the number of bytes the file reports itself as consuming. However, it's often useful to get the actual size of the file, or the size of file on disk. It would be helpful if one could get this same information from 'os.path.getsize'. 
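For comparison, on POSIX the allocated size is already obtainable from os.stat(); a hedged sketch of the distinction (st_blocks is POSIX-only and counted in 512-byte units, so this is not a portable implementation):

```python
import os

def sizes(path):
    st = os.stat(path)
    apparent = st.st_size            # what os.path.getsize() returns today
    allocated = st.st_blocks * 512   # space actually allocated on disk (POSIX)
    return apparent, allocated

# With a sparse file the difference is visible: apparent can be large
# while allocated stays small.
```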
---------- components: Library (Lib) messages: 372183 nosy: stephenfin priority: normal severity: normal status: open title: Report actual size from 'os.path.getsize' type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 15:01:13 2020 From: report at bugs.python.org (Tony) Date: Tue, 23 Jun 2020 19:01:13 +0000 Subject: [New-bugs-announce] [issue41093] BaseServer's server_forever() shutdown immediately when calling shutdown() Message-ID: <1592938873.08.0.742314253675.issue41093@roundup.psfhosted.org> New submission from Tony : Currently calling BaseServer's shutdown() function will not make serve_forever() return immediately from it's select(). I suggest adding a new function called server_shutdown() that will make serve_forever() shutdown immediately. Then in TCPServer(BaseServer) all we need to do is call self.socket.shutdown(socket.SHUT_RDWR) in server_shutdown()'s implementation. To test this I made a simple script: import threading import time from functools import partial from http.server import HTTPServer, SimpleHTTPRequestHandler def serve_http(server): server.serve_forever(poll_interval=2.5) def main(): with HTTPServer(('', 8000), SimpleHTTPRequestHandler) as server: t = threading.Thread(target=partial(serve_http, server)) t.start() time.sleep(3) start = time.time() print('shutdown') server.shutdown() print(f'time it took: {time.time() - start}') if __name__ == "__main__": main() ---------- components: Library (Lib) messages: 372194 nosy: tontinton priority: normal severity: normal status: open title: BaseServer's server_forever() shutdown immediately when calling shutdown() type: enhancement versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 16:25:16 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 23 Jun 2020 20:25:16 +0000 Subject: [New-bugs-announce] [issue41094] Audit does not work with non-ASCII data on non-UTF-8 locale Message-ID: <1592943916.33.0.703707686059.issue41094@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are issues with using PySys_Audit() with non-ASCII data on non-UTF-8 locale. One example is with PYTHONSTARTUP. In pymain_run_startup() in Modules/main.c the value of the PYTHONSTARTUP environment variable is passed to PySys_Audit() as UTF-8 encoded data. If it contains non-ASCII characters and the locale encoding is different from UTF-8, it fails. There are similar bugs in _Py_fopen() and _Py_fopen_obj(). 
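The events involved can be watched from Python with sys.addaudithook(); this only shows where the data flows -- the failure itself happens at the C level in PySys_Audit() as described above (the PYTHONSTARTUP case surfaces as a cpython.run_startup event at interpreter startup):

```python
import sys

def hook(event, args):
    # Print every audit event raised in this process.
    print(event, args, file=sys.stderr)

sys.addaudithook(hook)
sys.audit("example.event", "non-ASCII payload: \u00e9")  # custom event for demo
```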
---------- components: Interpreter Core messages: 372205 nosy: serhiy.storchaka, steve.dower, vstinner priority: normal severity: normal status: open title: Audit does not work with non-ASCII data on non-UTF-8 locale type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 18:26:29 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 23 Jun 2020 22:26:29 +0000 Subject: [New-bugs-announce] [issue41095] inspect.signature() doesn't parse __text_signature__ containing a newline character Message-ID: <1592951189.28.0.183993243703.issue41095@roundup.psfhosted.org> New submission from STINNER Victor : $ ./python Python 3.10.0a0 (heads/unicode_latin1:40855c7064, Jun 24 2020, 00:20:07) >>> import select >>> select.epoll.register.__text_signature__ '($self, /, fd,\n eventmask=select.EPOLLIN | select.EPOLLPRI | select.EPOLLOUT)' >>> import inspect >>> inspect.signature(select.epoll.register) <Signature (self, /, fd)> => eventmask parameter is gone! Either signature() must raise an exception, or it must handle a __text_signature__ containing a newline character. Issue spotted on bpo-31938 when fixing "./python -m pydoc select". By the way, as expected, pydoc shows: Help on method_descriptor in select.epoll: --- $ ./python -m pydoc select.epoll.register select.epoll.register = register(self, /, fd) Registers a new fd or raises an OSError if the fd is already registered. (...) --- ---------- components: Library (Lib) messages: 372213 nosy: serhiy.storchaka, vstinner, yselivanov priority: normal severity: normal status: open title: inspect.signature() doesn't parse __text_signature__ containing a newline character versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 23 22:04:35 2020 From: report at bugs.python.org (Kerrick Staley) Date: Wed, 24 Jun 2020 02:04:35 +0000 Subject: [New-bugs-announce] [issue41096] Need command to exit PDB interactive shell Message-ID: <1592964275.48.0.323547017959.issue41096@roundup.psfhosted.org> New submission from Kerrick Staley : In PDB, when you use "interact" to enter an interactive shell, the only way to exit that shell is to send an end-of-transmission (Ctrl+D) character. In some environments, such as Jupyter, this is awkward to do. Here is a StackOverflow post where a user encountered this issue: https://stackoverflow.com/questions/47522316/exit-pdb-interactive-mode-from-jupyter-notebook/62546186 I think that the user should be able to type quit() in order to exit the interactive Python shell and go back to the PDB shell, similar to a regular interactive Python session. I think you should also support exit() because the Python shell supports that one as well (quit() and exit() do the same thing, I think the alias exists to help discoverability for new users). I confirmed this issue on Python 3.6.9 and 3.8.3.
---------- components: Library (Lib) messages: 372226 nosy: Kerrick Staley priority: normal severity: normal status: open title: Need command to exit PDB interactive shell type: enhancement versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 03:29:18 2020 From: report at bugs.python.org (Armin Rigo) Date: Wed, 24 Jun 2020 07:29:18 +0000 Subject: [New-bugs-announce] [issue41097] confusing BufferError: Existing exports of data: object cannot be re-sized Message-ID: <1592983758.49.0.230327773422.issue41097@roundup.psfhosted.org> New submission from Armin Rigo : The behavior (tested in 3.6 and 3.9) of io.BytesIO().getbuffer() gives a unexpected exception message: >>> b = io.BytesIO() >>> b.write(b'abc') 3 >>> buf = b.getbuffer() >>> b.seek(0) 0 >>> b.write(b'?') # or anything up to 3 bytes BufferError: Existing exports of data: object cannot be re-sized The error message pretends that the problem is in resizing the BytesIO object, but the write() is not actually causing any resize. I am not sure if the bug is a wrong error message (and all writes are supposed to be forbidden) or a wrongly forbidden write() (after all, we can use the buffer itself to write into the same area of memory). ---------- components: Interpreter Core messages: 372237 nosy: arigo priority: normal severity: normal stage: test needed status: open title: confusing BufferError: Existing exports of data: object cannot be re-sized type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 04:42:38 2020 From: report at bugs.python.org (Inada Naoki) Date: Wed, 24 Jun 2020 08:42:38 +0000 Subject: [New-bugs-announce] [issue41098] Deprecating PyUnicodeEncodeError_Create Message-ID: <1592988158.78.0.202358099858.issue41098@roundup.psfhosted.org> New submission from Inada Naoki : PyUnicodeEncodeError_Create is using Py_UNICODE and is marked Py_DEPRECATED(3.3). But it is not deprecated in doc yet. There are no alternative API. In CPython code base, UnicodeEncodeError is created by `PyObject_CallFunction(PyExc_UnicodeEncodeError, ...)`. Can we just document it as deprecated since Python 3.3? Or should we add alternative API? ---------- components: C API messages: 372241 nosy: inada.naoki priority: normal severity: normal status: open title: Deprecating PyUnicodeEncodeError_Create versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 04:51:17 2020 From: report at bugs.python.org (Inada Naoki) Date: Wed, 24 Jun 2020 08:51:17 +0000 Subject: [New-bugs-announce] [issue41099] Deprecating PyUnicodeTranslateError_Create Message-ID: <1592988677.6.0.878608342851.issue41099@roundup.psfhosted.org> New submission from Inada Naoki : PyUnicodeTranslateError_Create marked as Py_DEPRECATED since it receives Py_UNICODE* as an argument. But it is not deprecated in the document. On the other hand, we have alternative private API which accepts Unicode object: _PyUnicodeTranslateError_Create. Should we make it public? Otherwise, we can recommend `PyObject_CallFunction(PyExc_UnicodeTranslateError, ...)` as an alternative. 
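At the Python level, the constructor that a PyObject_CallFunction(PyExc_UnicodeTranslateError, ...) call goes through takes (object, start, end, reason); a quick illustration:

```python
err = UnicodeTranslateError("ab\u20acd", 2, 3, "character not translatable")
print(err.object, err.start, err.end, err.reason)
```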
---------- components: C API messages: 372242 nosy: inada.naoki priority: normal severity: normal status: open title: Deprecating PyUnicodeTranslateError_Create versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 07:51:07 2020 From: report at bugs.python.org (Ronald Oussoren) Date: Wed, 24 Jun 2020 11:51:07 +0000 Subject: [New-bugs-announce] [issue41100] Build failure on macOS 11 (beta) Message-ID: <1592999467.1.0.372512113251.issue41100@roundup.psfhosted.org> New submission from Ronald Oussoren : macOS 11 is darwin 20.0.0. This confuses the configure script, resulting in defining _POSIX_C_SOURCE and friends. ---------- assignee: ronaldoussoren components: Build, macOS messages: 372245 nosy: ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Build failure on macOS 11 (beta) versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 08:16:28 2020 From: report at bugs.python.org (Ronald Oussoren) Date: Wed, 24 Jun 2020 12:16:28 +0000 Subject: [New-bugs-announce] [issue41101] Support "arm64" in Mac/Tools/pythonw Message-ID: <1593000988.46.0.0606602806921.issue41101@roundup.psfhosted.org> New submission from Ronald Oussoren : Apple introduced a new CPU architecture for macOS at WWDC, which is arm64. Mac/Tools/pythonw.c launches the real python interpreter in a framework build is and is careful to launch that using the same architecture as it is running (to make it possible to use "arch -x86_64 pythonw" to launch the x86_64 variant when available. The current code does not support ARM64. ---------- components: Build, macOS messages: 372246 nosy: ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Support "arm64" in Mac/Tools/pythonw versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 08:19:05 2020 From: report at bugs.python.org (Xiaolong Liu) Date: Wed, 24 Jun 2020 12:19:05 +0000 Subject: [New-bugs-announce] [issue41102] ZipFile.namelist() does not match the actual files in .zip file Message-ID: <1593001145.16.0.222635534542.issue41102@roundup.psfhosted.org> New submission from Xiaolong Liu : I used zipfile module to archive thousands of .geojson file to zip files and access those .geojson file by ZipFile.open() method. In my hundreds of runnings, one of them was abnormal. As the title says, the ZipFile.namelist() did not match all the files in .zip file. And I extracted it by extractall() method and it only got those files included in the namelist. On the other hand, I extracted it by my compress software (360zip). I got the other files unincluded in the namelist(). Only one file (2564.geojson) appeared with these two extract methods. ZipFile.extractall() method got 674 files from '2654.geojson' to '3989.geojson'. 360zip got 1399 files from '0000.geojson' to '2654.geojson'. 
The abnormal file is too big to upload to this page, so I uploaded it to Google Drive: https://drive.google.com/file/d/1UE2N2qwjn4m7uE6YF2A1FhdXYHP_7zQr/view?usp=sharing ---------- components: Library (Lib) messages: 372247 nosy: alanmcintyre, longavailable, serhiy.storchaka, twouters priority: normal severity: normal status: open title: ZipFile.namelist() does not match the actual files in .zip file type: performance versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 09:43:42 2020 From: report at bugs.python.org (Inada Naoki) Date: Wed, 24 Jun 2020 13:43:42 +0000 Subject: [New-bugs-announce] [issue41103] Removing old buffer support Message-ID: <1593006222.66.0.567274053686.issue41103@roundup.psfhosted.org> New submission from Inada Naoki : https://docs.python.org/3/c-api/objbuffer.html The old buffer protocol has been deprecated since Python 3.0. It was useful to make the transition from Python 2 easy. But it's time to remove it. ---------- components: C API messages: 372251 nosy: inada.naoki priority: normal severity: normal status: open title: Removing old buffer support versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 09:44:24 2020 From: report at bugs.python.org (skorpeo) Date: Wed, 24 Jun 2020 13:44:24 +0000 Subject: [New-bugs-announce] [issue41104] IMAPlib debug errors Message-ID: <1593006264.44.0.612791321942.issue41104@roundup.psfhosted.org> New submission from skorpeo : This line in imaplib.py inside the _dump_ur function: l = map(lambda x:'%s: "%s"' % (x[0], x[1][0] and '" "'.join(x[1]) or ''), l) fails because the untagged responses are bytestrings and it expects regular strings. ---------- components: Library (Lib) messages: 372252 nosy: skorpeo priority: normal severity: normal status: open title: IMAPlib debug errors type: behavior versions: Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 11:16:58 2020 From: report at bugs.python.org (Manjusaka) Date: Wed, 24 Jun 2020 15:16:58 +0000 Subject: [New-bugs-announce] [issue41105] Add some extra content check for some who has been deprecated by glibc Message-ID: <1593011818.28.0.182724458451.issue41105@roundup.psfhosted.org> New submission from Manjusaka : Hello everyone. When I try to compile the code from the master branch on my Manjaro, one of the Linux distributions based on Arch and on glibc-2.31 && gcc-10.1.0, the compiler shows me that the fcntl module failed to compile. Here's the message: /home/manjusaka/Documents/project/cpython/Modules/fcntlmodule.c:618:33: error: 'I_PUSH' undeclared (first use in this function) 618 | if (PyModule_AddIntMacro(m, I_PUSH)) return -1; | ^~~~~~ ./Include/modsupport.h:146:67: note: in definition of macro 'PyModule_AddIntMacro' 146 | #define PyModule_AddIntMacro(m, c) PyModule_AddIntConstant(m, #c, c) | ^ /home/manjusaka/Documents/project/cpython/Modules/fcntlmodule.c:618:33: note: each undeclared identifier is reported only once for each function it appears in 618 | if (PyModule_AddIntMacro(m, I_PUSH)) return -1; | ^~~~~~ ./Include/modsupport.h:146:67: note: in definition of macro 'PyModule_AddIntMacro'
146 | #define PyModule_AddIntMacro(m, c) PyModule_AddIntConstant(m, #c, c) | ^ /home/manjusaka/Documents/project/cpython/Modules/fcntlmodule.c:619:33: error: 'I_POP' undeclared (first use in this function) 619 | if (PyModule_AddIntMacro(m, I_POP)) return -1; | ^~~~~ ./Include/modsupport.h:146:67: note: in definition of macro 'PyModule_AddIntMacro' 146 | #define PyModule_AddIntMacro(m, c) PyModule_AddIntConstant(m, #c, c) | ^ /home/manjusaka/Documents/project/cpython/Modules/fcntlmodule.c:620:33: error: 'I_LOOK' undeclared (first use in this function); did you mean 'F_LOCK'? 620 | if (PyModule_AddIntMacro(m, I_LOOK)) return -1; | ^~~~~~ I have figured out the reason: stropts.h has been deprecated since glibc 2.30, but some Linux distributions keep an empty file in /usr/include, so the configure process recognizes stropts.h as existing and turns on the HAVE_STROPTS_H flag. So should we add an extra content check in the configure process to avoid the empty file problem? ---------- components: Build messages: 372254 nosy: Manjusaka priority: normal severity: normal status: open title: Add some extra content check for some who has been deprecated by glibc type: compile error versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 12:55:03 2020 From: report at bugs.python.org (Cezary Wagner) Date: Wed, 24 Jun 2020 16:55:03 +0000 Subject: [New-bugs-announce] [issue41106] os.scandir() Windows bug dir_entry.stat() not works on file during writing. Message-ID: <1593017703.53.0.919415125728.issue41106@roundup.psfhosted.org> New submission from Cezary Wagner : I have a problem with change detection of a log file during writing under Windows (normal fs and windows share). Probably a bad order of Windows API calls - no idea. A test program is attached. You can reproduce it. Try with os.scandir() without os.stat() and with os.stat(). The source code responsible for it is probably this -> I do not understand the CPython code -> https://github.com/python/cpython/blob/master/Modules/posixmodule.c. Here is the full description - many tests were done. # os.scandir() Windows bug dir_entry.stat() not works on file during writing. # Such files is for example application log. # No problem with os.stat() # Call of os.stat() before os.scandir() -> dir_entry.stat() is workaround. # Open file during writing other program "fixes" dir_entry.stat(). # Get properties on open file during writing "fixes" dir_entry.stat(). # Notice that I run os.scandir() separately so dir_entry.stat() is not cached. # Steps to reproduce lack of modification update: # 1. Close all explorers or other application using PATH (it has impact). # 2. Set PATH to test folder can be directory or windows share. # 3. Run program without DO_STAT (False). # # Alternative steps (external app force valid modification date): # 4. run 'touch' or 'echo' on file should "fix" problem. 'echo' will throw error not matter. # # Alternative scenario (os.stat() force valid modification date - very slow): # 3. Run program without DO_STAT (True). No problems. # # Error result: # Modification date from dir_entry.stat() is stalled (not changing after modification) # if os.stat() or other Windows application not read file. # # Excepted result: # Modification date from dir_entry.stat() is update from separate calls os.scandir() # or cached if it is same os.scandir() call. # # Notice that os.scandir() must be call before dir_entry.stat() to avoid caching as described in documentation. # And this is done but not work on files during writing..
# # Notice that os.scandir() must be call before dir_entry.stat() to avoid caching as described in documentation. # And this is done but not work on files during writing.. # # Ask question if you have since is very hard to find bug. ---------- components: Windows files: s03_dir_entry.py messages: 372264 nosy: Cezary.Wagner, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.scandir() Windows bug dir_entry.stat() not works on file during writing. type: crash versions: Python 3.8 Added file: https://bugs.python.org/file49259/s03_dir_entry.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 15:44:38 2020 From: report at bugs.python.org (Natsumi H.) Date: Wed, 24 Jun 2020 19:44:38 +0000 Subject: [New-bugs-announce] [issue41107] Running a generator in a map-like manner Message-ID: <1593027878.48.0.698199601313.issue41107@roundup.psfhosted.org> New submission from Natsumi H. : I suggest adding a function which behaves like map but without returning anything to iterate over a generator. This is useful in cases where you need to run a function on every element in a list without unnecessarily creating a generator object like map would. I think given the existence of the map function that this should be added to Python. ---------- components: Interpreter Core messages: 372275 nosy: natsuwumi priority: normal severity: normal status: open title: Running a generator in a map-like manner type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 17:08:47 2020 From: report at bugs.python.org (Enji Cooper) Date: Wed, 24 Jun 2020 21:08:47 +0000 Subject: [New-bugs-announce] [issue41108] IN module removed in python 3.x; socket doesn't fill in the gap with IP_PORTRANGE* Message-ID: <1593032927.89.0.779362395362.issue41108@roundup.psfhosted.org> New submission from Enji Cooper : The group that I work with uses the IN.py module to access constants available in netinet/in.h on BSD when adjusting socket port ranges. This compile-time module was never ported forward to 3.x and equivalent functionality doesn't exist in the `socket` module. This bug will address that issue. ---------- components: Extension Modules messages: 372282 nosy: ngie priority: normal pull_requests: 20290 severity: normal status: open title: IN module removed in python 3.x; socket doesn't fill in the gap with IP_PORTRANGE* type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Jun 24 22:51:23 2020 From: report at bugs.python.org (=?utf-8?q?=C3=89tienne_Pot?=) Date: Thu, 25 Jun 2020 02:51:23 +0000 Subject: [New-bugs-announce] [issue41109] subclasses of pathlib.PurePosixPath never call __init__ or __new__ Message-ID: <1593053483.35.0.664517914864.issue41109@roundup.psfhosted.org> New submission from ?tienne Pot : I have a subclass GithubPath of PurePosixPath. ``` class GithubPath(pathlib.PurePosixPath): def __new__(cls, *args, **kwargs): print('New') return super().__new__(cls, *args, **kwargs) def __init__(self, *args, **kwargs): print('Init') super().__init__() ``` Calling `child.parent` create a new GithubPath but without ever calling __new__ nor __init__. So my subclass is never notified it is created. 
``` p = GithubPath() # Print "New", "Init" p.parent # Create a new GithubPath but bypass the constructors ``` The reason seems to be that parent calls _from_parts, which creates a new object through `object.__new__(cls)`: https://github.com/python/cpython/blob/cf18c9e9d4d44f6671a3fe6011bb53d8ee9bd92b/Lib/pathlib.py#L689 A hack is to subclass `_init`, but it seems hacky as it relies on an internal implementation detail. ---------- messages: 372297 nosy: Étienne Pot priority: normal severity: normal status: open title: subclasses of pathlib.PurePosixPath never call __init__ or __new__ type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 00:46:55 2020 From: report at bugs.python.org (Ashley Whetter) Date: Thu, 25 Jun 2020 04:46:55 +0000 Subject: [New-bugs-announce] [issue41110] 2to3 reports some files as both not changing and having been modified Message-ID: <1593060415.61.0.690471186098.issue41110@roundup.psfhosted.org> New submission from Ashley Whetter : Many of the fixers cause the tool to report a file as both unchanged and modified. For example when checking a file with the following contents: ``` s = "str" ``` using the following command: `python -m lib2to3 -f unicode unicode_test.py` The following is output: ``` RefactoringTool: No changes to unicode_test.py RefactoringTool: Files that need to be modified: RefactoringTool: unicode_test.py ``` When a fixer returns a node, even if it is the original node without changes, it is considered as a change to the code (https://github.com/python/cpython/blob/cf18c9e9d4d44f6671a3fe6011bb53d8ee9bd92b/Lib/lib2to3/refactor.py#L446-L447) and the file is later added to the list of modified files. I have not yet identified which fixers have this issue. The fix appears to be that fixers need to be aware of when they have not made a change and should return `None` when that is the case.
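The suggested fix, as a hedged sketch of what an affected fixer could do (BaseFix and transform() are real lib2to3 API; the pattern and the rewriting step are placeholders):

```python
from lib2to3 import fixer_base

class FixExample(fixer_base.BaseFix):
    """Sketch of the suggested behaviour, not an actual fixer."""

    PATTERN = "STRING"      # placeholder pattern for illustration

    def transform(self, node, results):
        new = node.clone()
        # ... imagine some conditional rewriting of `new` here ...
        if str(new) == str(node):
            return None     # no change: the file should not be reported as modified
        return new
```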
The main limitation to use the limited C API for stdlib is Argument Clinic which attempts to always emit the most efficient code, like using the METH_FASTCALL calling convention and use private functions like _PyArg_CheckPositional() or "static _PyArg_Parser _parser". Argument Clinic could be modified to have an option to only use C API of the limited C API. Cython is working on a similar option (restraint emitted code to the limited C API). I already tried to convert stdlib extensions to the limited C API in bpo-39573. I found other issues: * PyTypeObject is opaque and so it's not possible to implement a deallocator function (tp_dealloc) which calls tp_free like: Py_TYPE(self)->tp_free((PyObject*)self); * _Py_IDENTIFIER() is not part of the limited C API https://bugs.python.org/issue39573#msg361514 ---------- components: Extension Modules messages: 372307 nosy: vstinner priority: normal severity: normal status: open title: Convert a few stdlib extensions to the limited C API versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 06:01:15 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 25 Jun 2020 10:01:15 +0000 Subject: [New-bugs-announce] [issue41112] test_peg_generator fails on non-UTF-8 locale Message-ID: <1593079275.98.0.365075470813.issue41112@roundup.psfhosted.org> New submission from Serhiy Storchaka : $ LC_ALL=es_US.iso88591 ./python -m test -v -m test_syntax_error_for_string test_peg_generator ... ====================================================================== ERROR: test_syntax_error_for_string (test.test_peg_generator.test_c_parser.TestCParser) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/test/test_peg_generator/test_c_parser.py", line 377, in test_syntax_error_for_string self.run_test(grammar_source, test_source) File "/home/serhiy/py/cpython/Lib/test/test_peg_generator/test_c_parser.py", line 85, in run_test assert_python_ok( File "/home/serhiy/py/cpython/Lib/test/support/script_helper.py", line 156, in assert_python_ok return _assert_python(True, *args, **env_vars) File "/home/serhiy/py/cpython/Lib/test/support/script_helper.py", line 140, in _assert_python res, cmd_line = run_python_until_end(*args, **env_vars) File "/home/serhiy/py/cpython/Lib/test/support/script_helper.py", line 127, in run_python_until_end proc = subprocess.Popen(cmd_line, stdin=subprocess.PIPE, File "/home/serhiy/py/cpython/Lib/subprocess.py", line 947, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/home/serhiy/py/cpython/Lib/subprocess.py", line 1752, in _execute_child self.pid = _posixsubprocess.fork_exec( UnicodeEncodeError: 'latin-1' codec can't encode character '\u540d' in position 962: ordinal not in range(256) ---------------------------------------------------------------------- ---------- components: Tests messages: 372336 nosy: gvanrossum, lys.nikolaou, pablogsal, serhiy.storchaka priority: normal severity: normal status: open title: test_peg_generator fails on non-UTF-8 locale type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 07:16:47 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 25 Jun 2020 11:16:47 +0000 Subject: [New-bugs-announce] [issue41113] test_warnings fails on non-Western 
locales Message-ID: <1593083807.4.0.45792432465.issue41113@roundup.psfhosted.org> New submission from Serhiy Storchaka : $ LC_ALL=uk_UA.koi8u ./python -m test -v -m test_nonascii test_warnings ... ====================================================================== ERROR: test_nonascii (test.test_warnings.CEnvironmentVariableTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/test/test_warnings/__init__.py", line 1201, in test_nonascii rc, stdout, stderr = assert_python_ok("-c", File "/home/serhiy/py/cpython/Lib/test/support/script_helper.py", line 156, in assert_python_ok return _assert_python(True, *args, **env_vars) File "/home/serhiy/py/cpython/Lib/test/support/script_helper.py", line 140, in _assert_python res, cmd_line = run_python_until_end(*args, **env_vars) File "/home/serhiy/py/cpython/Lib/test/support/script_helper.py", line 127, in run_python_until_end proc = subprocess.Popen(cmd_line, stdin=subprocess.PIPE, File "/home/serhiy/py/cpython/Lib/subprocess.py", line 947, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/home/serhiy/py/cpython/Lib/subprocess.py", line 1739, in _execute_child env_list.append(k + b'=' + os.fsencode(v)) File "/home/serhiy/py/cpython/Lib/os.py", line 812, in fsencode return filename.encode(encoding, errors) File "/home/serhiy/py/cpython/Lib/encodings/koi8_u.py", line 12, in encode return codecs.charmap_encode(input,errors,encoding_table) UnicodeEncodeError: 'charmap' codec can't encode character '\xf3' in position 16: character maps to ---------- components: Tests messages: 372344 nosy: serhiy.storchaka priority: normal severity: normal status: open title: test_warnings fails on non-Western locales type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 08:48:17 2020 From: report at bugs.python.org (Samuel Freilich) Date: Thu, 25 Jun 2020 12:48:17 +0000 Subject: [New-bugs-announce] [issue41114] "TypeError: unhashable type" could often be more clear Message-ID: <1593089297.5.0.960414313622.issue41114@roundup.psfhosted.org> New submission from Samuel Freilich : Currently, if you (for example) put a dict as a value in a set or key in a dict, you get: TypeError: unhashable type: 'dict' I'm pretty sure this wording goes back a long time, but I've noticed that Python beginners tend to find this really confusing. It fits into a pattern of error messages where you have to explain why the error message is worded the way it is after you explain why the error message occurs at all. There are many instances of: https://stackoverflow.com/questions/13264511/typeerror-unhashable-type-dict It would be clearer if the message was something like: TypeError: 'dict' can not be used as a set value because it is an unhashable type. (Or "dict key" instead of "set value".) The exception is raised in PyObject_Hash, so that doesn't have some of the context about how/why hash was called. That's called in a lot of places. Possibly, PyObject_Hash and PyObject_HashNotImplemented could take the format string passed to PyErr_Format as an optional second argument, defaulting to the current behavior? Then certain callers (in particular, the set and dict constructor, set and dict methods that add set values or add/modify dict keys) could provide clearer error messages. 
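A pure-Python sketch of how the proposed wording could read to a user; this is only an illustration of the message format, not the C-level change to the PyObject_Hash callers, and the helper name add_key is invented for the example:

```
# Illustration only: shows the clearer wording suggested above from pure
# Python.  The real change would live in the C callers of PyObject_Hash;
# the helper name add_key is made up for this sketch.
def add_key(mapping, key, value):
    """Like mapping[key] = value, but with a more descriptive TypeError."""
    try:
        hash(key)
    except TypeError:
        raise TypeError(
            f"{type(key).__name__!r} can not be used as a dict key "
            "because it is an unhashable type"
        ) from None
    mapping[key] = value

d = {}
add_key(d, "ok", 1)          # str is hashable, works as usual
try:
    add_key(d, {"a": 1}, 2)  # dict keys must be hashable
except TypeError as exc:
    print(exc)               # 'dict' can not be used as a dict key because ...
```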
---------- messages: 372366 nosy: sfreilich priority: normal severity: normal status: open title: "TypeError: unhashable type" could often be more clear type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 08:49:30 2020 From: report at bugs.python.org (Antoine Pitrou) Date: Thu, 25 Jun 2020 12:49:30 +0000 Subject: [New-bugs-announce] [issue41115] Codecs should raise precise UnicodeDecodeError or UnicodeEncodeError Message-ID: <1593089370.45.0.0611386084027.issue41115@roundup.psfhosted.org> New submission from Antoine Pitrou : A number of codecs raise bare UnicodeError, rather than Unicode{Decode,Encode}Error. Example: File "/home/antoine/miniconda3/envs/pyarrow/lib/python3.7/encodings/utf_16.py", line 67, in _buffer_decode raise UnicodeError("UTF-16 stream does not start with BOM") A more complete list can be found here: https://gist.github.com/pitrou/60594b28d8e47edcdb97d9b15d5f9866 ---------- components: Library (Lib) keywords: easy messages: 372367 nosy: benjamin.peterson, ezio.melotti, lemburg, pitrou, serhiy.storchaka, vstinner priority: normal severity: normal stage: needs patch status: open title: Codecs should raise precise UnicodeDecodeError or UnicodeEncodeError type: behavior versions: Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 11:51:04 2020 From: report at bugs.python.org (Ned Deily) Date: Thu, 25 Jun 2020 15:51:04 +0000 Subject: [New-bugs-announce] [issue41116] build on macOS 11 (beta) does not find system-supplied third-party libraries Message-ID: <1593100264.48.0.613850075185.issue41116@roundup.psfhosted.org> New submission from Ned Deily : When building on macOS 11 (beta), a number of modules that should normally build on macOS fail because the system-supplied third-party libraries are not found. The necessary bits to build these optional modules were not found: _bz2 _curses _curses_panel _gdbm _hashlib _lzma _ssl ossaudiodev readline spwd zlib The list should look like this (with no additional third-party libs supplied from another source like Homebrew or MacPorts): The necessary bits to build these optional modules were not found: _gdbm _hashlib _ssl ossaudiodev spwd The problem is due to a change in the 11 beta versus 10.15 or earlier systems: "New in macOS Big Sur 11 beta, the system ships with a built-in dynamic linker cache of all system-provided libraries. As part of this change, copies of dynamic libraries are no longer present on the filesystem. Code that attempts to check for dynamic library presence by looking for a file at a path or enumerating a directory will fail. Instead, check for library presence by attempting to dlopen() the path, which will correctly check for the library in the cache." This breaks tests in setup.py using find_library_file() to determine if a library is present and in what directory it exists. setup.py depends on Lib/distutils/unixccompiler.py to do the dirty work. A similar problem arose on earlier macOS releases when header files could no longer be installed in the systems /usr/include; setup.py had to be taught to look in the SDK being used implicitly or explicitly by the compiler preprocessor. We could probably do something like that here while trying to avoid changes that might break downstream supplements/replacements to distutils, for example, setuptools. 
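As an aside, a minimal sketch of the dlopen-style probe Apple describes, using ctypes; it is not the setup.py/distutils fix itself, and '/usr/lib/libz.dylib' is only an example path:

```
# Minimal sketch of the dlopen-based check recommended for macOS 11, where
# system dylibs may exist only in the dynamic linker cache and not on disk.
# Not the distutils/setup.py change itself; the path below is an example.
import ctypes
import os

def library_is_loadable(path):
    """Return True if dlopen() can load the library at `path`."""
    try:
        ctypes.CDLL(path)
        return True
    except OSError:
        return False

path = "/usr/lib/libz.dylib"
print(os.path.exists(path))        # may be False on macOS 11 (no file on disk)
print(library_is_loadable(path))   # while dlopen() can still succeed
```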
There is a workaround: explicitly specify the SDK location to ./configure (you also need to specify the universal archs setting): ./configure \ --enable-universalsdk=$(xcodebuild -version -sdk macosx Path) \ --with-universal-archs=intel-64 \ ... ---------- components: macOS messages: 372379 nosy: ned.deily, ronaldoussoren priority: high severity: normal status: open title: build on macOS 11 (beta) does not find system-supplied third-party libraries versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 12:35:27 2020 From: report at bugs.python.org (William Pickard) Date: Thu, 25 Jun 2020 16:35:27 +0000 Subject: [New-bugs-announce] [issue41117] [easy C] GC: Use local variable 'op' when invoking 'traverse' in 'subtract_refs' Message-ID: <1593102927.54.0.130776638406.issue41117@roundup.psfhosted.org> New submission from William Pickard : When the GC module goes to collect objects (most notably, during Python shutdown), it makes a call to subtract_refs on the GC container. During this invocation, it creates a local variable "op" who's value is the result of 'FROM_GC(gc)', but when it goes to use the obtained 'traverse' method, it calls FROM_GC(gc) instead of using 'op'. This, unfortunately, makes it rather difficult to debug "Access Violations" for extension modules for when 'traverse' is 'NULL' as inspecting the variable 'op' in the chosen debugger (for my case: Visual Studio on Windows) is impossible. This can potentially introduce a micro optimization in the overall runtime of the GC module as it no longer has to invoke the addition instruction (if it does) to construct the first parameter of the 'traverse' call. ---------- components: C API, Interpreter Core files: cpython.patch keywords: patch messages: 372380 nosy: WildCard65 priority: normal severity: normal status: open title: [easy C] GC: Use local variable 'op' when invoking 'traverse' in 'subtract_refs' type: enhancement versions: Python 3.10 Added file: https://bugs.python.org/file49261/cpython.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 17:22:13 2020 From: report at bugs.python.org (Grant Petty) Date: Thu, 25 Jun 2020 21:22:13 +0000 Subject: [New-bugs-announce] [issue41118] datetime object does not preserve POSIX timestamp Message-ID: <1593120133.28.0.499070536644.issue41118@roundup.psfhosted.org> New submission from Grant Petty : For complete context, see https://stackoverflow.com/questions/62582386/how-do-i-get-a-naive-datetime-object-that-correctly-uses-utc-inputs-and-preserve Short version: It does not seem correct that a datetime object produced *from* a POSIX timestamp using utcfromtimestamp should then return a different timestamp. As a user, I would like to be able to count on the POSIX timestamp as being the one conserved property of a datetime object, regardless of whether it is naive or time zone aware. 
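For comparison with the transcript below, a short sketch of the round trip that does preserve the timestamp, assuming an aware datetime is acceptable (fromtimestamp() with an explicit UTC timezone instead of utcfromtimestamp()):

```
# Sketch only: utcfromtimestamp() returns a naive object, and .timestamp()
# then interprets that naive object as local time, which breaks the round
# trip.  Converting back with an explicit UTC timezone is lossless.
import datetime as dt

dt0 = dt.datetime(2020, 1, 1, tzinfo=dt.timezone.utc)
ts0 = dt0.timestamp()                                    # 1577836800.0
dt1 = dt.datetime.fromtimestamp(ts0, tz=dt.timezone.utc)
print(dt1.timestamp() == ts0)                            # True
```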
> dt0 = dt.datetime(year=2020, month=1, day=1,hour=0,minute=0, tzinfo=pytz.utc) > print(dt0) 2020-01-01 00:00:00+00:00 > ts0 = dt0.timestamp() > print(ts0) 1577836800.0 > dt1 = dt.datetime.utcfromtimestamp(ts0) > print(dt1) 2020-01-01 00:00:00 > ts1 = dt1.timestamp() > print(ts1) 1577858400.0 ---------- messages: 372387 nosy: gpetty priority: normal severity: normal status: open title: datetime object does not preserve POSIX timestamp type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 18:36:38 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Thu, 25 Jun 2020 22:36:38 +0000 Subject: [New-bugs-announce] [issue41119] Wrong error message for list/tuple followed by a colon Message-ID: <1593124598.52.0.640760442401.issue41119@roundup.psfhosted.org> New submission from Lysandros Nikolaou : Brandt found this out while testing his implementation of the `match` statement. When a list or tuple are followed by a colon without an annotation, the old parser used to say "invalid syntax", while the new parser considers this an annotation and outputs something along the lines of "only single target (not tuple) can be annotated". For example: ? cpython git:(master) ./python.exe Python 3.10.0a0 (heads/master:06a40d7359, Jun 26 2020, 01:33:34) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> (a, b): File "", line 1 (a, b): ^ SyntaxError: only single target (not tuple) can be annotated >>> [a, b]: File "", line 1 [a, b]: ^ SyntaxError: only single target (not list) can be annotated >>> a,: File "", line 1 a,: ^ SyntaxError: only single target (not tuple) can be annotated The behavior of the old parser seems more logical. ---------- assignee: lys.nikolaou messages: 372390 nosy: brandtbucher, gvanrossum, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: Wrong error message for list/tuple followed by a colon type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 19:11:22 2020 From: report at bugs.python.org (Abbas Taher) Date: Thu, 25 Jun 2020 23:11:22 +0000 Subject: [New-bugs-announce] [issue41120] Possible performance improvement for itertools.product example on Python Docs Message-ID: <1593126682.32.0.703433792229.issue41120@roundup.psfhosted.org> New submission from Abbas Taher : In the documentation the following example is given: def product(*args, repeat=1): # product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy # product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111 pools = [tuple(pool) for pool in args] * repeat result = [[]] for pool in pools: result = [x+[y] for x in result for y in pool] for prod in result: yield tuple(prod) The proposed enhancement uses a nested generator so no intermediate results are created. 
def product2(*args, repeat=1): def concat(result, pool): yield from (x+[y] for x in result for y in pool) pools = [tuple(pool) for pool in args] * repeat result = [[]] for pool in pools: result = concat(result, pool) for prod in result: yield (tuple(prod)) ---------- assignee: docs at python components: Documentation files: product example.py messages: 372392 nosy: ataher, docs at python priority: normal severity: normal status: open title: Possible performance improvement for itertools.product example on Python Docs type: enhancement Added file: https://bugs.python.org/file49262/product example.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 20:39:56 2020 From: report at bugs.python.org (wyz23x2) Date: Fri, 26 Jun 2020 00:39:56 +0000 Subject: [New-bugs-announce] [issue41121] Path sep. in IDLE on Windows changes Message-ID: <1593131996.15.0.810805965142.issue41121@roundup.psfhosted.org> New submission from wyz23x2 : Python supports "/" and "\" separators on Windows. So in IDLE, the path shown sometimes is: D:\xxx\xxx Sometimes is: D:/xxx/xxx That isn't right. ---------- assignee: terry.reedy components: IDLE messages: 372395 nosy: terry.reedy, wyz23x2 priority: normal severity: normal status: open title: Path sep. in IDLE on Windows changes type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 23:29:31 2020 From: report at bugs.python.org (Mark Grandi) Date: Fri, 26 Jun 2020 03:29:31 +0000 Subject: [New-bugs-announce] [issue41122] functools.singledispatchfunction has confusing error message if no position arguments are passed in Message-ID: <1593142171.21.0.765046839926.issue41122@roundup.psfhosted.org> New submission from Mark Grandi : this is with python 3.8: ```plaintext PS C:\Users\mark> py -3 --version --version Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] ``` So when using functools.singledispatch or functools.singledispatchmethod, you need to provide at least 1 positional argument so it can dispatch based on the type of said argument however, with `functools.singledispatchmethod`, you get an error message that is confusing and not obvious what the problem is. 
Example with `functools.singledispatchmethod`: ``` class NegatorTwo: @singledispatchmethod def neg(arg): raise NotImplementedError("Cannot negate a") @neg.register def _(self, arg: int): return -arg @neg.register def _test(self, arg: bool): return not arg if __name__ == "__main__": n = NegatorTwo() print(n.neg(0)) print(n.neg(False)) print(n.neg(arg=0)) ``` you end up getting: ```plaintext PS C:\Users\mark> py -3 C:\Users\mark\Temp\singledisp.py 0 True Traceback (most recent call last): File "C:\Users\mark\Temp\singledisp.py", line 58, in print(n.neg(arg=0)) File "C:\Python38\lib\functools.py", line 910, in _method method = self.dispatcher.dispatch(args[0].__class__) IndexError: tuple index out of range ``` but with just regular `functools.singledispatch`: ```plaintext @functools.singledispatch def negate_func(arg): raise NotImplementedError("can't negate") @negate_func.register def negate_int_func(arg:int): return -arg @negate_func.register def negate_bool_func(arg:bool): return not arg if __name__ == "__main__": print(negate_func(0)) print(negate_func(False)) print(negate_func(arg=0)) ``` you get an error that tells you what actually is wrong: ```plaintext PS C:\Users\mark> py -3 C:\Users\mark\Temp\singledisp.py 0 True Traceback (most recent call last): File "C:\Users\mark\Temp\singledisp.py", line 63, in print(negate_func(arg=0)) File "C:\Python38\lib\functools.py", line 871, in wrapper raise TypeError(f'{funcname} requires at least ' TypeError: negate_func requires at least 1 positional argument ``` it seems that the code in `functools.singledispatchmethod` needs to check to see if `args` is empty, and throw a similar (if not the same) exception as `functools.singledispatch` ---------- components: Library (Lib) messages: 372406 nosy: markgrandi priority: normal severity: normal status: open title: functools.singledispatchfunction has confusing error message if no position arguments are passed in type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Jun 25 23:53:21 2020 From: report at bugs.python.org (Inada Naoki) Date: Fri, 26 Jun 2020 03:53:21 +0000 Subject: [New-bugs-announce] [issue41123] Remove Py_UNICODE APIs except PEP 623 Message-ID: <1593143601.8.0.862502731431.issue41123@roundup.psfhosted.org> New submission from Inada Naoki : # APIs relating to wstr Since some APIs did not have Py_DEPRECATE until 3.9 (see GH-20941), it can not be removed in 3.10. I wrote PEP 623 for them. This issue doesn't about them. # Deprecated since Python 3.3, and not documented. In Python 3.3 what's new: * :c:macro:`Py_UNICODE_strlen`: use :c:func:`PyUnicode_GetLength` or :c:macro:`PyUnicode_GET_LENGTH` * :c:macro:`Py_UNICODE_strcat`: use :c:func:`PyUnicode_CopyCharacters` or :c:func:`PyUnicode_FromFormat` * :c:macro:`Py_UNICODE_strcpy`, :c:macro:`Py_UNICODE_strncpy`, :c:macro:`Py_UNICODE_COPY`: use :c:func:`PyUnicode_CopyCharacters` or :c:func:`PyUnicode_Substring` * :c:macro:`Py_UNICODE_strcmp`: use :c:func:`PyUnicode_Compare` * :c:macro:`Py_UNICODE_strncmp`: use :c:func:`PyUnicode_Tailmatch` * :c:macro:`Py_UNICODE_strchr`, :c:macro:`Py_UNICODE_strrchr`: use :c:func:`PyUnicode_FindChar` These functions are not documented. But they has Py_DEPRECATED(3.3) from Python 3.6. Let's remove them in 3.10. # Deprecated since Python 3.3 with document Some APIs has document with `.. deprecated:: 3.3 4.0`. 
* PyLong_FromUnicode * PyUnicode_TransformDecimalToASCII * PyUnicode_AsUnicodeCopy * PyUnicode_Encode * PyUnicode_EncodeUTF7 * PyUnicode_EncodeUTF8 * PyUnicode_EncodeUTF16 * PyUnicode_EncodeUTF32 * PyUnicode_EncodeUnicodeEscape * PyUnicode_EncodeRawUnicodeEscape * PyUnicode_EncodeLatin1 * PyUnicode_EncodeASCII * PyUnicode_EncodeCharmap * PyUnicode_TranslateCharmap * PyUnicode_EncodeMBCS a) Can we replace 4.0 with 3.10 and remove them in 3.10? b) Or should we replace 4.0 with 3.11 and wait one more year? ---------- components: C API messages: 372407 nosy: inada.naoki priority: normal severity: normal status: open title: Remove Py_UNICODE APIs except PEP 623 versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 01:07:19 2020 From: report at bugs.python.org (Srinivas Sampath) Date: Fri, 26 Jun 2020 05:07:19 +0000 Subject: [New-bugs-announce] [issue41124] String with encode causing addition of 'b' in the beginning Message-ID: <1593148039.09.0.312801930904.issue41124@roundup.psfhosted.org> New submission from Srinivas Sampath : I am trying to run the attached code. when hard-coding the string in the URL and doing .encode, then the code works. However since i am requesting user input, i cannot hard-code and the code seem to add the 'b' character in the beginning causing the request to fail with output like this Enter Hostname : hostname cannot be empty Enter Port Number : before invokingGET http://psglx73:24670/coreservices/DomainService HTTP/1.1 after encode b'GET http://psglx73:24670/coreservices/DomainService HTTP/1.1\r\n\r\n' HTTP/1.1 400 Bad Request Transfer-Encoding: chunked Date: Fri, 26 Jun 2020 05:06:21 GMT Connection: close Server: Informatica 0 i cannot pass without .encode to the socket communication. is this reported problem ? ---------- files: ex_12_informatica.py messages: 372408 nosy: Srinivas Sampath priority: normal severity: normal status: open title: String with encode causing addition of 'b' in the beginning type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49264/ex_12_informatica.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 01:18:59 2020 From: report at bugs.python.org (Laurie Opperman) Date: Fri, 26 Jun 2020 05:18:59 +0000 Subject: [New-bugs-announce] [issue41125] Display exit-codes for abruptly terminated processes in concurrent.futures Message-ID: <1593148739.76.0.536748253991.issue41125@roundup.psfhosted.org> New submission from Laurie Opperman : When a process terminates in the process-pool of concurrent.futures.process, it simply gives the exception (with no __cause__): BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. 
I would like to include the terminated processes' exit codes in the error, either as part of the message or as a separate exception set as the error's __cause__ ---------- components: Library (Lib) messages: 372409 nosy: Epic_Wink priority: normal severity: normal status: open title: Display exit-codes for abruptly terminated processes in concurrent.futures type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 06:11:48 2020 From: report at bugs.python.org (=?utf-8?b?U3Jpbml2YXMgIFJlZGR5IFRoYXRpcGFydGh5KOCwtuCxjeCwsOCxgOCwqA==?= =?utf-8?b?4LC/4LC14LC+4LC44LGNIOCwsOCxhuCwoeCxjeCwoeCwvyDgsKTgsL7gsJ8=?= =?utf-8?b?4LC/4LCq4LCw4LGN4LCk4LC/KQ==?=) Date: Fri, 26 Jun 2020 10:11:48 +0000 Subject: [New-bugs-announce] [issue41126] Running test suite gives me - python.exe(14198, 0x114352dc0) malloc: can't allocate region Message-ID: <1593166308.58.0.119681286256.issue41126@roundup.psfhosted.org> New submission from Srinivas Reddy Thatiparthy(?????????? ?????? ?????????) : While running tests with `./python.exe Lib/test`, I see the following text in the console. Is this a bug? `` python.exe(14198,0x114352dc0) malloc: can't allocate region :*** mach_vm_map(size=842105263157895168, flags: 100) failed (error code=3) python.exe(14198,0x114352dc0) malloc: *** set a breakpoint in malloc_error_break to debug python.exe(14198,0x114352dc0) malloc: can't allocate region :*** mach_vm_map(size=842105263157895168, flags: 100) failed (error code=3) python.exe(14198,0x114352dc0) malloc: *** set a breakpoint in malloc_error_break to debug python.exe(14198,0x114352dc0) malloc: can't allocate region :*** mach_vm_map(size=421052631578947584, flags: 100) failed (error code=3) python.exe(14198,0x114352dc0) malloc: *** set a breakpoint in malloc_error_break to debug python.exe(14198,0x114352dc0) malloc: can't allocate region :*** mach_vm_map(size=421052631578947584, flags: 100) failed (error code=3) python.exe(14198,0x114352dc0) malloc: *** set a breakpoint in malloc_error_break to debug `` My system is macOS Catalina 10.15.5 (19F101). unman -a Darwin Srinivass-MacBook-Pro.local 19.5.0 Darwin Kernel Version 19.5.0: Tue May 26 20:41:44 PDT 2020; root:xnu-6153.121.2~2/RELEASE_X86_64 x86_64 ---------- messages: 372420 nosy: thatiparthy priority: normal severity: normal status: open title: Running test suite gives me - python.exe(14198,0x114352dc0) malloc: can't allocate region _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 11:24:56 2020 From: report at bugs.python.org (Austin Raney) Date: Fri, 26 Jun 2020 15:24:56 +0000 Subject: [New-bugs-announce] [issue41127] Executing code in thread or process pools: run_in_executor example Message-ID: <1593185096.75.0.03844537767.issue41127@roundup.psfhosted.org> New submission from Austin Raney : I found an issue with the concurrent.futures.ProcessPoolExecuter() example (#3) in the asyncio event loops documentation. The call to asyncio.run(main()) should be guarded by `__name__=="__main__":`, as it sits now a RuntimeError is thrown. 
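A trimmed sketch of the guarded layout being suggested; it is not the documentation example verbatim, only the structure that avoids the RuntimeError when ProcessPoolExecutor spawns worker processes:

```
# Trimmed sketch of the guarded structure; cpu_bound() is a stand-in for
# whatever work the real example runs in the process pool.
import asyncio
import concurrent.futures

def cpu_bound():
    return sum(i * i for i in range(10 ** 6))

async def main():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ProcessPoolExecutor() as pool:
        result = await loop.run_in_executor(pool, cpu_bound)
        print("custom process pool", result)

if __name__ == "__main__":   # the guard the report asks for
    asyncio.run(main())
```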
https://docs.python.org/3/library/asyncio-eventloop.html#executing-code-in-thread-or-process-pools ---------- assignee: docs at python components: Documentation messages: 372428 nosy: aaraney, docs at python priority: normal severity: normal status: open title: Executing code in thread or process pools: run_in_executor example versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 13:34:49 2020 From: report at bugs.python.org (=?utf-8?q?Josef_Havr=C3=A1nek?=) Date: Fri, 26 Jun 2020 17:34:49 +0000 Subject: [New-bugs-announce] [issue41128] Signal handlers should not hang during blocked main thread Message-ID: <1593192889.65.0.44895771655.issue41128@roundup.psfhosted.org> New submission from Josef Havr?nek : python3 When handling signals (via signal module) have delayed execution when main thread is blocked/waiting for event That is sub-optimal(signal "could get lost"). Signals shoud be handled asap... Think about scenario when os may be asking python nicely before it sends os.kill...so we could be dead on next bytecode instruction of main thread. In such scenario when main thread is blocked and VM recives signal it should "context switch" to handler immediately thus not being dependent on state of main thread to be in executable state. This gotcha should be included in documentation of all 3.x version since there should be nothing running(including long-runing c computation like regex matching ;) ) and basicly nothing is preventing handler from execution more details and test script in file attached ps excuse my english/typos and uglines of code... this is my 1st bug report/enhancment proposal ---------- components: Extension Modules files: testscript.py messages: 372435 nosy: Josef Havr?nek priority: normal severity: normal status: open title: Signal handlers should not hang during blocked main thread type: enhancement versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49265/testscript.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 14:11:08 2020 From: report at bugs.python.org (Andrew) Date: Fri, 26 Jun 2020 18:11:08 +0000 Subject: [New-bugs-announce] [issue41129] Python extension modules fail to build on Mac 10.15.1 (Catalina) Message-ID: <1593195068.14.0.719266681703.issue41129@roundup.psfhosted.org> New submission from Andrew : Quick repoduction steps: 1) Log into a mac with macOS version 10.15.1 (10.15.x may work) 2) For build tools, use Xcode11. A minimal xcode command-tools installation also reproduced for me. 3) Download and decompress the latest python 3.8.2 source 4) run "./configure" in the top-level source folder 5) run "make" in the top-level source folder I believe this error may also occur when using python 2.7.x, 3.7.x, and others. Curiously, the errors do not occur when I use the latest 3.6.10 source. A text file containing the output of "./configure" and "make" is attached. Main Description: On Mac 10.15.1 with Xcode11 I encounter compilation errors when building the latest Python 3.8.2 source. When the build reaches the "build_ext" section in setup.py, none of the extension modules build, with each compilation attempt producing an error. 
For example, when the _struct extension module attempts compilation I get: --- Error Syndrome --- building '_struct' extension gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -Werror=implicit-function-declaration -I./Include/internal -I./Include -I. -I/usr/local/include -I/System/Volumes/Data/mathworks/devel/sandbox/aflewell/python38_source/Python-3.8.2/Include -I/System/Volumes/Data/mathworks/devel/sandbox/aflewell/python38_source/Python-3.8.2 -c _struct.c -o build/temp.macosx-10.15-x86_64-3.8/_struct.o clang: error: no such file or directory: '_struct.c' clang: error: no input files ------ After many similar errors, we get to the main build report where we see the failed extension modules: --- Snippet of build report --- Python build finished successfully! The necessary bits to build these optional modules were not found: _gdbm _hashlib _ssl ossaudiodev spwd To find the necessary bits, look in setup.py in detect_modules() for the module's name. The following modules found by detect_modules() in setup.py, have been built by the Makefile instead, as configured by the Setup files: _abc atexit pwd time Failed to build these modules: _asyncio _bisect _blake2 _bz2 _codecs_cn _codecs_hk _codecs_iso2022 _codecs_jp _codecs_kr _codecs_tw _contextvars _crypt _csv _ctypes _ctypes_test _curses _curses_panel _datetime _dbm _decimal _elementtree _heapq _json _lsprof _lzma _md5 _multibytecodec _multiprocessing _opcode _pickle _posixshmem _posixsubprocess _queue _random _scproxy _sha1 _sha256 _sha3 _sha512 _socket _sqlite3 _statistics _struct _testbuffer _testcapi _testimportmultiple _testinternalcapi _testmultiphase _tkinter _uuid _xxsubinterpreters _xxtestfuzz array audioop binascii cmath fcntl grp math mmap nis parser pyexpat readline resource select syslog termios unicodedata xxlimited zlib ------ The part "-c _struct.c" in the error syndrome stands out to me because there is not a prefix like the other modules that get built successfully before the "build_ext" section. In the successful cases, we see a prefix like "-c ./Modules/faulthandler.c". This can be seen in my attached log. Also, when I use the Python 3.6.10 source, the build for _struct looks like this and goes fine on Mac 10.15.1. This seems to be the only version of python that works for me on 10.15.1 that I know of. 
------- building '_struct' extension creating build/temp.macosx-10.15-x86_64-3.6/System creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company/devel creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company/devel/sandbox creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company/devel/sandbox/my_username creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company/devel/sandbox/my_username/python36_source creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company/devel/sandbox/my_username/python36_source/Python-3.6.10 creating build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company/devel/sandbox/my_username/python36_source/Python-3.6.10/Modules gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -std=c99 -Wextra -Wno-unused-result -Wno-unused-parameter -Wno-missing-field-initializers -Wstrict-prototypes -I./Include -I. -I/usr/local/include -I/System/Volumes/Data/my_company/devel/sandbox/my_username/python36_source/Python-3.6.10/Include -I/System/Volumes/Data/my_company/devel/sandbox/my_username/python36_source/Python-3.6.10 -c /System/Volumes/Data/my_company/devel/sandbox/my_username/python36_source/Python-3.6.10/Modules/_struct.c -o build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company/devel/sandbox/my_username/python36_source/Python-3.6.10/Modules/_struct.o gcc -bundle -undefined dynamic_lookup build/temp.macosx-10.15-x86_64-3.6/System/Volumes/Data/my_company/devel/sandbox/my_username/python36_source/Python-3.6.10/Modules/_struct.o -L/usr/local/lib -o build/lib.macosx-10.15-x86_64-3.6/_struct.cpython-36m-darwin.so ----- In this successful case, you can see absolute paths are used to compile the .c source, and some temp folders are created for the .o output. I wonder if there is a workaround where we pass a flag to the configure script to produce the same effects? Obviously it would be nice if a plain build worked though. Also, on Mac 10.14.5, the builds of any python succeeds similarly with no such errors. I am wondering why I am seeing this new pathing and filing behavior on mac 10.15.1 for most versions of Python. Are there any viable workarounds? Thanks for reading! ---------- components: Extension Modules files: configure_and_make_output.txt messages: 372437 nosy: andrewfg1992, ned.deily priority: normal severity: normal status: open title: Python extension modules fail to build on Mac 10.15.1 (Catalina) type: compile error versions: Python 3.8 Added file: https://bugs.python.org/file49266/configure_and_make_output.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 15:00:41 2020 From: report at bugs.python.org (myfreeweb) Date: Fri, 26 Jun 2020 19:00:41 +0000 Subject: [New-bugs-announce] [issue41130] Improve/fix FreeBSD Bluetooth socket support Message-ID: <1593198041.11.0.943194169636.issue41130@roundup.psfhosted.org> New submission from myfreeweb : 1) BTPROTO_HCI addresses only expect string identifiers on NetBSD and DragonFly: https://github.com/python/cpython/blob/2e0a920e9eb540654c0bb2298143b00637dc5961/Modules/socketmodule.c#L1931 But of course this is true on FreeBSD too. (DragonFly inherited the BT stack from FreeBSD!) 
For example, this is how hccontrol creates an address: https://github.com/freebsd/freebsd/blob/6bb9221a9b865ee432269099f341e4230a6cbcd4/usr.sbin/bluetooth/hccontrol/hccontrol.c#L115-L129 So currently it is not possible to bind an HCI socket (without using FFI to directly use the libc bind function) :( 2) BTPROTO_SCO is excluded on FreeBSD: https://github.com/python/cpython/blob/2e0a920e9eb540654c0bb2298143b00637dc5961/Modules/socketmodule.c#L1953 But SCO has been supported since 2008: https://github.com/freebsd/freebsd/commit/bb4c6de0cf336d006e41521cbbd4706f60a0dfe0 ---------- components: FreeBSD messages: 372439 nosy: koobs, myfreeweb priority: normal severity: normal status: open title: Improve/fix FreeBSD Bluetooth socket support type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 16:38:41 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Fri, 26 Jun 2020 20:38:41 +0000 Subject: [New-bugs-announce] [issue41131] Augment random.choices() with the alias method Message-ID: <1593203921.14.0.316410898298.issue41131@roundup.psfhosted.org> New submission from Raymond Hettinger : For n unequal weights and k selections, sample selection with the inverse-cdf method is O(k log2 n). Using the alias method, it improves to O(k). The proportionality constants also favor the alias method so that if the setup times were the same, the alias method would always win (even when n=2). However, the setup times are not the same. For the inverse-cdf method, setup is O(1) if cum_weights are given; otherwise, it is O(n) with a fast loop. The setup time for the alias method is also O(n) but is proportionally much slower. So, there would need to be a method selection heuristic based on the best trade-off between setup time and sample selection time. Both methods make k calls to random(). See: https://en.wikipedia.org/wiki/Alias_method Notes on the attached draft implementation: * Needs to add back the error checking code. * Need a better method selection heuristic. * The alias table K defaults to the original index so that there is always a valid selection even if there are small rounding errors. * The condition for the aliasing loop is designed to have an early-out when the remaining blocks all have equal weights. Also, the loop condition makes sure that the pops never fail even if there are small rounding errors when partitioning oversized bins or if the sum of weights isn't exactly 1.0. ---------- assignee: rhettinger components: Library (Lib) files: choices_proposal.py messages: 372441 nosy: mark.dickinson, rhettinger, tim.peters priority: normal severity: normal status: open title: Augment random.choices() with the alias method type: performance versions: Python 3.10 Added file: https://bugs.python.org/file49267/choices_proposal.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 16:56:18 2020 From: report at bugs.python.org (Lysandros Nikolaou) Date: Fri, 26 Jun 2020 20:56:18 +0000 Subject: [New-bugs-announce] [issue41132] F-String parser uses raw allocator Message-ID: <1593204978.6.0.0616060148241.issue41132@roundup.psfhosted.org> New submission from Lysandros Nikolaou : The f-string parser uses the raw allocator in various places.
We had a very very brief discussion about this with Pablo and he suggested to change the new parser and the hand-written f-string parser to use the pymalloc allocator. Eric, is there a specific reason the raw allocator was used? ---------- components: Interpreter Core messages: 372442 nosy: eric.smith, lys.nikolaou, pablogsal priority: normal severity: normal status: open title: F-String parser uses raw allocator versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 18:29:51 2020 From: report at bugs.python.org (Daniel Barkalow) Date: Fri, 26 Jun 2020 22:29:51 +0000 Subject: [New-bugs-announce] [issue41133] Insufficient description of cyclic garbage collector for C API Message-ID: <1593210591.52.0.296051875695.issue41133@roundup.psfhosted.org> New submission from Daniel Barkalow : Nothing in the documentation gives you enough information to find what you're doing wrong if you have the following bug for an object tracked by the GC: (a) you store a borrowed reference to an object whose lifetime is always longer that the buggy object (and never decrement it); (b) you visit that object in your tp_traverse. Despite the bug, everything works nearly all of the time. However, when it fails, the effect makes no sense from a documentation-only understanding of the garbage collection (unless Python is built with --with-assertions, perhaps). The only sequence in which it fails is: * All of the objects that will reference the victim get moved into an older generation. * The victim gets exactly as many buggy references from objects in its own generation as there are good references, and no good references from objects in its own generation. * Garbage collection runs for only the younger generation. At this point, the victim is marked as garbage despite the fact that there was a good object referencing it at all times. Reading the Python source, it becomes clear that the garbage collector handles references from older generations not by traversing the objects in older generations, but by comparing the count of references from other young objects to the usual reference count. That is, visiting an object you don't hold a reference to may cause that object to get collected, rather than protecting it, even if there are other reasons not to collect it. The best fix would probably be a warning in the documentation for tp_traverse that visiting an object you do not hold a strong reference to can cause inexplicable effects, because the intuitive guess would be that it could only cause memory leaks. It would probably also be worth mentioning in the documentation of the garbage collector something about its algorithm, so people have a hint that references from older objects aren't necessarily sufficient to overcome buggy use of the C API. 
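The tp_traverse contract itself cannot be demonstrated from pure Python, but a small sketch with the gc module shows the generational behaviour that the description above relies on:

```
# Sketch only: shows that collections run per generation, which is the
# context for the failure mode described above.  The C-level tp_traverse
# bug cannot be reproduced from pure Python.
import gc

print(gc.get_threshold())   # e.g. (700, 10, 10): allocation thresholds per generation
print(gc.get_count())       # current object counts in generations 0, 1, 2

class Node:
    pass

nodes = [Node() for _ in range(100)]   # new objects start in generation 0
gc.collect(0)    # collect only the youngest generation
gc.collect()     # full collection across all generations
print(gc.get_count())
```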
---------- assignee: docs at python components: C API, Documentation messages: 372445 nosy: docs at python, iabervon priority: normal severity: normal status: open title: Insufficient description of cyclic garbage collector for C API versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Jun 26 23:26:02 2020 From: report at bugs.python.org (Tom Hale) Date: Sat, 27 Jun 2020 03:26:02 +0000 Subject: [New-bugs-announce] [issue41134] distutils.dir_util.copy_tree FileExistsError when updating symlinks Message-ID: <1593228362.42.0.387933635808.issue41134@roundup.psfhosted.org> New submission from Tom Hale : Here is a minimal test case: ========================================================== #!/bin/bash cd /tmp || exit 1 dir=test-copy_tree src=$dir/src dst=$dir/dst mkdir -p "$src" touch "$src"/file ln -s file "$src/symlink" python -c "from distutils.dir_util import copy_tree; copy_tree('$src', '$dst', preserve_symlinks=1, update=1); copy_tree('$src', '$dst', preserve_symlinks=1, update=1);" rm -r "$dir" ========================================================== Traceback (most recent call last): File "", line 3, in File "/usr/lib/python3.8/distutils/dir_util.py", line 152, in copy_tree os.symlink(link_dest, dst_name) FileExistsError: [Errno 17] File exists: 'file' -> 'test-copy_tree/dst/symlink' ========================================================== Related: ========= This issue will likely be resolved via: bpo-36656 Add race-free os.link and os.symlink wrapper / helper https://bugs.python.org/issue36656 (WIP under discussion at python-mentor) Prior art: =========== https://stackoverflow.com/questions/53090360/python-distutils-copy-tree-fails-to-update-if-there-are-symlinks ---------- messages: 372449 nosy: Tom Hale priority: normal severity: normal status: open title: distutils.dir_util.copy_tree FileExistsError when updating symlinks _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 00:21:05 2020 From: report at bugs.python.org (Michael Rich) Date: Sat, 27 Jun 2020 04:21:05 +0000 Subject: [New-bugs-announce] [issue41135] Suggested change to http.server.HTTPServer to prevent socket reuse in Windows Message-ID: <1593231665.6.0.126177341986.issue41135@roundup.psfhosted.org> New submission from Michael Rich : Hi, a web server can be incorrectly bound to an already in-use socket when binding a HTTPServer on windows. The issue is discussed here: https://stackoverflow.com/questions/51090637/running-a-python-web-server-twice-on-the-same-port-on-windows-no-port-already This only happens on Windows. In *nix the socketserver will throw an error, on Windows it will not. However the most recently bound server will not receive the requests. I suggest the following code (taken from stackoverflow) at the start of the server_bind method: if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'): self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) # Also need to change the value of allow_reuse_address (defined in http.server.HTTPServer) HTTPServer.allow_reuse_address = 0 I have tested this and it will throw an error upon reuse in Windows and does not change *nix behavior. 
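Restated as a self-contained sketch, as a user-level subclass rather than an edit to http.server (ExclusiveHTTPServer is an invented name; the hasattr() guard keeps *nix behaviour unchanged, since SO_EXCLUSIVEADDRUSE only exists on Windows):

```
# User-level restatement of the suggestion above, not a stdlib patch.
# SO_EXCLUSIVEADDRUSE is Windows-only, hence the hasattr() guard.
import socket
from http.server import HTTPServer, BaseHTTPRequestHandler

class ExclusiveHTTPServer(HTTPServer):
    allow_reuse_address = 0          # do not set SO_REUSEADDR before bind

    def server_bind(self):
        if hasattr(socket, "SO_EXCLUSIVEADDRUSE"):
            self.socket.setsockopt(
                socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
        super().server_bind()        # binds; a second bind now fails loudly

if __name__ == "__main__":
    ExclusiveHTTPServer(("127.0.0.1", 8000),
                        BaseHTTPRequestHandler).serve_forever()
```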
Thanks, Mike ---------- components: Library (Lib) messages: 372451 nosy: Michael Rich priority: normal severity: normal status: open title: Suggested change to http.server.HTTPServer to prevent socket reuse in Windows type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 03:13:47 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 27 Jun 2020 07:13:47 +0000 Subject: [New-bugs-announce] [issue41136] argparse uses default encoding when read arguments from file Message-ID: <1593242027.84.0.221105259078.issue41136@roundup.psfhosted.org> New submission from Serhiy Storchaka : The fromfile_prefix_chars option allows you to read arguments from file. But open() without explicit encoding is used for this. Therefore the result is depending on the current locale or the PYTHONIOENCODING environment variable. On Linux this is rare a problem, because UTF-8 is a common locale encoding, but on Windows the locale encoding is usually 8-bit and may even be not able to encode all file names. I think we need a new option to specify the encoding for files from which arguments are read. ---------- components: Library (Lib) messages: 372452 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: argparse uses default encoding when read arguments from file type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 03:23:15 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 27 Jun 2020 07:23:15 +0000 Subject: [New-bugs-announce] [issue41137] pdb uses the locale encoding for .pdbrc Message-ID: <1593242595.67.0.594029276449.issue41137@roundup.psfhosted.org> New submission from Serhiy Storchaka : pdb uses the locale encoding when read the .pdbrc file. It means that the current locale of the debugged program affects it. It also makes .pdbrc not portable between different platforms. It is usually not an issue, because the .pdbrc file usually contains ASCII-only data. But maybe always use UTF-8 for .pdbrc files? ---------- components: Library (Lib) messages: 372454 nosy: serhiy.storchaka priority: normal severity: normal status: open title: pdb uses the locale encoding for .pdbrc _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 06:50:02 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 27 Jun 2020 10:50:02 +0000 Subject: [New-bugs-announce] [issue41138] trace CLI reads source files using the locale encoding Message-ID: <1593255002.94.0.166052202636.issue41138@roundup.psfhosted.org> New submission from Serhiy Storchaka : The command line interface of the trace module reads source files using the locale encoding. The following file fixes this by reading a file in binary mode (compile() detects the encoding). It fixes also a resource warning when save counts to a file. 
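A minimal sketch of the binary-read approach described above: when compile() is given bytes it honours the PEP 263 coding cookie (or the UTF-8 default) rather than the current locale. The run_script helper and 'script.py' path are placeholders:

```
# Sketch of reading source in binary mode and letting compile() detect the
# encoding.  run_script and 'script.py' are placeholders, not the patch.
def run_script(path):
    with open(path, "rb") as f:          # no locale-dependent decoding here
        source = f.read()
    code = compile(source, path, "exec") # honours the coding cookie / UTF-8
    exec(code, {"__name__": "__main__", "__file__": path})

# run_script("script.py")
```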
---------- components: Library (Lib) messages: 372461 nosy: belopolsky, serhiy.storchaka priority: normal severity: normal status: open title: trace CLI reads source files using the locale encoding type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 07:30:05 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 27 Jun 2020 11:30:05 +0000 Subject: [New-bugs-announce] [issue41139] cgi uses the locale encoding for log files Message-ID: <1593257405.62.0.0746372218017.issue41139@roundup.psfhosted.org> New submission from Serhiy Storchaka : The cgi module provides undocumented feasibility for logging. cgi.log() formats log message and appends it to the log file with name specified by cgi.logfile if it was not empty before the first use of cgi.log(). One of problems is that it uses the locale encoding for log file. Therefore the result depends on the locale at the moment of the first use of cgi.log(). We can fix this by using some fixed encoding (UTF-8). Or maybe just remove this undocumented feature. ---------- messages: 372462 nosy: Rhodri James, ethan.furman, serhiy.storchaka, vinay.sajip priority: normal severity: normal status: open title: cgi uses the locale encoding for log files type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 07:31:23 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 27 Jun 2020 11:31:23 +0000 Subject: [New-bugs-announce] [issue41140] cgitb uses the locale encoding for log files Message-ID: <1593257483.85.0.980733000785.issue41140@roundup.psfhosted.org> New submission from Serhiy Storchaka : If logdir is not None the exception handler in cgitb tries to save the description of an error in that directory using the locale encoding. It will fail if the description contains non-encodable characters. We should either use corresponding error handlers (e.g. 'xmlcharrreplace' for html and 'backslashreplace' for text) for handling encoding errors or use the UTF-8 encoding. ---------- components: Library (Lib) messages: 372463 nosy: Rhodri James, ethan.furman, serhiy.storchaka, vinay.sajip priority: normal severity: normal status: open title: cgitb uses the locale encoding for log files type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 09:07:24 2020 From: report at bugs.python.org (Sergey Fedoseev) Date: Sat, 27 Jun 2020 13:07:24 +0000 Subject: [New-bugs-announce] [issue41141] remove unneeded handling of '.' and '..' from patlib.Path.iterdir() Message-ID: <1593263244.58.0.0850216109396.issue41141@roundup.psfhosted.org> New submission from Sergey Fedoseev : Currently patlib.Path.iterdir() filters out '.' and '..'. It's unneeded since patlib.Path.iterdir() uses os.listdir() under the hood, which returns neither '.' nor '..'. https://docs.python.org/3/library/os.html#os.listdir ---------- components: Library (Lib) messages: 372465 nosy: sir-sigurd priority: normal severity: normal status: open title: remove unneeded handling of '.' and '..' 
from patlib.Path.iterdir() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 11:01:56 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 27 Jun 2020 15:01:56 +0000 Subject: [New-bugs-announce] [issue41142] msilib.CAB doesnot support non-ASCII files Message-ID: <1593270116.07.0.608480331512.issue41142@roundup.psfhosted.org> New submission from Serhiy Storchaka : There are several problems with _msi.FCICreate() used to create the CAB file. It encodes the CAB names and added file names to UTF-8 and then use them as encoded with the local encoding. So you cannot create a CAB file in a directory with non-ASCII name or add files from a directory with non-ASCII name. This is a Python 3 regression. Two possible solutions: 1. Encode paths with the locale encoding. It will add support for paths encodable with the locale encoding. 2. Encode them to UTF-8, but use a wrapper for opening file which converts the path from UTF-8 to UTF-16 and uses the Unicode variant of _open(). It will support arbitrary names. ---------- components: Windows messages: 372466 nosy: paul.moore, serhiy.storchaka, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: msilib.CAB doesnot support non-ASCII files type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 11:14:09 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 27 Jun 2020 15:14:09 +0000 Subject: [New-bugs-announce] [issue41143] distutils uses the locale encoding for the .pypirc file Message-ID: <1593270849.73.0.549348582361.issue41143@roundup.psfhosted.org> New submission from Serhiy Storchaka : I have not found any mention about the encoding of .pypirc files. Currently distutils uses the locale encoding for reading and writing them. It makes them potentially nonportable if they contain non-ASCII data (not sure if it is possible) and depending on the user settings. I think that if the only ASCII content is allowed, it would be safer to use explicit ASCII encoding. If non-ASCII content is allowed, then it may be worth always to use UTF-8. What do you think? ---------- components: Distutils messages: 372467 nosy: alexis, dstufft, eric.araujo, lemburg, paul.moore, serhiy.storchaka, tarek priority: normal severity: normal status: open title: distutils uses the locale encoding for the .pypirc file _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Jun 27 12:57:08 2020 From: report at bugs.python.org (E. Paine) Date: Sat, 27 Jun 2020 16:57:08 +0000 Subject: [New-bugs-announce] [issue41144] IDLE: raises ImportError when opening special modules Message-ID: <1593277028.54.0.214290317054.issue41144@roundup.psfhosted.org> New submission from E. Paine : When opening special modules (such as os.path) through the "Open Module" dialog, an ImportError is raised. The fix is to catch this error and retry the loader call without the "name" argument (hence opening the true file). 
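Not IDLE's actual code path, just an illustration of why a name like 'os.path' trips things up: its spec and loader belong to the real module (posixpath or ntpath), so the requested name and the loader's name differ:

```
# Illustration only, not IDLE's code: the spec found for 'os.path' belongs
# to the real module, so its name does not match the name asked for.
import importlib.util

spec = importlib.util.find_spec("os.path")
print(spec.name)     # 'posixpath' (or 'ntpath' on Windows), not 'os.path'
print(spec.origin)   # path of the true source file to open
```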
---------- messages: 372469 nosy: epaine, taleinat, terry.reedy priority: normal severity: normal status: open title: IDLE: raises ImportError when opening special modules _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 05:43:33 2020 From: report at bugs.python.org (Guillaume Gauvrit) Date: Sun, 28 Jun 2020 09:43:33 +0000 Subject: [New-bugs-announce] [issue41145] EmailMessage.as_string is altering the message state and actually fixes bugs Message-ID: <1593337413.31.0.292797380525.issue41145@roundup.psfhosted.org> New submission from Guillaume Gauvrit : I am currently refactoring code and using the EmailMessage API to build messages. I encountered weird behavior while building an email. The `as_string()` method is fixing up the state that the `make_alternative` method left behind. So, to me there are two bugs here: an `as_string` method should not mutate internal state, and `make_alternative` should create a correct internal state. It can be summed up in the following program: ``` $ python Python 3.8.3 (default, May 17 2020, 18:15:42) [GCC 10.1.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import email >>> from email.message import EmailMessage, MIMEPart >>> >>> msg = EmailMessage() >>> msg.make_alternative() >>> print(msg.get_boundary()) None >>> print(msg._headers) [('Content-Type', 'multipart/alternative')] >>> _ = msg.as_string() >>> print(msg.get_boundary()) ===============3171625413581695247== >>> print(msg._headers) [('Content-Type', 'multipart/alternative; boundary="===============3171625413581695247=="')] ``` ---------- files: bug.py messages: 372508 nosy: mardiros priority: normal severity: normal status: open title: EmailMessage.as_string is altering the message state and actually fixes bugs type: resource usage versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49271/bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 07:00:43 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 28 Jun 2020 11:00:43 +0000 Subject: [New-bugs-announce] [issue41146] Convert signal.default_int_handler() to Argument Clinic Message-ID: <1593342043.08.0.492755229257.issue41146@roundup.psfhosted.org> New submission from Serhiy Storchaka : It adds a signature to signal.default_int_handler(). ---------- components: Argument Clinic, Extension Modules messages: 372511 nosy: larry, serhiy.storchaka priority: normal severity: normal status: open title: Convert signal.default_int_handler() to Argument Clinic type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 08:27:34 2020 From: report at bugs.python.org (Peter Law) Date: Sun, 28 Jun 2020 12:27:34 +0000 Subject: [New-bugs-announce] [issue41147] Document that redirect_std{out, err} yield the new stream as the context variable Message-ID: <1593347254.67.0.116081162021.issue41147@roundup.psfhosted.org> New submission from Peter Law : In `contextlib`, `_RedirectStream` (the class behind `redirect_stdout` and `redirect_stderr`) returns the current stream target as its context variable, which allows code like this: ``` python with redirect_stdout(io.StringIO()) as buffer: do_stuff() use(buffer.getvalue()) ``` where you capture the redirected stream without a separate line to declare the variable.
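The same applies to `redirect_stderr`; a small self-contained example, included here only for completeness:

```python
import contextlib
import io
import sys

with contextlib.redirect_stderr(io.StringIO()) as buffer:
    print("warning: something happened", file=sys.stderr)

assert buffer.getvalue() == "warning: something happened\n"
```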
This isn't documented (See https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout), yet is potentially useful. Unless there's a reason that this isn't documented, I propose that the documentation be modified to include it. Aside: After initially reporting this against the typeshed (https://github.com/python/typeshed/issues/4283) I'm also working on a PR to the typeshed to include this there. ---------- assignee: docs at python components: Documentation messages: 372513 nosy: PeterJCLaw, docs at python priority: normal severity: normal status: open title: Document that redirect_std{out,err} yield the new stream as the context variable type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 11:26:32 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 28 Jun 2020 15:26:32 +0000 Subject: [New-bugs-announce] [issue41148] IDLE uses the locale encoding for config files Message-ID: <1593357992.42.0.986776175737.issue41148@roundup.psfhosted.org> New submission from Serhiy Storchaka : IDLE uses the locale encoding for reading and writing config files. Default config files are ASCII-only, but if user config files contain non-ASCII data, it makes them non-portable and depending on the environment of IDLE. Could they contain file paths? If yes, then not all file paths can be saved. In any case it is better to use a fixed encoding for config files (ASCII or UTF-8). ---------- assignee: terry.reedy components: IDLE messages: 372520 nosy: serhiy.storchaka, taleinat, terry.reedy priority: normal severity: normal status: open title: IDLE uses the locale encoding for config files _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 12:01:04 2020 From: report at bugs.python.org (Barney Stratford) Date: Sun, 28 Jun 2020 16:01:04 +0000 Subject: [New-bugs-announce] [issue41149] Threads can fail to start Message-ID: <1593360064.64.0.828598087215.issue41149@roundup.psfhosted.org> New submission from Barney Stratford : >>> import threading >>> class foo (object): ... def __bool__ (self): ... return False ... def __call__ (self): ... print ("Running") ... >>> threading.Thread (target = foo ()).start () The expected result of these commands would be for the thread to print "Running". However, in actual fact it prints nothing at all. This is because threading.Thread.run only runs the target if it is True as a boolean. This is presumably to make the thread do nothing at all if the target is None. In this case, I have a legitimate target that is False as a boolean. I propose to remove the test altogether. The effect of this is that failure to set the target of the thread, or setting a non-callable target, will cause the thread to raise a TypeError as soon as it is started. Forgetting to set the target is in almost every case a bug, and bugs should never be silent. PR to follow. 
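To make the report concrete, a short sketch of the current behaviour and a possible workaround until the check is changed (the class is the same falsy-but-callable example as above):

```python
import threading

class Foo:
    def __bool__(self):
        return False

    def __call__(self):
        print("Running")

target = Foo()

# Today this silently does nothing, because Thread.run() only calls the
# target if it evaluates as true.
threading.Thread(target=target).start()

# Workaround: wrap the falsy callable in something truthy, e.g. a lambda.
threading.Thread(target=lambda: target()).start()
```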
---------- components: Library (Lib) messages: 372521 nosy: BarneyStratford priority: normal severity: normal status: open title: Threads can fail to start type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 12:22:22 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 28 Jun 2020 16:22:22 +0000 Subject: [New-bugs-announce] [issue41150] pipes uses text files and the locale encoding Message-ID: <1593361342.8.0.644791868441.issue41150@roundup.psfhosted.org> New submission from Serhiy Storchaka : The pipes module was designed as a Python interface to Unix shell pipelines. In Python 2 it works with binary streams (on Unix). But in Python 3 it opens all files and pipes in text mode with the locale encoding. It makes it inapplicable for processing binary data and text data non-encodable with the locale encoding. ---------- components: Library (Lib) messages: 372524 nosy: serhiy.storchaka priority: normal severity: normal status: open title: pipes uses text files and the locale encoding type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 13:04:03 2020 From: report at bugs.python.org (Nathaniel Smith) Date: Sun, 28 Jun 2020 17:04:03 +0000 Subject: [New-bugs-announce] [issue41151] Support for new Windows pseudoterminals in the subprocess module Message-ID: <1593363843.17.0.163145949347.issue41151@roundup.psfhosted.org> New submission from Nathaniel Smith : So Windows finally has pty support: https://devblogs.microsoft.com/commandline/windows-command-line-introducing-the-windows-pseudo-console-conpty/ However, the API is a bit weird. Unlike Unix, when you create a Windows pty, there's no way to directly get access to the "slave" handle. Instead, you first call CreatePseudoConsole to get a special "HPCON" object, which is similar to a Unix pty master. And then you have to use a special CreateProcess incantation to spawn a child process that's attached to the pty. Specifically, what you have to do is set a special entry in the "lpAttributeList", with type "PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE". Details: https://docs.microsoft.com/en-us/windows/console/creating-a-pseudoconsole-session Unfortunately, the subprocess module does not provide any way to pass arbitrary attributes through lpAttributeList, which means that it's currently impossible to use pty support on Windows without reimplementing the whole subprocess module. It would be nice if the subprocess module could somehow support this. Off the top of my head, I can think of three possible APIs: Option 1: full support for Windows ptys: this would require wrapping CreatePseudoConsole, providing some object to represent a Windows pty, etc. This is fairly complex (especially since CreatePseudoConsole requires you to pass in some pipe handles, and the user might want to create those themselves). Option 2: minimal support for Windows ptys: add another supported field to the subprocess module's lpAttributeList wrapper, that lets the user pass in an "HPCON" cast to a Python integer, and stuffs it into the attribute list. This would require users to do all the work to actually *get* the HPCON object, but at least it would make ptys possible to use.
Option 3: generic support for unrecognized lpAttributeList entries: add a field to the subprocess module's lpAttributeList wrapper that lets you add arbitrary entries, specified by type number + arbitrary pointer/chunk of bytes. (Similar to how Python's ioctl or setsockopt wrappers work.) Annoyingly, it doesn't seem to be enough to just pass in a buffer object, because for pseudoconsole support, you actually have to pass in an opaque "HPCON" object directly. (This is kind of weird, and might even be a bug, see: https://github.com/microsoft/terminal/issues/6705) ---------- messages: 372526 nosy: giampaolo.rodola, gregory.p.smith, njs priority: normal severity: normal status: open title: Support for new Windows pseudoterminals in the subprocess module type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 13:56:42 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 28 Jun 2020 17:56:42 +0000 Subject: [New-bugs-announce] [issue41152] IDLE: revise setting of iomenu.encoding and .errors Message-ID: <1593367002.84.0.77588386021.issue41152@roundup.psfhosted.org> New submission from Terry J. Reedy : When testing and on Windows, iomenu.encoding and .errors are set to utf-8 and surrogateescape*. When running otherwise, these are set with baroque code I don't understand. (Currently lines 31 to 61.) 1. Combine the two conditional statements for testing and Windows. 2. Ned, on my Catalina Macbook, the 30-line 'else' sections sets encoding, errors to 'utf-8', 'strict'. Should there ever be any other result on Mac we care about? If not, I would like to directly set them, as on Windows. 3. Serhiy, does the 'baroque code' look right to you, for Linux (or *nix in general)? ---------- assignee: terry.reedy components: IDLE messages: 372527 nosy: ned.deily, serhiy.storchaka, taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: revise setting of iomenu.encoding and .errors type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Jun 28 16:16:57 2020 From: report at bugs.python.org (William Pickard) Date: Sun, 28 Jun 2020 20:16:57 +0000 Subject: [New-bugs-announce] [issue41153] [easy Doc] "PyPreConfig_InitIsolatedConfig" and "PyPreConfig_InitPythonConfig" are given opposite documentation. Message-ID: <1593375417.17.0.026489399384.issue41153@roundup.psfhosted.org> New submission from William Pickard : The initconfig API functions "PyPreConfig_InitPythonConfig" and "PyPreConfig_InitIsolatedConfig" are mistakenly documented for the other method. ---------- assignee: docs at python components: Documentation messages: 372531 nosy: WildCard65, docs at python priority: normal pull_requests: 20358 severity: normal status: open title: [easy Doc] "PyPreConfig_InitIsolatedConfig" and "PyPreConfig_InitPythonConfig" are given opposite documentation. 
type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 02:56:41 2020 From: report at bugs.python.org (Rahul Jha) Date: Mon, 29 Jun 2020 06:56:41 +0000 Subject: [New-bugs-announce] [issue41154] test_pkgutil:test_name_resolution fails on master Message-ID: <1593413801.07.0.974965447149.issue41154@roundup.psfhosted.org> New submission from Rahul Jha : After configuring and building using the command: ./configure --with-pydebug ** make -j I ran the test suite (without changing anything) and saw that test_pkg has failed. Here is the output of `./python.exe -m test -v test_pkgutil`: == CPython 3.10.0a0 (heads/master:cd3c2bdd5d, Jun 28 2020, 13:29:09) [Clang 9.0.0 (clang-900.0.39.2)] == macOS-10.12.6-x86_64-i386-64bit little-endian == cwd: /Users/rahuljha/Documents/cpython/build/test_python_10678? == CPU count: 4 == encodings: locale=UTF-8, FS=utf-8 0:00:00 load avg: 2.01 Run tests sequentially 0:00:00 load avg: 2.01 [1/1] test_pkgutil test_getdata_filesys (test.test_pkgutil.PkgutilTests) ... ok test_getdata_zipfile (test.test_pkgutil.PkgutilTests) ... ok test_name_resolution (test.test_pkgutil.PkgutilTests) ... ERROR test_unreadable_dir_on_syspath (test.test_pkgutil.PkgutilTests) ... ok test_walk_packages_raises_on_string_or_bytes_input (test.test_pkgutil.PkgutilTests) ... ok test_walkpackages_filesys (test.test_pkgutil.PkgutilTests) ... ok test_walkpackages_zipfile (test.test_pkgutil.PkgutilTests) Tests the same as test_walkpackages_filesys, only with a zip file. ... ok test_alreadyloaded (test.test_pkgutil.PkgutilPEP302Tests) ... ok test_getdata_pep302 (test.test_pkgutil.PkgutilPEP302Tests) ... ok test_iter_importers (test.test_pkgutil.ExtendPathTests) ... ok test_mixed_namespace (test.test_pkgutil.ExtendPathTests) ... ok test_simple (test.test_pkgutil.ExtendPathTests) ... ok test_nested (test.test_pkgutil.NestedNamespacePackageTest) ... ok test_find_loader_avoids_emulation (test.test_pkgutil.ImportlibMigrationTests) ... ok test_find_loader_missing_module (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_importer_avoids_emulation (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_None_in_sys_modules (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_avoids_emulation (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_handles_missing_loader_attribute (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_handles_missing_spec_attribute (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_handles_spec_attribute_none (test.test_pkgutil.ImportlibMigrationTests) ... ok test_importer_deprecated (test.test_pkgutil.ImportlibMigrationTests) ... ok test_iter_importers_avoids_emulation (test.test_pkgutil.ImportlibMigrationTests) ... ok test_loader_deprecated (test.test_pkgutil.ImportlibMigrationTests) ... 
ok ====================================================================== ERROR: test_name_resolution (test.test_pkgutil.PkgutilTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/rahuljha/Documents/cpython/Lib/test/test_pkgutil.py", line 262, in test_name_resolution mod = importlib.import_module(uw) File "/Users/rahuljha/Documents/cpython/Lib/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1030, in _gcd_import File "", line 1007, in _find_and_load File "", line 984, in _find_and_load_unlocked ModuleNotFoundError: No module named '?' ---------------------------------------------------------------------- Ran 24 tests in 0.186s FAILED (errors=1) test test_pkgutil failed test_pkgutil failed == Tests result: FAILURE == 1 test failed: test_pkgutil Total duration: 482 ms Tests result: FAILURE Py vulture ~/Documents/cpython master !1 ?1 ? ./python.exe -m test -v test_pkgutil == CPython 3.10.0a0 (heads/master:cd3c2bdd5d, Jun 28 2020, 13:29:09) [Clang 9.0.0 (clang-900.0.39.2)] == macOS-10.12.6-x86_64-i386-64bit little-endian == cwd: /Users/rahuljha/Documents/cpython/build/test_python_21819? == CPU count: 4 == encodings: locale=UTF-8, FS=utf-8 0:00:00 load avg: 2.69 Run tests sequentially 0:00:00 load avg: 2.69 [1/1] test_pkgutil test_getdata_filesys (test.test_pkgutil.PkgutilTests) ... ok test_getdata_zipfile (test.test_pkgutil.PkgutilTests) ... ok test_name_resolution (test.test_pkgutil.PkgutilTests) ... ERROR test_unreadable_dir_on_syspath (test.test_pkgutil.PkgutilTests) ... ok test_walk_packages_raises_on_string_or_bytes_input (test.test_pkgutil.PkgutilTests) ... ok test_walkpackages_filesys (test.test_pkgutil.PkgutilTests) ... ok test_walkpackages_zipfile (test.test_pkgutil.PkgutilTests) Tests the same as test_walkpackages_filesys, only with a zip file. ... ok test_alreadyloaded (test.test_pkgutil.PkgutilPEP302Tests) ... ok test_getdata_pep302 (test.test_pkgutil.PkgutilPEP302Tests) ... ok test_iter_importers (test.test_pkgutil.ExtendPathTests) ... ok test_mixed_namespace (test.test_pkgutil.ExtendPathTests) ... ok test_simple (test.test_pkgutil.ExtendPathTests) ... ok test_nested (test.test_pkgutil.NestedNamespacePackageTest) ... ok test_find_loader_avoids_emulation (test.test_pkgutil.ImportlibMigrationTests) ... ok test_find_loader_missing_module (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_importer_avoids_emulation (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_None_in_sys_modules (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_avoids_emulation (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_handles_missing_loader_attribute (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_handles_missing_spec_attribute (test.test_pkgutil.ImportlibMigrationTests) ... ok test_get_loader_handles_spec_attribute_none (test.test_pkgutil.ImportlibMigrationTests) ... ok test_importer_deprecated (test.test_pkgutil.ImportlibMigrationTests) ... ok test_iter_importers_avoids_emulation (test.test_pkgutil.ImportlibMigrationTests) ... ok test_loader_deprecated (test.test_pkgutil.ImportlibMigrationTests) ... 
ok ====================================================================== ERROR: test_name_resolution (test.test_pkgutil.PkgutilTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/rahuljha/Documents/cpython/Lib/test/test_pkgutil.py", line 262, in test_name_resolution mod = importlib.import_module(uw) File "/Users/rahuljha/Documents/cpython/Lib/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 1030, in _gcd_import File "", line 1007, in _find_and_load File "", line 984, in _find_and_load_unlocked ModuleNotFoundError: No module named '?' ---------------------------------------------------------------------- Ran 24 tests in 0.184s FAILED (errors=1) test test_pkgutil failed test_pkgutil failed == Tests result: FAILURE == 1 test failed: test_pkgutil Total duration: 635 ms Tests result: FAILURE ---------- components: Build, Distutils, Tests messages: 372549 nosy: RJ722, dstufft, eric.araujo, vinay.sajip priority: normal severity: normal status: open title: test_pkgutil:test_name_resolution fails on master versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 04:31:35 2020 From: report at bugs.python.org (Svetoslav Inkolov) Date: Mon, 29 Jun 2020 08:31:35 +0000 Subject: [New-bugs-announce] [issue41155] Tkinter -postoffset not working for TCombobox Message-ID: <1593419495.04.0.313976680119.issue41155@roundup.psfhosted.org> New submission from Svetoslav Inkolov : Doing a configuration of the tkinter combobox for the dropdown list with: style = ttk.Style() style.configure('TCombobox', postoffset=(0,0,width,0)) results in no changing of the combobox. In Python 2.7.16 or generally in python 2 this functionality works well. Please see the sample source-code attached. ---------- components: Tkinter files: test_variable_dropdown-width_of_combobox.py messages: 372553 nosy: Sveti007 priority: normal severity: normal status: open title: Tkinter -postoffset not working for TCombobox type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49272/test_variable_dropdown-width_of_combobox.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 05:18:48 2020 From: report at bugs.python.org (Inada Naoki) Date: Mon, 29 Jun 2020 09:18:48 +0000 Subject: [New-bugs-announce] [issue41156] Remove Task.all_tasks and Task.current_task Message-ID: <1593422328.24.0.527300036648.issue41156@roundup.psfhosted.org> New submission from Inada Naoki : They are documented as "will be removed in version 3.9", but they are not removed in 3.9 beta. * Remove them in 3.10. 
* Update ~3.9 documents to "will be removed in version 3.10" ---------- components: asyncio messages: 372557 nosy: asvetlov, inada.naoki, yselivanov priority: normal severity: normal status: open title: Remove Task.all_tasks and Task.current_task versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 06:51:06 2020 From: report at bugs.python.org (Jay Patel) Date: Mon, 29 Jun 2020 10:51:06 +0000 Subject: [New-bugs-announce] [issue41157] email.message_from_string() is unable to find the headers for the .msg files Message-ID: <1593427866.61.0.334200042199.issue41157@roundup.psfhosted.org> New submission from Jay Patel : I need to extract the email data from an MSG file on Python v2.7. But as Python v2.7 has been deprecated, I tried to replicate this scenario on Python v3.8 and faced the same issue. I am trying to extract the message using the "Message" class of the "extract_msg" module. After extracting the text from the "Message" object, I am using the email.message_from_string() method to separate the headers and the body (or payload). The same workflow can be observed in the "extract_mail.py" file. The issue with the attached file, "msgfile_not_working_correctly.msg", is that the headers of this file begin with "Microsoft Mail Internet Headers Version 2.0", which is interpreted as the body and not as headers (as it is not in the standard email headers format like "To": "receiver at gmail.com"). According to this link (https://support.microsoft.com/en-us/office/view-internet-message-headers-in-outlook-cd039382-dc6e-4264-ac74-c048563d212c), the message headers in Outlook will begin with "Microsoft Mail Internet Headers Version 2.0", which is added by Outlook (mentioned in the "Interpreting email headers" section of that link). The email data can be observed in the "email_data.txt" file. I have tried omitting the first line when there are no standard headers, and it works as expected. Can this scenario be handled at the module level (in the email module), or is there any other way to extract headers from .msg files? ---------- files: extract_mail.py messages: 372560 nosy: jpatel priority: normal severity: normal status: open title: email.message_from_string() is unable to find the headers for the .msg files type: enhancement versions: Python 3.8 Added file: https://bugs.python.org/file49273/extract_mail.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 07:37:57 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 29 Jun 2020 11:37:57 +0000 Subject: [New-bugs-announce] [issue41158] IDLE: rewrite the code for handling file encoding Message-ID: <1593430677.27.0.733244762594.issue41158@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed patch rewrites the code in IDLE for detecting Python file encodings and for decoding and encoding Python sources, by using the tokenize module instead of a handmade implementation of PEP 263.
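For reference, the tokenize module already exposes the PEP 263 machinery that such a rewrite can reuse; a minimal sketch (the path is a placeholder):

```python
import tokenize

path = "some_module.py"  # placeholder file name

# Detect the declared (or default) source encoding without decoding the file.
with open(path, "rb") as f:
    encoding, first_lines = tokenize.detect_encoding(f.readline)

# Or open the file already decoded using the detected encoding.
with tokenize.open(path) as f:
    source = f.read()
```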
---------- assignee: terry.reedy components: IDLE messages: 372563 nosy: serhiy.storchaka, terry.reedy priority: normal severity: normal status: open title: IDLE: rewrite the code for handling file encoding type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 10:50:41 2020 From: report at bugs.python.org (=?utf-8?b?SHJ2b2plIE5pa8WhacSH?=) Date: Mon, 29 Jun 2020 14:50:41 +0000 Subject: [New-bugs-announce] [issue41159] Nested async dict comprehension fails with SyntaxError Message-ID: <1593442241.74.0.468196599971.issue41159@roundup.psfhosted.org> New submission from Hrvoje Nik?i? : Originally brought up on StackOverflow, https://stackoverflow.com/questions/60799366/nested-async-comprehension : This dict comprehension parses and works correctly: async def bar(): return { n: await foo(n) for n in [1, 2, 3] } But making it nested fails with a SyntaxError: async def bar(): return { i: { n: await foo(n) for n in [1, 2, 3] } for i in [1,2,3] } The error reported is: File "", line 0 SyntaxError: asynchronous comprehension outside of an asynchronous function ---------- components: Interpreter Core messages: 372582 nosy: hniksic priority: normal severity: normal status: open title: Nested async dict comprehension fails with SyntaxError type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 11:05:53 2020 From: report at bugs.python.org (Marius Bakke) Date: Mon, 29 Jun 2020 15:05:53 +0000 Subject: [New-bugs-announce] [issue41160] Cross-compiling for GNU/Hurd fails Message-ID: <1593443153.41.0.00823928584858.issue41160@roundup.psfhosted.org> New submission from Marius Bakke : Attempting to cross-compile for i586-pc-gnu fails with the following message: checking MACHDEP... configure: error: cross build not supported for i586-pc-gnu Adding a trivial case for it in the configure script is sufficient. ---------- components: Build messages: 372590 nosy: mbakke priority: normal severity: normal status: open title: Cross-compiling for GNU/Hurd fails _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 12:22:57 2020 From: report at bugs.python.org (Christian Heimes) Date: Mon, 29 Jun 2020 16:22:57 +0000 Subject: [New-bugs-announce] [issue41161] libmpdec-2.5.0 update is missing news entry Message-ID: <1593447777.78.0.32434043629.issue41161@roundup.psfhosted.org> New submission from Christian Heimes : libmpdec was updated in bpo-40874 but there is no entry in the changelog. Every non-trivial change should be accompanied by a Misc/NEWS.d/ blurb. Please update the changelog manually. 
---------- assignee: skrah components: Documentation keywords: 3.9regression messages: 372600 nosy: christian.heimes, skrah priority: normal severity: normal stage: needs patch status: open title: libmpdec-2.5.0 update is missing news entry type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 12:28:11 2020 From: report at bugs.python.org (Steve Dower) Date: Mon, 29 Jun 2020 16:28:11 +0000 Subject: [New-bugs-announce] [issue41162] Clear audit hooks after destructors Message-ID: <1593448091.24.0.691665000355.issue41162@roundup.psfhosted.org> New submission from Steve Dower : Because of when _Py_ClearAuditHooks is called during finalization, it is possible that __del__ destructors will be called after hooks have been cleared. Audit events that would be raised here are dropped. We should ensure these events are received by any known hooks for the interpreter (Python) or the runtime (C). (Thanks to Frank Li for the report.) ---------- components: Interpreter Core messages: 372601 nosy: christian.heimes, steve.dower priority: normal severity: normal stage: needs patch status: open title: Clear audit hooks after destructors type: security versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 13:35:07 2020 From: report at bugs.python.org (=?utf-8?b?UGV0ZXIgS3XFpcOhaw==?=) Date: Mon, 29 Jun 2020 17:35:07 +0000 Subject: [New-bugs-announce] [issue41163] test_weakref hangs Message-ID: <1593452107.04.0.340745139309.issue41163@roundup.psfhosted.org> New submission from Peter Kuťák : The make command hangs on test_weakref. I am compiling Python 3.6.11 (the latest version compatible with my setup, Raspbian Jessie) on an OrangePi i96, a single-core ARM board. I think it is the same problem as issue29796. ---------- components: Tests messages: 372605 nosy: Peter Kuťák priority: normal severity: normal status: open title: test_weakref hangs type: compile error versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Jun 29 19:57:08 2020 From: report at bugs.python.org (Lawrence D'Anna) Date: Mon, 29 Jun 2020 23:57:08 +0000 Subject: [New-bugs-announce] [issue41164] allow python to build for macosx-11.0-arm64 Message-ID: <1593475028.57.0.758354865143.issue41164@roundup.psfhosted.org> New submission from Lawrence D'Anna : allow python to build for macosx-11.0-arm64, by adding the appropriate case to configure.ac ---------- components: Interpreter Core files: 0001-arm64.patch keywords: patch messages: 372641 nosy: lawrence-danna-apple priority: normal severity: normal status: open title: allow python to build for macosx-11.0-arm64 type: enhancement versions: Python 3.9 Added file: https://bugs.python.org/file49277/0001-arm64.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 00:53:16 2020 From: report at bugs.python.org (Inada Naoki) Date: Tue, 30 Jun 2020 04:53:16 +0000 Subject: [New-bugs-announce] [issue41165] [Python 3.10] Remove APIs deprecated since Python 3.3 Message-ID: <1593492796.52.0.194597415316.issue41165@roundup.psfhosted.org> New submission from Inada Naoki : I don't think we need to remove them all at once. But we can remove some of them for code health. c-api/module.rst ..
c:function:: const char* PyModule_GetFilename(PyObject *module) .. deprecated:: 3.2 c-api/init.rst .. c:function:: void PyEval_AcquireLock() .. deprecated:: 3.2 .. c:function:: void PyEval_ReleaseLock() .. deprecated:: 3.2 unittest: .. deprecated:: 3.1 The fail* aliases listed in the second column have been deprecated. .. deprecated:: 3.2 The assert* aliases listed in the third column have been deprecated. .. deprecated:: 3.2 ``assertRegexpMatches`` and ``assertRaisesRegexp`` have been renamed to :meth:`.assertRegex` and :meth:`.assertRaisesRegex`. urllib.request: .. class:: URLopener(proxies=None, **x509) .. deprecated:: 3.3 .. class:: FancyURLopener(...) .. deprecated:: 3.3 turtle: .. function:: settiltangle(angle) .. deprecated:: 3.1 imp: .. function:: get_suffixes() .. function:: find_module(name[, path]) .. function:: load_module(name, file, pathname, description) .. data:: PY_SOURCE .. data:: PY_COMPILED .. data:: C_EXTENSION .. data:: PKG_DIRECTORY .. data:: C_BUILTIN .. data:: PY_FROZEN configparser: .. method:: readfp(fp, filename=None) .. deprecated:: 3.2 email.errors: * :class:`MalformedHeaderDefect` -- A header was found that was missing a colon, or was otherwise malformed. .. deprecated:: 3.3 pkgutil: .. class:: ImpImporter(dirname=None) .. deprecated:: 3.3 .. class:: ImpLoader(fullname, file, filename, etc) .. deprecated:: 3.3 zipfile: .. exception:: BadZipfile Alias of :exc:`BadZipFile`, for compatibility with older Python versions. .. deprecated:: 3.2 inspect: .. function:: getargspec(func) .. deprecated:: 3.0 asyncore: .. deprecated:: 3.2 importlib: .. class:: Finder .. deprecated:: 3.3 .. method:: path_mtime(path) .. deprecated:: 3.3 ---------- components: Library (Lib) messages: 372653 nosy: inada.naoki priority: normal severity: normal status: open title: [Python 3.10] Remove APIs deprecated since Python 3.3 versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 03:01:22 2020 From: report at bugs.python.org (ABELARDO) Date: Tue, 30 Jun 2020 07:01:22 +0000 Subject: [New-bugs-announce] [issue41166] CLASS ATTRIBUTES Message-ID: <1593500482.74.0.237291555104.issue41166@roundup.psfhosted.org> New submission from ABELARDO : Hi there, I have encountered a possible bug inside the documentation. In the attached picture you can see a portion of text highlighted ("Class attribute"). I think it's a typo since there is not "class attributes" nor "instance attributes" but "class value" and "instance value". A "class value" is a value which is shared among instances and an "instance value" is a specific value of an attribute of an instance. Best regards. 
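For context, a minimal illustration of the distinction the tutorial is drawing (this mirrors the tutorial's Dog example, reproduced from memory rather than quoted):

```python
class Dog:
    kind = "canine"          # class attribute: shared by all instances

    def __init__(self, name):
        self.name = name     # instance attribute: unique to each instance

a, b = Dog("Fido"), Dog("Buddy")
assert a.kind == b.kind == "canine"
assert (a.name, b.name) == ("Fido", "Buddy")
```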
---------- assignee: docs at python components: Documentation files: Captura de pantalla de 2020-06-30 08-50-39.png messages: 372660 nosy: ABELARDOLG, docs at python priority: normal severity: normal status: open title: CLASS ATTRIBUTES versions: Python 3.8 Added file: https://bugs.python.org/file49278/Captura de pantalla de 2020-06-30 08-50-39.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 05:03:42 2020 From: report at bugs.python.org (Inada Naoki) Date: Tue, 30 Jun 2020 09:03:42 +0000 Subject: [New-bugs-announce] [issue41167] Add new formats to PyArg_ParseTuple for "str or None" Message-ID: <1593507822.5.0.360334275717.issue41167@roundup.psfhosted.org> New submission from Inada Naoki : PyArg_ParseTuple has the 'U' format for getting a Unicode object as PyObject*. But when users want to accept "str or None", they are forced to use 'O&' and write a custom converter. It is not convenient. I am proposing to add 'U?' as a "str or None" variant of 'U'. Does it make sense? ---------- components: C API messages: 372670 nosy: inada.naoki priority: normal severity: normal status: open title: Add new formats to PyArg_ParseTuple for "str or None" versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 08:04:44 2020 From: report at bugs.python.org (Iman Sharafodin) Date: Tue, 30 Jun 2020 12:04:44 +0000 Subject: [New-bugs-announce] [issue41168] Lack of proper checking in PyObject_SetAttr leads to segmentation fault Message-ID: <1593518684.95.0.149768712012.issue41168@roundup.psfhosted.org> New submission from Iman Sharafodin : I was testing the latest release of Python 3.6 (June 27, 2020) (https://www.python.org/ftp/python/3.6.11/Python-3.6.11.tgz) and I found that there is a lack of proper checks at line 956 of the Objects/object.c file, which can cause a segmentation fault. It could lead to security-related issues. I've attached the PoC.pyc. Program received signal SIGSEGV, Segmentation fault.
PyObject_SetAttr (v=v at entry=0x6d7373616c637463, name=0x7ffff7f75730, value=value at entry=0x0) at Objects/object.c:956 956 PyTypeObject *tp = Py_TYPE(v); ---------- components: Interpreter Core files: PoC.pyc messages: 372683 nosy: Iman Sharafodin priority: normal severity: normal status: open title: Lack of proper checking in PyObject_SetAttr leads to segmentation fault type: security versions: Python 3.6 Added file: https://bugs.python.org/file49280/PoC.pyc _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 10:06:11 2020 From: report at bugs.python.org (Wator Sead) Date: Tue, 30 Jun 2020 14:06:11 +0000 Subject: [New-bugs-announce] [issue41169] socket.inet_pton raises when passing an IPv6 address like "[::]" to it Message-ID: <1593525971.46.0.103341919582.issue41169@roundup.psfhosted.org> New submission from Wator Sead : 3.6: >>> import socket >>> socket.inet_pton(socket.AF_INET6,'[::]') b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00' 3.7 and above: >>> import socket >>> socket.inet_pton(socket.AF_INET6,'[::]') Traceback (most recent call last): File "<stdin>", line 1, in <module> OSError: illegal IP address string passed to inet_pton Both: >>> import socket >>> addr = '[::1]', 888 >>> ls = socket.socket(socket.AF_INET6) >>> cs = socket.socket(socket.AF_INET6) >>> ls.bind(addr) # no raise >>> ls.listen(1) >>> cs.connect(addr) # no raise ---------- components: Library (Lib) messages: 372694 nosy: seahoh priority: normal severity: normal status: open title: socket.inet_pton raises when passing an IPv6 address like "[::]" to it type: behavior versions: Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 10:44:52 2020 From: report at bugs.python.org (Niclas Larsson) Date: Tue, 30 Jun 2020 14:44:52 +0000 Subject: [New-bugs-announce] [issue41170] Use strnlen instead of strlen when the size is known. Message-ID: <1593528292.53.0.480525267545.issue41170@roundup.psfhosted.org> Change by Niclas Larsson : ---------- components: C API nosy: Niclas Larsson priority: normal severity: normal status: open title: Use strnlen instead of strlen when the size is known. versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 11:34:35 2020 From: report at bugs.python.org (William Pickard) Date: Tue, 30 Jun 2020 15:34:35 +0000 Subject: [New-bugs-announce] [issue41171] Create companion methods of "PyType_FromSpec*" to allow setting metaclass. Message-ID: <1593531275.14.0.931108904355.issue41171@roundup.psfhosted.org> New submission from William Pickard : From what I can tell, the current goal for Python is to have all C-based modules move away from static types and instead use "PyType_FromSpec" and the variant that specifies base classes. The only problem is, PyType_FromSpec and its variant make the assumption that the caller wants "PyType_Type" as the type's metaclass. Why not add companion methods to them prefixed with "PyMetaType" and have the "PyType" ones internally invoke these new methods with "PyType_Type" as the metaclass (to keep existing behavior and backwards compatibility)? ---------- components: C API messages: 372696 nosy: WildCard65 priority: normal severity: normal status: open title: Create companion methods of "PyType_FromSpec*" to allow setting metaclass.
type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 11:57:23 2020 From: report at bugs.python.org (Steve Dower) Date: Tue, 30 Jun 2020 15:57:23 +0000 Subject: [New-bugs-announce] [issue41172] test_peg_generator C tests fail on Windows ARM Message-ID: <1593532643.61.0.103127847831.issue41172@roundup.psfhosted.org> New submission from Steve Dower : These tests rely on MSVC to do some building, but Windows ARM devices do not currently have a compiler toolset (you need to cross-compile). We should skip these tests. Sample build: https://buildbot.python.org/all/#/builders/182/builds/773 Sample traceback: ====================================================================== ERROR: test_error_in_rules (test.test_peg_generator.test_c_parser.TestCParser) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\python\lib\test\test_peg_generator\test_c_parser.py", line 404, in test_error_in_rules self.run_test(grammar_source, test_source) File "C:\python\lib\test\test_peg_generator\test_c_parser.py", line 83, in run_test self.build_extension(grammar_source) File "C:\python\lib\test\test_peg_generator\test_c_parser.py", line 80, in build_extension generate_parser_c_extension(grammar, Path(self.tmp_path)) File "C:\python\Tools\peg_generator\pegen\testutil.py", line 104, in generate_parser_c_extension compile_c_extension(str(source), build_dir=str(path)) File "C:\python\Tools\peg_generator\pegen\build.py", line 90, in compile_c_extension cmd.run() File "C:\python\lib\distutils\command\build_ext.py", line 340, in run self.build_extensions() File "C:\python\lib\distutils\command\build_ext.py", line 449, in build_extensions self._build_extensions_serial() File "C:\python\lib\distutils\command\build_ext.py", line 474, in _build_extensions_serial self.build_extension(ext) File "C:\python\lib\distutils\command\build_ext.py", line 529, in build_extension objects = self.compiler.compile(sources, File "C:\python\lib\distutils\_msvccompiler.py", line 323, in compile self.initialize() File "C:\python\lib\distutils\_msvccompiler.py", line 220, in initialize vc_env = _get_vc_env(plat_spec) File "C:\python\lib\distutils\_msvccompiler.py", line 122, in _get_vc_env raise DistutilsPlatformError("Unable to find vcvarsall.bat") distutils.errors.DistutilsPlatformError: Unable to find vcvarsall.bat ---------- components: Tests, Windows messages: 372699 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: test_peg_generator C tests fail on Windows ARM versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 11:58:09 2020 From: report at bugs.python.org (Steve Dower) Date: Tue, 30 Jun 2020 15:58:09 +0000 Subject: [New-bugs-announce] [issue41173] Windows ARM buildbots cannot upload results Message-ID: <1593532689.42.0.224097599218.issue41173@roundup.psfhosted.org> New submission from Steve Dower : Sample build: https://buildbot.python.org/all/#/builders/182/builds/773 The second last step is failing for some reason, probably because it doesn't have the file it needs. 
---------- components: Build, Windows messages: 372700 nosy: paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Windows ARM buildbots cannot upload results versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 13:14:21 2020 From: report at bugs.python.org (Allan Feldman) Date: Tue, 30 Jun 2020 17:14:21 +0000 Subject: [New-bugs-announce] [issue41174] asyncio.coroutine decorator returns a non-generator function when using PYTHONASYNCIODEBUG Message-ID: <1593537261.43.0.523552907091.issue41174@roundup.psfhosted.org> New submission from Allan Feldman : This code behaves differently when PYTHONASYNCIODEBUG=1 import asyncio import inspect @asyncio.coroutine def foo(): yield from asyncio.sleep(0) print("isgeneratorfunction:", inspect.isgeneratorfunction(foo)) PYTHONASYNCIODEBUG: isgeneratorfunction: False non-debug mode: isgeneratorfunction: True When in debug mode, the `asyncio.coroutine` decorator returns a function that is not a generator function (https://github.com/python/cpython/blob/bd4a3f21454a6012f4353e2255837561fc9f0e6a/Lib/asyncio/coroutines.py#L144) The result is that introspection of functions is changed when PYTHONASYNCIODEBUG is enabled. ---------- components: asyncio messages: 372706 nosy: a-feld, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio.coroutine decorator returns a non-generator function when using PYTHONASYNCIODEBUG type: behavior versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 14:06:49 2020 From: report at bugs.python.org (Charalampos Stratakis) Date: Tue, 30 Jun 2020 18:06:49 +0000 Subject: [New-bugs-announce] [issue41175] Static analysis issues reported by GCC 10 Message-ID: <1593540409.36.0.220720333967.issue41175@roundup.psfhosted.org> New submission from Charalampos Stratakis : GCC added a static analysis tool recently [0]. Running it under for CPython code base produces some interesting results. Reproducer: ./configure --with-pydebug && CFLAGS='-fanalyzer' make Attaching the log. [0] https://developers.redhat.com/blog/2020/03/26/static-analysis-in-gcc-10/ ---------- files: debugstaticanalysis.txt messages: 372711 nosy: cstratak priority: normal severity: normal status: open title: Static analysis issues reported by GCC 10 versions: Python 3.10, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49281/debugstaticanalysis.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 16:44:35 2020 From: report at bugs.python.org (Richard Sheridan) Date: Tue, 30 Jun 2020 20:44:35 +0000 Subject: [New-bugs-announce] [issue41176] revise Tkinter mainloop dispatching flag behavior Message-ID: <1593549875.63.0.81357373837.issue41176@roundup.psfhosted.org> New submission from Richard Sheridan : This could also be considered a "behavior" type issue. `TkappObject` has a member `dispatching` that could usefully be exposed by a very simple read-only method for users to determine at runtime if the tkinter mainloop is running. 
Matplotlib and I'm sure other packages rely on fragile hacks (https://github.com/matplotlib/matplotlib/blob/a68562aa230e5895136120f5073dd01f124d728d/lib/matplotlib/cbook/__init__.py#L65-L71) to determine this state. I ran into this in https://github.com/matplotlib/matplotlib/pull/17802. All these projects would be more reliable with a new "dispatching()" method on the TkappObject, tkinter.Misc objects, and possibly the tkinter module itself. Internally, `dispatching` is used to, yes, determine if the mainloop is running. However, this determination is always done within the `WaitForMainloop` function (https://github.com/python/cpython/blob/bd4a3f21454a6012f4353e2255837561fc9f0e6a/Modules/_tkinter.c#L363-L380), which waits up to 1 second for the mainloop to come up. Apparently, this function allows a thread to implicitly wait for the loop to come up by calling any `TkappObject` method. This is a bad design choice in my opinion, because if client code wants to start immediately and the loop is not started by mistake, there will be a meaningless, hard-to-diagnose delay of one second before crashing. Instead, if some client code in a thread needs to wait for the mainloop to run, it should explicitly poll `dispatching()` on its own. This waiting behavior should be deprecated and, after a deprecation cycle perhaps, all `WaitForMainloop()` statements should be converted to inline `self->dispatching`. The correctness of the `dispatching` flag is dampened by the currently existing, undocumented `willdispatch` method which simply arbitrarily sets the `dispatching` to 1. It seems `willdispatch` was added 18 years ago to circumvent a bug building pydoc caused by `WaitForMainloop` not waiting long enough, as it tricks `WaitForMainloop` into... not waiting for the mainloop. This was in my opinion a bad choice in comparison to adding a dispatching flag: again, if some thread needs to wait for the mainloop, it should poll `dispatching()`, and avoid adding spurious 1 second waits. `willdispatch` currently has no references in CPython and most GitHub references are to Pycharm stubs for the CPython method. It should be deprecated and removed to preserve the correctness of `dispatching`. Happy to make a PR about this, except I don't understand clinic at all, nor the specifics of deprecation cycles in CPython. ---------- components: Tkinter messages: 372722 nosy: Richard Sheridan priority: normal severity: normal status: open title: revise Tkinter mainloop dispatching flag behavior type: enhancement versions: Python 3.10, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 19:10:03 2020 From: report at bugs.python.org (Brett Hannigan) Date: Tue, 30 Jun 2020 23:10:03 +0000 Subject: [New-bugs-announce] [issue41177] ConvertingList and ConvertingTuple lack iterators and ConvertingDict lacks items() Message-ID: <1593558603.02.0.467706950868.issue41177@roundup.psfhosted.org> New submission from Brett Hannigan : The logging.config module uses three internal data structures to hold items that may need to be converted to a handler or other object: ConvertingList, ConvertingTuple, and ConvertingDict. These three objects provide interfaces to get converted items using the __getitem__ methods. However, if a user tries to iterate over items in the container, they will get the un-converted entries. 
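To make the proposal concrete, a sketch of how client code might use it; note that dispatching() below is the proposed (currently hypothetical) method, not an existing tkinter API:

```python
import threading
import time
import tkinter

root = tkinter.Tk()

def worker():
    # Hypothetical API: poll the proposed flag instead of relying on the
    # implicit one-second WaitForMainloop delay inside Tcl calls.
    while not root.dispatching():
        time.sleep(0.01)
    root.after(0, lambda: print("mainloop is running"))

threading.Thread(target=worker).start()
root.mainloop()
```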
---------- components: Library (Lib) messages: 372724 nosy: Brett Hannigan priority: normal severity: normal status: open title: ConvertingList and ConvertingTuple lack iterators and ConvertingDict lacks items() type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 20:40:23 2020 From: report at bugs.python.org (DataGhost) Date: Wed, 01 Jul 2020 00:40:23 +0000 Subject: [New-bugs-announce] [issue41178] Registry writes on Windows Store - workaround Message-ID: <1593564023.02.0.36011703999.issue41178@roundup.psfhosted.org> New submission from DataGhost : I just installed the Windows Store version of Python (v3.8.3:6f8c832) on Windows 10.0.18362.900 (1903, old VM) and quickly ran into the issue that registry writes aren't visible system-wide. I later found a remark about this in the Known Issues section of the Using Python on Windows page. Real-world scenario: running "python -m pip install --user pipx" and subsequently "python -m pipx ensurepath" says it changed the PATH variable, but it does not take effect even after a reboot. I'd say this is one of the cases where a user might not know about what's going on and why it fails, even after reading the known issues, and the developer might not have thought of the existence or incompatibility of the Windows Store version of Python at all. I have found a workaround to do this, though, and I figured the best place to do that is in the actual winreg module so that users and developers don't need to worry about this. Calling reg.exe allows registry variables to be written in the "normal" registry, and these changes are system-wide rather than in the private copy Store apps use. Some functions could be changed to use the reg.exe binary instead of the system calls on Windows Store builds. Other functions, such as for querying, don't need any changes. I realise this is an ugly workaround so I'm just suggesting it in hopes of improving compatibility, so that users and developers don't need to worry about these issues anymore. I did some basic tests on this, by executing reg.exe through the subprocess module, and testing for a Windows Store instance by checking sys.base_exec_prefix. There are probably better ways but at least this shows that it should be possible in just Python. 
---------- components: Windows messages: 372726 nosy: DataGhost, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Registry writes on Windows Store - workaround type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Jun 30 22:10:57 2020 From: report at bugs.python.org (Sumanth Ratna) Date: Wed, 01 Jul 2020 02:10:57 +0000 Subject: [New-bugs-announce] [issue41179] find_library on macOS Big Sur Message-ID: <1593569457.74.0.77283115603.issue41179@roundup.psfhosted.org> New submission from Sumanth Ratna : The following all return None on macOS Big Sur, but return a valid path (str) on macOS Catalina: python3.6 -c "from ctypes import util; print(util.find_library('objc'))" python3.7 -c "from ctypes import util; print(util.find_library('objc'))" python3.8 -c "from ctypes import util; print(util.find_library('objc'))" The solution I'm thinking of is using platform.mac_ver() in ctypes.util.find_library to see if the macOS version is either 10.16 or 11.0; if so, use an alternative method to locate the path of the library somehow?else, use the current functionality. I'm hoping that looking in /System/Library/Frameworks/ is enough. >From the macOS Big Sur release notes (https://developer.apple.com/documentation/macos-release-notes/macos-big-sur-11-beta-release-notes): > New in macOS Big Sur 11 beta, the system ships with a built-in dynamic linker cache of all system-provided libraries. As part of this change, copies of dynamic libraries are no longer present on the filesystem. Code that attempts to check for dynamic library presence by looking for a file at a path or enumerating a directory will fail. Instead, check for library presence by attempting to `dlopen()` the path, which will correctly check for the library in the cache. (62986286) Related links: - https://bugs.python.org/issue41116 (this is a build issue, but is related I think) - https://stackoverflow.com/questions/62587131/macos-big-sur-python-ctypes-find-library-does-not-find-libraries-ssl-corefou - https://www.reddit.com/r/MacOSBeta/comments/hfknpa/is_corefoundation_missing_for_everyone_on_big_sur/ - https://github.com/vispy/vispy/issues/1885 - https://github.com/napari/napari/issues/1393 - https://github.com/espressif/esptool/issues/540 This is my first issue in Python; sorry in advance if I've made a mistake anywhere. ---------- components: ctypes messages: 372728 nosy: Sumanth Ratna priority: normal severity: normal status: open title: find_library on macOS Big Sur type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________