From report at bugs.python.org Wed Sep 1 00:39:04 2021 From: report at bugs.python.org (DMI-1407) Date: Wed, 01 Sep 2021 04:39:04 +0000 Subject: [New-bugs-announce] [issue45073] windows installer quiet installation targetdir escapes "quote"-symbol Message-ID: <1630471144.03.0.656994517168.issue45073@roundup.psfhosted.org> New submission from DMI-1407 : If the windows installer (Python 3.8.9 64bit exe) is run in quiet mode and the TargetDir option is used, then the last quote (") symbol gets escaped if the path ends with a backslash (\). Example: /quiet TargetDir="D:\pyt hon\" AssociateFiles=0 Result: TargetDir=hon\" AssociateFiles=0 This raises an error that the path contains an invalid character (the quote, of course). Example: /quiet TargetDir="D:\pyt hon" AssociateFiles=0 Result: installs correctly So in general "D:\pyt hon" indicates a file that's named "pyt hon", whereas "D:\pyt hon\" indicates a folder. Either way, "D:\pyt hon\" should be valid, and I don't understand why the first backslash does not escape the p and lead to "D:pyt hon" ... It's really annoying; please at least write a notice into the docs that the installer behaves like this. :/ ---------- components: Installation messages: 400809 nosy: DMI-1407 priority: normal severity: normal status: open title: windows installer quiet installation targetdir escapes "quote"-symbol type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 01:10:33 2021 From: report at bugs.python.org (William Fisher) Date: Wed, 01 Sep 2021 05:10:33 +0000 Subject: [New-bugs-announce] [issue45074] asyncio hang in subprocess wait_closed() on Windows, BrokenPipeError Message-ID: <1630473033.57.0.597404080995.issue45074@roundup.psfhosted.org> New submission from William Fisher : I have a reproducible case where stdin.wait_closed() is hanging on Windows. This happens in response to a BrokenPipeError. The same code works fine on Linux and MacOS. Please see the attached code for the demo. I believe the hang is related to this debug message from the logs: DEBUG <_ProactorWritePipeTransport closing fd=632>: Fatal write error on pipe transport Traceback (most recent call last): File "C:\hostedtoolcache\windows\Python\3.9.6\x64\lib\asyncio\proactor_events.py", line 379, in _loop_writing f.result() File "C:\hostedtoolcache\windows\Python\3.9.6\x64\lib\asyncio\windows_events.py", line 812, in _poll value = callback(transferred, key, ov) File "C:\hostedtoolcache\windows\Python\3.9.6\x64\lib\asyncio\windows_events.py", line 538, in finish_send return ov.getresult() BrokenPipeError: [WinError 109] The pipe has been ended It appears that the function that logs "Fatal write error on pipe transport" also calls _abort on the stream. If _abort is called before stdin.close(), everything is okay. If _abort is called after stdin.close(), stdin.wait_closed() will hang. Please see issue #44428 for another instance of a similar hang in wait_closed(). 
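The attached wait_closed.py is not reproduced in this digest, so as a rough illustration only, the pattern described above looks something like the sketch below (the exact trigger in the attachment may differ):

```python
import asyncio
import sys

async def main():
    # Child exits immediately without reading stdin, so a large write to its
    # stdin eventually fails with BrokenPipeError.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "pass",
        stdin=asyncio.subprocess.PIPE,
    )
    await proc.wait()
    try:
        proc.stdin.write(b"x" * 1_000_000)
        await proc.stdin.drain()
    except (BrokenPipeError, ConnectionResetError):
        pass
    proc.stdin.close()
    await proc.stdin.wait_closed()  # the step reported to hang on Windows

asyncio.run(main())
```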
---------- components: asyncio files: wait_closed.py messages: 400810 nosy: asvetlov, byllyfish, yselivanov priority: normal severity: normal status: open title: asyncio hang in subprocess wait_closed() on Windows, BrokenPipeError type: behavior versions: Python 3.10, Python 3.9 Added file: https://bugs.python.org/file50250/wait_closed.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 07:45:44 2021 From: report at bugs.python.org (Irit Katriel) Date: Wed, 01 Sep 2021 11:45:44 +0000 Subject: [New-bugs-announce] [issue45075] confusion between frame and frame_summary in traceback module Message-ID: <1630496744.47.0.411180872388.issue45075@roundup.psfhosted.org> New submission from Irit Katriel : def format_frame(frame) should be renamed to format_frame_summary(frame_summary) because it takes a FrameSummary object and not a frame. This is a new API in 3.11 so it can be changed now. There are also local variables in traceback.py which are called frame but actually represent a FrameSummary. This can be confusing and should be fixed at the same time, where it doesn't break the API. ---------- messages: 400825 nosy: iritkatriel priority: normal severity: normal status: open title: confusion between frame and frame_summary in traceback module _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 07:56:20 2021 From: report at bugs.python.org (otn) Date: Wed, 01 Sep 2021 11:56:20 +0000 Subject: [New-bugs-announce] [issue45076] open "r+" problem Message-ID: <1630497380.88.0.961026516524.issue45076@roundup.psfhosted.org> New submission from otn : For a file opened as "r+", even if write() is called before readline(), the readline() is effectively executed first. When tell() is added after the write(), it works correctly. ========================================== data1 = "aa\nbb\ncc\n" data2 = "xx\n" with open("data.txt","w") as f: f.write(data1) with open("data.txt","r+") as f: f.write(data2) # f.tell() print("Line:",repr(f.readline())) with open("data.txt") as f: print("All:",repr(f.read())) ========================================== OUTPUT: Line: 'aa\n' All: 'aa\nbb\ncc\nxx\n' EXPECTED: Line: 'bb\n' All: 'xx\nbb\ncc\n' ---------- components: IO messages: 400826 nosy: otn priority: normal severity: normal status: open title: open "r+" problem type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 09:20:44 2021 From: report at bugs.python.org (Kagami Sascha Rosylight) Date: Wed, 01 Sep 2021 13:20:44 +0000 Subject: [New-bugs-announce] [issue45077] multiprocessing.Pool(64) crashes on Windows Message-ID: <1630502444.21.0.00996444749357.issue45077@roundup.psfhosted.org> New submission from Kagami Sascha Rosylight : Similar issue to the previous issue 26903. ``` Python 3.9.7 (tags/v3.9.7:1016ef3, Aug 30 2021, 20:19:38) [MSC v.1929 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import multiprocessing >>> multiprocessing.cpu_count() 64 >>> multiprocessing.Pool(multiprocessing.cpu_count()) Exception in thread Thread-1: Traceback (most recent call last): File "C:\Users\sasch\AppData\Local\Programs\Python\Python39\lib\threading.py", line 973, in _bootstrap_inner >>> self.run() File "C:\Users\sasch\AppData\Local\Programs\Python\Python39\lib\threading.py", line 910, in run self._target(*self._args, **self._kwargs) File "C:\Users\sasch\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 519, in _handle_workers cls._wait_for_updates(current_sentinels, change_notifier) File "C:\Users\sasch\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 499, in _wait_for_updates wait(sentinels, timeout=timeout) File "C:\Users\sasch\AppData\Local\Programs\Python\Python39\lib\multiprocessing\connection.py", line 884, in wait ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout) File "C:\Users\sasch\AppData\Local\Programs\Python\Python39\lib\multiprocessing\connection.py", line 816, in _exhaustive_wait res = _winapi.WaitForMultipleObjects(L, False, timeout) ValueError: need at most 63 handles, got a sequence of length 66 ``` ---------- components: Windows messages: 400832 nosy: paul.moore, saschanaz, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: multiprocessing.Pool(64) crashes on Windows type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 11:54:48 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 01 Sep 2021 15:54:48 +0000 Subject: [New-bugs-announce] [issue45078] test_importlib: test_read_bytes() fails on AMD64 Windows8.1 Non-Debug 3.x Message-ID: <1630511688.74.0.180746211339.issue45078@roundup.psfhosted.org> New submission from STINNER Victor : Since build 305 (commit a40675c659cd8c0699f85ee9ac31660f93f8c2f5), test_importlib fails on AMD64 Windows8.1 Non-Debug 3.x: https://buildbot.python.org/all/#/builders/405/builds/305 The last successful build wa the build 304 (commit ee03bad25e83b00ba5fc2a0265b48c6286e6b3f7). Sadly, the test doesn't report the 'actual' variable value when the test fails. ====================================================================== FAIL: test_read_bytes (test.test_importlib.test_files.OpenNamespaceTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\buildarea\3.x.ware-win81-release.nondebug\build\lib\test\test_importlib\test_files.py", line 14, in test_read_bytes assert actual == b'Hello, UTF-8 world!\n' ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError ---------- components: Tests messages: 400847 nosy: brett.cannon, jaraco, vstinner priority: normal severity: normal status: open title: test_importlib: test_read_bytes() fails on AMD64 Windows8.1 Non-Debug 3.x versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 12:53:41 2021 From: report at bugs.python.org (Arcadiy Ivanov) Date: Wed, 01 Sep 2021 16:53:41 +0000 Subject: [New-bugs-announce] [issue45079] 3.8.11 and 3.8.12 missing Windows artifacts, only tarballs - build system failed? Message-ID: <1630515221.82.0.541526485344.issue45079@roundup.psfhosted.org> New submission from Arcadiy Ivanov : The following versions only contain tarballs with no Windows artifacts available. 
Is the build system OK? https://www.python.org/ftp/python/3.8.12/ https://www.python.org/ftp/python/3.8.11/ .10 is fine: https://www.python.org/ftp/python/3.8.10/ Latest 3.9 is OK as well. ---------- components: Windows messages: 400857 nosy: arcivanov, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: 3.8.11 and 3.8.12 missing Windows artifacts, only tarballs - build system failed? versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 12:54:27 2021 From: report at bugs.python.org (Leopold Talirz) Date: Wed, 01 Sep 2021 16:54:27 +0000 Subject: [New-bugs-announce] [issue45080] functools._HashedSeq implements __hash__ but not __eq__ Message-ID: <1630515267.9.0.090925115847.issue45080@roundup.psfhosted.org> New submission from Leopold Talirz : Disclaimer: this is my first issue on the python bug tracker. Please feel free to close this issue and redirect this to a specific mailing list, if more appropriate. The implementation of `functools._HashedSeq` [1] does not include an implementation of `__eq__` that takes advantage of the hash: ``` class _HashedSeq(list): """ This class guarantees that hash() will be called no more than once per element. This is important because the lru_cache() will hash the key multiple times on a cache miss. """ __slots__ = 'hashvalue' def __init__(self, tup, hash=hash): self[:] = tup self.hashvalue = hash(tup) def __hash__(self): return self.hashvalue ``` As far as I can tell, the `_HashedSeq` object is used as a key for looking up values in the python dictionary that holds the LRU cache, and this lookup mechanism relies on `__eq__` over `__hash__` (besides shortcuts for objects with the same id, etc.). This can cause potentially expensive `__eq__` calls on the arguments of the cached function and I wonder whether this is intended? Here is a short example code to demonstrate this: ``` from functools import _HashedSeq class CompList(list): """Hashable list (please forgive)""" def __eq__(self, other): print("equality comparison") return super().__eq__(other) def __hash__(self): return hash(tuple(self)) args1=CompList((1,2,3)) # represents function arguments passed to lru_cache args2=CompList((1,2,3)) # identical content but different object hs1=_HashedSeq( (args1,)) hs2=_HashedSeq( (args2,)) hs1 == hs2 # True, prints "equality comparison" d={} d[hs1] = "cached" d[hs2] # "cached", prints "equality comparison" ``` Adding the following to the implementation of `_HashedSeq` gets rid of the calls to `__eq__`: ``` def __eq__(self, other): return self.hashvalue == other.hashvalue ``` Happy to open a PR for this. I'm certainly a bit out of my depth here, so apologies if that is all intended behavior. 
[1] https://github.com/python/cpython/blob/679cb4781ea370c3b3ce40d3334dc404d7e9d92b/Lib/functools.py#L432-L446 ---------- components: Library (Lib) messages: 400858 nosy: leopold.talirz priority: normal severity: normal status: open title: functools._HashedSeq implements __hash__ but not __eq__ type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 17:06:11 2021 From: report at bugs.python.org (Julian Fortune) Date: Wed, 01 Sep 2021 21:06:11 +0000 Subject: [New-bugs-announce] [issue45081] dataclasses that inherit from Protocol subclasses have wrong __init__ Message-ID: <1630530371.11.0.748962218101.issue45081@roundup.psfhosted.org> New submission from Julian Fortune : I believe [`bpo-44806: Fix __init__ in subclasses of protocols`](https://github.com/python/cpython/pull/27545) has caused a regression when using a dataclass. In Python `3.9.7`, a `dataclass` that inherits from a subclass of `typing.Protocol` (i.e., a user-defined protocol), does not have the correct `__init__`. ### Demonstration ```python from dataclasses import dataclass from typing import Protocol class P(Protocol): pass @dataclass class B(P): value: str print(B("test")) ``` In `3.9.7`: ```shell Traceback (most recent call last): File "test.py", line 11, in print(B("test")) TypeError: B() takes no arguments ``` In `3.9.6`: ```shell B(value='test') ``` ### Affected Projects - [dbt](https://github.com/dbt-labs/dbt/issues/3843) ---------- components: Library (Lib) messages: 400868 nosy: julianfortune priority: normal severity: normal status: open title: dataclasses that inherit from Protocol subclasses have wrong __init__ type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 17:30:22 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 01 Sep 2021 21:30:22 +0000 Subject: [New-bugs-announce] [issue45082] ctypes: Deprecate c_buffer() alias to create_string_buffer() Message-ID: <1630531822.38.0.282909766442.issue45082@roundup.psfhosted.org> New submission from STINNER Victor : Since the ctypes module was added to the stdlib (commit babddfca758abe34ff12023f63b18d745fae7ca9 in 2006), ctypes.c_buffer() was an alias to ctypes.create_string_buffer(). The implementation contains a commented deprecation: def c_buffer(init, size=None): ## "deprecated, use create_string_buffer instead" ## import warnings ## warnings.warn("c_buffer is deprecated, use create_string_buffer instead", ## DeprecationWarning, stacklevel=2) return create_string_buffer(init, size) I propose to start deprecating ctypes.c_buffer(): use ctypes.create_string_buffer() directly. In older ctypes versions, the function was called c_string(): it's still mentioned in the ctypes documentation. This legacy is confusing, and it's time to simplify the API to provide a single function. 
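For anyone unfamiliar with the alias, the two names currently do exactly the same thing, so migrating is mechanical. A quick illustration (not part of the proposed patch):

```python
import ctypes

old = ctypes.c_buffer(b"hello", 16)              # legacy alias
new = ctypes.create_string_buffer(b"hello", 16)  # preferred spelling
print(type(old).__name__, old.value)  # c_char_Array_16 b'hello'
print(type(new).__name__, new.value)  # c_char_Array_16 b'hello'
```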
---------- components: ctypes messages: 400871 nosy: vstinner priority: normal severity: normal status: open title: ctypes: Deprecate c_buffer() alias to create_string_buffer() versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 17:46:35 2021 From: report at bugs.python.org (Irit Katriel) Date: Wed, 01 Sep 2021 21:46:35 +0000 Subject: [New-bugs-announce] [issue45083] Incorrect exception output in C Message-ID: <1630532795.5.0.16726920843.issue45083@roundup.psfhosted.org> New submission from Irit Katriel : iritkatriel at Irits-MBP cpython % cat exc.py class A: class B: class E(Exception): pass raise A.B.E() iritkatriel at Irits-MBP cpython % cat test.py import exc iritkatriel at Irits-MBP cpython % ./python.exe test.py Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/test.py", line 2, in import exc ^^^^^^^^^^ File "/Users/iritkatriel/src/cpython/exc.py", line 7, in raise A.B.E() ^^^^^^^^^^^^^ exc.E ============== See the last line of the output: there is no such thing as exc.E. There is exc.A.B.E. The traceback module doesn't have this issue: iritkatriel at Irits-MBP cpython % cat test.py import traceback try: import exc except Exception as e: traceback.print_exception(e) iritkatriel at Irits-MBP cpython % ./python.exe test.py Traceback (most recent call last): File "/Users/iritkatriel/src/cpython/test.py", line 5, in import exc ^^^^^^^^^^ File "/Users/iritkatriel/src/cpython/exc.py", line 7, in raise A.B.E() ^^^^^^^^^^^^^ exc.A.B.E ---------- components: Interpreter Core messages: 400873 nosy: iritkatriel priority: normal severity: normal status: open title: Incorrect exception output in C type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 17:58:56 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 01 Sep 2021 21:58:56 +0000 Subject: [New-bugs-announce] [issue45084] urllib.parse: remove deprecated functions (splittype, to_bytes, etc.) Message-ID: <1630533536.1.0.485805644066.issue45084@roundup.psfhosted.org> New submission from STINNER Victor : bpo-27485 deprecated the following urllib.parse undocumented functions in Python 3.8: * splitattr() * splithost() * splitnport() * splitpasswd() * splitport() * splitquery() * splittag() * splittype() * splituser() * splitvalue() * to_bytes() (commit 0250de48199552cdaed5a4fe44b3f9cdb5325363) I propose to remove them. See attached PR. Note: The Quoter class is only deprecated since Python 3.11. It should be kept around for 2 releases (not removed before Python 3.13): PEP 387. ---------- components: Library (Lib) messages: 400874 nosy: vstinner priority: normal severity: normal status: open title: urllib.parse: remove deprecated functions (splittype, to_bytes, etc.) versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 1 18:44:58 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 01 Sep 2021 22:44:58 +0000 Subject: [New-bugs-announce] [issue45085] Remove the binhex module, binhex4 and hexbin4 standards Message-ID: <1630536298.51.0.0911172102047.issue45085@roundup.psfhosted.org> New submission from STINNER Victor : The binhex module was deprecated in Python 3.9 by bpo-39353 (commit beea26b57e8c80f1eff0f967a0f9d083a7dc3d66). I propose to remove it: see attached PR. 
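For context, the existing deprecation can be observed directly; a small check, assuming the import-time DeprecationWarning added by the 3.9 deprecation commit:

```python
import subprocess
import sys

# Run in a fresh interpreter so the import-time warning is not hidden by module caching.
result = subprocess.run(
    [sys.executable, "-W", "always::DeprecationWarning", "-c", "import binhex"],
    capture_output=True, text=True,
)
print(result.stderr.strip())  # expected to mention that binhex is deprecated
```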
The PR also removes the following binascii functions, also deprecated in Python 3.9: * a2b_hqx(), b2a_hqx() * rlecode_hqx(), rledecode_hqx() The binascii.crc_hqx() function remains available. ---------- components: Library (Lib) messages: 400878 nosy: vstinner priority: normal severity: normal status: open title: Remove the binhex module, binhex4 and hexbin4 standards versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 2 10:55:20 2021 From: report at bugs.python.org (Greg Kuhn) Date: Thu, 02 Sep 2021 14:55:20 +0000 Subject: [New-bugs-announce] [issue45086] f-string unmatched ']' Message-ID: <1630594520.19.0.424009972084.issue45086@roundup.psfhosted.org> New submission from Greg Kuhn : Hi All, Is the below a bug? Shouldn't the interpreter be complaining about a curly brace? $ python Python 3.8.5 (tags/v3.8.5:580fbb0, Jul 20 2020, 15:43:08) [MSC v.1926 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> num = 10 >>> f'[{num]' File "", line 1 SyntaxError: f-string: unmatched ']' >>> ---------- messages: 400920 nosy: Greg Kuhn priority: normal severity: normal status: open title: f-string unmatched ']' type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 2 15:37:44 2021 From: report at bugs.python.org (Alex Zaslavskis) Date: Thu, 02 Sep 2021 19:37:44 +0000 Subject: [New-bugs-announce] [issue45087] Confusing error message when trying split bytes. Message-ID: <1630611464.91.0.0686347967364.issue45087@roundup.psfhosted.org> New submission from Alex Zaslavskis : If we try to split bytes we get a rather strange error in the console. Traceback (most recent call last): File "C:/Users/ProAdmin/Desktop/bug_in_python.py", line 6, in byte_message.split(",") TypeError: a bytes-like object is required, not 'str' The problem here is that the message says the object should be a string, and if I convert it to a string the error goes away. So the mistake is in the error message itself. The correct variant should be: Traceback (most recent call last): File "C:/Users/ProAdmin/Desktop/bug_in_python.py", line 6, in byte_message.split(",") TypeError: str is required, not a bytes-like object message = 'Python is fun' byte_message = bytes(message, 'utf-8') print(byte_message) #byte_message.split(",") causes error str(byte_message).split(",") # works ---------- components: Argument Clinic files: bug_in_python.py messages: 400948 nosy: larry, sahsariga111 priority: normal severity: normal status: open title: Confusing error message when trying split bytes. type: compile error versions: Python 3.9 Added file: https://bugs.python.org/file50257/bug_in_python.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 2 16:32:30 2021 From: report at bugs.python.org (Yury Selivanov) Date: Thu, 02 Sep 2021 20:32:30 +0000 Subject: [New-bugs-announce] [issue45088] Coroutines & async generators disagree on the iteration protocol semantics Message-ID: <1630614750.42.0.345660375854.issue45088@roundup.psfhosted.org> New submission from Yury Selivanov : See this script: https://gist.github.com/1st1/eccc32991dc2798f3fa0b4050ae2461d Somehow an identity async function alters the behavior of manual iteration through the wrapped nested generator. This is a very subtle bug and I'm not even sure if this is a bug or not. 
Opening the issue so that I don't forget about this and debug sometime later. ---------- components: Interpreter Core messages: 400951 nosy: lukasz.langa, pablogsal, yselivanov priority: normal severity: normal stage: needs patch status: open title: Coroutines & async generators disagree on the iteration protocol semantics type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 2 16:42:26 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Thu, 02 Sep 2021 20:42:26 +0000 Subject: [New-bugs-announce] [issue45089] [sqlite3] the trace callback does not raise exceptions on error Message-ID: <1630615346.0.0.296018695296.issue45089@roundup.psfhosted.org> New submission from Erlend E. Aasland : Currently, two calls can raise exceptions in the _trace_callback() in Modules/_sqlite/connection.c: 1. PyUnicode_DecodeUTF8() can raise an exception 2. PyObject_CallOneArg() ? calling the user callback ? can raise an exception Currently, we either PyErr_Print() the traceback, or we PyErr_Clear() it. In either case; we clear the current exception. The other SQLite callbacks pass some kind of return value back to SQLite to indicate failure (which is normally then passed to _pysqlite_seterror() via sqlite3_step() or sqlite3_finalize(), but the trace callback does not pass errors back to SQLite; we're unable to detect if the trace callback fails. ---------- components: Extension Modules files: reproducer.py messages: 400955 nosy: erlendaasland priority: normal severity: normal status: open title: [sqlite3] the trace callback does not raise exceptions on error versions: Python 3.11 Added file: https://bugs.python.org/file50259/reproducer.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 2 18:58:25 2021 From: report at bugs.python.org (Luciano Ramalho) Date: Thu, 02 Sep 2021 22:58:25 +0000 Subject: [New-bugs-announce] [issue45090] Add pairwise to What's New in Python 3.10; mark it as new in itertools docs Message-ID: <1630623505.93.0.223953840604.issue45090@roundup.psfhosted.org> New submission from Luciano Ramalho : Thanks for adding `itertools.pairwise()`! Let's make it easier to find by mentioning it in "What's New in Python 3.10" and also marking it as "New in Python 3.10" in the `itertools` module documentation. ---------- assignee: docs at python components: Documentation messages: 400966 nosy: docs at python, ramalho priority: normal severity: normal status: open title: Add pairwise to What's New in Python 3.10; mark it as new in itertools docs versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 2 21:25:32 2021 From: report at bugs.python.org (Antonio Caceres) Date: Fri, 03 Sep 2021 01:25:32 +0000 Subject: [New-bugs-announce] [issue45091] inspect.Parameter.__str__ does not include subscripted types in annotations Message-ID: <1630632332.35.0.622998937307.issue45091@roundup.psfhosted.org> New submission from Antonio Caceres : The __str__ method of the inspect.Parameter class in the standard library's inspect module does not include subscripted types in annotations. For example, consider the function foo(a: list[int]). When I run str(inspect.signature(foo)), I would expect the returned string to be '(a: list[int])', but instead the string is '(a: list)'. 
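Spelled out as a runnable snippet (output as described above; behaviour reported on 3.9 and the 3.10/3.11 branches):

```python
import inspect

def foo(a: list[int]):
    pass

# Prints '(a: list)' per the report, although '(a: list[int])' is expected.
print(str(inspect.signature(foo)))
```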
(I have tested this on Python 3.9.7, but the code I believe is the problem is on the branches for versions 3.9-3.11.) From a first glance at the source code, the problem is in the inspect.formatannotation function. If the annotation is a type, formatannotation uses the __qualname__ attribute of the annotation instead of its __repr__ attribute. Indeed, list[int].__qualname__ == 'list' and repr(list[int]) == 'list[int]'. This problem was probably code that should have been changed, but never was, after PEP 585. The only workarounds I have found are to implement an alternative string method that accepts an inspect.Signature, or to subclass inspect.Parameter and override the __str__ method. ---------- components: Library (Lib) messages: 400972 nosy: antonio-caceres priority: normal severity: normal status: open title: inspect.Parameter.__str__ does not include subscripted types in annotations type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 3 00:08:56 2021 From: report at bugs.python.org (Michael Rans) Date: Fri, 03 Sep 2021 04:08:56 +0000 Subject: [New-bugs-announce] [issue45092] Make set ordered like dict Message-ID: <1630642136.33.0.166486792809.issue45092@roundup.psfhosted.org> New submission from Michael Rans : Now that dict is ordered, it is a bit confusing that set isn't as well. I suggest making set ordered too. ---------- messages: 400974 nosy: mcarans priority: normal severity: normal status: open title: Make set ordered like dict type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 3 00:13:12 2021 From: report at bugs.python.org (Michael Rans) Date: Fri, 03 Sep 2021 04:13:12 +0000 Subject: [New-bugs-announce] [issue45093] Add method to compare dicts accounting for order Message-ID: <1630642392.86.0.263268758429.issue45093@roundup.psfhosted.org> New submission from Michael Rans : I suggest adding a method that allows comparing dicts taking into account the order - an ordered compare (since == does not do that). (Also for my other issue (https://bugs.python.org/issue45092) where I suggested that set be ordered to reduce confusion, the set could also have an ordered compare). ---------- messages: 400975 nosy: mcarans priority: normal severity: normal status: open title: Add method to compare dicts accounting for order versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 3 09:26:40 2021 From: report at bugs.python.org (STINNER Victor) Date: Fri, 03 Sep 2021 13:26:40 +0000 Subject: [New-bugs-announce] [issue45094] Consider using __forceinline and __attribute__((always_inline)) for debug builds Message-ID: <1630675600.0.0.104145127588.issue45094@roundup.psfhosted.org> New submission from STINNER Victor : I converted many C macros to static inline functions to reduce the risk of programming bugs (macro pitfalls), limit variable scopes to the function, have a more readable function, and benefit from the function name even when it's inlined in debuggers and profilers. When the Py_INCREF() macro was converted to a static inline function, using __attribute__((always_inline)) was considered, but the idea was rejected. See bpo-35059. I'm now trying to convert the Py_TYPE() macro to a static inline function. 
The problem is that by default, MSC disables inlining and test_exceptions does crash with a stack overflow, since my PR 28128 increases the usage of the stack memory: see bpo-44348. For the specific case of CPython built by MSC, we can increase the stack size, or change compiler optimizations to enable inlining. But the problem is wider than just CPython built by MSC in debug mode. Third party C extensions built by distutils may get the same issue. Building CPython on other platforms on debug mode with all compiler optimizations disabled (ex: gcc -O0) can also have the same issue. I propose to reconsider the usage __forceinline (MSC) and __attribute__((always_inline)) (GCC, clang) on the most important static inline functions, like Py_INCREF() and Py_TYPE(), to avoid this issue. ---------- components: Interpreter Core messages: 400990 nosy: vstinner priority: normal severity: normal status: open title: Consider using __forceinline and __attribute__((always_inline)) for debug builds versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 3 13:01:49 2021 From: report at bugs.python.org (=?utf-8?q?St=C3=A9phane_Blondon?=) Date: Fri, 03 Sep 2021 17:01:49 +0000 Subject: [New-bugs-announce] [issue45095] Easier loggers traversal tree with a logger.getChildren method Message-ID: <1630688509.83.0.0453605836271.issue45095@roundup.psfhosted.org> New submission from St?phane Blondon : Currently, logging.root.manager.loggerDict is usable to do a homemade traversal of the loggers tree. However, it's not a public interface. Adding a 'logger.getChildren()' method would help to implement the traversal. The method would return a set of loggers. Usage example: >>> import logging >>> logging.basicConfig(level=logging.CRITICAL) >>> root_logger = logging.getLogger() >>> root_logger.getChildren() set() >>> a_logger = logging.getLogger("a") >>> root_logger.getChildren() {} >>> logging.getLogger('a.b').setLevel(logging.DEBUG) >>> _ = logging.getLogger('a.c') >>> a_logger.getChildren() {, } With such method, traverse the tree will be obvious to write with a recursive function. Use cases: - to check all the loggers are setted up correctly. I wrote a small function to get all loggers, and log on every level to check the real behaviour. - to draw the loggers tree like logging_tree library (https://pypi.org/project/logging_tree/). I didn't ask to logging_tree's maintainer but I don't think he would use such function because the library works for huge range of python releases. I plan to write a PR if someone thinks it's a good idea. ---------- components: Library (Lib) messages: 401006 nosy: sblondon priority: normal severity: normal status: open title: Easier loggers traversal tree with a logger.getChildren method type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 3 15:43:48 2021 From: report at bugs.python.org (Eric Snow) Date: Fri, 03 Sep 2021 19:43:48 +0000 Subject: [New-bugs-announce] [issue45096] Update Tools/freeze to make use of Tools/scripts/freeze_modules.py? Message-ID: <1630698228.01.0.74483718816.issue45096@roundup.psfhosted.org> New submission from Eric Snow : In bpo-45019 we added Tools/scripts/freeze_modules.py to improve how we manage which modules get frozen by default. (We turned previously manual steps into automation of generated code.) 
There is probably some overlap with what we do in Tools/freeze/freeze.py. Is so, we should make changes for better re-use. ---------- components: Demos and Tools messages: 401015 nosy: eric.snow, lemburg priority: normal severity: normal stage: needs patch status: open title: Update Tools/freeze to make use of Tools/scripts/freeze_modules.py? type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 3 22:41:26 2021 From: report at bugs.python.org (Chih-Hsuan Yen) Date: Sat, 04 Sep 2021 02:41:26 +0000 Subject: [New-bugs-announce] [issue45097] "The loop argument is deprecated" reported when user code does not use it Message-ID: <1630723286.0.0.706652277145.issue45097@roundup.psfhosted.org> New submission from Chih-Hsuan Yen : With Python 3.9.7, "DeprecationWarning: The loop argument is deprecated" may be reported when user code does not use it. Here is an example: import asyncio import warnings warnings.filterwarnings('error') def crash(): raise KeyboardInterrupt async def main(): asyncio.get_event_loop().call_soon(crash) await asyncio.sleep(5) try: asyncio.run(main()) except KeyboardInterrupt: pass On 3.9.6, no warning is reported, while results on 3.9.7 are Traceback (most recent call last): File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/usr/lib/python3.9/asyncio/base_events.py", line 629, in run_until_complete self.run_forever() File "/usr/lib/python3.9/asyncio/base_events.py", line 596, in run_forever self._run_once() File "/usr/lib/python3.9/asyncio/base_events.py", line 1890, in _run_once handle._run() File "/usr/lib/python3.9/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/home/yen/var/local/Computer/archlinux/community/python-anyio/trunk/cpython-3.9.7-regression.py", line 11, in crash raise KeyboardInterrupt KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/yen/var/local/Computer/archlinux/community/python-anyio/trunk/cpython-3.9.7-regression.py", line 18, in asyncio.run(main()) File "/usr/lib/python3.9/asyncio/runners.py", line 47, in run _cancel_all_tasks(loop) File "/usr/lib/python3.9/asyncio/runners.py", line 64, in _cancel_all_tasks tasks.gather(*to_cancel, loop=loop, return_exceptions=True)) File "/usr/lib/python3.9/asyncio/tasks.py", line 755, in gather warnings.warn("The loop argument is deprecated since Python 3.8, " DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10. As indicated by the traceback, the loop argument is used inside the asyncio library, not from user code. It has been an issue for some time, and the issue is exposed after changes for issue44815. Credit: this example code is modified from an anyio test https://github.com/agronholm/anyio/blob/3.3.0/tests/test_taskgroups.py#L943. I noticed this issue when I was testing anyio against 3.9.7. 
---------- components: asyncio messages: 401032 nosy: asvetlov, yan12125, yselivanov priority: normal severity: normal status: open title: "The loop argument is deprecated" reported when user code does not use it type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 4 09:17:43 2021 From: report at bugs.python.org (Sam Bull) Date: Sat, 04 Sep 2021 13:17:43 +0000 Subject: [New-bugs-announce] [issue45098] asyncio.CancelledError should contain more information on cancellations Message-ID: <1630761463.35.0.712522138496.issue45098@roundup.psfhosted.org> New submission from Sam Bull : There are awkward edge cases caused by race conditions when cancelling tasks which make it impossible to reliably cancel a task. For example, in the async-timeout library there appears to be no way to avoid suppressing an explicit t.cancel() if that cancellation occurs immediately after the timeout. In the alternative case where a cancellation happens immediately before the timeout, the solutions feel dependant on the internal details of how asynico.Task works and could easily break if the behaviour is tweaked in some way. What we really need to know is how many times a task was cancelled as a cause of the CancelledError and ideally were the cancellations caused by us. The solution I'd like to propose is that the args on the exception contain all the messages of every cancel() call leading up to that exception, rather than just the first one. e.g. In these race conditions e.args would look like (None, SENTINEL), where SENTINEL was sent in our own cancellations. From this we can see that the task was cancelled twice and only one was caused by us, therefore we don't want to suppress the CancelledError. For more details to fully understand the problem: https://github.com/aio-libs/async-timeout/pull/230 https://github.com/aio-libs/async-timeout/issues/229#issuecomment-908502523 https://github.com/aio-libs/async-timeout/pull/237 ---------- components: asyncio messages: 401045 nosy: asvetlov, dreamsorcerer, yselivanov priority: normal severity: normal status: open title: asyncio.CancelledError should contain more information on cancellations type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 4 13:03:48 2021 From: report at bugs.python.org (jack1142) Date: Sat, 04 Sep 2021 17:03:48 +0000 Subject: [New-bugs-announce] [issue45099] asyncio.Task's documentation says that loop arg is removed when it's not Message-ID: <1630775028.47.0.984741146051.issue45099@roundup.psfhosted.org> New submission from jack1142 : The documentation here: https://docs.python.org/3.10/library/asyncio-task.html#asyncio.Task Says that `loop` parameter was removed but it's still part of the signature. It gets even more confusing when the deprecation right below it is saying that *not* passing it when there is no running event loop is deprecated :) I could make a PR removing this information but I'm not sure whether there should be also some information put about it being deprecated in 3.8 but not actually getting removed in 3.10? 
---------- assignee: docs at python components: Documentation messages: 401047 nosy: docs at python, jack1142 priority: normal severity: normal status: open title: asyncio.Task's documentation says that loop arg is removed when it's not type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 4 14:03:28 2021 From: report at bugs.python.org (Raymond Hettinger) Date: Sat, 04 Sep 2021 18:03:28 +0000 Subject: [New-bugs-announce] [issue45100] Teach help about typing.overload() Message-ID: <1630778608.65.0.813637953031.issue45100@roundup.psfhosted.org> New submission from Raymond Hettinger : Python's help() function does not display overloaded function signatures. For example, this code: from typing import Union class Smudge(str): @overload def __getitem__(self, index: int) -> str: ... @overload def __getitem__(self, index: slice) -> 'Smudge': ... def __getitem__(self, index: Union[int, slice]) -> Union[str, 'Smudge']: 'Return a smudged character or characters.' if isinstance(index, slice): start, stop, step = index.indices(len(self)) values = [self[i] for i in range(start, stop, step)] return Smudge(''.join(values)) c = super().__getitem__(index) return chr(ord(c) ^ 1) Currently gives this help: __getitem__(self, index: Union[int, slice]) -> Union[str, ForwardRef('Smudge')] Return a smudged character or characters. What is desired is: __getitem__(self, index: int) -> str __getitem__(self, index: slice) -> ForwardRef('Smudge') Return a smudged character or characters. The overload() decorator is sufficient for informing a static type checker but insufficient for informing a user or editing tool. ---------- messages: 401052 nosy: rhettinger priority: normal severity: normal status: open title: Teach help about typing.overload() _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 4 14:32:12 2021 From: report at bugs.python.org (Kien Dang) Date: Sat, 04 Sep 2021 18:32:12 +0000 Subject: [New-bugs-announce] [issue45101] Small inconsistency in usage message between the python and shell script versions of python-config Message-ID: <1630780332.11.0.126671398839.issue45101@roundup.psfhosted.org> New submission from Kien Dang : `python-config` outputs a usage instruction message in case either the `--help` option or invalid arguments are provided. However there is a small different in the output I/O between the python and shell script versions of `python-config`. In the python version, the message always prints to `stderr`. https://github.com/python/cpython/blob/bc1c49fa94b2abf70e6937373bf1e6b5378035c5/Misc/python-config.in#L15-L18 def exit_with_usage(code=1): print("Usage: {0} [{1}]".format( sys.argv[0], '|'.join('--'+opt for opt in valid_opts)), file=sys.stderr) sys.exit(code) while in the shell script version it always prints to `stdout`. https://github.com/python/cpython/blob/bc1c49fa94b2abf70e6937373bf1e6b5378035c5/Misc/python-config.sh.in#L5-L9 exit_with_usage () { echo "Usage: $0 --prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--abiflags|--configdir|--embed" exit $1 } This inconsistency does not affect most users of `python-config`, who runs the script interactively. However it might cause issues when run programmatically. 
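A small illustration of the difference from a caller's point of view (the executable name 'python3-config' is an assumption and varies by platform; which implementation it resolves to depends on the build):

```python
import subprocess

# An invalid option triggers the usage message; whether it lands on stdout or
# stderr depends on whether the Python or the shell-script version is installed.
proc = subprocess.run(["python3-config", "--bogus-option"],
                      capture_output=True, text=True)
print("exit:", proc.returncode)
print("stdout:", repr(proc.stdout))
print("stderr:", repr(proc.stderr))
```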
---------- components: Demos and Tools messages: 401054 nosy: kiendang priority: normal severity: normal status: open title: Small inconsistency in usage message between the python and shell script versions of python-config type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 4 16:01:13 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 04 Sep 2021 20:01:13 +0000 Subject: [New-bugs-announce] [issue45102] unittest: add tests for skipping and errors in cleanup Message-ID: <1630785673.23.0.15068608045.issue45102@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR adds tests for skipping and errors in cleanup: after success, after failure, and in combination with the expectedFailure decorator. ---------- components: Library (Lib) messages: 401056 nosy: ezio.melotti, michael.foord, rbcollins, serhiy.storchaka priority: normal severity: normal status: open title: unittest: add tests for skipping and errors in cleanup versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 4 17:16:59 2021 From: report at bugs.python.org (Terry J. Reedy) Date: Sat, 04 Sep 2021 21:16:59 +0000 Subject: [New-bugs-announce] [issue45103] IDLE: make configdialog font page survive font failures Message-ID: <1630790219.84.0.830457804554.issue45103@roundup.psfhosted.org> New submission from Terry J. Reedy : There were reports on a previous issue that attempting to display some characters from some linux fonts in some distributions resulted in crashing Python. The crash reports mentioned XWindows errors about particular characters being too complex or something. I did not open an issue for this because it seemed beyond the ability of tkinter/IDLE to predict and handle. https://stackoverflow.com/questions/68996159/idle-settings-window-wont-appear reported that requesting the IDLE settings window resulting in IDLE hanging. The user used my print debug suggestion to determine that the problem was in configdialog.py at self.fontpage = FontPage(note, self.highpage) The user first thought the issue was with a chess font, and then determined that the problem font was the phaistos font from https://ctan.org/pkg/phaistos?lang=en. The font has the symbols from the Cretan Linear A script on the Disk of Phaistos. This issue is about 1. Try to reproduce the issue by downloading and installing Phaistos.otf (right click on the file and click Install). 2. If successful, trace the problem to a specific line within the FontPage class. 3. Determine whether Python code can prevent the hang or if it is completely in other code. (Either way, perhaps report to tcl/tk.) ... To remove, "open Control Panel -> Fonts then locate and select Phaistos Regular font then proceed with clicking Delete button." I did not select 'test needed' because we cannot routinely install new fonts and I am not sure of how to do an automated test if we could. However, I would add a note as to how to do a manual test. font.otf = OpenTypeFont, used on Windows, Linux, macOS. https://fileinfo.com/extension/otf It might be good test on other systems after Windows. 
---------- messages: 401062 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: make configdialog font page survive font failures type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 4 18:38:45 2021 From: report at bugs.python.org (Kevin Shweh) Date: Sat, 04 Sep 2021 22:38:45 +0000 Subject: [New-bugs-announce] [issue45104] Error in __new__ docs Message-ID: <1630795125.96.0.304265394784.issue45104@roundup.psfhosted.org> New submission from Kevin Shweh : The data model docs for __new__ say "If __new__() is invoked during object construction and it returns an instance or subclass of cls, then the new instance?s __init__() method will be invoked..." "instance or subclass of cls" is incorrect - if for some reason __new__ returns a subclass of cls, __init__ will not be invoked, unless the subclass also happens to be an instance of cls (which can happen with metaclasses). This should probably say something like "instance of cls (including subclass instances)", or "instance of cls or of a subclass of cls", or just "instance of cls". ---------- assignee: docs at python components: Documentation messages: 401065 nosy: Kevin Shweh, docs at python priority: normal severity: normal status: open title: Error in __new__ docs versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 5 07:12:09 2021 From: report at bugs.python.org (Max Bachmann) Date: Sun, 05 Sep 2021 11:12:09 +0000 Subject: [New-bugs-announce] [issue45105] Incorrect handling of unicode character \U00010900 Message-ID: <1630840329.53.0.683865786934.issue45105@roundup.psfhosted.org> New submission from Max Bachmann : I noticed that when using the Unicode character \U00010900 when inserting the character as character: Here is the result on the Python console both for 3.6 and 3.9: ``` >>> s = '0?00' >>> s '0?00' >>> ls = list(s) >>> ls ['0', '?', '0', '0'] >>> s[0] '0' >>> s[1] '?' >>> s[2] '0' >>> s[3] '0' >>> ls[0] '0' >>> ls[1] '?' >>> ls[2] '0' >>> ls[3] '0' ``` It appears that for some reason in this specific case the character is actually stored in a different position that shown when printing the complete string. Note that the string is already behaving strange when marking it in the console. When marking the special character it directly highlights the last 3 characters (probably because it already thinks this character is in the second position). The same behavior does not occur when directly using the unicode point ``` >>> s='000\U00010900' >>> s '000?' >>> s[0] '0' >>> s[1] '0' >>> s[2] '0' >>> s[3] '?' 
``` This was tested using the following Python versions: ``` Python 3.6.0 (default, Dec 29 2020, 02:18:14) [GCC 10.2.1 20201125 (Red Hat 10.2.1-9)] on linux Python 3.9.6 (default, Jul 16 2021, 00:00:00) [GCC 11.1.1 20210531 (Red Hat 11.1.1-3)] on linux ``` on Fedora 34 ---------- components: Unicode messages: 401078 nosy: ezio.melotti, maxbachmann, vstinner priority: normal severity: normal status: open title: Incorrect handling of unicode character \U00010900 type: behavior versions: Python 3.6, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 5 11:53:18 2021 From: report at bugs.python.org (Andrew Suttle) Date: Sun, 05 Sep 2021 15:53:18 +0000 Subject: [New-bugs-announce] [issue45106] Issue formating for log on Linux machines Message-ID: <1630857198.47.0.486193642329.issue45106@roundup.psfhosted.org> New submission from Andrew Suttle : Using the format option for Python logging works fine on all windows machines: logging.basicConfig(filename='log.log', encoding='utf-8', level=logging.DEBUG,format='%(asctime)s : %(message)s ') But error occurs on Linux mint 64 20.2 using Python 3.8 when the % symbol in the format causes a problem. Traceback (most recent call last): File "/usr/lib/python3.8/idlelib/run.py", line 559, in runcode exec(code, self.locals) File "/media/victor/B694-5211/python now git/prototype server3.py", line 14, in logging.basicConfig(filename='log.log', encoding='utf-8', level=logging.DEBUG,format='%(asctime)s:%(message)s') File "/usr/lib/python3.8/logging/init.py", line 2009, in basicConfig raise ValueError('Unrecognised argument(s): %s' % keys) ValueError: Unrecognised argument(s): encoding ---------- files: program.txt messages: 401089 nosy: andrewsuttle56 priority: normal severity: normal status: open title: Issue formating for log on Linux machines type: compile error versions: Python 3.8 Added file: https://bugs.python.org/file50261/program.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 5 12:39:16 2021 From: report at bugs.python.org (Ken Jin) Date: Sun, 05 Sep 2021 16:39:16 +0000 Subject: [New-bugs-announce] [issue45107] Improve LOAD_METHOD specialization Message-ID: <1630859956.74.0.360849757822.issue45107@roundup.psfhosted.org> New submission from Ken Jin : I plan to do two improvements over the initial implementation: 1. General comments cleanup and optimize LOAD_METHOD_CLASS. 2. Implement LOAD_METHOD_SUPER, for super().meth() calls. See Issue44889 for the precursor. 
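For readers who don't follow the specializing-interpreter work, the second item targets zero-argument super() method call sites like the one below (plain Python, shown only to illustrate the call shape):

```python
class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        # A super().meth() call site -- the shape a LOAD_METHOD_SUPER-style
        # specialization would aim to speed up.
        return super().greet() + "+child"

print(Child().greet())  # base+child
```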
---------- components: Interpreter Core messages: 401096 nosy: kj priority: normal severity: normal status: open title: Improve LOAD_METHOD specialization versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 5 13:35:08 2021 From: report at bugs.python.org (Alex Hall) Date: Sun, 05 Sep 2021 17:35:08 +0000 Subject: [New-bugs-announce] [issue45108] frame.f_lasti points at DICT_MERGE instead of CALL_FUNCTION_EX in Windows only Message-ID: <1630863308.34.0.524127924669.issue45108@roundup.psfhosted.org> New submission from Alex Hall : In this script: import inspect import dis def foo(**_): frame = inspect.currentframe().f_back print(frame.f_lasti) dis.dis(frame.f_code) d = {'a': 1, 'b': 2} foo(**d) dis shows these instructions for `foo(**d)`: 10 34 LOAD_NAME 2 (foo) 36 BUILD_TUPLE 0 38 BUILD_MAP 0 40 LOAD_NAME 3 (d) 42 DICT_MERGE 1 44 CALL_FUNCTION_EX 1 46 POP_TOP 48 LOAD_CONST 1 (None) 50 RETURN_VALUE On Linux/OSX, frame.f_lasti is 44, pointing to the CALL_FUNCTION_EX as I'd expect. But on Windows it's 42, which is the preceding instruction DICT_MERGE. The bytecode itself is identical on the different systems, it's just the frame offset that differs. This manifested as a test failure in a debugging tool here: https://github.com/samuelcolvin/python-devtools/pull/93 ---------- components: Interpreter Core, Windows messages: 401098 nosy: alexmojaki, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: frame.f_lasti points at DICT_MERGE instead of CALL_FUNCTION_EX in Windows only type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 5 14:54:59 2021 From: report at bugs.python.org (Richard Tollerton) Date: Sun, 05 Sep 2021 18:54:59 +0000 Subject: [New-bugs-announce] [issue45109] pipes seems designed for bytes but is str-only Message-ID: <1630868099.65.0.267189111199.issue45109@roundup.psfhosted.org> New submission from Richard Tollerton : 1. https://github.com/python/cpython/blob/3.9/Lib/pipes.py#L6 > Suppose you have some data that you want to convert to another format, > such as from GIF image format to PPM image format. 2. https://docs.python.org/3.9/library/pipes.html > Because the module uses /bin/sh command lines, a POSIX or compatible shell for os.system() and os.popen() is required. 3. https://docs.python.org/3.9/library/os.html#os.popen > The returned file object reads or writes text strings rather than bytes. (1) and (3) are AFAIK mutually contradictory: you can't reasonably expect to shove GIFs down a str file object. I'm guessing that pipes is an API that never got its bytes API fleshed out? My main interest in this is that I'm writing a large CSV to disk and wanted to pipe it through zstd first. And I wanted something like perl's open FILE, "|zstd -T0 -19 > out.txt.zst". But the CSV at present is all bytes. (Technically the content is all latin1 at the moment, so I may have a workaround, but I'm not 100% certain it will stay that way.) What I'd like to see is for pipes.Template.open() to accept 'b' in flags, and for that to be handled in the usual way. 
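For the CSV/zstd use case above, a bytes-capable workaround today is subprocess rather than pipes; a minimal sketch, assuming a zstd executable on PATH:

```python
import subprocess

with open("out.csv.zst", "wb") as out:
    proc = subprocess.Popen(["zstd", "-T0", "-19"],
                            stdin=subprocess.PIPE, stdout=out)
    proc.stdin.write(b"col1,col2\n")  # bytes go through unmodified
    proc.stdin.write(b"1,2\n")
    proc.stdin.close()
    proc.wait()
```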
---------- components: Library (Lib) messages: 401103 nosy: rtollert priority: normal severity: normal status: open title: pipes seems designed for bytes but is str-only versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 5 22:18:58 2021 From: report at bugs.python.org (Forest) Date: Mon, 06 Sep 2021 02:18:58 +0000 Subject: [New-bugs-announce] [issue45110] argparse repeats itself when formatting help metavars Message-ID: <1630894738.41.0.447837429449.issue45110@roundup.psfhosted.org> New submission from Forest : When argparse actions have multiple option strings and at least one argument, the default formatter presents them like this: -t ARGUMENT, --task ARGUMENT Perform a task with the given argument. -p STRING, --print STRING Print the given string. By repeating the metavars, the formatter wastes horizontal space, making the following side-effects more likely: - The easy-to-read tabular format is undermined by overlapping text columns. - An action and its description are split apart, onto separate lines. - Fewer actions can fit on the screen at once. - The user is presented with extra noise (repeat text) to read through. I think the DRY principle is worth considering here. Help text would be cleaner, more compact, and easier to read if formatted like this: -t, --task ARGUMENT Perform a task with the given argument. -p, --print STRING Print the given string. Obviously, actions with especially long option strings or metavars could still trigger line breaks, but they would be much less common and still easier to read without the repeat text. I am aware of ArgumentParser's formatter_class option, but unfortunately, it is of little help here. Since the default formatter class reserves every stage of its work as a private implementation detail, I cannot safely subclass it to get the behavior I want. My choices are apparently to either re-implement an unreasonably large swath of its code in my own formatter class, or override the private _format_action_invocation() method in a subclass and risk future breakage (and still have to re-implement more code than is reasonable.) Would it make sense to give HelpFormatter a "don't repeat yourself" option? (For example, a boolean class attribute could be overridden by a subclass and would be a small change to the existing code.) Alternatively, if nobody is attached to the current behavior, would it make sense to simply change HelpFormatter such that it never repeats itself? ---------- components: Library (Lib) messages: 401110 nosy: forest priority: normal severity: normal status: open title: argparse repeats itself when formatting help metavars type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 05:26:37 2021 From: report at bugs.python.org (W Huang) Date: Mon, 06 Sep 2021 09:26:37 +0000 Subject: [New-bugs-announce] [issue45111] whole website translation error Message-ID: <1630920397.91.0.782714198576.issue45111@roundup.psfhosted.org> New submission from W Huang : Traditional Chinese is translated incorrectly into Simplified Chinese. But Taiwanese CAN NOT read Simplified Chinese fluently. In addition, there are many words called by different name between Chinese and Taiwanese. Sometimes Taiwanese CAN NOT understand what a word used in China means. 
---------- messages: 401126 nosy: m4jp62 priority: normal severity: normal status: open title: whole website translation error _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 06:14:34 2021 From: report at bugs.python.org (Chen Zero) Date: Mon, 06 Sep 2021 10:14:34 +0000 Subject: [New-bugs-announce] [issue45112] Python exception object is different after pickle.loads and pickle.dumps Message-ID: <1630923274.69.0.294387149834.issue45112@roundup.psfhosted.org> New submission from Chen Zero : Hi, when I tried to serialize/deserialize a Python exception object through pickle, I found that the deserialized result is not the same as the original object... My Python version is 3.9.1, working OS: macOS Big Sur 11.4 Here is a minimal reproducing code example:

import pickle

class ExcA(Exception):
    def __init__(self, want):
        msg = "missing "
        msg += want
        super().__init__(msg)

ExcA('bb')  # this will output ExcA("missing bb"), which is good

pickle.loads(pickle.dumps(ExcA('bb')))  # this will output ExcA("missing missing bb"), which is different from `ExcA('bb')`

---------- files: screenshot.png messages: 401128 nosy: yonghengzero priority: normal severity: normal status: open title: Python exception object is different after pickle.loads and pickle.dumps type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50262/screenshot.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 07:10:48 2021 From: report at bugs.python.org (hai shi) Date: Mon, 06 Sep 2021 11:10:48 +0000 Subject: [New-bugs-announce] [issue45113] [subinterpreters][C API] Add a new function to create PyStructSequence from Heap. Message-ID: <1630926648.9.0.514792822043.issue45113@roundup.psfhosted.org> New submission from hai shi : Copied from https://bugs.python.org/issue40512#msg399847: Victor: PyStructSequence_InitType2() is not compatible with subinterpreters: it uses static types. Moreover, it allocates tp_members memory which is not released when the type is destroyed. But I'm not sure that the type is ever destroyed, since this API is designed for static types. > PyStructSequence_InitType2() is not compatible with subinterpreters: it uses static types. Moreover, it allocates tp_members memory which is not released when the type is destroyed. But I'm not sure that the type is ever destroyed, since this API is designed for static types. IMO, I suggest creating a new function, PyStructSequence_FromModuleAndDesc(module, desc, flags), to create a heap type and not allocate a memory block for tp_members, something like `PyType_FromModuleAndSpec()`. ---------- components: C API messages: 401129 nosy: petr.viktorin, shihai1991, vstinner priority: normal severity: normal status: open title: [subinterpreters][C API] Add a new function to create PyStructSequence from Heap. versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 07:54:30 2021 From: report at bugs.python.org (Harri) Date: Mon, 06 Sep 2021 11:54:30 +0000 Subject: [New-bugs-announce] [issue45114] bad example for os.stat Message-ID: <1630929270.55.0.756722168935.issue45114@roundup.psfhosted.org> New submission from Harri : The example on https://docs.python.org/3/library/stat.html should be improved to avoid endless recursion if there is a symlink loop.
I would suggest to use os.lstat instead of os.stat. ---------- assignee: docs at python components: Documentation messages: 401133 nosy: docs at python, harridu priority: normal severity: normal status: open title: bad example for os.stat versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 11:21:42 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 06 Sep 2021 15:21:42 +0000 Subject: [New-bugs-announce] [issue45115] Windows: enable compiler optimizations when building Python in debug mode Message-ID: <1630941702.52.0.0278112507289.issue45115@roundup.psfhosted.org> New submission from STINNER Victor : The Visual Studio project of Python, PCBuild\ directory, disables compiler optimizations when Python is built in debug mode. It seems to be the default in Visual Studio. Disabling compiler optimizations cause two kinds of issues: * It increases the stack memory: tests using a deep call stack are more likely to crash (test_pickle, test_marshal, test_exceptions). * Running the Python test suite take 19 min 41 sec instead of 12 min 19 sec on Windows x64: 1.6x slower. Because of that, we cannot use a debug build in the GitHub Action pre-commit CI, and we miss bugs which are catched "too late", in Windows buildbots. See my latest attempt to use a debug build in GitHub Actions: https://github.com/python/cpython/pull/24914 Example of test_marshal: # The max stack depth should match the value in Python/marshal.c. # BUG: https://bugs.python.org/issue33720 # Windows always limits the maximum depth on release and debug builds #if os.name == 'nt' and hasattr(sys, 'gettotalrefcount'): if os.name == 'nt': MAX_MARSHAL_STACK_DEPTH = 1000 else: MAX_MARSHAL_STACK_DEPTH = 2000 I propose to only change the compiler options for the pythoncore project which builds most important files for the Python "core". See attached PR. ---------- components: Windows messages: 401141 nosy: paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: Windows: enable compiler optimizations when building Python in debug mode versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 11:27:18 2021 From: report at bugs.python.org (neonene) Date: Mon, 06 Sep 2021 15:27:18 +0000 Subject: [New-bugs-announce] [issue45116] Performance regression 3.10b1 and later on Windows Message-ID: <1630942038.24.0.156228540374.issue45116@roundup.psfhosted.org> New submission from neonene : pyperformance on Windows shows some gap between 3.10a7 and 3.10b1. The following are the ratios compared with 3.10a7 (the higher the slower). ------------------------------------------------- Windows x64 | PGO release official-binary ----------------+-------------------------------- 20210405 | 3.10a7 | 1.00 1.24 1.00 (PGO?) 20210408-07:58 | b98eba5 | 0.98 20210408-10:22 | * PR25244 | 1.04 20210503 | 3.10b1 | 1.07 1.21 1.07 ------------------------------------------------- Windows x86 | PGO release official-binary ----------------+-------------------------------- 20210405 | 3.10a7 | 1.00 1.25 1.27 (release?) 20210408-07:58 | b98eba5bc | 1.00 20210408-10:22 | * PR25244 | 1.11 20210503 | 3.10b1 | 1.14 1.28 1.29 Since PR25244 (28d28e053db6b69d91c2dfd579207cd8ccbc39e7), _PyEval_EvalFrameDefault() in ceval.c has seemed to be unoptimized with PGO (msvc14.29.16.10). 
At least the functions below have become un-inlined there at all. (1) _Py_DECREF() (from Py_DECREF,Py_CLEAR,Py_SETREF) (2) _Py_XDECREF() (from Py_XDECREF,SETLOCAL) (3) _Py_IS_TYPE() (from PyXXX_CheckExact) (4) _Py_atomic_load_32bit_impl() (from CHECK_EVAL_BREAKER) I tried in vain other linker options like thread-safe-profiling, agressive-code-generation, /OPT:NOREF. 3.10a7 can inline them in the eval-loop even if profiling only test_array.py. I measured overheads of (1)~(4) on my own build whose eval-loop uses macros instead of them. ----------------------------------------------------------------- Windows x64 | PGO patched overhead in eval-loop ----------------+------------------------------------------------ 3.10a7 | 1.00 20210802 | 3.10rc1 | 1.09 1.05 4% (slow 43, fast 5, same 10) 20210831-20:42 | 863154c | 0.95 0.90 5% (slow 48, fast 3, same 7) (3.11a0+) | ----------------------------------------------------------------- Windows x86 | PGO patched overhead in eval-loop ----------------+------------------------------------------------ 3.10a7 | 1.00 20210802 | 3.10rc1 | 1.15 1.13 2% (slow 29, fast 14, same 15) 20210831-20:42 | 863154c | 1.05 1.02 3% (slow 44, fast 7, same 7) (3.11a0+) | ---------- components: C API, Interpreter Core, Windows files: 310rc1_confirm_overhead.patch keywords: patch messages: 401143 nosy: Mark.Shannon, neonene, pablogsal, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: Performance regression 3.10b1 and later on Windows type: performance versions: Python 3.10, Python 3.11 Added file: https://bugs.python.org/file50263/310rc1_confirm_overhead.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 12:11:44 2021 From: report at bugs.python.org (=?utf-8?q?=C8=98tefan_Istrate?=) Date: Mon, 06 Sep 2021 16:11:44 +0000 Subject: [New-bugs-announce] [issue45117] `dict` not subscriptable despite using `__future__` typing annotations Message-ID: <1630944704.92.0.838059763947.issue45117@roundup.psfhosted.org> New submission from ?tefan Istrate : According to PEP 585 (https://www.python.org/dev/peps/pep-0585/#implementation), I expect that typing aliases could simply use `dict` instead of `typing.Dict`. This works without problems in Python 3.9, but in Python 3.8, despite using `from __future__ import annotations`, I get the following error: $ /usr/local/Cellar/python at 3.8/3.8.11/bin/python3.8 Python 3.8.11 (default, Jun 29 2021, 03:08:07) [Clang 12.0.5 (clang-1205.0.22.9)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from __future__ import annotations >>> from typing import Any >>> JsonDictType = dict[str, Any] Traceback (most recent call last): File "", line 1, in TypeError: 'type' object is not subscriptable However, `dict` is subscriptable when not used in an alias: $ /usr/local/Cellar/python at 3.8/3.8.11/bin/python3.8 Python 3.8.11 (default, Jun 29 2021, 03:08:07) [Clang 12.0.5 (clang-1205.0.22.9)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from __future__ import annotations >>> from typing import Any >>> def f(d: dict[str, Any]): ... print(d) ... 
>>> f({"abc": 1, "def": 2}) {'abc': 1, 'def': 2} ---------- messages: 401149 nosy: stefanistrate priority: normal severity: normal status: open title: `dict` not subscriptable despite using `__future__` typing annotations type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 12:28:52 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 06 Sep 2021 16:28:52 +0000 Subject: [New-bugs-announce] [issue45118] regrtest no longer lists "re-run tests" in the second summary Message-ID: <1630945732.21.0.656532683711.issue45118@roundup.psfhosted.org> New submission from STINNER Victor : When running Python with test -w/--verbose2 command line option to re-run tests which failed, the failing tests are listed in the first summary, and they are not listed in the final summary. Example with Lib/test/test_x.py: --- import builtins import unittest class Tests(unittest.TestCase): def test_succeed(self): return def test_fail_once(self): if not hasattr(builtins, '_test_failed'): builtins._test_failed = True self.fail("bug") --- Current output when running test_sys (success) and test_x (failed once, then pass): --- $ ./python -m test test_sys test_x -w 0:00:00 load avg: 0.80 Run tests sequentially 0:00:00 load avg: 0.80 [1/2] test_sys 0:00:01 load avg: 0.80 [2/2] test_x test test_x failed -- Traceback (most recent call last): File "/home/vstinner/python/main/Lib/test/test_x.py", line 11, in test_fail_once self.fail("bug") ^^^^^^^^^^^^^^^^ AssertionError: bug test_x failed (1 failure) == Tests result: FAILURE == 1 test OK. 1 test failed: test_x 1 re-run test: test_x 0:00:01 load avg: 0.80 0:00:01 load avg: 0.80 Re-running failed tests in verbose mode 0:00:01 load avg: 0.80 Re-running test_x in verbose mode (matching: test_fail_once) test_fail_once (test.test_x.Tests) ... ok ---------------------------------------------------------------------- Ran 1 test in 0.000s OK == Tests result: FAILURE then SUCCESS == All 2 tests OK. Total duration: 2.0 sec Tests result: FAILURE then SUCCESS --- "re-run tests" is missing in the last summary. ---------- components: Tests messages: 401151 nosy: vstinner priority: normal severity: normal status: open title: regrtest no longer lists "re-run tests" in the second summary versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 13:10:56 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 06 Sep 2021 17:10:56 +0000 Subject: [New-bugs-announce] [issue45119] test_signal.test_itimer_virtual() failed on AMD64 Fedora Rawhide 3.x Message-ID: <1630948256.1.0.298507689656.issue45119@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Fedora Rawhide 3.x: https://buildbot.python.org/all/#/builders/285/builds/685 It is likely test_itimer_virtual() of test_signal which failed. 
On the buildbot worker: $ rpm -q glibc glibc-2.34.9000-5.fc36.x86_64 $ uname -r 5.15.0-0.rc0.20210902git4ac6d90867a4.4.fc36.x86_64 test.pythoninfo: platform.libc_ver: glibc 2.34.9000 platform.platform: Linux-5.15.0-0.rc0.20210902git4ac6d90867a4.4.fc36.x86_64-x86_64-with-glibc2.34.9000 Logs: test test_signal crashed -- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/test/libregrtest/runtest.py", line 340, in _runtest_inner refleak = _runtest_inner2(ns, test_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/test/libregrtest/runtest.py", line 297, in _runtest_inner2 test_runner() ^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/test/libregrtest/runtest.py", line 261, in _test_module support.run_unittest(tests) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/test/support/__init__.py", line 1123, in run_unittest _run_suite(suite) ^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/test/support/__init__.py", line 998, in _run_suite result = runner.run(suite) ^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/runner.py", line 176, in run test(result) ^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/suite.py", line 84, in __call__ return self.run(*args, **kwds) ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/suite.py", line 122, in run test(result) ^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/suite.py", line 84, in __call__ return self.run(*args, **kwds) ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/suite.py", line 122, in run test(result) ^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/suite.py", line 84, in __call__ return self.run(*args, **kwds) ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/suite.py", line 122, in run test(result) ^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/case.py", line 652, in __call__ return self.run(*args, **kwds) ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/case.py", line 569, in run result.startTest(self) ^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/test/support/testresult.py", line 41, in startTest super().startTest(test) ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/runner.py", line 54, in startTest self.stream.write(self.getDescription(test)) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/unittest/runner.py", line 44, in getDescription def getDescription(self, test): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-x86_64/build/Lib/test/test_signal.py", line 739, in sig_vtalrm raise signal.ItimerError("setitimer didn't disable ITIMER_VIRTUAL " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ signal.itimer_error: setitimer didn't disable ITIMER_VIRTUAL timer. 
Code of the test: def sig_vtalrm(self, *args): self.hndl_called = True if self.hndl_count > 3: # it shouldn't be here, because it should have been disabled. raise signal.ItimerError("setitimer didn't disable ITIMER_VIRTUAL " # <==== HERE "timer.") # <======================================================= HERE elif self.hndl_count == 3: # disable ITIMER_VIRTUAL, this function shouldn't be called anymore signal.setitimer(signal.ITIMER_VIRTUAL, 0) self.hndl_count += 1 # Issue 3864, unknown if this affects earlier versions of freebsd also @unittest.skipIf(sys.platform in ('netbsd5',), 'itimer not reliable (does not mix well with threading) on some BSDs.') def test_itimer_virtual(self): self.itimer = signal.ITIMER_VIRTUAL signal.signal(signal.SIGVTALRM, self.sig_vtalrm) signal.setitimer(self.itimer, 0.3, 0.2) start_time = time.monotonic() while time.monotonic() - start_time < 60.0: # use up some virtual time by doing real work _ = pow(12345, 67890, 10000019) if signal.getitimer(self.itimer) == (0.0, 0.0): break # sig_vtalrm handler stopped this itimer else: # Issue 8424 self.skipTest("timeout: likely cause: machine too slow or load too " "high") # virtual itimer should be (0.0, 0.0) now self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0)) # and the handler should have been called self.assertEqual(self.hndl_called, True) ---------- components: Tests messages: 401162 nosy: vstinner priority: normal severity: normal status: open title: test_signal.test_itimer_virtual() failed on AMD64 Fedora Rawhide 3.x versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 16:30:12 2021 From: report at bugs.python.org (Rafael Belo) Date: Mon, 06 Sep 2021 20:30:12 +0000 Subject: [New-bugs-announce] [issue45120] Windows cp encodings "UNDEFINED" entries update Message-ID: <1630960212.43.0.654084937143.issue45120@roundup.psfhosted.org> New submission from Rafael Belo : There is a mismatch in specification and behavior in some windows encodings. Some older windows codepages specifications present "UNDEFINED" mapping, whereas in reality, they present another behavior which is updated in a section named "bestfit". For example CP1252 has a corresponding bestfit1525: CP1252: https://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WINDOWS/CP1252.TXT bestfit1525: https://www.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/WindowsBestFit/bestfit1252.txt >From which, in CP1252, bytes \x81 \x8d \x8f \x90 \x9d map to "UNDEFINED", whereas in bestfit1252, they map to \u0081 \u008d \u008f \u0090 \u009d respectively. In the Windows API, the function 'MultiByteToWideChar' exhibits the bestfit1252 behavior. This issue and PR proposes a correction for this behavior, updating the windows codepages where some code points where defined as "UNDEFINED" to the corresponding bestfit?mapping. 
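To illustrate the current behaviour described above, the affected cp1252 bytes can be checked with a short snippet (a sketch, assuming a current CPython build; the expected bestfit results are noted in the comments):

# b"\x80" is defined (EURO SIGN); the other five bytes are the "UNDEFINED" slots.
for raw in (b"\x80", b"\x81", b"\x8d", b"\x8f", b"\x90", b"\x9d"):
    try:
        print(raw, "->", ascii(raw.decode("cp1252")))
    except UnicodeDecodeError as exc:
        # Strict decoding currently fails for the undefined positions, whereas
        # the bestfit1252 mapping would yield U+0081, U+008D, U+008F, U+0090, U+009D.
        print(raw, "->", exc.reason)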
Related issue: https://bugs.python.org/issue28712 ---------- components: Demos and Tools, Library (Lib), Unicode, Windows messages: 401181 nosy: ezio.melotti, lemburg, paul.moore, rafaelblsilva, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: Windows cp encodings "UNDEFINED" entries update type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 18:05:20 2021 From: report at bugs.python.org (Shantanu) Date: Mon, 06 Sep 2021 22:05:20 +0000 Subject: [New-bugs-announce] [issue45121] Regression in 3.9.7 with typing.Protocol Message-ID: <1630965920.35.0.627159261173.issue45121@roundup.psfhosted.org> New submission from Shantanu : Consider: ``` from typing import Protocol class P(Protocol): ... class C(P): def __init__(self): super().__init__() C() ``` This code passes without error on 3.9.6. With 3.9.7, we get: ``` Traceback (most recent call last): File "/Users/shantanu/dev/test.py", line 10, in C() File "/Users/shantanu/dev/test.py", line 8, in __init__ super().__init__() File "/Users/shantanu/.pyenv/versions/3.9.7/lib/python3.9/typing.py", line 1083, in _no_init raise TypeError('Protocols cannot be instantiated') TypeError: Protocols cannot be instantiated ``` I bisected this to: bpo-44806: Fix __init__ in subclasses of protocols (GH-27545) (GH-27559) Note there is also an interaction with the later commit: bpo-45081: Fix __init__ method generation when inheriting from Protocol (GH-28121) (GH-28132) This later commit actually causes a RecursionError: ``` File "/Users/shantanu/dev/cpython/Lib/typing.py", line 1103, in _no_init_or_replace_init cls.__init__(self, *args, **kwargs) File "/Users/shantanu/dev/test.py", line 8, in __init__ super().__init__() File "/Users/shantanu/dev/cpython/Lib/typing.py", line 1103, in _no_init_or_replace_init cls.__init__(self, *args, **kwargs) File "/Users/shantanu/dev/test.py", line 8, in __init__ super().__init__() ``` Originally reported by @tyralla on Gitter. ---------- components: Library (Lib) messages: 401184 nosy: hauntsaninja priority: normal severity: normal status: open title: Regression in 3.9.7 with typing.Protocol versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 6 18:44:03 2021 From: report at bugs.python.org (Guido van Rossum) Date: Mon, 06 Sep 2021 22:44:03 +0000 Subject: [New-bugs-announce] [issue45122] Remove PyCode_New and PyCode_NewWithPosOnlyArgs Message-ID: <1630968243.03.0.371650813101.issue45122@roundup.psfhosted.org> New submission from Guido van Rossum : This is up for grabs. For reference, the Steering Council approved getting rid of these two C APIs without honoring the customary two-release deprecation period required by PEP 387. For reference see https://github.com/python/steering-council/issues/75. This also has references to python-dev and python-committers discussions. 
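For code that only needs a modified code object from Python, the CodeType.replace() method added in 3.8 is the usually suggested alternative to spelling out the raw constructor signature. A minimal sketch for context (the function and the new name are made up for illustration; this is not part of the report itself):

def add_one(x):
    return x + 1

# Copy the code object, changing only co_name and letting CPython keep
# every other field, instead of passing the full positional argument list
# that PyCode_New / PyCode_NewWithPosOnlyArgs require.
renamed = add_one.__code__.replace(co_name="plus_one")
print(renamed.co_name)           # -> plus_one
print(add_one.__code__.co_name)  # unchanged -> add_one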
---------- messages: 401188 nosy: gvanrossum priority: normal severity: normal stage: needs patch status: open title: Remove PyCode_New and PyCode_NewWithPosOnlyArgs type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 00:50:40 2021 From: report at bugs.python.org (Yury Selivanov) Date: Tue, 07 Sep 2021 04:50:40 +0000 Subject: [New-bugs-announce] [issue45123] PyAiter_Check & PyObject_GetAiter issues Message-ID: <1630990240.1.0.111200047499.issue45123@roundup.psfhosted.org> New submission from Yury Selivanov : Per discussion on python-dev (also see the linked email), PyAiter_Check should only check for `__anext__` existence (and not for `__aiter__`) to be consistent with `Py_IterCheck`. While there, I'd like to rename PyAiter_Check to PyAIter_Check and PyObject_GetAiter to PyObject_GetAIter (i -> I). First, we should apply CamelCase convention correctly, here "async" and "iter" are separate words; second, "Aiter" is marked as invalid spelling by spell checkers in IDEs which is annoying. See https://mail.python.org/archives/list/python-dev at python.org/message/BRHMOFPEKGQCCKEKEEKGSYDR6NOPMRCC/ for more details. ---------- components: Interpreter Core messages: 401206 nosy: pablogsal, yselivanov priority: release blocker severity: normal stage: patch review status: open title: PyAiter_Check & PyObject_GetAiter issues type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 05:08:39 2021 From: report at bugs.python.org (Hugo van Kemenade) Date: Tue, 07 Sep 2021 09:08:39 +0000 Subject: [New-bugs-announce] [issue45124] Remove deprecated bdist_msi command Message-ID: <1631005719.97.0.212991443867.issue45124@roundup.psfhosted.org> New submission from Hugo van Kemenade : The bdist_msi command was deprecated in Python 3.9 by bpo-39586 (commit 2d65fc940b897958e6e4470578be1c5df78e319a). It can be removed in Python 3.11. PR to follow. ---------- components: Distutils messages: 401216 nosy: dstufft, eric.araujo, hugovk priority: normal severity: normal status: open title: Remove deprecated bdist_msi command versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 06:17:52 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Tue, 07 Sep 2021 10:17:52 +0000 Subject: [New-bugs-announce] [issue45125] Improve tests and docs of how `pickle` works with `SharedMemory` obejcts Message-ID: <1631009872.94.0.369430162871.issue45125@roundup.psfhosted.org> New submission from Nikita Sobolev : As we've discussed in https://bugs.python.org/msg401146 we need to improve how `SharedMemory` is documented and tested. Right now docs do not cover `pickle` at all (https://github.com/python/cpython/blob/main/Doc/library/multiprocessing.shared_memory.rst). And this is an important aspect in multiprocessing. 
`SharedMemory` and `pickle` in `test_shared_memory_basics` (https://github.com/python/cpython/blob/a5c6bcf24479934fe9c5b859dd1cf72685a0003a/Lib/test/_test_multiprocessing.py#L3789-L3794): ``` sms.buf[0:6] = b'pickle' pickled_sms = pickle.dumps(sms) sms2 = pickle.loads(pickled_sms) self.assertEqual(sms.name, sms2.name) self.assertEqual(bytes(sms.buf[0:6]), bytes(sms2.buf[0:6]), b'pickle') ``` `ShareableList` has better coverage in this regard (https://github.com/python/cpython/blob/a5c6bcf24479934fe9c5b859dd1cf72685a0003a/Lib/test/_test_multiprocessing.py#L4121-L4144): ``` def test_shared_memory_ShareableList_pickling(self): sl = shared_memory.ShareableList(range(10)) self.addCleanup(sl.shm.unlink) serialized_sl = pickle.dumps(sl) deserialized_sl = pickle.loads(serialized_sl) self.assertTrue( isinstance(deserialized_sl, shared_memory.ShareableList) ) self.assertTrue(deserialized_sl[-1], 9) self.assertFalse(sl is deserialized_sl) deserialized_sl[4] = "changed" self.assertEqual(sl[4], "changed") # Verify data is not being put into the pickled representation. name = 'a' * len(sl.shm.name) larger_sl = shared_memory.ShareableList(range(400)) self.addCleanup(larger_sl.shm.unlink) serialized_larger_sl = pickle.dumps(larger_sl) self.assertTrue(len(serialized_sl) == len(serialized_larger_sl)) larger_sl.shm.close() deserialized_sl.shm.close() sl.shm.close() ``` So, my plan is: 1. Improve testing of `SharedMemory` after pickling/unpickling. I will create a separate test with all the `pickle`-related stuff there 2. Improve docs: user must understand what will have when `SharedMemory` / `SharableList` is pickled and unpickled. For example, the fact that it will still be shared can be a surprise to many. I am going to send a PR with both thing somewhere this week. I will glad to head any feedback before / after :) ---------- assignee: docs at python components: Documentation, Tests messages: 401219 nosy: docs at python, sobolevn priority: normal severity: normal status: open title: Improve tests and docs of how `pickle` works with `SharedMemory` obejcts type: behavior versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 07:46:02 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Tue, 07 Sep 2021 11:46:02 +0000 Subject: [New-bugs-announce] [issue45126] [sqlite3] cleanup and harden connection init Message-ID: <1631015162.08.0.638095834379.issue45126@roundup.psfhosted.org> New submission from Erlend E. Aasland : Quoting Petr Viktorin in PR 27940, https://github.com/python/cpython/pull/27940#discussion_r703424148: - db is not set to NULL if init fails. - This segfaults for me: import sqlite3 conn = sqlite3.connect(':memory:') conn.execute('CREATE TABLE foo (bar)') try: conn.__init__('/bad-file/') except sqlite3.OperationalError: pass conn.execute('INSERT INTO foo (bar) VALUES (1), (2), (3), (4)') Other issues: - reinitialisation is not handled gracefully - __init__ is messy; members are initialised here and there Suggested to reorder connection __init__ in logical groups to more easily handle errors: 1. handle reinit 2. open and configure database 3. create statement LRU cache and weak ref cursor list 4. initialise members in the order they're defined in connection.h 5. 
set isolation level, since it's a weird case anyway ---------- components: Extension Modules messages: 401248 nosy: erlendaasland, petr.viktorin priority: normal severity: normal status: open title: [sqlite3] cleanup and harden connection init versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 09:57:18 2021 From: report at bugs.python.org (Petr Viktorin) Date: Tue, 07 Sep 2021 13:57:18 +0000 Subject: [New-bugs-announce] [issue45127] Code objects can contain unmarshallable objects Message-ID: <1631023038.1.0.991246928913.issue45127@roundup.psfhosted.org> New submission from Petr Viktorin : The `replace` method of `code` allows setting e.g. * co_filename to a subclass of str * co_consts to an arbitrary tuple and possibly more weird cases. This makes code objects unmarshallable. One way to create such a code object is to call `compileall.compile_file` with a str subclass as path. See the attached reproducers. This hit pip, see: https://github.com/pypa/pip/pull/10358#issuecomment-914320728 ---------- files: reproducer_replace.py messages: 401277 nosy: petr.viktorin priority: normal severity: normal status: open title: Code objects can contain unmarshallable objects Added file: https://bugs.python.org/file50268/reproducer_replace.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 10:33:53 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 07 Sep 2021 14:33:53 +0000 Subject: [New-bugs-announce] [issue45128] test_multiprocessing fails sporadically on the release artifacts Message-ID: <1631025233.12.0.173850188755.issue45128@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : While testing the release artifacts I encountered this failure: test test_multiprocessing_fork failed -- Traceback (most recent call last): File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1239, in _dot_lookup return getattr(thing, comp) AttributeError: module 'multiprocessing' has no attribute 'shared_memory' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/tmp/tmpu30qfjpr/installation/lib/python3.10/test/_test_multiprocessing.py", line 3818, in test_shared_memory_basics with unittest.mock.patch( File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1422, in __enter__ self.target = self.getter() File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1609, in getter = lambda: _importer(target) File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1252, in _importer thing = _dot_lookup(thing, comp, import_path) File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1242, in _dot_lookup return getattr(thing, comp) AttributeError: module 'multiprocessing' has no attribute 'shared_memory' 0:09:11 load avg: 0.71 [231/427/1] test_multiprocessing_forkserver -- test_multiprocessing_fork failed (1 error) in 1 min 11 sec test test_multiprocessing_forkserver failed -- Traceback (most recent call last): File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1239, in _dot_lookup return getattr(thing, comp) AttributeError: module 'multiprocessing' has no attribute 'shared_memory' During handling of the above exception, another exception occurred: Traceback (most recent call last): File 
"/tmp/tmpu30qfjpr/installation/lib/python3.10/test/_test_multiprocessing.py", line 3818, in test_shared_memory_basics with unittest.mock.patch( File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1422, in __enter__ self.target = self.getter() File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1609, in getter = lambda: _importer(target) File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1252, in _importer thing = _dot_lookup(thing, comp, import_path) File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1242, in _dot_lookup return getattr(thing, comp) AttributeError: module 'multiprocessing' has no attribute 'shared_memory' 0:11:11 load avg: 0.93 [232/427/2] test_multiprocessing_main_handling -- test_multiprocessing_forkserver failed (1 error) in 2 min 0:11:18 load avg: 1.09 [233/427/2] test_multiprocessing_spawn test test_multiprocessing_spawn failed -- Traceback (most recent call last): File "/tmp/tmpu30qfjpr/installation/lib/python3.10/unittest/mock.py", line 1239, in _dot_lookup return getattr(thing, comp) AttributeError: module 'multiprocessing' has no attribute 'shared_memory' I cannot reproduce it from installed Python or when testing directly on the repo, but the error seems to still happen from time to time. ---------- components: Tests messages: 401287 nosy: pablogsal priority: normal severity: normal status: open title: test_multiprocessing fails sporadically on the release artifacts versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 10:36:46 2021 From: report at bugs.python.org (Hugo van Kemenade) Date: Tue, 07 Sep 2021 14:36:46 +0000 Subject: [New-bugs-announce] [issue45129] Remove deprecated reuse_address parameter from create_datagram_endpoint() Message-ID: <1631025406.86.0.884993515985.issue45129@roundup.psfhosted.org> New submission from Hugo van Kemenade : The reuse_address parameter was deprecated in Python 3.9 by bpo-37228. It can be removed in Python 3.11. PR to follow. ---------- components: asyncio messages: 401290 nosy: asvetlov, hugovk, yselivanov priority: normal severity: normal status: open title: Remove deprecated reuse_address parameter from create_datagram_endpoint() versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 12:27:33 2021 From: report at bugs.python.org (Richard) Date: Tue, 07 Sep 2021 16:27:33 +0000 Subject: [New-bugs-announce] [issue45130] shlex.join() does not accept pathlib.Path objects Message-ID: <1631032053.23.0.444664437221.issue45130@roundup.psfhosted.org> New submission from Richard : When one of the items in the iterable passed to shlex.join() is a pathlib.Path object, it throws an exception saying it must be str or bytes. I believe it should accept Path objects just like other parts of the standard library such as subprocess.run() already do. Python 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. 
>>> import shlex >>> from pathlib import Path >>> shlex.join(['foo', Path('bar baz')]) Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.9/shlex.py", line 320, in join return ' '.join(quote(arg) for arg in split_command) File "/usr/lib/python3.9/shlex.py", line 320, in return ' '.join(quote(arg) for arg in split_command) File "/usr/lib/python3.9/shlex.py", line 329, in quote if _find_unsafe(s) is None: TypeError: expected string or bytes-like object >>> shlex.join(['foo', str(Path('bar baz'))]) "foo 'bar baz'" Python 3.11.0a0 (heads/main:fa15df77f0, Sep 7 2021, 18:22:35) [GCC 10.2.1 20210110] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import shlex >>> from pathlib import Path >>> shlex.join(['foo', Path('bar baz')]) Traceback (most recent call last): File "", line 1, in File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/shlex.py", line 320, in join return ' '.join(quote(arg) for arg in split_command) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/shlex.py", line 320, in return ' '.join(quote(arg) for arg in split_command) ^^^^^^^^^^ File "/home/nyuszika7h/.pyenv/versions/3.11-dev/lib/python3.11/shlex.py", line 329, in quote if _find_unsafe(s) is None: ^^^^^^^^^^^^^^^ TypeError: expected string or bytes-like object, got 'PosixPath' >>> shlex.join(['foo', str(Path('bar baz'))]) "foo 'bar baz'" ---------- components: Library (Lib) messages: 401301 nosy: nyuszika7h priority: normal severity: normal status: open title: shlex.join() does not accept pathlib.Path objects type: behavior versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 14:15:21 2021 From: report at bugs.python.org (Sean Kelly) Date: Tue, 07 Sep 2021 18:15:21 +0000 Subject: [New-bugs-announce] =?utf-8?b?W2lzc3VlNDUxMzFdIGB2ZW52YCDihpIg?= =?utf-8?q?=60ensurepip=60_may_read_local_=60setup=2Ecfg=60_and_fail_myste?= =?utf-8?q?riously?= Message-ID: <1631038521.89.0.222056793507.issue45131@roundup.psfhosted.org> New submission from Sean Kelly : Creating a new virtual environment with the `venv` module reads any local `setup.cfg` file that may be found; if such a file has garbage, the `venv` fails with a mysterious message. Reproduce: ``` $ date -u Tue Sep 7 18:12:27 UTC 2021 $ mkdir /tmp/demo $ cd /tmp/demo $ echo 'a < b' >setup.cfg $ python3 -V Python 3.9.5 $ python3 -m venv venv Error: Command '['/tmp/demo/venv/bin/python3.9', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. ``` (Took me a little while to figure out I had some garbage in a `setup.cfg` file in $CWD that was causing it.) Implications: Potential implications are that a specially crafted `setup.cfg` might cause a security-compromised virtual environment to be created maybe? I don't know. ---------- messages: 401320 nosy: nutjob4life priority: normal severity: normal status: open title: `venv` ? 
`ensurepip` may read local `setup.cfg` and fail mysteriously type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 14:35:35 2021 From: report at bugs.python.org (Hugo van Kemenade) Date: Tue, 07 Sep 2021 18:35:35 +0000 Subject: [New-bugs-announce] [issue45132] Remove deprecated __getitem__ methods Message-ID: <1631039735.38.0.446517893584.issue45132@roundup.psfhosted.org> New submission from Hugo van Kemenade : The __getitem__ methods of xml.dom.pulldom.DOMEventStream, wsgiref.util.FileWrapper and were deprecated in Python 3.8 by bpo-9372 / GH-8609. They can be removed in Python 3.11. ---------- components: Library (Lib) messages: 401322 nosy: hugovk priority: normal severity: normal status: open title: Remove deprecated __getitem__ methods versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 15:26:31 2021 From: report at bugs.python.org (David Mertz) Date: Tue, 07 Sep 2021 19:26:31 +0000 Subject: [New-bugs-announce] [issue45133] Open functions in dbm submodule should support path-like objects Message-ID: <1631042791.04.0.276454650964.issue45133@roundup.psfhosted.org> New submission from David Mertz : Evan Greenup via Python-ideas Currently, in Python 3.9, `dbm.open()`, `dbm.gnu.open()` and `dbm.ndbm.open()` doesn't support path-like object, class defined in `pathlib`. It would be nice to add support with it. ---------- components: Library (Lib), Tests messages: 401334 nosy: DavidMertz priority: normal severity: normal status: open title: Open functions in dbm submodule should support path-like objects type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 7 19:52:45 2021 From: report at bugs.python.org (Mark) Date: Tue, 07 Sep 2021 23:52:45 +0000 Subject: [New-bugs-announce] [issue45134] Protocol dealloc not called if Server is closed Message-ID: <1631058765.71.0.794565950389.issue45134@roundup.psfhosted.org> New submission from Mark : If I create_server server_coro = loop.create_server( lambda: self._protocol_factory(self), sock=sock, ssl=ssl) server = loop.run_until_complete(server_coro) Then connect and disconnect a client the protocol connection lost and dealloc are called. If however I close the server with existing connections then protocol dealloc is never called and I leak memory due to a malloc in my protocol.c init. server.close() loop.run_until_complete(server.wait_closed()) ---------- components: asyncio messages: 401349 nosy: MarkReedZ, asvetlov, yselivanov priority: normal severity: normal status: open title: Protocol dealloc not called if Server is closed type: resource usage versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 05:18:41 2021 From: report at bugs.python.org (Thomas Fischbacher) Date: Wed, 08 Sep 2021 09:18:41 +0000 Subject: [New-bugs-announce] [issue45135] dataclasses.asdict() incorrectly calls __deepcopy__() on values. 
Message-ID: <1631092721.82.0.0802849147398.issue45135@roundup.psfhosted.org> New submission from Thomas Fischbacher : This problem may also be the issue underlying some other dataclasses.asdict() bugs: https://bugs.python.org/issue?%40columns=id%2Cactivity%2Ctitle%2Ccreator%2Cassignee%2Cstatus%2Ctype&%40sort=-activity&%40filter=status&%40action=searchid&ignore=file%3Acontent&%40search_text=dataclasses.asdict&submit=search&status=-1%2C1%2C2%2C3 The documentation of dataclasses.asdict() states: https://docs.python.org/3/library/dataclasses.html#dataclasses.asdict === Converts the dataclass instance to a dict (by using the factory function dict_factory). Each dataclass is converted to a dict of its fields, as name: value pairs. dataclasses, dicts, lists, and tuples are recursed into. For example: (...) === Given this documentation, the expectation about behavior is roughly: def _dataclasses_asdict_equivalent_helper(obj, dict_factory=dict): rec = lambda x: ( _dataclasses_asdict_equivalent_helper(x, dict_factory=dict_factory)) if isinstance(obj, (list, tuple)): return type(obj)(rec(x) for x in obj) elif isinstance(obj, dict): return type(obj)((k, rec(v) for k, v in obj.items()) # Otherwise, we are looking at a dataclass-instance. for field in type(obj).__dataclass_fields__: val = obj.__getattribute__[field] if (hasattr(type(obj), '__dataclass_fields__')): # ^ approx check for "is this a dataclass instance"? # Not 100% correct. For illustration only. ret[field] = rec(val) ret[field] = val return ret def dataclasses_asdict_equivalent(x, dict_factory=dict): if not hasattr(type(x), '__dataclass_fields__'): raise ValueError(f'Not a dataclass: {x!r}') return _dataclasses_asdict_equivalent(x, dict_factory=dict_factory) In particular, field-values that are neither dict, list, tuple, or dataclass-instances are expected to be used identically. What actually happens however is that .asdict() DOES call __deepcopy__ on field values it has no business inspecting: === import dataclasses @dataclasses.dataclass class Demo: field_a: object class Obj: def __init__(self, x): self._x = x def __deepcopy__(self, *args): raise ValueError('BOOM!') ### d1 = Demo(field_a=Obj([1,2,3])) dd = dataclasses.asdict(d1) # ...Execution does run into a "BOOM!" ValueError. === Apart from this: It would be very useful if dataclasses.asdict() came with a recurse={boolish} parameter with which one can turn off recursive translation of value-objects. ---------- components: Library (Lib) messages: 401360 nosy: tfish2 priority: normal severity: normal status: open title: dataclasses.asdict() incorrectly calls __deepcopy__() on values. 
type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 06:46:29 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 08 Sep 2021 10:46:29 +0000 Subject: [New-bugs-announce] [issue45136] test_sysconfig: test_user_similar() fails if sys.platlibdir is 'lib64' Message-ID: <1631097989.84.0.629517715324.issue45136@roundup.psfhosted.org> New submission from STINNER Victor : When Python is configured to use 'lib64' for sys.platlibdir, test_sysconfig fails: $ ./configure --with-platlibdir=lib64 $ make $ ./python -m test -v test_sysconfig ====================================================================== FAIL: test_user_similar (test.test_sysconfig.TestSysConfig) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/vstinner/python/main/Lib/test/test_sysconfig.py", line 299, in test_user_similar self.assertEqual(user_path, global_path.replace(base, user, 1)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: '/home/vstinner/.local/lib/python3.11/site-packages' != '/home/vstinner/.local/lib64/python3.11/site-packages' - /home/vstinner/.local/lib/python3.11/site-packages + /home/vstinner/.local/lib64/python3.11/site-packages ? ++ ---------- components: Tests messages: 401372 nosy: vstinner priority: normal severity: normal status: open title: test_sysconfig: test_user_similar() fails if sys.platlibdir is 'lib64' versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 07:44:26 2021 From: report at bugs.python.org (Victor Vorobev) Date: Wed, 08 Sep 2021 11:44:26 +0000 Subject: [New-bugs-announce] [issue45137] Fix for bpo-37788 was not backported to Python3.8 Message-ID: <1631101466.05.0.472114996344.issue45137@roundup.psfhosted.org> New submission from Victor Vorobev : There is a [fix](https://github.com/python/cpython/pull/26103) for [bpo-37788](https://bugs.python.org/issue37788), but automatic backport to 3.8 branch [have failed](https://github.com/python/cpython/pull/26103#issuecomment-841460885), and it looks like no one have backported it manually. ---------- components: Library (Lib) messages: 401377 nosy: victorvorobev priority: normal severity: normal status: open title: Fix for bpo-37788 was not backported to Python3.8 type: resource usage versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 10:05:15 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Wed, 08 Sep 2021 14:05:15 +0000 Subject: [New-bugs-announce] [issue45138] [sqlite3] expand bound values in traced statements if possible Message-ID: <1631109915.01.0.788685972139.issue45138@roundup.psfhosted.org> New submission from Erlend E. Aasland : For SQLite 3.14.0 and newer, we're using the v2 trace API. This means that the trace callback receives a pointer to the sqlite3_stmt object. We can use the sqlite3_stmt pointer to retrieve expanded SQL string. The following statement...: cur.executemany("insert into t values(?)", ((v,) for v in range(3))) ...will produce the following traces: insert into t values(0) insert into t values(1) insert into t values(2) ...instead of: insert into t values(?) insert into t values(?) insert into t values(?) 
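The current behaviour can be observed with a short script like the following (a sketch mirroring the example above: in-memory database, table t, three bound values):

import sqlite3

con = sqlite3.connect(":memory:")
con.set_trace_callback(print)    # print every traced statement
cur = con.cursor()
cur.execute("create table t(x)")
# Per the report, current releases pass the SQL to the callback with the
# "?" placeholders unexpanded, i.e. the second block of traces shown above.
cur.executemany("insert into t values(?)", ((v,) for v in range(3)))
con.close()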
---------- assignee: erlendaasland components: Extension Modules messages: 401383 nosy: erlendaasland priority: low severity: normal status: open title: [sqlite3] expand bound values in traced statements if possible type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 13:17:47 2021 From: report at bugs.python.org (Jean Abou Samra) Date: Wed, 08 Sep 2021 17:17:47 +0000 Subject: [New-bugs-announce] [issue45139] Simplify source links in documentation? Message-ID: <1631121467.16.0.546303933319.issue45139@roundup.psfhosted.org> New submission from Jean Abou Samra : Currently, links to source code in the documentation look like this: **Source code:** :source:`Lib/abc.py` For documentation translators, this means that every module contains a boilerplate string to translate. A small burden perhaps, but avoidable. I propose to refactor the links in a directive, like so: .. source:: Lib/abc.py Then just the text "Source code:" gets translated (in sphinx.po) and the rest is automatic. This could also benefit the presentation in the future, if anyone wants to change the styling of these links. Open question is how to handle the handful of links that contain several source files (async I/O modules in particular). To make it language-agnostic, perhaps the markup .. source:: Lib/asyncio/futures.py, Lib/asyncio/base_futures.py could be rendered as if it were **Source code:** :source:`Lib/asyncio/futures.py` | :source:`Lib/asyncio/base_futures.py` Thoughts? ---------- assignee: docs at python components: Documentation messages: 401410 nosy: Jean_Abou_Samra, docs at python, mdk priority: normal severity: normal status: open title: Simplify source links in documentation? _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 13:36:10 2021 From: report at bugs.python.org (Rebecca Wallander) Date: Wed, 08 Sep 2021 17:36:10 +0000 Subject: [New-bugs-announce] [issue45140] strict_timestamps for PyZipFile Message-ID: <1631122570.36.0.100472687583.issue45140@roundup.psfhosted.org> New submission from Rebecca Wallander : https://github.com/python/cpython/pull/8270 Above fix solved the problem with pre-1980 files for regular ZipFile, but I still have issues when using PyZipFile object. https://docs.python.org/3.11/library/zipfile.html#pyzipfile-objects I would be glad if `strict_timestamps` was added to PyZipFile as well. The documentation of PyZipFile also states that PyZipFile has the same parameters as ZipFile with the addition of one extra. If this issue can't be fixed in code, at least the documentation should be updated to reflect this is not longer true. ---------- components: Library (Lib) messages: 401417 nosy: sakcheen priority: normal severity: normal status: open title: strict_timestamps for PyZipFile versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 14:41:53 2021 From: report at bugs.python.org (pacien) Date: Wed, 08 Sep 2021 18:41:53 +0000 Subject: [New-bugs-announce] [issue45141] mailcap.getcaps() from given file(s) Message-ID: <1631126513.06.0.552308747638.issue45141@roundup.psfhosted.org> New submission from pacien : Currently, `mailcap.getcaps()` can only be used to load mailcap dictionaries from files located at some specific hard-coded paths. 
It is however also desirable to use the mailcap parser to load files located elsewhere. An optional parameter could be added to this function to explicitly specify the mailcap file(s) to read. ---------- components: email messages: 401420 nosy: barry, pacien, r.david.murray priority: normal severity: normal status: open title: mailcap.getcaps() from given file(s) type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 19:00:24 2021 From: report at bugs.python.org (Joshua) Date: Wed, 08 Sep 2021 23:00:24 +0000 Subject: [New-bugs-announce] [issue45142] Import error for Iterable in collections Message-ID: <1631142024.27.0.823096064512.issue45142@roundup.psfhosted.org> New submission from Joshua : Traceback: Traceback (most recent call last): File "/Users/user/PycharmProjects/phys2/main.py", line 5, in from collections import Iterable ImportError: cannot import name 'Iterable' from 'collections' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/collections/__init__.py) Done using a venv based on python 3.10.0rc1 running on Darwin/MacOS. I am new to adding bugs so I don't quite know what information I can provide that could be of use other than that this issue persists in REPL. ---------- components: Library (Lib) messages: 401425 nosy: Joshuah143 priority: normal severity: normal status: open title: Import error for Iterable in collections versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 19:02:31 2021 From: report at bugs.python.org (Hanu) Date: Wed, 08 Sep 2021 23:02:31 +0000 Subject: [New-bugs-announce] [issue45143] ipaddress multicast v6 RFC documentation correction Message-ID: <1631142151.61.0.977852423445.issue45143@roundup.psfhosted.org> New submission from Hanu : In the ipaddress library documentation related to multicast. https://docs.python.org/3/library/ipaddress.html#ip-addresses the is_multicast, refers to the v6 multicast RFC as 2373: "is_multicast True if the address is reserved for multicast use. See RFC 3171 (for IPv4) or RFC 2373 (for IPv6)." Should that be referred to as RFC 2375 (for IPv6)? - RFC 2373 is "IP Version 6 Addressing Architecture" - RFC 2375 is "IPv6 Multicast Address Assignments" Also for IPv4, the multicast is referred to RFC 3171, which is obsoleted by RFC 5771 (IPv4 Multicast Address Assignments) ---------- messages: 401426 nosy: hanugit priority: normal severity: normal status: open title: ipaddress multicast v6 RFC documentation correction versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 20:19:04 2021 From: report at bugs.python.org (Irit Katriel) Date: Thu, 09 Sep 2021 00:19:04 +0000 Subject: [New-bugs-announce] [issue45144] Use subtests in test_peepholer Message-ID: <1631146744.41.0.0401812311167.issue45144@roundup.psfhosted.org> New submission from Irit Katriel : test_peepholer has many tests that loop over test cases. Identifying them as subtests will make them easier to work with when something breaks. 
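For context, the subtest pattern referred to here looks roughly like this (a generic sketch, not actual test_peepholer code; the test class and the table of cases are made up):

import unittest

class PeepholerStyleTests(unittest.TestCase):
    def test_many_cases(self):
        # Hypothetical table of (source, expected) pairs looped over in one test.
        cases = [("1 + 1", 2), ("2 * 3", 6), ("7 - 5", 2)]
        for source, expected in cases:
            # Each failing iteration is reported separately, instead of the
            # whole loop aborting at the first failure.
            with self.subTest(source=source):
                self.assertEqual(eval(source), expected)

if __name__ == "__main__":
    unittest.main()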
---------- components: Tests messages: 401428 nosy: iritkatriel priority: normal severity: normal status: open title: Use subtests in test_peepholer type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 8 20:39:40 2021 From: report at bugs.python.org (=?utf-8?b?RS4gTS4gUC4gSMO2bGxlcg==?=) Date: Thu, 09 Sep 2021 00:39:40 +0000 Subject: [New-bugs-announce] [issue45145] Case of headers in urllib.request.Request Message-ID: <1631147980.88.0.915702969887.issue45145@roundup.psfhosted.org> New submission from E. M. P. Höller : urllib.request.Request internally .capitalize()s header names before adding them, as can be seen here: https://github.com/python/cpython/blob/3.9/Lib/urllib/request.py#L399 Since HTTP headers are case-insensitive, but dicts are not, this ensures that add_header and add_unredirected_header overwrite an existing header (as documented) even if they were passed in different cases. However, this also carries two problems with it:

1. has_header, get_header, and remove_header do not apply this normalisation to their header_name parameter, causing them to fail unexpectedly when the header is passed in the wrong case.

2. Some servers do not comply with the standard and check some headers case-sensitively. If the case they expect is different from the result of .capitalize(), those headers effectively cannot be passed to them via urllib.

These problems have already been discussed quite some time ago, and yet they are still present: https://bugs.python.org/issue2275 https://bugs.python.org/issue12455 Or did I overlook something, and is there a good reason why things are this way? If not, I suggest that add_header and add_unredirected_header store the headers in the case they were passed (while preserving the case-insensitive overwriting behaviour) and that has_header, get_header, and remove_header find headers independently of case.
Here is a possible implementation:

# Helper outside class
# Stops after the first hit since there should be at most one of each header in the dict
def _find_key_insensitive(d, key):
    key = key.lower()
    for key2 in d:
        if key2.lower() == key:
            return key2
    return None  # Unnecessary, but explicit is better than implicit ;-)

# Methods of Request
def add_header(self, key, val):
    # useful for something like authentication
    existing_key = _find_key_insensitive(self.headers, key)
    if existing_key:
        self.headers.pop(existing_key)
    self.headers[key] = val

def add_unredirected_header(self, key, val):
    # will not be added to a redirected request
    existing_key = _find_key_insensitive(self.unredirected_hdrs, key)
    if existing_key:
        self.unredirected_hdrs.pop(existing_key)
    self.unredirected_hdrs[key] = val

def has_header(self, header_name):
    return bool(_find_key_insensitive(self.headers, header_name)
                or _find_key_insensitive(self.unredirected_hdrs, header_name))

def get_header(self, header_name, default=None):
    key = _find_key_insensitive(self.headers, header_name)
    if key:
        return self.headers[key]
    key = _find_key_insensitive(self.unredirected_hdrs, header_name)
    if key:
        return self.unredirected_hdrs[key]
    return default

def remove_header(self, header_name):
    key = _find_key_insensitive(self.headers, header_name)
    if key:
        self.headers.pop(key)
    key = _find_key_insensitive(self.unredirected_hdrs, header_name)
    if key:
        self.unredirected_hdrs.pop(key)

I'm sorry if it is frowned upon to post code suggestions here like that; I didn't have the confidence to create a pull request right away. ---------- components: Library (Lib) messages: 401429 nosy: emphoeller priority: normal severity: normal status: open title: Case of headers in urllib.request.Request type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 00:46:16 2021 From: report at bugs.python.org (Mykola Mokhnach) Date: Thu, 09 Sep 2021 04:46:16 +0000 Subject: [New-bugs-announce] [issue45146] Add a possibility for asyncio.Condition to determine the count of currently waiting consumers Message-ID: <1631162776.4.0.536124821312.issue45146@roundup.psfhosted.org> New submission from Mykola Mokhnach : Currently the asyncio.Condition class does not provide any properties that would allow to determine how many (if any) consumers are waiting for the current condition. My scenario is the following:

```
...
FILE_STATS_CONDITIONS: Dict[str, asyncio.Condition] = {}
FILE_STATS_CACHE: Dict[str, Union[FileStat, Exception]] = {}
...

async def _fetch_file_stat(self: T, file_id: str) -> FileStat:
    """
    The idea behind this caching is to avoid sending multiple info requests
    for the same file id to the server and thus to avoid throttling.
    This is safe, because stored files are immutable.
    """
    try:
        file_stat: FileStat
        if file_id in FILE_STATS_CONDITIONS:
            self.log.debug(f'Reusing the previously cached stat request for the file {file_id}')
            async with FILE_STATS_CONDITIONS[file_id]:
                await FILE_STATS_CONDITIONS[file_id].wait()
            cached_result = FILE_STATS_CACHE[file_id]
            if isinstance(cached_result, FileStat):
                file_stat = cached_result
            else:
                raise cached_result
        else:
            FILE_STATS_CONDITIONS[file_id] = asyncio.Condition()
            cancellation_exception: Optional[asyncio.CancelledError] = None
            async with FILE_STATS_CONDITIONS[file_id]:
                file_stat_task = asyncio.create_task(self.storage_client.get_file_stat(file_id))
                try:
                    try:
                        file_stat = await asyncio.shield(file_stat_task)
                    except asyncio.CancelledError as e:
                        if len(getattr(FILE_STATS_CONDITIONS[file_id], '_waiters', (None,))) == 0:
                            # it is safe to cancel now if there are no consumers in the waiting queue
                            file_stat_task.cancel()
                            raise
                        # we don't want currently waiting consumers to fail because of the task cancellation
                        file_stat = await file_stat_task
                        cancellation_exception = e
                    FILE_STATS_CACHE[file_id] = file_stat
                except Exception as e:
                    FILE_STATS_CACHE[file_id] = e
                    raise
                finally:
                    FILE_STATS_CONDITIONS[file_id].notify_all()
            if cancellation_exception is not None:
                raise cancellation_exception
        # noinspection PyUnboundLocalVariable
        self.log.info(f'File stat: {file_stat}')
        return file_stat
    except ObjectNotFoundError:
        self.log.info(f'The file identified by "{file_id}" either does not exist or has expired')
        raise file_not_found_error(file_id)
    finally:
        if file_id in FILE_STATS_CONDITIONS and not FILE_STATS_CONDITIONS[file_id].locked():
            del FILE_STATS_CONDITIONS[file_id]
        if file_id in FILE_STATS_CACHE:
            del FILE_STATS_CACHE[file_id]
```

Basically I need to use `getattr(FILE_STATS_CONDITIONS[file_id], '_waiters', (None,))` to work around this limitation in order to figure out whether to cancel my producer now or later. It would be nice to have a public property on the Condition class, which would basically return the value of `len(condition._waiters)`. ---------- components: asyncio messages: 401434 nosy: asvetlov, mykola-mokhnach, yselivanov priority: normal severity: normal status: open title: Add a possibility for asyncio.Condition to determine the count of currently waiting consumers type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 02:19:00 2021 From: report at bugs.python.org (Ming Hua) Date: Thu, 09 Sep 2021 06:19:00 +0000 Subject: [New-bugs-announce] [issue45147] Type in "What's New In Python 3.10" documentation Message-ID: <1631168340.99.0.160040683472.issue45147@roundup.psfhosted.org> New submission from Ming Hua : It's just a small typo, but since the documentation recommends reporting to the bug tracker, here it is. After downloading the 64-bit Windows Installer for 3.10.0 rc2 and successfully installing it on my Windows 10, the "Python 3.10 Manuals" entry in the start menu opens a (presumably) .chm documentation in Windows HTML Helper. There, in the "What's New in Python" > "What's New In Python 3.10" > "Deprecated" section, the last line of the first paragraph reads: "If future releases it will be changed..." It should be "IN future releases" instead.
---------- assignee: docs at python components: Documentation messages: 401435 nosy: docs at python, minghua priority: normal severity: normal status: open title: Type in "What's New In Python 3.10" documentation versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 03:09:50 2021 From: report at bugs.python.org (baterflyrity) Date: Thu, 09 Sep 2021 07:09:50 +0000 Subject: [New-bugs-announce] [issue45148] ensurepip upgrade fails Message-ID: <1631171390.16.0.990723185381.issue45148@roundup.psfhosted.org> New submission from baterflyrity : Upgrading pip via ensurepip unfortunately doesn't do anything wealthy. ```bash user at host MINGW64 ~ $ pip list | grep pip pip 21.2.3 WARNING: You are using pip version 21.2.3; however, version 21.2.4 is available. You should consider upgrading via the 'C:\Python39\python.exe -m pip install --upgrade pip' command. user at host MINGW64 ~ $ py -m ensurepip --upgrade Looking in links: c:\Users\BATERF~1\AppData\Local\Temp\tmpuv4go5fy Requirement already satisfied: setuptools in c:\python39\lib\site-packages (57.4.0) Requirement already satisfied: pip in c:\python39\lib\site-packages (21.2.3) user at host MINGW64 ~ $ pip list | grep pip pip 21.2.3 WARNING: You are using pip version 21.2.3; however, version 21.2.4 is available. You should consider upgrading via the 'C:\Python39\python.exe -m pip install --upgrade pip' command. ``` ---------- components: Windows messages: 401436 nosy: baterflyrity, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ensurepip upgrade fails versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 03:45:42 2021 From: report at bugs.python.org (Nick Coghlan) Date: Thu, 09 Sep 2021 07:45:42 +0000 Subject: [New-bugs-announce] [issue45149] Cover directory and zipfile execution in __main__ module docs Message-ID: <1631173542.48.0.638794162329.issue45149@roundup.psfhosted.org> New submission from Nick Coghlan : bpo-39452 covered significant improvements to the __main__ module documentation at https://docs.python.org/dev/library/__main__.html, making it a better complement to the CPython CLI documentation at https://docs.python.org/dev/using/cmdline.html#command-line This follow up ticket covers the `__main__` variant that the previous ticket didn't cover: directory and zipfile execution (which underlies the utility of the `zipapp` stdlib module: https://docs.python.org/dev/library/zipapp.html) The relevant info is present in the CLI and runpy.run_path documentation, so this ticket just covers adding that info to the `__main__` module docs. ---------- messages: 401440 nosy: ncoghlan priority: normal severity: normal stage: needs patch status: open title: Cover directory and zipfile execution in __main__ module docs type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 06:03:17 2021 From: report at bugs.python.org (=?utf-8?q?Tarek_Ziad=C3=A9?=) Date: Thu, 09 Sep 2021 10:03:17 +0000 Subject: [New-bugs-announce] [issue45150] Add a file_digest() function in hashlib Message-ID: <1631181797.37.0.346326276713.issue45150@roundup.psfhosted.org> New submission from Tarek Ziad? : I am proposing the addition of a very simple helper to return the hash of a file. 
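For context, here is a rough sketch of the kind of helper being described (purely illustrative; the name, signature and chunk size are assumptions, not the proposed API):

```
import hashlib

def file_digest(path, algorithm="sha256", *, chunk_size=2**20):
    """Return a hash object updated with the contents of the file at *path*."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # read in chunks so large files are not loaded into memory at once
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h

# example use:
# print(file_digest("/etc/hostname").hexdigest())
```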
---------- assignee: tarek components: Library (Lib) messages: 401457 nosy: tarek priority: normal severity: normal status: open title: Add a file_digest() function in hashlib _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 09:47:28 2021 From: report at bugs.python.org (Brian Hunt) Date: Thu, 09 Sep 2021 13:47:28 +0000 Subject: [New-bugs-announce] [issue45151] Logger library with task scheduler Message-ID: <1631195248.81.0.804167801454.issue45151@roundup.psfhosted.org> New submission from Brian Hunt : Version: Python 3.9.3 Package: Logger + Windows 10 Task Scheduler Error Msg: None Behavior: I built a logging process to use with a python script that will be scheduled with Windows 10 task scheduler (this could be a bug in task scheduler too, but starting here). I noticed it wasn't working and started trying to trace the root of the issue. If I created a new file called scratch.py and ran some code, the logs showed up. However, the exact same code in the exact same folder (titled: run_xyz.py) didn't log those same messages. It appears that something in either the task scheduler or the logger library doesn't like the fact that my file name contains an underscore, because as soon as I pointed the task scheduler task that didn't log to my other file, it worked again. Also, when I finally removed the underscores it started working. I believe it is Logger library related because the task with underscores does in fact run the python code and generate the script output. Code in both files:

-----------------a_b_c.py code-----------

import os
import pathlib
import sys

pl_path = pathlib.Path(__file__).parents[1].resolve()
sys.path.append(str(pl_path))
from src.Core.Logging import get_logger
#
logger = get_logger(__name__, False)
logger.info("TESTING_USing taskScheduler")

-------src.Core.Logging.py get_logger code--------

import logging
import datetime
import time
import os
# from logging.handlers import SMTPHandler
from config import build_stage, log_config
from Pipelines.Databases import sqlAlchemy_logging_con


class DBHandler(logging.Handler):
    def __init__(self, name):
        """
        :param name: Deprecated
        """
        logging.StreamHandler.__init__(self)
        self.con = sqlAlchemy_logging_con()
        self.sql = """insert into Logs (LogMessage, Datetime, FullPathNM, LogLevelNM, ErrorLine) values ('{message}', '{dbtime}', '{pathname}', '{level}', '{errLn}')"""
        self.name = name

    def formatDBTime(self, record):
        record.dbtime = datetime.strftime("#%Y/%m/%d#", datetime.localtime(record.created))

    def emit(self, record):
        creation_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(record.created))
        try:
            self.format(record)
            if record.exc_info:
                record.exc_text = logging._defaultFormatter.formatException(record.exc_info)
            else:
                record.exc_text = ""
            sql = self.sql.format(message=record.message
                                  , dbtime=creation_time
                                  , pathname=record.pathname
                                  , level=record.levelname
                                  , errLn=record.lineno)
            self.con.execute(sql)
        except:
            pass


def get_logger(name, use_local_logging=False):
    """
    Returns a logger based on a name given. Name should be __name__ variable or unique for each call.
    Never call more than one time in a given file as it clears the logger.
    Use config.log_config to define configuration of the logger and its handlers.
    :param name:
    :return: logger
    """
    logger = logging.getLogger(name)
    logger.handlers.clear()
    # used to send logs to local file location. Level set to that of logger
    if use_local_logging:
        handler = logging.FileHandler("Data\\Output\\log_" + build_stage + str(datetime.date.today()).replace("-", "") + ".log")
        handler.setLevel(log_config['logger_level'])
        logger.addHandler(handler)
    dbhandler = DBHandler(name)
    dbhandler.setLevel(log_config['db_handler_level'])
    logger.addHandler(dbhandler)
    logger.setLevel(log_config['logger_level'])
    return logger

---------- components: IO, Library (Lib), Windows messages: 401482 nosy: btjehunt, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Logger library with task scheduler type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 10:09:24 2021 From: report at bugs.python.org (Irit Katriel) Date: Thu, 09 Sep 2021 14:09:24 +0000 Subject: [New-bugs-announce] [issue45152] Prepare for splitting LOAD_CONST into several opcodes Message-ID: <1631196564.41.0.680638780635.issue45152@roundup.psfhosted.org> New submission from Irit Katriel : This issue is to prepare the code for splitting LOAD_CONST into several opcodes. There are a number of places in the code where an opcode is compared to LOAD_CONST (such as dis, the compiler, and the peephole optimizer). These need to be refactored to make the query "is this a hasconst opcode", and the value calculation needs to be refactored into a single place, which can later be updated to get the value from places other than co_consts. ---------- assignee: iritkatriel components: Interpreter Core, Library (Lib) messages: 401485 nosy: iritkatriel priority: normal severity: normal status: open title: Prepare for splitting LOAD_CONST into several opcodes type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 10:36:38 2021 From: report at bugs.python.org (Sanmitha) Date: Thu, 09 Sep 2021 14:36:38 +0000 Subject: [New-bugs-announce] [issue45154] Enumerate() function or class? Message-ID: <1631198198.04.0.262463411165.issue45154@roundup.psfhosted.org> New submission from Sanmitha : I was learning about enumerate(). While learning, when I used

>>> help(enumerate)
Help on class enumerate in module builtins:

class enumerate(object)
 |  enumerate(iterable, start=0)
 |
 |  Return an enumerate object.
 |
 |  iterable
 |      an object supporting iteration
 |
 |  The enumerate object yields pairs containing a count (from start, which
 |  defaults to zero) and a value yielded by the iterable argument.
 |
 |  enumerate is useful for obtaining an indexed list:
 |      (0, seq[0]), (1, seq[1]), (2, seq[2]), ...
 |
 |  Methods defined here:
 |
 |  __getattribute__(self, name, /)
 |      Return getattr(self, name).
 |
 |  __iter__(self, /)
 |      Implement iter(self).
 |
 |  __next__(self, /)
 |      Implement next(self).
 |
 |  __reduce__(...)
 |      Return state information for pickling.
 |
 |  ----------------------------------------------------------------------
 |  Static methods defined here:
 |
 |  __new__(*args, **kwargs) from builtins.type
 |      Create and return a new object. See help(type) for accurate signature.

I saw the same when I simply entered

>>> enumerate
<class 'enumerate'>

But when I checked the documentation on the official website of Python, www.python.org, for enumerate(), here: https://docs.python.org/3/library/functions.html#enumerate it showed that enumerate() is a function, which contradicted the information shown by help(). I couldn't work out whether enumerate() is a class or a function. Could anyone please help me out with this? By the way, I had Python 3.8.3. I even checked in Python 3.6 and 3.7.10.
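A quick interpreter session (not part of the original report) illustrates the situation: enumerate really is a class, and it appears on the Built-in Functions page because it is a callable builtin, much like int or list:

```
>>> type(enumerate)
<class 'type'>
>>> isinstance(enumerate, type)
True
>>> list(enumerate("ab", start=1))
[(1, 'a'), (2, 'b')]
```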
---------- assignee: docs at python components: Documentation messages: 401487 nosy: Sanmitha Sadhishkumar, docs at python priority: normal severity: normal status: open title: Enumerate() function or class? type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 10:35:50 2021 From: report at bugs.python.org (Shlomi) Date: Thu, 09 Sep 2021 14:35:50 +0000 Subject: [New-bugs-announce] [issue45153] Except multiple types of exceptions with OR keyword is not working Message-ID: <1631198150.11.0.821790716144.issue45153@roundup.psfhosted.org> New submission from Shlomi : When I want to catch multiple types of exceptions, it feels natural to use the 'or' keyword. But it doesn't work. The interpreter doesn't show any error for the syntax, so a developer may think it would work. Small example:

try:
    myfunc()
except ConnectionResetError or ConnectionAbortedError:
    print("foo")
except Exception as e:
    print("bar")

When myfunc() throws 'ConnectionAbortedError', the interpreter enters the "bar" block, and not the "foo" block. ---------- components: Interpreter Core messages: 401486 nosy: ShlomiRex priority: normal severity: normal status: open title: Except multiple types of exceptions with OR keyword is not working type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 18:00:31 2021 From: report at bugs.python.org (Barry A. Warsaw) Date: Thu, 09 Sep 2021 22:00:31 +0000 Subject: [New-bugs-announce] [issue45155] Add default arguments for int.to_bytes() Message-ID: <1631224831.05.0.436611360009.issue45155@roundup.psfhosted.org> New submission from Barry A. Warsaw : In the PEP 467 discussion, I proposed being able to use

>>> (65).to_bytes()
b'A'

IOW, adding default arguments for the `length` and `byteorder` arguments to `int.to_bytes()` https://mail.python.org/archives/list/python-dev at python.org/message/PUR7UCOITMMH6TZVVJA5LKRCBYS4RBMR/ It occurs to me that this is (1) useful on its own merits; (2) easy to do. So I've done it. Creating this bug so I can link a PR against it. ---------- components: Interpreter Core messages: 401524 nosy: barry priority: normal severity: normal stage: patch review status: open title: Add default arguments for int.to_bytes() versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 18:09:24 2021 From: report at bugs.python.org (David Mandelberg) Date: Thu, 09 Sep 2021 22:09:24 +0000 Subject: [New-bugs-announce] [issue45156] mock.seal has infinite recursion with int class attributes Message-ID: <1631225364.14.0.346239674418.issue45156@roundup.psfhosted.org> New submission from David Mandelberg : The code below seems to have infinite recursion in the mock.seal call with python 3.9.2.
from unittest import mock

class Foo:
    foo = 0

foo = mock.create_autospec(Foo)
mock.seal(foo)

---------- components: Library (Lib) messages: 401525 nosy: dseomn priority: normal severity: normal status: open title: mock.seal has infinite recursion with int class attributes type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 18:37:07 2021 From: report at bugs.python.org (STINNER Victor) Date: Thu, 09 Sep 2021 22:37:07 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue45157=5D_DTrace_on_RHEL7?= =?utf-8?q?=2C_generated_Include/pydtrace=5Fprobes=2Eh_fails_to_build=3A_e?= =?utf-8?q?rror=3A_impossible_constraint_in_=E2=80=98asm=E2=80=99?= Message-ID: <1631227027.9.0.449949319255.issue45157@roundup.psfhosted.org> New submission from STINNER Victor : I modified RHEL7 configuration to build Python using --with-dtrace: argv: [b'./configure', b'--prefix', b'$(PWD)/target', b'--with-pydebug', b'--with-platlibdir=lib64', b'--enable-ipv6', b'--enable-shared', b'--with-computed-gotos=yes', b'--with-dbmliborder=gdbm:ndbm:bdb', b'--enable-loadable-sqlite-extensions', b'--with-dtrace', b'--with-lto', b'--with-ssl-default-suites=openssl', b'--without-static-libpython', b'--with-valgrind'] Problem: the generated Include/pydtrace_probes.h failed to build :-(

---
/usr/bin/dtrace -o Include/pydtrace_probes.h -h -s Include/pydtrace.d
sed 's/PYTHON_/PyDTrace_/' Include/pydtrace_probes.h > Include/pydtrace_probes.h.tmp
mv Include/pydtrace_probes.h.tmp Include/pydtrace_probes.h
(...)
In file included from ./Include/pydtrace_probes.h:10:0,
                 from ./Include/pydtrace.h:11,
                 from Modules/gcmodule.c:33:
Modules/gcmodule.c: In function ‘gc_collect_main’:
./Include/pydtrace_probes.h:98:1: error: impossible constraint in ‘asm’
   DTRACE_PROBE1 (python, gc__start, arg1)
 ^
./Include/pydtrace_probes.h:109:1: error: impossible constraint in ‘asm’
   DTRACE_PROBE1 (python, gc__done, arg1)
 ^
---

Full logs of AMD64 RHEL7 3.x: https://buildbot.python.org/all/#/builders/15/builds/761 In the meantime, I disabled --with-dtrace since other buildbot workers failed when dtrace was not installed. See: https://github.com/python/buildmaster-config/pull/264 Maybe it's a problem on RHEL7. Maybe the problem is that Python is built with LTO? ---------- components: Build messages: 401526 nosy: cstratak, hroncok, vstinner priority: normal severity: normal status: open title: DTrace on RHEL7, generated Include/pydtrace_probes.h fails to build: error: impossible constraint in ‘asm’ versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 9 18:41:24 2021 From: report at bugs.python.org (Anton Abrosimov) Date: Thu, 09 Sep 2021 22:41:24 +0000 Subject: [New-bugs-announce] [issue45158] Refactor traceback.py to make TracebackException more extensible. Message-ID: <1631227284.21.0.317045625732.issue45158@roundup.psfhosted.org> New submission from Anton Abrosimov :

1. Move internal dependencies (`FrameSummary`, `StackSummary`) to class attributes. Reduce coupling.
2. Separate receiving, processing and presenting traceback information. How to replace `repr` with `pformat` in `FrameSummary`?

   def __init__(...):
       ...
       self.locals = {k: repr(v) for k, v in locals.items()} if locals else None
       ...

3. Move formatting templates to class attributes.
4. ...

Motivation:

1.
For the sake of small changes to the view, you have to rewrite the entire `TracebackException` hierarchy. Or use string parsing. 2.1. During development, I want to see as much information as possible. 2.2. During production, displaying unnecessary information can lead to security problems. 2.3. In large projects, it is more convenient to use JSON for further processing in the system environment. I have not found any PEPs describing `traceback.py`. I can make a prototype of the changes if anyone is interested. ---------- components: Library (Lib) messages: 401528 nosy: abrosimov.a.a priority: normal severity: normal status: open title: Refactor traceback.py to make TracebackException more extensible. type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 00:23:52 2021 From: report at bugs.python.org (Christopher Brichford) Date: Fri, 10 Sep 2021 04:23:52 +0000 Subject: [New-bugs-announce] [issue45159] data_received called on protocol after call to pause_reading on ssl transport Message-ID: <1631247832.37.0.646410094935.issue45159@roundup.psfhosted.org> New submission from Christopher Brichford : An SSL connection created with loop.create_connection may have data_received called on its protocol after pause_reading has been called on the transport. If an application has a protocol whose data_received method calls pause_reading on the transport then there is a chance that the data_received method will be called again before the application calls resume_reading on the transport. That existing implementation of pause_reading at: https://github.com/python/cpython/blob/62fa613f6a6e872723505ee9d56242c31a654a9d/Lib/asyncio/sslproto.py#L335 calls pause_reading on the underlying socket transport, which is correct. However, there is a loop in the SSLProtocol's data_received method: https://github.com/python/cpython/blob/62fa613f6a6e872723505ee9d56242c31a654a9d/Lib/asyncio/sslproto.py#L335 If the loop referenced above has more than one iteration then there is a chance that the application protocol's data_received method could call pause_reading on the transport. If that happens on any iteration of the loop other than the last iteration, then the SSLProtocol's data_received method will call the application protocol's data_received method when it should not. Stealing uvloop's asyncio ssl implementation would resolve this bug: https://bugs.python.org/issue44011 ---------- components: asyncio messages: 401553 nosy: asvetlov, chrisb2, yselivanov priority: normal severity: normal status: open title: data_received called on protocol after call to pause_reading on ssl transport type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 03:13:00 2021 From: report at bugs.python.org (fhdrsdg) Date: Fri, 10 Sep 2021 07:13:00 +0000 Subject: [New-bugs-announce] [issue45160] ttk.OptionMenu radiobuttons change variable value twice Message-ID: <1631257980.33.0.506661911605.issue45160@roundup.psfhosted.org> New submission from fhdrsdg : Found when trying to answer this SO question: https://stackoverflow.com/questions/53171384/tkinter-function-repeats-itself-twice-when-ttk-widgets-are-engaged The ttk.OptionMenu uses radiobuttons for the dropdown menu, whereas the tkinter.OptionMenu uses commands. 
Both of them use `_setit` as the command, presumably first used for tkinter and then copied to the ttk implementation. The `_setit` does two things: changing the associated `variable` and calling the associated callback function. This is needed for the tkinter.OptionMenu since commands don't support changing a `variable`. However, for the ttk.OptionMenu an additional reference to the variable was added to the radiobutton following this issue: https://bugs.python.org/issue25684. This was needed to group radiobuttons, but now leads to the variable being changed twice: once by the radiobutton and once through `_setit`. When tracing the variable, this leads to the tracing callback being called twice for only one change of the OptionMenu. The solution is to not use `_setit` for the radiobutton command but instead just use `command=self._callback`. ---------- components: Tkinter messages: 401557 nosy: fhdrsdg priority: normal severity: normal status: open title: ttk.OptionMenu radiobuttons change variable value twice type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 04:40:03 2021 From: report at bugs.python.org (Ronald Oussoren) Date: Fri, 10 Sep 2021 08:40:03 +0000 Subject: [New-bugs-announce] [issue45161] _Py_DecodeUTF8_surrogateescape not exported from 3.10 framework build Message-ID: <1631263203.32.0.990187656459.issue45161@roundup.psfhosted.org> New submission from Ronald Oussoren : The symbol _Py_DecodeUTF8_surrogateescape is not exported from Python.framework on macOS in Python 3.10. The symbol was exported in earlier versions of 3.x. I'm not sure if this was intentional; so far I haven't been able to find when this was changed. This change breaks py2app, which uses _Py_DecodeUTF8_surrogateescape to convert the C argv array to an array of 'wchar_t*' for use with Python's C API. ---------- components: C API, macOS messages: 401564 nosy: ned.deily, ronaldoussoren priority: normal severity: normal status: open title: _Py_DecodeUTF8_surrogateescape not exported from 3.10 framework build versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 06:15:06 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 10 Sep 2021 10:15:06 +0000 Subject: [New-bugs-announce] [issue45162] Remove old deprecated unittest features Message-ID: <1631268906.05.0.258489564429.issue45162@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR removes the following unittest features:

* "fail*" and "assert*" aliases of TestCase methods.
* The broken-from-the-start TestCase method assertDictContainsSubset().
* The ignored TestLoader.loadTestsFromModule() parameter use_load_tests.
* The old alias _TextTestResult of TextTestResult.

Most features were deprecated in 3.2, "fail*" methods in 3.1, assertNotRegexpMatches in 3.5. They were kept mostly for compatibility with 2.7 (although some of them were new in Python 3 and not compatible with 2.7). Using deprecated assertEquals instead of assertEqual is a common error which we need to fix regularly, so removing deprecated features will not only make the current code clearer, but save us from future errors.
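As a concrete illustration of the kind of cleanup this enables (example code, not taken from the actual patch), the deprecated aliases map onto the supported method names like this:

```
import unittest

class AliasExample(unittest.TestCase):
    def test_values(self):
        # the plain names are the supported spellings; the aliases noted in
        # the comments have been deprecated since 3.1/3.2 and are the ones
        # proposed for removal
        self.assertEqual(1 + 1, 2)        # instead of assertEquals / failUnlessEqual
        self.assertTrue(bool([1]))        # instead of assert_ / failUnless
        self.assertRegex("abc", "a.c")    # instead of assertRegexpMatches
```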
---------- components: Library (Lib) messages: 401568 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Remove old deprecated unittest features type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 07:30:29 2021 From: report at bugs.python.org (David CARLIER) Date: Fri, 10 Sep 2021 11:30:29 +0000 Subject: [New-bugs-announce] [issue45163] Haiku build fix Message-ID: <1631273429.49.0.723757834418.issue45163@roundup.psfhosted.org> Change by David CARLIER : ---------- components: Library (Lib) nosy: devnexen priority: normal pull_requests: 26689 severity: normal status: open title: Haiku build fix type: compile error versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 07:45:38 2021 From: report at bugs.python.org (daniel capelle) Date: Fri, 10 Sep 2021 11:45:38 +0000 Subject: [New-bugs-announce] [issue45164] int.to_bytes() Message-ID: <1631274338.6.0.95342526397.issue45164@roundup.psfhosted.org> New submission from daniel capelle : For example, check:

>>> (6500).to_bytes(2, 'big')
b'\x19d'

The result is b'\x19d' but was expected to be b'\x1964'; since ord('d') is 100 = 0x64, there seems to be hex and char representation mixed up. ---------- messages: 401571 nosy: hypnoticum priority: normal severity: normal status: open title: int.to_bytes() type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 11:10:56 2021 From: report at bugs.python.org (Anthony Sottile) Date: Fri, 10 Sep 2021 15:10:56 +0000 Subject: [New-bugs-announce] [issue45165] alighment format for nullable values Message-ID: <1631286656.15.0.899331692277.issue45165@roundup.psfhosted.org> New submission from Anthony Sottile : currently this works correctly:

```
>>> '%8s %8s' % (None, 1)
'    None        1'
```

but conversion to f-string fails:

```
>>> f'{None:>8} {1:>8}'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported format string passed to NoneType.__format__
```

my proposal is to implement alignment `__format__` for `None` following the same rules as `str` for alignment specifiers ---------- components: Interpreter Core messages: 401582 nosy: Anthony Sottile priority: normal severity: normal status: open title: alighment format for nullable values type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 12:19:49 2021 From: report at bugs.python.org (Timothee Mazzucotelli) Date: Fri, 10 Sep 2021 16:19:49 +0000 Subject: [New-bugs-announce] [issue45166] get_type_hints + Final + future annotations = TypeError Message-ID: <1631290789.33.0.47825654034.issue45166@roundup.psfhosted.org> New submission from Timothee Mazzucotelli : This is my first issue on bugs.python.org, let me know if I can improve it in any way.
--- Originally reported and investigated here: https://github.com/mkdocstrings/mkdocstrings/issues/314

```
# final.py
from __future__ import annotations
from typing import Final, get_type_hints

name: Final[str] = "final"

class Class:
    value: Final = 3000

get_type_hints(Class)
```

Run `python final.py`, and you'll get this traceback:

```
Traceback (most recent call last):
  File "final.py", line 11, in <module>
    get_type_hints(Class)
  File "/.../lib/python3.8/typing.py", line 1232, in get_type_hints
    value = _eval_type(value, base_globals, localns)
  File "/.../lib/python3.8/typing.py", line 270, in _eval_type
    return t._evaluate(globalns, localns)
  File "/.../lib/python3.8/typing.py", line 517, in _evaluate
    self.__forward_value__ = _type_check(
  File "/.../lib/python3.8/typing.py", line 145, in _type_check
    raise TypeError(f"Plain {arg} is not valid as type argument")
TypeError: Plain typing.Final is not valid as type argument
```

Now comment the `get_type_hints` call, launch a Python interpreter, and enter these commands:

>>> import final
>>> from typing import get_type_hints
>>> get_type_hints(final)

And you'll get this traceback:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/.../lib/python3.9/typing.py", line 1449, in get_type_hints
    value = _eval_type(value, globalns, localns)
  File "/.../lib/python3.9/typing.py", line 283, in _eval_type
    return t._evaluate(globalns, localns, recursive_guard)
  File "/.../lib/python3.9/typing.py", line 538, in _evaluate
    type_ = _type_check(
  File "/.../lib/python3.9/typing.py", line 149, in _type_check
    raise TypeError(f"{arg} is not valid as type argument")
TypeError: typing.Final[str] is not valid as type argument
```

I was able to replicate in 3.8, 3.9 and 3.10. The annotations future is not available in 3.7 or lower. Didn't try on 3.11. ---------- messages: 401590 nosy: pawamoy priority: normal severity: normal status: open title: get_type_hints + Final + future annotations = TypeError type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 13:09:52 2021 From: report at bugs.python.org (Jacob Hayes) Date: Fri, 10 Sep 2021 17:09:52 +0000 Subject: [New-bugs-announce] [issue45167] deepcopy of GenericAlias with __deepcopy__ method is broken Message-ID: <1631293792.65.0.39078575611.issue45167@roundup.psfhosted.org> New submission from Jacob Hayes : When deepcopying a parametrized types.GenericAlias (eg: a dict subclass) that has a __deepcopy__ method, the copy module doesn't detect the GenericAlias as a type and instead tries to call cls.__deepcopy__, passing `memo` in place of self. This doesn't seem to happen with `typing.Generic` however. Example:

```
from copy import deepcopy

class X(dict):
    def __deepcopy__(self, memo):
        return self

print(deepcopy(X()))
print(deepcopy(X))
print(type(X[str, int]))
print(deepcopy(X[str, int]()))
print(deepcopy(X[str, int]))
```

shows

```
{}
<class '__main__.X'>
<class 'types.GenericAlias'>
{}
Traceback (most recent call last):
  File "/tmp/demo.py", line 14, in <module>
    print(deepcopy(X[str, int]))
  File "/Users/jacobhayes/.pyenv/versions/3.9.6/lib/python3.9/copy.py", line 153, in deepcopy
    y = copier(memo)
TypeError: __deepcopy__() missing 1 required positional argument: 'memo'
```

I don't know if it's better to update `copy.deepcopy` here or perhaps narrow the `__getattr__` for `types.GenericAlias` (as `typing._BaseGenericAlias` seems to).
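A possible user-side stopgap (only a sketch; it relies on private copy-module internals and is not a proposed fix) is to register GenericAlias objects as atomic for deepcopy, the same way plain classes are effectively treated:

```
import copy
import types
from copy import deepcopy

class X(dict):
    def __deepcopy__(self, memo):
        return self

# assumption: returning the alias unchanged is acceptable here, since a
# parametrized generic is effectively immutable
copy._deepcopy_dispatch[types.GenericAlias] = copy._deepcopy_atomic

print(deepcopy(X[str, int]))  # __main__.X[str, int] instead of a TypeError
```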
---------- components: Library (Lib) messages: 401601 nosy: JacobHayes priority: normal severity: normal status: open title: deepcopy of GenericAlias with __deepcopy__ method is broken type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 19:31:01 2021 From: report at bugs.python.org (Irit Katriel) Date: Fri, 10 Sep 2021 23:31:01 +0000 Subject: [New-bugs-announce] [issue45168] dis output for co_consts is confusing Message-ID: <1631316661.34.0.984607812995.issue45168@roundup.psfhosted.org> New submission from Irit Katriel : When dis doesn't have co_consts, it just prints the index of the const in () instead. This is both unnecessary and confusing because (1) we already have the index and (2) there is no indication that this is the index and not the value. Can we change the output so that it doesn't print the value if it's missing? ---------- messages: 401620 nosy: gvanrossum, iritkatriel priority: normal severity: normal status: open title: dis output for co_consts is confusing _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 10 22:55:07 2021 From: report at bugs.python.org (2106 Hyunwoo Oh) Date: Sat, 11 Sep 2021 02:55:07 +0000 Subject: [New-bugs-announce] [issue45169] Shallow copy occurs when list multiplication is used to create nested lists; can confuse users Message-ID: <1631328907.83.0.354100007036.issue45169@roundup.psfhosted.org> New submission from 2106 Hyunwoo Oh : If you do the following:

lists = [[]] * 100
lists[1].append('text')
print(lists[2])

you can see that lists[2] contains 'text' even though 'text' was appended to lists[1]. A little more investigation with the id() function can show that the lists are shallowly copied when list multiplication occurs. I think this can confuse users when they try to use list multiplication to create nested lists, as they might have expected a deep copy to occur. ---------- components: Demos and Tools messages: 401625 nosy: ohwphil priority: normal severity: normal status: open title: Shallow copy occurs when list multiplication is used to create nested lists; can confuse users type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 11 03:45:41 2021 From: report at bugs.python.org (daji ma) Date: Sat, 11 Sep 2021 07:45:41 +0000 Subject: [New-bugs-announce] [issue45170] tarfile missing cross-directory checking Message-ID: <1631346341.79.0.894184434435.issue45170@roundup.psfhosted.org> New submission from daji ma : tarfile is missing cross-directory checking for member paths like ../ or ..\; this can potentially cause cross-directory decompression.
the exp:

# -*- coding: utf-8 -*-
import tarfile


def extract_tar(file_path, dest_path):
    try:
        with tarfile.open(file_path, 'r') as src_file:
            for info in src_file.getmembers():
                src_file.extract(info.name, dest_path)
        return True
    except (IOError, OSError, tarfile.TarError):
        return False


def make_tar():
    tar_file = tarfile.open('x.tar.gz', 'w:gz')
    tar_file.add('bashrc', '/../../../../root/.bashrc')
    tar_file.list(verbose=True)
    tar_file.close()


if __name__ == '__main__':
    make_tar()
    extract_tar('x.tar.gz', 'xx')

---------- components: Library (Lib) messages: 401631 nosy: xiongpanju priority: normal severity: normal status: open title: tarfile missing cross-directory checking type: security versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 11 09:17:18 2021 From: report at bugs.python.org (Jouke Witteveen) Date: Sat, 11 Sep 2021 13:17:18 +0000 Subject: [New-bugs-announce] [issue45171] stacklevel handling in logging module is inconsistent Message-ID: <1631366238.78.0.687157218809.issue45171@roundup.psfhosted.org> New submission from Jouke Witteveen : Handling of `stacklevel` in the logging module makes a few unwarranted assumptions, for instance on the depth of the stack due to internal logging module calls. This can be seen for instance when using the shortcut call `logging.warning` to the root logger, instead of using `root_logger.warning`. Consider the following `stacklevel.py` file:

```
import logging
import warnings

root_logger = logging.getLogger()
root_logger.handle = print

def test(**kwargs):
    warnings.warn("warning-module", **kwargs)
    logging.warning("on-module", **kwargs)
    root_logger.warning("on-logger", **kwargs)

for stacklevel in range(5):
    print(f"{stacklevel=}")
    test(stacklevel=stacklevel)
```

The output of running `PYTHONWARNINGS=always python stacklevel.py` is:

```
stacklevel=0
stacklevel.py:8: UserWarning: warning-module
  warnings.warn("warning-module", **kwargs)
stacklevel=1
stacklevel.py:8: UserWarning: warning-module
  warnings.warn("warning-module", **kwargs)
stacklevel=2
stacklevel.py:14: UserWarning: warning-module
  test(stacklevel=stacklevel)
stacklevel=3
sys:1: UserWarning: warning-module
stacklevel=4
sys:1: UserWarning: warning-module
```

Looking at the line numbers, we get:

stacklevel 0: lines 8, 9, 10.
stacklevel 1: lines 8, 9, 10.
stacklevel 2: lines 14, 9, 14.
stacklevel 3: lines sys:1, 14, 10.
stacklevel 4: lines sys:1, 9, 10.

As can be seen, the stacklevel for the on-module (shortcut) calls lags one level of unwinding behind.
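Based on the behaviour described above, a workaround today is to pass one extra level when going through the module-level shortcut (illustrative snippet, not part of the report):

```
import logging

logging.basicConfig(format="%(pathname)s:%(lineno)d: %(message)s")

def helper():
    # both calls are meant to be attributed to helper()'s caller
    logging.getLogger().warning("via the logger object", stacklevel=2)
    logging.warning("via the module-level shortcut", stacklevel=3)  # note the +1

helper()
```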
---------- components: Library (Lib) messages: 401638 nosy: joukewitteveen priority: normal severity: normal status: open title: stacklevel handling in logging module is inconsistent versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 11 09:43:46 2021 From: report at bugs.python.org (David CARLIER) Date: Sat, 11 Sep 2021 13:43:46 +0000 Subject: [New-bugs-announce] [issue45172] netbsd CAN protocol flags addition Message-ID: <1631367826.9.0.525504625314.issue45172@roundup.psfhosted.org> Change by David CARLIER : ---------- components: Library (Lib) nosy: devnexen priority: normal pull_requests: 26704 severity: normal status: open title: netbsd CAN protocol flags addition type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 11 14:10:39 2021 From: report at bugs.python.org (Hugo van Kemenade) Date: Sat, 11 Sep 2021 18:10:39 +0000 Subject: [New-bugs-announce] [issue45173] Remove configparser deprecations Message-ID: <1631383839.78.0.984503527213.issue45173@roundup.psfhosted.org> New submission from Hugo van Kemenade : In the configparser module, these have been deprecated since Python 3.2: * the SafeConfigParser class, * the filename property of the ParsingError class, * the readfp method of the ConfigParser class, They can be removed in Python 3.11. ---------- components: Library (Lib) messages: 401644 nosy: hugovk priority: normal severity: normal status: open title: Remove configparser deprecations versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 11 16:34:59 2021 From: report at bugs.python.org (David CARLIER) Date: Sat, 11 Sep 2021 20:34:59 +0000 Subject: [New-bugs-announce] [issue45174] DragonflyBSD fix nis module build Message-ID: <1631392499.44.0.816615961094.issue45174@roundup.psfhosted.org> Change by David CARLIER : ---------- components: Library (Lib) nosy: devnexen priority: normal pull_requests: 26711 severity: normal status: open title: DragonflyBSD fix nis module build type: compile error versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 11 17:10:06 2021 From: report at bugs.python.org (Jason Yundt) Date: Sat, 11 Sep 2021 21:10:06 +0000 Subject: [New-bugs-announce] [issue45175] No warning with '-W always' and cached import Message-ID: <1631394606.92.0.819675114557.issue45175@roundup.psfhosted.org> New submission from Jason Yundt : I have a script which always produces a warning when you run it. If I import always_warns from another script, that script will only produce a warning once. Steps to reproduce: $ python -W always always_warns.py /tmp/Bug reproduction/always_warns.py:1: DeprecationWarning: invalid escape sequence \ "\ " $ python -W always always_warns.py /tmp/Bug reproduction/always_warns.py:1: DeprecationWarning: invalid escape sequence \ "\ " $ python -W always imports_always_warns.py /tmp/Bug reproduction/always_warns.py:1: DeprecationWarning: invalid escape sequence \ "\ " $ python -W always imports_always_warns.py $ There should be a warning for that last one, but there isn?t. If I delete __pycache__, imports_always_warns.py makes the warning appear again. 
---------- files: Bug reproduction.zip messages: 401648 nosy: jayman priority: normal severity: normal status: open title: No warning with '-W always' and cached import type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50278/Bug reproduction.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 12 05:50:10 2021 From: report at bugs.python.org (Ming Hua) Date: Sun, 12 Sep 2021 09:50:10 +0000 Subject: [New-bugs-announce] [issue45176] Many regtest failures on Windows with non-ASCII account name Message-ID: <1631440210.21.0.262674266068.issue45176@roundup.psfhosted.org> New submission from Ming Hua : Background: Since at least Windows 8, it is possible to invoke the input method engine (IME) when installing Windows and creating accounts. So at least among simplified Chinese users, it's not uncommon to have a Chinese account name. Issue: After successful installation using the 64-bit .exe installer for Windows, just to be paranoid (and to get familiar with Python's test framework), I decided to run the bundled regression tests. To my surprise I got many failures. The following is the summary of "python.exe -m test" with 3.8 some months ago (likely 3.8.6): 371 tests OK. 11 tests failed: test_cmd_line_script test_compileall test_distutils test_doctest test_locale test_mimetypes test_py_compile test_tabnanny test_urllib test_venv test_zipimport_support 43 tests skipped: test_asdl_parser test_check_c_globals test_clinic test_curses test_dbm_gnu test_dbm_ndbm test_devpoll test_epoll test_fcntl test_fork1 test_gdb test_grp test_ioctl test_kqueue test_multiprocessing_fork test_multiprocessing_forkserver test_nis test_openpty test_ossaudiodev test_pipes test_poll test_posix test_pty test_pwd test_readline test_resource test_smtpnet test_socketserver test_spwd test_syslog test_threadsignals test_timeout test_tix test_tk test_ttk_guionly test_urllib2net test_urllibnet test_wait3 test_wait4 test_winsound test_xmlrpc_net test_xxtestfuzz test_zipfile64 Total duration: 59 min 49 sec Tests result: FAILURE The failures all look similar though, it seems Python on Windows assumes the home directory of the user, "C:\Users\\", is either in ASCII or UTF-8 encoding, while it is actually in Windows native codepage, in my case cp936 for simplified Chinese (zh-CN). To take a couple of examples (these are from recent testing with 3.10.0 rc2): > python.exe -m test -W test_cmd_line_script 0:00:03 Run tests sequentially 0:00:03 [1/1] test_cmd_line_script [...] test_consistent_sys_path_for_direct_execution (test.test_cmd_line_script.CmdLineTest) ... ERROR [...] test_directory_error (test.test_cmd_line_script.CmdLineTest) ... FAIL [...] ERROR: test_consistent_sys_path_for_direct_execution (test.test_cmd_line_script.CmdLineTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python\python310\lib\test\test_cmd_line_script.py", line 677, in test_consistent_sys_path_for_direct_execution out_by_name = kill_python(p).decode().splitlines() UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbb in position 9: invalid start byte [...] 
FAIL: test_directory_error (test.test_cmd_line_script.CmdLineTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python\python310\lib\test\test_cmd_line_script.py", line 268, in test_directory_error self._check_import_error(script_dir, msg) File "C:\Programs\Python\python310\lib\test\test_cmd_line_script.py", line 151, in _check_import_error self.assertIn(expected_msg.encode('utf-8'), err) AssertionError: b"can't find '__main__' module in 'C:\\\\Users\\\\\xe5<5 bytes redacted>\\\\AppData\\\\Local\\\\Temp\\\\tmpcwkfn9ct'" not found in b"C:\\Programs\\Python\\python310\\python.exe: can't find '__main__' module in 'C:\\\\Users\\\\\xbb<3 bytes redacted>\\\\AppData\\\\Local\\\\Temp\\\\tmpcwkfn9ct'\r\n" [...] ---------------------------------------------------------------------- Ran 44 tests in 29.769s FAILED (failures=2, errors=5) test test_cmd_line_script failed test_cmd_line_script failed (5 errors, 2 failures) in 30.4 sec == Tests result: FAILURE == In the above test_directory_error AssertionError message I redacted part of the path as my account name is my real name. Hope the issue is clear enough despite the redaction, since the "\xe5<5 bytes redacted>" part is 6 bytes and apparently in UTF-8 (for two Chinese characters) and the "\xbb<3 bytes redacted>" part is 4 bytes and apparently in cp936. Postscript: As I've said above, I discovered this issue some time ago, but only have time now to report it. I believe I've see these failures in 3.8.2/6, 3.9.7, and 3.10.0 rc2. It shouldn't be hard to reproduce for people with ways to create account with non-ASCII name on Windows. If reproducing turns out to be difficult though, I'm happy to provide more information and/or run more tests. ---------- components: Tests messages: 401659 nosy: minghua priority: normal severity: normal status: open title: Many regtest failures on Windows with non-ASCII account name versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 12 19:12:23 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Sun, 12 Sep 2021 23:12:23 +0000 Subject: [New-bugs-announce] [issue45177] Use shared test loader when possible when running test suite Message-ID: <1631488343.34.0.878407782528.issue45177@roundup.psfhosted.org> New submission from Erlend E. Aasland : Use unittest.defaultTestLoader instead of unittest.TestLoader() when possible, to avoid creating unnecessary many instances. ---------- components: Tests messages: 401674 nosy: erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: Use shared test loader when possible when running test suite versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 12 19:45:46 2021 From: report at bugs.python.org (WGH) Date: Sun, 12 Sep 2021 23:45:46 +0000 Subject: [New-bugs-announce] [issue45178] Support for linking unnamed temporary files into filesystem on Linux Message-ID: <1631490346.44.0.449536820401.issue45178@roundup.psfhosted.org> New submission from WGH : In Linux, it's possible to create an unnamed temporary file in a specified directory by using open with O_TMPFILE flag (as if it was created with random name and immediately unlinked, but atomically). Unless O_EXCL is specified, the file can be then linked into filesystem using linkat syscall. 
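For readers unfamiliar with the mechanism, this is roughly what the flow looks like at the os level on Linux (illustrative only; it assumes a filesystem with O_TMPFILE support, and the final os.link() step is exactly the part the report says is currently problematic, see issue 37612):

```
import os

# create an unnamed file inside /tmp; it has no directory entry yet
fd = os.open("/tmp", os.O_TMPFILE | os.O_WRONLY, 0o600)
try:
    os.write(fd, b"written before the file ever gets a name\n")
    # give the file a name atomically, equivalent to
    # linkat(AT_FDCWD, "/proc/self/fd/<fd>", AT_FDCWD, target, AT_SYMLINK_FOLLOW)
    os.link(f"/proc/self/fd/{fd}", "/tmp/now-visible.txt", follow_symlinks=True)
finally:
    os.close(fd)
```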
It would be neat if it was possible in Python. There are a couple of things missing:

1) tempfile.TemporaryFile creates the file with the O_EXCL flag, which prevents linking it into the filesystem.
2) linkat must be called with the AT_SYMLINK_FOLLOW flag (otherwise EXDEV is returned), which is broken right now (#37612).

---------- components: Library (Lib) messages: 401676 nosy: WGH priority: normal severity: normal status: open title: Support for linking unnamed temporary files into filesystem on Linux type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 12 22:02:42 2021 From: report at bugs.python.org (meng_xiaohui) Date: Mon, 13 Sep 2021 02:02:42 +0000 Subject: [New-bugs-announce] [issue45179] List.sort ERROR Message-ID: <1631498562.98.0.796323679023.issue45179@roundup.psfhosted.org> New submission from meng_xiaohui <1294886278 at qq.com>: There is a bug in this method: L.sort(key=None, reverse=False) -> None, where L is an instance of list and the argument key is a function. If L is referenced in the body of the key function, L is always an empty list there in the test case, which is wrong.

=================
Run this:

F = ['2', '3', '1']
G = ['7', '9', '8']

def key(i):
    print(F)
    print(G)
    res = int(i) + len(F) + len(G)
    return res

G.sort(key=key)
F.sort(key=key)

=================
Actual output:

['2', '3', '1']
[]
['2', '3', '1']
[]
['2', '3', '1']
[]
[]
['7', '8', '9']
[]
['7', '8', '9']
[]
['7', '8', '9']

---------- components: Interpreter Core messages: 401679 nosy: meng_xiaohui priority: normal severity: normal status: open title: List.sort ERROR type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 00:02:54 2021 From: report at bugs.python.org (Nabeel Alzahrani) Date: Mon, 13 Sep 2021 04:02:54 +0000 Subject: [New-bugs-announce] [issue45180] possible wrong result for difflib.SequenceMatcher.ratio() Message-ID: <1631505774.89.0.445385938596.issue45180@roundup.psfhosted.org> New submission from Nabeel Alzahrani : The difflib.SequenceMatcher.ratio() gives 0.3 instead of 1.0 or at least 0.9 for the following two strings a and b:

a="""
#include <iostream>
#include <string>
using namespace std;

int main() {
    string userWord;
    unsigned int i;
    cin >> userWord;
    for(i = 0; i < userWord.size(); i++) {
        if(userWord.at(i) == 'i') {
            userWord.at(i) = '1';
        }
        if(userWord.at(i) == 'a') {
            userWord.at(i) = '@';
        }
        if(userWord.at(i) == 'm') {
            userWord.at(i) = 'M';
        }
        if(userWord.at(i) == 'B') {
            userWord.at(i) = '8';
        }
        if(userWord.at(i) == 's') {
            userWord.at(i) = '$';
        }
        userWord.push_back('!');
    }
    cout << userWord << endl;
    return 0;
}
"""

b="""
#include <iostream>
#include <string>
using namespace std;

int main() {
    string userWord;
    unsigned int i;
    cin >> userWord;
    userWord.push_back('!');
    for(i = 0; i < userWord.size(); i++) {
        if(userWord.at(i) == 'i') {
            userWord.at(i) = '1';
        }
        if(userWord.at(i) == 'a') {
            userWord.at(i) = '@';
        }
        if(userWord.at(i) == 'm') {
            userWord.at(i) = 'M';
        }
        if(userWord.at(i) == 'B') {
            userWord.at(i) = '8';
        }
        if(userWord.at(i) == 's') {
            userWord.at(i) = '$';
        }
    }
    cout << userWord << endl;
    return 0;
}
"""
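One thing worth checking here (an illustration, not a confirmed diagnosis): for inputs of this length, SequenceMatcher's documented autojunk heuristic treats very common characters such as spaces as junk, which can drag the ratio down. Comparing with the heuristic disabled shows whether that is what is happening:

```
import difflib

# a and b as defined above
print(difflib.SequenceMatcher(None, a, b).ratio())                  # default, autojunk enabled
print(difflib.SequenceMatcher(None, a, b, autojunk=False).ratio())  # heuristic disabled
```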
From report at bugs.python.org Mon Sep 13 03:10:10 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 13 Sep 2021 07:10:10 +0000 Subject: [New-bugs-announce] [issue45181] Rewrite loading sqlite3 tests Message-ID: <1631517010.88.0.220838315131.issue45181@roundup.psfhosted.org> New submission from Serhiy Storchaka : The proposed PR rewrites the loading of sqlite3 tests. Instead of explicitly enumerating test modules and test classes in every module, and manually creating test suites, it uses unittest discover ability. Every new test files and test classes will be automatically added to tests. As a side effect, unittest filtering by pattern works now. $ ./python -m test.test_sqlite -vk long test_sqlite: testing with version '2.6.0', sqlite_version '3.31.1' test_func_return_long_long (sqlite3.test.test_userfunctions.FunctionTests) ... ok test_param_long_long (sqlite3.test.test_userfunctions.FunctionTests) ... ok ---------------------------------------------------------------------- Ran 2 tests in 0.001s OK ---------- components: Tests messages: 401687 nosy: erlendaasland, ghaering, serhiy.storchaka priority: normal severity: normal status: open title: Rewrite loading sqlite3 tests type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 03:46:28 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 13 Sep 2021 07:46:28 +0000 Subject: [New-bugs-announce] [issue45182] Incorrect use of requires_zlib in test_bdist_rpm Message-ID: <1631519188.27.0.751615077153.issue45182@roundup.psfhosted.org> New submission from Serhiy Storchaka : requires_zlib is a decorator factory which returns a decorator, not a decorator. It should always be followed by parenthesis. In Lib/distutils/tests/test_bdist_rpm.py it is used improperly, so the corresponding tests were never ran. ---------- components: Tests messages: 401690 nosy: dstufft, eric.araujo, serhiy.storchaka priority: normal severity: normal status: open title: Incorrect use of requires_zlib in test_bdist_rpm type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 07:29:05 2021 From: report at bugs.python.org (Ronald Oussoren) Date: Mon, 13 Sep 2021 11:29:05 +0000 Subject: [New-bugs-announce] [issue45183] Unexpected exception with zip importer Message-ID: <1631532545.02.0.971781687861.issue45183@roundup.psfhosted.org> New submission from Ronald Oussoren : The attached file demonstrates the problem: If importlib.invalidate_caches() is called while the zipfile used by the zip importer is not available the import system breaks entirely. I found this in a testsuite that accedently did this (it should have updated sys.path). I get the following exception: $ python3.10 t.py Traceback (most recent call last): File "/Users/ronald/Projects/modulegraph2/t.py", line 27, in import uu File "", line 1027, in _find_and_load File "", line 1002, in _find_and_load_unlocked File "", line 945, in _find_spec File "", line 1430, in find_spec File "", line 1402, in _get_spec File "", line 168, in find_spec File "", line 375, in _get_module_info TypeError: argument of type 'NoneType' is not iterable This exception is not very friendly.... 
This particular exception is caused by setting self._files to None in the importer's invalidate_caches method, while not checking for None in _get_module_info. I'm not sure what the best fix would be; setting self._files to an empty list would likely be the easiest fix. Note that the script runs without errors in Python 3.9. ---------- files: repro.py keywords: 3.10regression messages: 401698 nosy: ronaldoussoren priority: normal severity: normal status: open title: Unexpected exception with zip importer versions: Python 3.10, Python 3.11 Added file: https://bugs.python.org/file50279/repro.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 10:40:02 2021 From: report at bugs.python.org (Andreas H.) Date: Mon, 13 Sep 2021 14:40:02 +0000 Subject: [New-bugs-announce] [issue45184] Add `pop` function to remove context manager from (Async)ExitStack Message-ID: <1631544002.04.0.855227455654.issue45184@roundup.psfhosted.org> New submission from Andreas H. : Currently it is not possible to remove context managers from an ExitStack (or AsyncExitStack). Workarounds are difficult and generally access implementation details of (Async)ExitStack. See e.g. https://stackoverflow.com/a/37607405. It could be done as follows:

class AsyncExitStackWithPop(contextlib.AsyncExitStack):
    """Same as AsyncExitStack but with pop, i.e. removal functionality"""
    async def pop(self, cm):
        callbacks = self._exit_callbacks
        self._exit_callbacks = collections.deque()
        found = None
        while callbacks:
            cb = callbacks.popleft()
            if cb[1].__self__ == cm:
                found = cb
            else:
                self._exit_callbacks.append(cb)
        if not found:
            raise KeyError("context manager not found")
        if found[0]:
            return found[1](None, None, None)
        else:
            return await found[1](None, None, None)

The alternative is to re-implement ExitStack with pop functionality, but that is also very difficult to get right (especially with exceptions), which is probably the reason ExitStack is in the library at all. So I propose augmenting (Async)ExitStack with a `pop` method like the one above, or something similar. Use-Cases: An example is a component that manages several connections to network services. During run-time the set of network services might need to change (i.e. some are disconnected and some are connected according to business logic), or re-connection events might need to be handled (i.e. a graceful response to network errors). It is not too hard to imagine more use cases. Essentially every case where dynamic resource management is needed and where single resources are manageable with Python context managers. ---------- components: Library (Lib) messages: 401703 nosy: andreash, ncoghlan, yselivanov priority: normal severity: normal status: open title: Add `pop` function to remove context manager from (Async)ExitStack type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 16:14:35 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 13 Sep 2021 20:14:35 +0000 Subject: [New-bugs-announce] [issue45185] test.test_ssl.TestEnumerations is not run Message-ID: <1631564075.33.0.576511422146.issue45185@roundup.psfhosted.org> New submission from Serhiy Storchaka : test.test_ssl.TestEnumerations is not run when running the ssl tests.
If add it to the list of test classes it fails: ====================================================================== ERROR: test_options (test.test_ssl.TestEnumerations) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/test/test_ssl.py", line 4981, in test_options enum._test_simple_enum(CheckedOptions, ssl.Options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/serhiy/py/cpython/Lib/enum.py", line 1803, in _test_simple_enum raise TypeError('enum mismatch:\n %s' % '\n '.join(failed)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: enum mismatch: '_use_args_': checked -> False simple -> True '__format__': checked -> simple -> '__getnewargs__': checked -> None simple -> ====================================================================== ERROR: test_sslerrornumber (test.test_ssl.TestEnumerations) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/test/test_ssl.py", line 4998, in test_sslerrornumber enum._test_simple_enum(Checked_SSLMethod, ssl._SSLMethod) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/serhiy/py/cpython/Lib/enum.py", line 1803, in _test_simple_enum raise TypeError('enum mismatch:\n %s' % '\n '.join(failed)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: enum mismatch: extra key: 'PROTOCOL_SSLv23' ====================================================================== ERROR: test_sslmethod (test.test_ssl.TestEnumerations) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/test/test_ssl.py", line 4973, in test_sslmethod enum._test_simple_enum(Checked_SSLMethod, ssl._SSLMethod) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/serhiy/py/cpython/Lib/enum.py", line 1803, in _test_simple_enum raise TypeError('enum mismatch:\n %s' % '\n '.join(failed)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: enum mismatch: extra key: 'PROTOCOL_SSLv23' ====================================================================== ERROR: test_verifyflags (test.test_ssl.TestEnumerations) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/serhiy/py/cpython/Lib/test/test_ssl.py", line 5002, in test_verifyflags enum.FlagEnum, 'VerifyFlags', 'ssl', ^^^^^^^^^^^^^ AttributeError: module 'enum' has no attribute 'FlagEnum' ---------------------------------------------------------------------- ---------- components: Tests messages: 401723 nosy: christian.heimes, serhiy.storchaka priority: normal severity: normal status: open title: test.test_ssl.TestEnumerations is not run type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 16:34:10 2021 From: report at bugs.python.org (Eric Snow) Date: Mon, 13 Sep 2021 20:34:10 +0000 Subject: [New-bugs-announce] [issue45186] Marshal output isn't completely deterministic. Message-ID: <1631565250.25.0.751940112521.issue45186@roundup.psfhosted.org> New submission from Eric Snow : (See: https://github.com/python/cpython/pull/28107#issuecomment-915627148) The output from marshal (e.g. 
PyMarshal_WriteObjectToString(), marshal.dump()) may be different depending on if it is a debug or non-debug build. I found this while working on freezing stdlib modules. ---------- components: Interpreter Core messages: 401724 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: Marshal output isn't completely deterministic. type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 16:34:45 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Mon, 13 Sep 2021 20:34:45 +0000 Subject: [New-bugs-announce] [issue45187] Some tests in test_socket are not run Message-ID: <1631565285.06.0.0779555231158.issue45187@roundup.psfhosted.org> New submission from Serhiy Storchaka : Test classes ISOTPTest, J1939Test, BasicUDPLITETest, UDPLITETimeoutTest in test_socket are not included in the list of test classes and are not run by regrtest. ---------- components: Tests messages: 401725 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Some tests in test_socket are not run type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 16:44:06 2021 From: report at bugs.python.org (Eric Snow) Date: Mon, 13 Sep 2021 20:44:06 +0000 Subject: [New-bugs-announce] [issue45188] De-couple the Windows builds from freezing modules. Message-ID: <1631565846.06.0.121053368867.issue45188@roundup.psfhosted.org> New submission from Eric Snow : Currently for Windows builds, generating the frozen modules depends on first building python.exe. One consequence of this is that we must keep all frozen module .h files in the repo (which we'd like to avoid for various reasons). We should be able to freeze modules before building python.exe, like we already do via our Makefile. From what I understand, this will require that a subset of the runtime be separately buildable so we can use it in _freeze_module.c and use that before actually building python.exe. @Steve, please correct any details I got wrong here. :) ---------- components: Build messages: 401731 nosy: eric.snow, steve.dower priority: normal severity: normal stage: needs patch status: open title: De-couple the Windows builds from freezing modules. type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 19:22:53 2021 From: report at bugs.python.org (Eric Snow) Date: Mon, 13 Sep 2021 23:22:53 +0000 Subject: [New-bugs-announce] [issue45189] Drop the "list_frozen" command from _test_embed. Message-ID: <1631575373.13.0.588141988765.issue45189@roundup.psfhosted.org> New submission from Eric Snow : In Programs/_test_embed.c the "list_frozen" command prints out the name of each frozen module (defined in Python/frozen.c). The only place this is used is in Tools/scripts/generate_stdlib_module_names.py (in list_frozen()). That script can be updated to call imp._get_frozen_module_names(), which was added in PR GH-28319 for bpo-45019. Then _test_embed can go back to being used strictly for tests. (FWIW, the script could also read from Python/frozen_modules/MANIFEST after running "make regen-frozen". That file was added in GH-27980). 
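A rough sketch of what the updated helper in Tools/scripts/generate_stdlib_module_names.py could look like, assuming imp._get_frozen_module_names() (the helper described above, added in GH-28319) returns an iterable of frozen module names; the exact name and return type are taken from the description above rather than verified here:

import imp

def list_frozen_module_names():
    # Hypothetical replacement for running "_testembed list_frozen"
    # and parsing its output.
    return sorted(imp._get_frozen_module_names())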
---------- components: Demos and Tools messages: 401741 nosy: eric.snow, vstinner priority: normal severity: normal stage: needs patch status: open title: Drop the "list_frozen" command from _test_embed. versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 13 21:17:53 2021 From: report at bugs.python.org (Benjamin Peterson) Date: Tue, 14 Sep 2021 01:17:53 +0000 Subject: [New-bugs-announce] [issue45190] unicode 14.0 upgrade Message-ID: <1631582273.28.0.521036633786.issue45190@roundup.psfhosted.org> New submission from Benjamin Peterson : Unicode 14.0 is expected on September 14. We'll need to do the usual table regenerations. ---------- assignee: benjamin.peterson components: Unicode messages: 401747 nosy: benjamin.peterson, ezio.melotti, vstinner priority: normal severity: normal status: open title: unicode 14.0 upgrade _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 00:56:40 2021 From: report at bugs.python.org (nahco314) Date: Tue, 14 Sep 2021 04:56:40 +0000 Subject: [New-bugs-announce] [issue45191] Error.__traceback__.tb_lineno is wrong Message-ID: <1631595400.46.0.316069984941.issue45191@roundup.psfhosted.org> New submission from nahco314 : I think that the emphasis of the errors caused by some of these equivalent expressions is wrong or inconsistent. expr1: 1 .bit_length("aaa") expr2: 1 \ .bit_length("aaa") expr3: 1 \ .bit_length(*["aaa"]) Below is the __traceback__.tb_lineno of the error given when running each version of the expression. (kubuntu20.0.4, CPython (pyenv)) The line number and location shown in the error message also correspond to this. in 3.6.14, 3,7,11 expr1: 0 expr2: 1 expr3: 1 in 3.8.11, 3.9.6 expr1: 0 expr2: 0 expr3: 0 in 3.10.0rc1, 3.11-dev(3.11.0a0) expr1: 0 expr2: 1 expr3: 0 I think the results in 3.6.14 and 3.7.11 are correct. ---------- components: Interpreter Core messages: 401748 nosy: nahco314 priority: normal severity: normal status: open title: Error.__traceback__.tb_lineno is wrong type: behavior versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 01:44:04 2021 From: report at bugs.python.org (Kyungmin Lee) Date: Tue, 14 Sep 2021 05:44:04 +0000 Subject: [New-bugs-announce] [issue45192] The tempfile._infer_return_type function cannot infer the type of os.PathLike objects. Message-ID: <1631598244.07.0.518019074651.issue45192@roundup.psfhosted.org> New submission from Kyungmin Lee : The tempfile module has been updated to accept an object implementing os.PathLike protocol for path-related parameters as of Python 3.6 (e.g. dir parameter). An os.PathLike object represents a filesystem path as a str or bytes object (i.e. def __fspath__(self) -> Union[str, bytes]:). However, if an object implementing os.PathLike[bytes] is passed as a dir argument, a TypeError is raised. This bug occurs because the tempfile._infer_return_type function considers all objects other than bytes as str type. ---------- components: Library (Lib) messages: 401754 nosy: rekyungmin priority: normal severity: normal status: open title: The tempfile._infer_return_type function cannot infer the type of os.PathLike objects. 
type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 02:29:20 2021 From: report at bugs.python.org (Tal Einat) Date: Tue, 14 Sep 2021 06:29:20 +0000 Subject: [New-bugs-announce] [issue45193] IDLE Show completions pop-up not working on Ubuntu Linux Message-ID: <1631600960.46.0.341409360398.issue45193@roundup.psfhosted.org> New submission from Tal Einat : The completion window never appears with Python 3.9.7 or with the current main branch, on Ubuntu 20.04 (reproduced on two separate machines), tested with Tcl/Tk versions 8.6.10 and 8.6.11. This is directly caused by the fix for issue #40128. Commenting out that line resolves this issue entirely. (See also the PR for that fix, PR GH-26672.) ---------- assignee: terry.reedy components: IDLE messages: 401758 nosy: taleinat, terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE Show completions pop-up not working on Ubuntu Linux type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 06:35:13 2021 From: report at bugs.python.org (QuadCorei8085) Date: Tue, 14 Sep 2021 10:35:13 +0000 Subject: [New-bugs-announce] [issue45194] asyncio scheduler jitter Message-ID: <1631615713.39.0.358857928244.issue45194@roundup.psfhosted.org> New submission from QuadCorei8085 : I'm trying to do something periodically, with millisecond-level precision. However, for some reason I could not achieve precise timing in a task. The example below shows that a simple sleep randomly wakes up with a jitter between 0.1 and 20.0 ms. I haven't histogrammed the distribution, but it seems to be mostly 19-20 ms. Any way to achieve better timings with asyncio?

async def test_task():
    while True:
        ts_now = time.time()
        await asyncio.sleep(1.000)
        print("{}".format((time.time() - ts_now) * 1000.0))

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.create_task(test_task())
    loop.run_forever()

---------- components: asyncio messages: 401769 nosy: QuadCorei8085, asvetlov, yselivanov priority: normal severity: normal status: open title: asyncio scheduler jitter versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 07:14:01 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 14 Sep 2021 11:14:01 +0000 Subject: [New-bugs-announce] [issue45195] test_readline: test_nonascii() failed on aarch64 RHEL8 Refleaks 3.x Message-ID: <1631618041.01.0.810175030001.issue45195@roundup.psfhosted.org> New submission from STINNER Victor : aarch64 RHEL8 Refleaks 3.x: https://buildbot.python.org/all/#/builders/551/builds/131 This issue looks like bpo-44949, which has been fixed by commit 6fb62b42f4db56ed5efe0ca4c1059049276c1083. "\r\n" is missing at the end of the expected output. === logs === readline version: 0x700 readline runtime version: 0x700 readline library version: '7.0' use libedit emulation?
False FAIL: test_nonascii (test.test_readline.TestReadline) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-RHEL8-aarch64.refleak/build/Lib/test/test_readline.py", line 258, in test_nonascii self.assertIn(b"history " + expected + b"\r\n", output) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: b"history '[\\xefnserted]|t\\xebxt[after]'\r\n" not found in bytearray(b"^A^B^B^B^B^B^B^B\t\tx\t\r\n[\xc3\xafnserted]|t\xc3\xab[after]\x08\x08\x08\x08\x08\x08\x08text \'t\\xeb\'\r\nline \'[\\xefnserted]|t\\xeb[after]\'\r\nindexes 11 13\r\n\x07text \'t\\xeb\'\r\nline \'[\\xefnserted]|t\\xeb[after]\'\r\nindexes 11 13\r\nsubstitution \'t\\xeb\'\r\nmatches [\'t\\xebnt\', \'t\\xebxt\']\r\nx[after]\x08\x08\x08\x08\x08\x08\x08t[after]\x08\x08\x08\x08\x08\x08\x08\r\nresult \'[\\xefnserted]|t\\xebxt[after]\'\r\nhistory \'[\\xefnserted]|t\\xebxt[after]\'") === test.pythoninfo === readline._READLINE_LIBRARY_VERSION: 7.0 readline._READLINE_RUNTIME_VERSION: 0x700 readline._READLINE_VERSION: 0x700 platform.architecture: 64bit ELF platform.libc_ver: glibc 2.28 platform.platform: Linux-4.18.0-305.12.1.el8_4.aarch64-aarch64-with-glibc2.28 os.environ[LANG]: en_US.UTF-8 locale.encoding: UTF-8 sys.filesystem_encoding: utf-8/surrogateescape sys.stderr.encoding: utf-8/backslashreplace sys.stdin.encoding: utf-8/strict sys.stdout.encoding: utf-8/strict ---------- components: Tests messages: 401775 nosy: vstinner priority: normal severity: normal status: open title: test_readline: test_nonascii() failed on aarch64 RHEL8 Refleaks 3.x versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 09:01:31 2021 From: report at bugs.python.org (junyixie) Date: Tue, 14 Sep 2021 13:01:31 +0000 Subject: [New-bugs-announce] [issue45196] macOS. ./configure --with-address-sanitizer; make test; cause test case crash. Message-ID: <1631624491.63.0.886030663913.issue45196@roundup.psfhosted.org> New submission from junyixie : test_io.py ``` ================================================================= ==54932==ERROR: AddressSanitizer: requested allocation size 0x7fffffffffffffff (0x8000000000001000 after adjustments for alignment, red zones etc.) 
exceeds maximum supported size of 0x10000000000 (thread T0) #0 0x102f1fa6c in wrap_malloc+0x94 (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x3fa6c) #1 0x102565fcc in _buffered_init bufferedio.c:730 #2 0x10255bba4 in _io_BufferedReader___init__ bufferedio.c.h:435 #3 0x10226c8c8 in wrap_init typeobject.c:6941 #4 0x10216d3f8 in _PyObject_Call call.c:305 #5 0x102387a6c in _PyEval_EvalFrameDefault ceval.c:4285 #6 0x10237eaa8 in _PyEval_Vector ceval.c:5073 #7 0x102396860 in call_function ceval.c:5888 #8 0x102385444 in _PyEval_EvalFrameDefault ceval.c:4206 #9 0x10237eaa8 in _PyEval_Vector ceval.c:5073 #10 0x102396860 in call_function ceval.c:5888 #11 0x102385444 in _PyEval_EvalFrameDefault ceval.c:4206 #12 0x10237eaa8 in _PyEval_Vector ceval.c:5073 #13 0x102172bec in method_vectorcall classobject.c:53 #14 0x102396860 in call_function ceval.c:5888 #15 0x1023885e4 in _PyEval_EvalFrameDefault ceval.c:4221 #16 0x10237eaa8 in _PyEval_Vector ceval.c:5073 #17 0x102396860 in call_function ceval.c:5888 #18 0x102385444 in _PyEval_EvalFrameDefault ceval.c:4206 #19 0x10237eaa8 in _PyEval_Vector ceval.c:5073 #20 0x102172af4 in method_vectorcall classobject.c:83 #21 0x10216d0a8 in PyVectorcall_Call call.c:255 #22 0x102387a6c in _PyEval_EvalFrameDefault ceval.c:4285 #23 0x10237eaa8 in _PyEval_Vector ceval.c:5073 #24 0x10216c248 in _PyObject_FastCallDictTstate call.c:142 #25 0x10216dc00 in _PyObject_Call_Prepend call.c:431 #26 0x102268740 in slot_tp_call typeobject.c:7481 #27 0x10216c5d4 in _PyObject_MakeTpCall call.c:215 #28 0x102396b88 in call_function ceval.c #29 0x1023885e4 in _PyEval_EvalFrameDefault ceval.c:4221 ==54932==HINT: if you don't care about these errors you may set allocator_may_return_null=1 SUMMARY: AddressSanitizer: allocation-size-too-big (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x3fa6c) in wrap_malloc+0x94 ==54932==ABORTING Fatal Python error: Aborted Current thread 0x0000000102e93d40 (most recent call first): File "/Users/xiejunyi/github-cpython/Lib/unittest/case.py", line 201 in handle File "/Users/xiejunyi/github-cpython/Lib/unittest/case.py", line 730 in assertRaises File "/Users/xiejunyi/github-cpython/Lib/test/test_io.py", line 1558 in test_constructor File "/Users/xiejunyi/github-cpython/Lib/unittest/case.py", line 549 in _callTestMethod File "/Users/xiejunyi/github-cpython/Lib/unittest/case.py", line 591 in run File "/Users/xiejunyi/github-cpython/Lib/unittest/case.py", line 650 in __call__ File "/Users/xiejunyi/github-cpython/Lib/unittest/suite.py", line 122 in run File "/Users/xiejunyi/github-cpython/Lib/unittest/suite.py", line 84 in __call__ File "/Users/xiejunyi/github-cpython/Lib/unittest/suite.py", line 122 in run File "/Users/xiejunyi/github-cpython/Lib/unittest/suite.py", line 84 in __call__ File "/Users/xiejunyi/github-cpython/Lib/unittest/suite.py", line 122 in run File "/Users/xiejunyi/github-cpython/Lib/unittest/suite.py", line 84 in __call__ File "/Users/xiejunyi/github-cpython/Lib/test/support/testresult.py", line 140 in run File "/Users/xiejunyi/github-cpython/Lib/test/support/__init__.py", line 990 in _run_suite File "/Users/xiejunyi/github-cpython/Lib/test/support/__init__.py", line 1115 in run_unittest File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/runtest.py", line 261 in _test_module File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/runtest.py", line 297 in _runtest_inner2 File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/runtest.py", line 335 in _runtest_inner File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/runtest.py", line 
215 in _runtest File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/runtest.py", line 245 in runtest File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/runtest_mp.py", line 83 in run_tests_worker File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/main.py", line 678 in _main File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/main.py", line 658 in main File "/Users/xiejunyi/github-cpython/Lib/test/libregrtest/main.py", line 736 in main File "/Users/xiejunyi/github-cpython/Lib/test/regrtest.py", line 43 in _main File "/Users/xiejunyi/github-cpython/Lib/test/regrtest.py", line 47 in File "/Users/xiejunyi/github-cpython/Lib/runpy.py", line 86 in _run_code File "/Users/xiejunyi/github-cpython/Lib/runpy.py", line 196 in _run_module_as_main ``` test_decimal.py ``` 0:05:09 load avg: 159.57 [287/427/30] test_decimal crashed (Exit code -6) -- running: test_pickle (1 min 35 sec), test_tokenize (3 min 14 sec), test_unparse (4 min 32 sec), test_peg_generator (48.6 sec), test_subprocess (1 min 50 sec), test_faulthandler (36.0 sec), test_capi (3 min 3 sec), test_pdb (2 min 27 sec) ================================================================= ==6547==ERROR: AddressSanitizer: requested allocation size 0xbafc24672035e58 (0xbafc24672036e58 after adjustments for alignment, red zones etc.) exceeds maximum supported size of 0x10000000000 (thread T0) #0 0x103d4ba6c in wrap_malloc+0x94 (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x3fa6c) #1 0x10e1c9828 in mpd_switch_to_dyn mpalloc.c:217 #2 0x10e1d7a50 in mpd_qshiftl mpdecimal.c:2514 #3 0x10e2207a4 in _mpd_qsqrt mpdecimal.c:7945 #4 0x10e21f188 in mpd_qsqrt mpdecimal.c:8037 #5 0x10e190d58 in dec_mpd_qsqrt _decimal.c:4128 #6 0x102f8fd88 in method_vectorcall_VARARGS_KEYWORDS descrobject.c:346 #7 0x1031a6c44 in call_function ceval.c #8 0x103195cb0 in _PyEval_EvalFrameDefault ceval.c:4350 #9 0x10318bc3c in _PyEval_Vector ceval.c:5221 #10 0x102f7f994 in method_vectorcall classobject.c:54 #11 0x1031a6c44 in call_function ceval.c #12 0x1031bb1f8 in DROGON_JIT_HELPER_CALL_FUNCTION ceval_jit_helper.h:3041 #13 0x10f0c8164 () #14 0x10318d644 in _PyEval_EvalFrameDefault ceval.c:1827 #15 0x10318bc3c in _PyEval_Vector ceval.c:5221 #16 0x1031a6c44 in call_function ceval.c #17 0x1031baf98 in DROGON_JIT_HELPER_CALL_METHOD ceval_jit_helper.h:3020 #18 0x10f09e6ec () #19 0x10318d644 in _PyEval_EvalFrameDefault ceval.c:1827 #20 0x10318bc3c in _PyEval_Vector ceval.c:5221 #21 0x102f7f89c in method_vectorcall classobject.c:84 #22 0x102f79e50 in PyVectorcall_Call call.c:255 #23 0x1031bb700 in DROGON_JIT_HELPER_CALL_FUNCTION_EX ceval_jit_helper.h:3117 #24 0x10f02c2c4 () #25 0x10318d644 in _PyEval_EvalFrameDefault ceval.c:1827 #26 0x10318bc3c in _PyEval_Vector ceval.c:5221 #27 0x102f78ff0 in _PyObject_FastCallDictTstate call.c:142 #28 0x102f7a9a8 in _PyObject_Call_Prepend call.c:431 #29 0x103074ce0 in slot_tp_call typeobject.c:7605 ==6547==HINT: if you don't care about these errors you may set allocator_may_return_null=1 SUMMARY: AddressSanitizer: allocation-size-too-big (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x3fa6c) in wrap_malloc+0x94 ==6547==ABORTING Fatal Python error: Aborted ``` ---------- components: Tests messages: 401779 nosy: JunyiXie, gregory.p.smith priority: normal severity: normal status: open title: macOS. ./configure --with-address-sanitizer; make test; cause test case crash. 
versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 11:33:16 2021 From: report at bugs.python.org (Raymond Hettinger) Date: Tue, 14 Sep 2021 15:33:16 +0000 Subject: [New-bugs-announce] [issue45197] IDLE should suppress ValueError for list.remove() Message-ID: <1631633596.4.0.569139047582.issue45197@roundup.psfhosted.org> New submission from Raymond Hettinger : I got this today running a stock Python 3.9.6 for macOS downloaded from python.org. ------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/tkinter/__init__.py", line 1892, in __call__ return self.func(*args) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/idlelib/multicall.py", line 176, in handler r = l[i](event) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/idlelib/autocomplete_w.py", line 350, in keypress_event self.hide_window() File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/idlelib/autocomplete_w.py", line 463, in hide_window self.widget.event_delete(HIDE_VIRTUAL_EVENT_NAME, seq) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/idlelib/multicall.py", line 392, in event_delete triplets.remove(triplet) ValueError: list.remove(x): x not in list ---------- assignee: terry.reedy components: IDLE messages: 401782 nosy: rhettinger, terry.reedy priority: normal severity: normal status: open title: IDLE should suppress ValueError for list.remove() type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 12:40:25 2021 From: report at bugs.python.org (xitop) Date: Tue, 14 Sep 2021 16:40:25 +0000 Subject: [New-bugs-announce] [issue45198] __set_name__ documentation not clear about its usage with non-descriptor classes Message-ID: <1631637625.07.0.44020466506.issue45198@roundup.psfhosted.org> New submission from xitop : The object.__set_name__() function (introduced in Python 3.6 by PEP-487) is mentioned in the "what's new " summary as an extension to the descriptor protocol [1] and documented in the "implementing descriptors" section [2]. However, the PEP itself states that it "adds an __set_name__ initializer for class attributes, especially if they are descriptors.". And it indeed works for plain classes where the descriptor protocol is not used at all (no __get__ or __set__ or __delete__): ---- class NotDescriptor: def __set_name__(self, owner, name): print('__set_name__ called') class SomeClass: attr = NotDescriptor() ---- It is clear that this method is helpful when used in descriptors and that is its intended use, but other valid use-cases probably exist. I suggest to amend the documentation to clarify that (correct me if I'm wrong) the __set_name__ is called for every class used as an attribute in an other class, not only for descriptors. 
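One non-descriptor use-case, sketched purely as an illustration (the class names are invented): __set_name__ lets a plain helper object record the attribute name it was assigned to, e.g. for registration or better error messages.

class Field:
    # Not a descriptor: no __get__, __set__ or __delete__.
    def __set_name__(self, owner, name):
        self.owner = owner
        self.name = name

class Model:
    height = Field()
    width = Field()

print(Model.height.name, Model.width.name)  # prints: height width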
--- URLs: [1]: https://docs.python.org/3/whatsnew/3.6.html#pep-487-descriptor-protocol-enhancements [2]: https://docs.python.org/3/reference/datamodel.html#implementing-descriptors ---------- assignee: docs at python components: Documentation messages: 401785 nosy: docs at python, xitop priority: normal severity: normal status: open title: __set_name__ documentation not clear about its usage with non-descriptor classes versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 14 18:13:34 2021 From: report at bugs.python.org (Terry J. Reedy) Date: Tue, 14 Sep 2021 22:13:34 +0000 Subject: [New-bugs-announce] [issue45199] IDLE: document search (find) and replace better Message-ID: <1631657614.15.0.441102434851.issue45199@roundup.psfhosted.org> New submission from Terry J. Reedy : The doc currently just says that the Search, File Search, and Search&Replace dialogs exist for the corresponding menu entries. Add a short section in "Editing and navigation" to say more. 1. Any selection becomes search target, except that S&R is buggy. 2. Search is only within lines. .* and \n do not match \n even with RE. 3. [x]RE uses Python re module, not tcl re. It applies to replace also. So if target RE has capture groups, \1 (and \gname? test) in replacement works.(match.expand(repl)) 4. Refer to re chapter and RegularExpression HOWTO. ---------- assignee: terry.reedy components: IDLE messages: 401801 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: IDLE: document search (find) and replace better type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 00:17:40 2021 From: report at bugs.python.org (Terry J. Reedy) Date: Wed, 15 Sep 2021 04:17:40 +0000 Subject: [New-bugs-announce] [issue45200] test_multiprocessing_fork failws with timeout Message-ID: <1631679460.71.0.626374231164.issue45200@roundup.psfhosted.org> New submission from Terry J. Reedy : https://github.com/python/cpython/pull/28344/checks?check_run_id=3605759743 All tests pass until test_multiprocessing_fork timed out after 25 min. On the rerun: refail with timeout. test_get (test.test_multiprocessing_fork.WithProcessesTestQueue) ... Timeout (0:20:00)! 
Thread 0x00007f176a71ebc0 (most recent call first): File "/home/runner/work/cpython/cpython/Lib/multiprocessing/synchronize.py", line 261 in wait File "/home/runner/work/cpython/cpython/Lib/multiprocessing/synchronize.py", line 349 in wait File "/home/runner/work/cpython/cpython/Lib/test/_test_multiprocessing.py", line 1001 in test_get File "/home/runner/work/cpython/cpython/Lib/unittest/case.py", line 549 in _callTestMethod File "/home/runner/work/cpython/cpython/Lib/unittest/case.py", line 593 in run File "/home/runner/work/cpython/cpython/Lib/unittest/case.py", line 652 in __call__ File "/home/runner/work/cpython/cpython/Lib/unittest/suite.py", line 122 in run File "/home/runner/work/cpython/cpython/Lib/unittest/suite.py", line 84 in __call__ File "/home/runner/work/cpython/cpython/Lib/unittest/suite.py", line 122 in run File "/home/runner/work/cpython/cpython/Lib/unittest/suite.py", line 84 in __call__ File "/home/runner/work/cpython/cpython/Lib/unittest/suite.py", line 122 in run File "/home/runner/work/cpython/cpython/Lib/unittest/suite.py", line 84 in __call__ File "/home/runner/work/cpython/cpython/Lib/unittest/runner.py", line 206 in run File "/home/runner/work/cpython/cpython/Lib/test/support/__init__.py", line 998 in _run_suite File "/home/runner/work/cpython/cpython/Lib/test/support/__init__.py", line 1124 in run_unittest File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/runtest.py", line 261 in _test_module File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/runtest.py", line 297 in _runtest_inner2 File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/runtest.py", line 340 in _runtest_inner File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/runtest.py", line 215 in _runtest File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/runtest.py", line 245 in runtest File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/main.py", line 337 in rerun_failed_tests File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/main.py", line 715 in _main File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/main.py", line 658 in main File "/home/runner/work/cpython/cpython/Lib/test/libregrtest/main.py", line 736 in main File "/home/runner/work/cpython/cpython/Lib/test/__main__.py", line 2 in File "/home/runner/work/cpython/cpython/Lib/runpy.py", line 86 in _run_code File "/home/runner/work/cpython/cpython/Lib/runpy.py", line 196 in _run_module_as_main ---------- components: Library (Lib), Tests messages: 401808 nosy: terry.reedy priority: normal severity: normal stage: needs patch status: open title: test_multiprocessing_fork failws with timeout type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 02:03:15 2021 From: report at bugs.python.org (Cy) Date: Wed, 15 Sep 2021 06:03:15 +0000 Subject: [New-bugs-announce] [issue45201] API function PySignal_SetWakeupFd is not exported and unusable Message-ID: <1631685795.19.0.380145753565.issue45201@roundup.psfhosted.org> New submission from Cy : I want to wait on curl_multi_wait which messes up python's signal handling since libcurl doesn't know to exit the polling loop on signal. I created a pipe that curl could read from when python registered a signal, and PySignal_SetWakeupFd would write to the pipe, thus letting curl leave its polling loop, letting my C module return an exception condition. Except PySignal_SetWakeupFd cannot be used. 
In Modules/signalmodule.c, it says: int PySignal_SetWakeupFd(int fd) when it should say: PyAPI_FUNC(int) PySignal_SetWakeupFd(int fd) This probably isn't a problem for most, since gcc has visiblity=public by default, but I guess Gentoo's process for building python sets -fvisibility=hidden, so I can't access it, and all Microsoft users should have no access to PySignal_SetWakeupFd, since Microsoft does have hidden visibility by default. ---------- components: C API messages: 401810 nosy: cy priority: normal severity: normal status: open title: API function PySignal_SetWakeupFd is not exported and unusable type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 06:01:52 2021 From: report at bugs.python.org (theeshallnotknowethme) Date: Wed, 15 Sep 2021 10:01:52 +0000 Subject: [New-bugs-announce] [issue45202] Add 'remove_barry_from_BDFL' future to revert effects of 'from __future__ import barry_as_FLUFL' Message-ID: <1631700112.53.0.581578028786.issue45202@roundup.psfhosted.org> New submission from theeshallnotknowethme : Add a flag named 'CO_FUTURE_REVOLT_AND_REMOVE_BARRY_FROM_BDFL' assigned to `0x2000000` that can be activated with 'from __future__ import remove_barry_from_BDFL'. Reverts the effects of 'from __future__ import barry_as_FLUFL' and adds the 'CO_FUTURE_REVOLT_AND_REMOVE_BARRY_FROM_BDFL' flag in indication that Barry has been overthrown from his position. Doing this before a 'from __future__ import barry_as_FLUFL' import will have no effect whatsoever. Redoing 'from __future__ import barry_as_FLUFL' will remove the flag and re-add the 'CO_FUTURE_BARRY_AS_BDFL' flag. There can be optional messages informing users of the change. ---------- components: Library (Lib) messages: 401822 nosy: February291948 priority: normal severity: normal status: open title: Add 'remove_barry_from_BDFL' future to revert effects of 'from __future__ import barry_as_FLUFL' type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 08:24:20 2021 From: report at bugs.python.org (Mark Shannon) Date: Wed, 15 Sep 2021 12:24:20 +0000 Subject: [New-bugs-announce] [issue45203] Improve specialization stats for LOAD_METHOD and BINARY_SUBSCR Message-ID: <1631708660.87.0.151822641223.issue45203@roundup.psfhosted.org> New submission from Mark Shannon : The stats for BINARY_SUBSCR and to a lesser amount LOAD_METHOD don't tell us much about what isn't being specialized. We should refine the stats to give us a better idea of what to optimize for. 
---------- assignee: Mark.Shannon components: Interpreter Core messages: 401823 nosy: Mark.Shannon priority: normal severity: normal status: open title: Improve specialization stats for LOAD_METHOD and BINARY_SUBSCR _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 09:17:11 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 15 Sep 2021 13:17:11 +0000 Subject: [New-bugs-announce] [issue45204] test_peg_generator: test_soft_keyword() logs many messages into stdout Message-ID: <1631711831.11.0.180810271503.issue45204@roundup.psfhosted.org> New submission from STINNER Victor : Example (not in verbose mode!): $ ./python -m test test_peg_generator -m test_soft_keyword 0:00:00 load avg: 4.54 Run tests sequentially 0:00:00 load avg: 4.54 [1/1] test_peg_generator start() ... (looking at 1.0: NAME:'number') expect('number') ... (looking at 1.0: NAME:'number') ... expect('number') -> TokenInfo(type=1 (NAME), string='number', start=(1, 0), end=(1, 6), line='number 1') number() ... (looking at 1.7: NUMBER:'1') ... number() -> TokenInfo(type=2 (NUMBER), string='1', start=(1, 7), end=(1, 8), line='number 1') ... start() -> 1 start() ... (looking at 1.0: NAME:'string') expect('number') ... (looking at 1.0: NAME:'string') ... expect('number') -> None expect('string') ... (looking at 1.0: NAME:'string') ... expect('string') -> TokenInfo(type=1 (NAME), string='string', start=(1, 0), end=(1, 6), line="string 'b'") string() ... (looking at 1.7: STRING:"'b'") ... string() -> TokenInfo(type=3 (STRING), string="'b'", start=(1, 7), end=(1, 10), line="string 'b'") ... start() -> 'b' start() ... (looking at 1.0: NAME:'number') expect('number') ... (looking at 1.0: NAME:'number') ... expect('number') -> TokenInfo(type=1 (NAME), string='number', start=(1, 0), end=(1, 6), line='number test 1') number() ... (looking at 1.7: NAME:'test') ... number() -> None expect('string') ... (looking at 1.0: NAME:'number') ... expect('string') -> None soft_keyword() ... (looking at 1.0: NAME:'number') ... soft_keyword() -> TokenInfo(type=1 (NAME), string='number', start=(1, 0), end=(1, 6), line='number test 1') name() ... (looking at 1.7: NAME:'test') ... name() -> TokenInfo(type=1 (NAME), string='test', start=(1, 7), end=(1, 11), line='number test 1') _tmp_1() ... (looking at 1.12: NUMBER:'1') number() ... (looking at 1.12: NUMBER:'1') ... number() -> TokenInfo(type=2 (NUMBER), string='1', start=(1, 12), end=(1, 13), line='number test 1') ... _tmp_1() -> TokenInfo(type=2 (NUMBER), string='1', start=(1, 12), end=(1, 13), line='number test 1') ... start() -> test = 1 start() ... (looking at 1.0: NAME:'string') expect('number') ... (looking at 1.0: NAME:'string') ... expect('number') -> None expect('string') ... (looking at 1.0: NAME:'string') ... expect('string') -> TokenInfo(type=1 (NAME), string='string', start=(1, 0), end=(1, 6), line="string test 'b'") string() ... (looking at 1.7: NAME:'test') ... string() -> None soft_keyword() ... (looking at 1.0: NAME:'string') ... soft_keyword() -> TokenInfo(type=1 (NAME), string='string', start=(1, 0), end=(1, 6), line="string test 'b'") name() ... (looking at 1.7: NAME:'test') ... name() -> TokenInfo(type=1 (NAME), string='test', start=(1, 7), end=(1, 11), line="string test 'b'") _tmp_1() ... (looking at 1.12: STRING:"'b'") number() ... (looking at 1.12: STRING:"'b'") ... number() -> None name() ... (looking at 1.12: STRING:"'b'") ... name() -> None string() ... 
(looking at 1.12: STRING:"'b'") ... string() -> TokenInfo(type=3 (STRING), string="'b'", start=(1, 12), end=(1, 15), line="string test 'b'") ... _tmp_1() -> TokenInfo(type=3 (STRING), string="'b'", start=(1, 12), end=(1, 15), line="string test 'b'") ... start() -> test = 'b' start() ... (looking at 1.0: NAME:'test') expect('number') ... (looking at 1.0: NAME:'test') ... expect('number') -> None expect('string') ... (looking at 1.0: NAME:'test') ... expect('string') -> None soft_keyword() ... (looking at 1.0: NAME:'test') ... soft_keyword() -> None ... start() -> None == Tests result: SUCCESS == 1 test OK. Total duration: 246 ms Tests result: SUCCESS ---------- components: Tests messages: 401832 nosy: lys.nikolaou, pablogsal, vstinner priority: normal severity: normal status: open title: test_peg_generator: test_soft_keyword() logs many messages into stdout versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 09:19:29 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 15 Sep 2021 13:19:29 +0000 Subject: [New-bugs-announce] [issue45205] test_compileall logs "Compiling ..." messages Message-ID: <1631711969.83.0.460025250602.issue45205@roundup.psfhosted.org> New submission from STINNER Victor : The following 4 test_compileall tests logs "Compiling ..." messages: test_larger_than_32_bit_times (test.test_compileall.CompileallTestsWithSourceEpoch) ... Compiling '/tmp/tmp1k_q89f5/_test.py'... ok test_year_2038_mtime_compilation (test.test_compileall.CompileallTestsWithSourceEpoch) ... Compiling '/tmp/tmp83hk4o6n/_test.py'... ok test_larger_than_32_bit_times (test.test_compileall.CompileallTestsWithoutSourceEpoch) ... Compiling '/tmp/tmpf9fir94a/_test.py'... ok test_year_2038_mtime_compilation (test.test_compileall.CompileallTestsWithoutSourceEpoch) ... Compiling '/tmp/tmpw9mtirkx/_test.py'... ok Current output: ---------------- $ ./python -m test test_compileall 0:00:00 load avg: 1.09 Run tests sequentially 0:00:00 load avg: 1.09 [1/1] test_compileall Compiling '/tmp/tmpdc269658/_test.py'... Compiling '/tmp/tmppeummd0q/_test.py'... Compiling '/tmp/tmp_vf3awm7/_test.py'... Compiling '/tmp/tmpgkxrt872/_test.py'... == Tests result: SUCCESS == 1 test OK. Total duration: 23.3 sec Tests result: SUCCESS ---------------- I would prefer a quiet output (no message). ---------- components: Tests messages: 401833 nosy: vstinner priority: normal severity: normal status: open title: test_compileall logs "Compiling ..." messages versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 09:23:17 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 15 Sep 2021 13:23:17 +0000 Subject: [New-bugs-announce] [issue45206] test_contextlib_async logs "Task was destroyed but it is pending" messages Message-ID: <1631712197.72.0.810359664637.issue45206@roundup.psfhosted.org> New submission from STINNER Victor : 3 tests of test_contextlib_async logs messages. I would prefer a quiet output. test_contextmanager_trap_second_yield (test.test_contextlib_async.AsyncContextManagerTestCase) ... Task was destroyed but it is pending! task: ()>> ok test_contextmanager_trap_yield_after_throw (test.test_contextlib_async.AsyncContextManagerTestCase) ... Task was destroyed but it is pending! task: ()>> ok test_async_gen_propagates_generator_exit (test.test_contextlib_async.TestAbstractAsyncContextManager) ... 
Task was destroyed but it is pending! task: ()>> ok Current output: --- $ ./python -m test test_contextlib_async 0:00:00 load avg: 12.33 Run tests sequentially 0:00:00 load avg: 12.33 [1/1] test_contextlib_async Task was destroyed but it is pending! task: ()>> Task was destroyed but it is pending! task: ()>> Task was destroyed but it is pending! task: ()>> == Tests result: SUCCESS == 1 test OK. Total duration: 837 ms Tests result: SUCCESS --- ---------- components: Tests messages: 401834 nosy: vstinner priority: normal severity: normal status: open title: test_contextlib_async logs "Task was destroyed but it is pending" messages versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 09:24:16 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 15 Sep 2021 13:24:16 +0000 Subject: [New-bugs-announce] [issue45207] test_gdb logs "Function ... not defined" messages Message-ID: <1631712256.1.0.943755935446.issue45207@roundup.psfhosted.org> New submission from STINNER Victor : test_gdb logs many messages. I would prefer a quiet output: 0:02:05 load avg: 11.65 [155/427] test_gdb passed (...) Function "meth_varargs" not defined. Function "meth_varargs" not defined. Function "meth_varargs" not defined. Function "meth_varargs" not defined. Function "meth_varargs" not defined. Function "meth_varargs" not defined. Function "meth_varargs" not defined. Function "meth_varargs" not defined. Function "meth_varargs_keywords" not defined. Function "meth_varargs_keywords" not defined. Function "meth_varargs_keywords" not defined. Function "meth_varargs_keywords" not defined. Function "meth_varargs_keywords" not defined. Function "meth_varargs_keywords" not defined. Function "meth_varargs_keywords" not defined. Function "meth_varargs_keywords" not defined. Function "meth_o" not defined. Function "meth_o" not defined. Function "meth_o" not defined. Function "meth_o" not defined. Function "meth_o" not defined. Function "meth_o" not defined. Function "meth_o" not defined. Function "meth_o" not defined. Function "meth_noargs" not defined. Function "meth_noargs" not defined. Function "meth_noargs" not defined. Function "meth_noargs" not defined. Function "meth_noargs" not defined. Function "meth_noargs" not defined. Function "meth_noargs" not defined. Function "meth_noargs" not defined. Function "meth_fastcall" not defined. Function "meth_fastcall" not defined. Function "meth_fastcall" not defined. Function "meth_fastcall" not defined. Function "meth_fastcall" not defined. Function "meth_fastcall" not defined. Function "meth_fastcall" not defined. Function "meth_fastcall" not defined. Function "meth_fastcall_keywords" not defined. Function "meth_fastcall_keywords" not defined. Function "meth_fastcall_keywords" not defined. Function "meth_fastcall_keywords" not defined. Function "meth_fastcall_keywords" not defined. Function "meth_fastcall_keywords" not defined. Function "meth_fastcall_keywords" not defined. Function "meth_fastcall_keywords" not defined. ---------- components: Tests messages: 401835 nosy: vstinner priority: normal severity: normal status: open title: test_gdb logs "Function ... 
not defined" messages versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 09:28:33 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 15 Sep 2021 13:28:33 +0000 Subject: [New-bugs-announce] [issue45208] test_pdb: test_checkline_is_not_executable() logs messages Message-ID: <1631712513.9.0.565506614817.issue45208@roundup.psfhosted.org> New submission from STINNER Victor : I would prefer a quiet test: test_checkline_is_not_executable (test.test_pdb.ChecklineTests) ... End of file *** Blank or comment *** Blank or comment *** Blank or comment *** Blank or comment *** Blank or comment End of file ok Current output: ------------ $ ./python -m test test_pdb 0:00:00 load avg: 1.93 Run tests sequentially 0:00:00 load avg: 1.93 [1/1] test_pdb End of file *** Blank or comment *** Blank or comment *** Blank or comment *** Blank or comment *** Blank or comment End of file == Tests result: SUCCESS == 1 test OK. Total duration: 5.1 sec Tests result: SUCCESS ------------ ---------- components: Tests messages: 401836 nosy: vstinner priority: normal severity: normal status: open title: test_pdb: test_checkline_is_not_executable() logs messages versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 09:29:13 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 15 Sep 2021 13:29:13 +0000 Subject: [New-bugs-announce] [issue45209] multiprocessing tests log: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown Message-ID: <1631712553.58.0.588327071719.issue45209@roundup.psfhosted.org> New submission from STINNER Victor : 0:03:25 load avg: 12.33 [250/427] test_multiprocessing_forkserver passed (...) /home/vstinner/python/main/Lib/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' /home/vstinner/python/main/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/psm_33506d9a': [Errno 2] No such file or directory: '/psm_33506d9a' warnings.warn('resource_tracker: %r: %s' % (name, e)) ---------- components: Tests messages: 401837 nosy: vstinner priority: normal severity: normal status: open title: multiprocessing tests log: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 12:47:23 2021 From: report at bugs.python.org (Edward Yang) Date: Wed, 15 Sep 2021 16:47:23 +0000 Subject: [New-bugs-announce] [issue45210] tp_dealloc docs should mention error indicator may be set Message-ID: <1631724443.4.0.588312077609.issue45210@roundup.psfhosted.org> New submission from Edward Yang : The fact that the error indicator may be set during tp_dealloc is somewhat well known (https://github.com/posborne/dbus-python/blob/fef4bccfc535c6c2819e3f15384600d7bc198bc5/_dbus_bindings/conn.c#L387) but it's not documented in the official manual. We should document it. 
A simple suggested patch: diff --git a/Doc/c-api/typeobj.rst b/Doc/c-api/typeobj.rst index b17fb22b69..e7c9b13646 100644 --- a/Doc/c-api/typeobj.rst +++ b/Doc/c-api/typeobj.rst @@ -668,6 +668,20 @@ and :c:type:`PyType_Type` effectively act as defaults.) :c:func:`PyObject_GC_Del` if the instance was allocated using :c:func:`PyObject_GC_New` or :c:func:`PyObject_GC_NewVar`. + If you may call functions that may set the error indicator, you must + use :c:func:`PyErr_Fetch` and :c:func:`PyErr_Restore` to ensure you + don't clobber a preexisting error indicator (the deallocation could + have occurred while processing a different error): + + .. code-block:: c + + static void foo_dealloc(foo_object *self) { + PyObject *et, *ev, *etb; + PyErr_Fetch(&et, &ev, &etb); + ... + PyErr_Restore(et, ev, etb); + } + Finally, if the type is heap allocated (:const:`Py_TPFLAGS_HEAPTYPE`), the deallocator should decrement the reference count for its type object after calling the type deallocator. In order to avoid dangling pointers, the ---------- assignee: docs at python components: Documentation messages: 401854 nosy: docs at python, ezyang priority: normal severity: normal status: open title: tp_dealloc docs should mention error indicator may be set type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 13:19:45 2021 From: report at bugs.python.org (Eric Snow) Date: Wed, 15 Sep 2021 17:19:45 +0000 Subject: [New-bugs-announce] [issue45211] Useful (expensive) information is discarded in getpath.c. Message-ID: <1631726385.76.0.396419410784.issue45211@roundup.psfhosted.org> New submission from Eric Snow : Currently we calculate a number of filesystem paths during runtime initialization in Modules/getpath.c (with the key goal of producing what will end up in sys.path). Some of those paths are preserved and some are not. In cases where the discarded data comes from filesystem access, we should preserve as much as possible. The most notable info is location of the stdlib source files. We would store this as PyConfig.stdlib_dir (and _PyPathConfig.stdlib_dir). We'd expose it with sys.stdlibdir (or sys.get_stdlib_dir() if we might need to calculate lazily), similar to sys.platlibdir, sys.home, and sys.prefix. sys.stdlibdir would allow us to avoid filesystem access, for example: * in site.py * in sysconfig.py * detect if python is running out of the source tree (needed for bpo-45020) FYI, I have a branch that mostly does what I'm suggesting here. ---------- assignee: eric.snow components: Interpreter Core messages: 401860 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: Useful (expensive) information is discarded in getpath.c. type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 13:30:17 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Wed, 15 Sep 2021 17:30:17 +0000 Subject: [New-bugs-announce] [issue45212] Dangling threads in skipped tests in test_socket Message-ID: <1631727017.23.0.70122988847.issue45212@roundup.psfhosted.org> New submission from Serhiy Storchaka : Dangling threads are reported when run test_socket tests which raise SkipTest in setUp() in refleak mode. 
$ ./python -m test -R 3:3 test_socket -m testBCM 0:00:00 load avg: 2.53 Run tests sequentially 0:00:00 load avg: 2.53 [1/1] test_socket beginning 6 repetitions 123456 .Warning -- threading_cleanup() failed to cleanup 1 threads (count: 1, dangling: 1) Warning -- Dangling thread: <_MainThread(MainThread, started 139675429708416)> .Warning -- threading_cleanup() failed to cleanup 1 threads (count: 1, dangling: 1) Warning -- Dangling thread: <_MainThread(MainThread, started 139675429708416)> .Warning -- threading_cleanup() failed to cleanup 1 threads (count: 1, dangling: 1) Warning -- Dangling thread: <_MainThread(MainThread, started 139675429708416)> ... test_socket failed (env changed) == Tests result: SUCCESS == 1 test altered the execution environment: test_socket Total duration: 655 ms Tests result: SUCCESS It happens because tearDown() is not called if setUp() raises any exception (including SkipTest). If we want to execute some cleanup code it should be registered with addCleanup(). ---------- components: Tests messages: 401862 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Dangling threads in skipped tests in test_socket type: resource usage versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 13:42:12 2021 From: report at bugs.python.org (Eric Snow) Date: Wed, 15 Sep 2021 17:42:12 +0000 Subject: [New-bugs-announce] [issue45213] Frozen modules are looked up using a linear search. Message-ID: <1631727732.79.0.742484624179.issue45213@roundup.psfhosted.org> New submission from Eric Snow : When looking up a frozen modules, we loop over the array of frozen modules until we find a match (or don't). See find_frozen() in Python/import.c. The frozen importer sits right after the builtin importer and right before the file-based importer. This means the import system does that frozen module lookup every time import happens (where it isn't a builtin module), even if it's a source module. ---------- components: Interpreter Core messages: 401863 nosy: barry, brett.cannon, eric.snow priority: normal severity: normal stage: needs patch status: open title: Frozen modules are looked up using a linear search. 
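The ordering described above can be seen directly from Python; a quick check (the exact reprs may vary between versions):

import sys
for finder in sys.meta_path:
    print(finder)
# <class '_frozen_importlib.BuiltinImporter'>
# <class '_frozen_importlib.FrozenImporter'>
# <class '_frozen_importlib_external.PathFinder'>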
type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 16:26:10 2021 From: report at bugs.python.org (Irit Katriel) Date: Wed, 15 Sep 2021 20:26:10 +0000 Subject: [New-bugs-announce] [issue45214] implement LOAD_NONE opcode Message-ID: <1631737570.2.0.756455142992.issue45214@roundup.psfhosted.org> Change by Irit Katriel : ---------- assignee: iritkatriel components: Interpreter Core nosy: iritkatriel priority: normal severity: normal status: open title: implement LOAD_NONE opcode type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 15 22:13:10 2021 From: report at bugs.python.org (Andrei Kulakov) Date: Thu, 16 Sep 2021 02:13:10 +0000 Subject: [New-bugs-announce] [issue45215] Add docs for Mock name and parent args and deprecation warning when wrong args are passed Message-ID: <1631758390.34.0.18935168801.issue45215@roundup.psfhosted.org> New submission from Andrei Kulakov : Currently using *name* and *parent* args in Mock and MagicMock is problematic in a few ways: *name* - any value can be passed silently but at a later time, any value except for an str will cause an exception when repr() or str() on the mock object is done. - a string name can be passed but will not be equal to the respective attr value: Mock(name='foo').name != 'foo', as users would expect. (this should be documented). *parent* - any value can be passed but, similarly to *name*, will cause an exception when str() or repr() is done on the object. - this arg is not documented so users will expect it to be set as an attr, but instead the attribute is going to be a Mock instance. [1] I propose to fix these issues by: - checking the types that are passed in and display a DeprecationWarning if types are wrong. (note that this check should be fast because at first value can be compared to None default, which is what it's going to be in vast majority of cases, and isinstance() check is only done after that.) (in 3.11) - in 3.12, convert warnings into TypeError. - Document that *name* attribute will be a Mock instance. - Document that *name* argument needs to be a string. - Document *parent* argument. - In the docs for the two args, point to `configure_mock()` method for setting them to arbitrary values. (Note that other args for Mock() have more specialized names and are much less likely to cause similar issues.) [1] https://bugs.python.org/issue39222 ---------- components: Tests messages: 401913 nosy: andrei.avk priority: normal severity: normal status: open title: Add docs for Mock name and parent args and deprecation warning when wrong args are passed type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 03:33:30 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Thu, 16 Sep 2021 07:33:30 +0000 Subject: [New-bugs-announce] [issue45216] Remove redundand information from difflib docstrings Message-ID: <1631777610.22.0.0340838668525.issue45216@roundup.psfhosted.org> New submission from Serhiy Storchaka : Difflib docstrings contain short descriptions of functions and methods defined in the module and classes. It is redundant because pydoc shows descriptions for every function and method just few lines below. 
For example: | Methods: | | __init__(linejunk=None, charjunk=None) | Construct a text differencer, with optional filters. | | compare(a, b) | Compare two sequences of lines; generate the resulting delta. | | Methods defined here: | | __init__(self, linejunk=None, charjunk=None) | Construct a text differencer, with optional filters. | | The two optional keyword parameters are for filter functions: | | - `linejunk`: A function that should accept a single string argument, | and return true iff the string is junk. The module-level function | `IS_LINE_JUNK` may be used to filter out lines without visible | characters, except for at most one splat ('#'). It is recommended | to leave linejunk None; the underlying SequenceMatcher class has | an adaptive notion of "noise" lines that's better than any static | definition the author has ever been able to craft. | | - `charjunk`: A function that should accept a string of length 1. The | module-level function `IS_CHARACTER_JUNK` may be used to filter out | whitespace characters (a blank or tab; **note**: bad idea to include | newline in this!). Use of IS_CHARACTER_JUNK is recommended. | | compare(self, a, b) | Compare two sequences of lines; generate the resulting delta. | | Each sequence must contain individual single-line strings ending with | newlines. Such sequences can be obtained from the `readlines()` method | of file-like objects. The delta generated also consists of newline- | terminated strings, ready to be printed as-is via the writeline() | method of a file-like object. | | Example: It leads to confusion because it looks like methods are described twice. Also the signature of a method in the class docstring can be outdated. For example the description of SequenceMatcher.__init__ was not updated for new parameter autojunk. ---------- assignee: docs at python components: Documentation messages: 401923 nosy: docs at python, serhiy.storchaka priority: normal severity: normal status: open title: Remove redundand information from difflib docstrings type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 04:51:18 2021 From: report at bugs.python.org (sbougnoux) Date: Thu, 16 Sep 2021 08:51:18 +0000 Subject: [New-bugs-announce] [issue45217] ConfigParser does not accept "No value" reversely to the doc Message-ID: <1631782278.7.0.0138696609954.issue45217@roundup.psfhosted.org> New submission from sbougnoux : Just the simple following config crashes """ [Bug] Here """ Hopefully using "Here=" solves the issue, but the doc claims it shall work. 
https://docs.python.org/3.8/library/configparser.html?highlight=configparser#supported-ini-file-structure Save the config in "bug.ini", then write (it will raise an exception) """ from configparser import ConfigParser ConfigParser().read('bug.ini') """ ---------- assignee: docs at python components: Demos and Tools, Documentation, Library (Lib) messages: 401932 nosy: docs at python, sbougnoux priority: normal severity: normal status: open title: ConfigParser does not accept "No value" reversely to the doc type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 05:09:00 2021 From: report at bugs.python.org (Mark Dickinson) Date: Thu, 16 Sep 2021 09:09:00 +0000 Subject: [New-bugs-announce] [issue45218] cmath.log has an invalid signature Message-ID: <1631783340.74.0.420089493101.issue45218@roundup.psfhosted.org> New submission from Mark Dickinson : inspect.signature reports that the cmath.log function has an invalid signature: Python 3.11.0a0 (heads/fix-44954:d0ea569eb5, Aug 19 2021, 14:59:04) [Clang 12.0.0 (clang-1200.0.32.29)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import cmath >>> import inspect >>> inspect.signature(cmath.log) Traceback (most recent call last): File "", line 1, in File "/Users/mdickinson/Python/cpython/Lib/inspect.py", line 3215, in signature return Signature.from_callable(obj, follow_wrapped=follow_wrapped, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mdickinson/Python/cpython/Lib/inspect.py", line 2963, in from_callable return _signature_from_callable(obj, sigcls=cls, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mdickinson/Python/cpython/Lib/inspect.py", line 2432, in _signature_from_callable return _signature_from_builtin(sigcls, obj, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mdickinson/Python/cpython/Lib/inspect.py", line 2244, in _signature_from_builtin return _signature_fromstr(cls, func, s, skip_bound_arg) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mdickinson/Python/cpython/Lib/inspect.py", line 2114, in _signature_fromstr raise ValueError("{!r} builtin has invalid signature".format(obj)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: builtin has invalid signature ---------- components: Extension Modules messages: 401933 nosy: mark.dickinson priority: normal severity: normal status: open title: cmath.log has an invalid signature versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 07:21:18 2021 From: report at bugs.python.org (Mark Shannon) Date: Thu, 16 Sep 2021 11:21:18 +0000 Subject: [New-bugs-announce] [issue45219] Expose indexing and other simple operations on dict-keys in internal API Message-ID: <1631791278.28.0.25675084804.issue45219@roundup.psfhosted.org> New submission from Mark Shannon : Specialization and other optimizations rely on shared dictionary key properties (version number, no deletions, etc). However checking those properties during specialization is tricky and rather clunky as the dict-keys can only be tested indirectly through a dictionary. We should add a few internal API functions. Specifically we want to know: Is a key in a dict-keys? What index is that key at? Is a dict-keys all unicode? 
---------- assignee: Mark.Shannon messages: 401936 nosy: Mark.Shannon priority: normal severity: normal status: open title: Expose indexing and other simple operations on dict-keys in internal API _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 10:45:16 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Thu, 16 Sep 2021 14:45:16 +0000 Subject: [New-bugs-announce] [issue45220] Windows builds sometimes fail on main branch Message-ID: <1631803516.52.0.769978805951.issue45220@roundup.psfhosted.org> New submission from Nikita Sobolev : I've started to notice that CPython's builds on Windows now simetimes fail with something like this: (both Azure and Github Actions are affected) ``` Using "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\msbuild.exe" (found in the Visual Studio installation) Using py -3.9 (found 3.9 with py.exe) D:\a\cpython\cpython>"C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\msbuild.exe" "D:\a\cpython\cpython\PCbuild\pcbuild.proj" /t:Build /m /nologo /v:m /clp:summary /p:Configuration=Release /p:Platform=Win32 /p:IncludeExternals=true /p:IncludeCTypes=true /p:IncludeSSL=true /p:IncludeTkinter=true /p:UseTestMarker= /p:GIT="C:\Program Files\Git\bin\git.exe" _freeze_module.c config_minimal.c atexitmodule.c faulthandler.c gcmodule.c getbuildinfo.c posixmodule.c signalmodule.c _tracemalloc.c _iomodule.c bufferedio.c bytesio.c fileio.c iobase.c stringio.c textio.c winconsoleio.c abstract.c accu.c boolobject.c Compiling... bytearrayobject.c bytes_methods.c bytesobject.c call.c capsule.c cellobject.c classobject.c codeobject.c complexobject.c descrobject.c dictobject.c enumobject.c exceptions.c fileobject.c floatobject.c frameobject.c funcobject.c genericaliasobject.c genobject.c interpreteridobject.c Compiling... iterobject.c listobject.c longobject.c memoryobject.c methodobject.c moduleobject.c namespaceobject.c object.c obmalloc.c odictobject.c picklebufobject.c rangeobject.c setobject.c sliceobject.c structseq.c tupleobject.c typeobject.c unicodectype.c unicodeobject.c unionobject.c Compiling... weakrefobject.c myreadline.c parser.c peg_api.c pegen.c string_parser.c token.c tokenizer.c getpathp.c invalid_parameter_handler.c msvcrtmodule.c winreg.c _warnings.c asdl.c ast.c ast_opt.c ast_unparse.c bltinmodule.c bootstrap_hash.c ceval.c D:\a\cpython\cpython\Python\ceval.c(3669,13): warning C4018: '>=': signed/unsigned mismatch [D:\a\cpython\cpython\PCbuild\_freeze_module.vcxproj] D:\a\cpython\cpython\Python\ceval.c(3777,13): warning C4018: '>=': signed/unsigned mismatch [D:\a\cpython\cpython\PCbuild\_freeze_module.vcxproj] Compiling... codecs.c compile.c context.c dtoa.c dynamic_annotations.c dynload_win.c errors.c fileutils.c formatter_unicode.c frame.c future.c getargs.c getcompiler.c getcopyright.c getopt.c getplatform.c getversion.c hamt.c hashtable.c import.c Compiling... importdl.c initconfig.c marshal.c modsupport.c mysnprintf.c mystrtoul.c pathconfig.c preconfig.c pyarena.c pyctype.c pyfpe.c pyhash.c pylifecycle.c pymath.c pystate.c pystrcmp.c pystrhex.c pystrtod.c Python-ast.c pythonrun.c Compiling... 
Python-tokenize.c pytime.c specialize.c structmember.c suggestions.c symtable.c sysmodule.c thread.c traceback.c Creating library D:\a\cpython\cpython\PCbuild\win32\_freeze_module.lib and object D:\a\cpython\cpython\PCbuild\win32\_freeze_module.exp Generating code Finished generating code _freeze_module.vcxproj -> D:\a\cpython\cpython\PCbuild\win32\_freeze_module.exe Updated files: __hello__.h Killing any running python.exe instances... Regenerate pycore_ast.h pycore_ast_state.h Python-ast.c D:\a\cpython\cpython\Python\Python-ast.c, D:\a\cpython\cpython\Include\internal\pycore_ast.h, D:\a\cpython\cpython\Include\internal\pycore_ast_state.h regenerated. Regenerate opcode.h opcode_targets.h Include\opcode.h regenerated from Lib\opcode.py Jump table written into Python\opcode_targets.h Regenerate token-list.inc token.h token.c token.py Generated sources are up to date Getting build info from "C:\Program Files\Git\bin\git.exe" Building heads/main-dirty:7dacb70 main _abc.c _bisectmodule.c blake2module.c blake2b_impl.c blake2s_impl.c _codecsmodule.c _collectionsmodule.c _contextvarsmodule.c _csv.c _functoolsmodule.c _heapqmodule.c _json.c _localemodule.c _lsprof.c _math.c _pickle.c _randommodule.c sha3module.c _sre.c _stat.c Compiling... _struct.c _weakref.c arraymodule.c atexitmodule.c audioop.c binascii.c cmathmodule.c _datetimemodule.c errnomodule.c faulthandler.c gcmodule.c itertoolsmodule.c main.c mathmodule.c md5module.c mmapmodule.c _opcode.c _operator.c posixmodule.c rotatingtree.c Compiling... sha1module.c sha256module.c sha512module.c signalmodule.c _statisticsmodule.c symtablemodule.c _threadmodule.c _tracemalloc.c _typingmodule.c timemodule.c xxsubtype.c _xxsubinterpretersmodule.c fileio.c bytesio.c stringio.c bufferedio.c iobase.c textio.c winconsoleio.c _iomodule.c Compiling... _codecs_cn.c _codecs_hk.c _codecs_iso2022.c _codecs_jp.c _codecs_kr.c _codecs_tw.c multibytecodec.c _winapi.c abstract.c accu.c boolobject.c bytearrayobject.c bytes_methods.c bytesobject.c call.c capsule.c cellobject.c classobject.c codeobject.c complexobject.c Compiling... descrobject.c dictobject.c enumobject.c exceptions.c fileobject.c floatobject.c frameobject.c funcobject.c genericaliasobject.c genobject.c interpreteridobject.c iterobject.c listobject.c longobject.c memoryobject.c methodobject.c moduleobject.c namespaceobject.c object.c obmalloc.c Compiling... odictobject.c picklebufobject.c rangeobject.c setobject.c sliceobject.c structseq.c tupleobject.c typeobject.c unicodectype.c unicodeobject.c unionobject.c weakrefobject.c myreadline.c tokenizer.c token.c pegen.c parser.c string_parser.c peg_api.c invalid_parameter_handler.c Compiling... winreg.c config.c getpathp.c msvcrtmodule.c pyhash.c _warnings.c asdl.c ast.c ast_opt.c ast_unparse.c bltinmodule.c bootstrap_hash.c ceval.c D:\a\cpython\cpython\Python\ceval.c(3669,13): warning C4018: '>=': signed/unsigned mismatch [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj] D:\a\cpython\cpython\Python\ceval.c(3777,13): warning C4018: '>=': signed/unsigned mismatch [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj] codecs.c compile.c context.c dynamic_annotations.c dynload_win.c errors.c fileutils.c Compiling... formatter_unicode.c frame.c frozen.c future.c getargs.c getcompiler.c getcopyright.c getopt.c getplatform.c getversion.c hamt.c hashtable.c import.c importdl.c initconfig.c marshal.c modsupport.c mysnprintf.c mystrtoul.c pathconfig.c Compiling... 
preconfig.c pyarena.c pyctype.c pyfpe.c pylifecycle.c pymath.c pytime.c pystate.c pystrcmp.c pystrhex.c pystrtod.c dtoa.c Python-ast.c Python-tokenize.c pythonrun.c specialize.c suggestions.c structmember.c symtable.c sysmodule.c Compiling... thread.c traceback.c zlibmodule.c adler32.c compress.c crc32.c deflate.c infback.c inffast.c inflate.c inftrees.c trees.c uncompr.c zutil.c dl_nt.c getbuildinfo.c C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\um\winnt.h(253): error RC2188: D:\a\cpython\cpython\PCbuild\obj\311win32_Release\pythoncore\RCa05056(47) : fatal error RC1116: RC terminating after preprocessor errors [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj] Build FAILED. D:\a\cpython\cpython\Python\ceval.c(3669,13): warning C4018: '>=': signed/unsigned mismatch [D:\a\cpython\cpython\PCbuild\_freeze_module.vcxproj] D:\a\cpython\cpython\Python\ceval.c(3777,13): warning C4018: '>=': signed/unsigned mismatch [D:\a\cpython\cpython\PCbuild\_freeze_module.vcxproj] D:\a\cpython\cpython\Python\ceval.c(3669,13): warning C4018: '>=': signed/unsigned mismatch [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj] D:\a\cpython\cpython\Python\ceval.c(3777,13): warning C4018: '>=': signed/unsigned mismatch [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj] C:\Program Files (x86)\Windows Kits\10\Include\10.0.22000.0\um\winnt.h(253): error RC2188: D:\a\cpython\cpython\PCbuild\obj\311win32_Release\pythoncore\RCa05056(47) : fatal error RC1116: RC terminating after preprocessor errors [D:\a\cpython\cpython\PCbuild\pythoncore.vcxproj] 4 Warning(s) 1 Error(s) ``` Links: - https://github.com/python/cpython/runs/3620202822 - https://dev.azure.com/Python/cpython/_build/results?buildId=87889&view=logs&j=91c152bd-7320-5194-b252-1404e56e2478&t=c7e99cd8-4756-5292-d34b-246ff5fc615f ---------- components: Build messages: 401948 nosy: sobolevn priority: normal severity: normal status: open title: Windows builds sometimes fail on main branch type: compile error versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 10:56:43 2021 From: report at bugs.python.org (ux) Date: Thu, 16 Sep 2021 14:56:43 +0000 Subject: [New-bugs-announce] [issue45221] Linker flags starting with -h breaks setup.py (regression) Message-ID: <1631804203.7.0.524724045399.issue45221@roundup.psfhosted.org> New submission from ux : Hi, Since 3.8 (included), the following build command fails: LDFLAGS=-headerpad_max_install_names ./configure make With the following error: setup.py: error: argument -h/--help: ignored explicit argument 'eaderpad_max_install_names' A quick hack in setup.py "fixes" the issue: - options, _ = parser.parse_known_args(env_val.split()) + options, _ = parser.parse_known_args([x for x in env_val.split() if not x.startswith('-h')]) Another workaround as a user is to do use `LDFLAGS=-Wl,-headerpad_max_install_names`. 
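For illustration, a minimal sketch of the suspected argparse behaviour behind this failure (an assumption about the mechanism, not code taken from setup.py):

```py
# Illustration only: argparse auto-registers -h/--help, so an unknown option
# beginning with "-h" such as "-headerpad_max_install_names" is parsed as "-h"
# plus the explicit argument "eaderpad_max_install_names", which the help
# action cannot accept.
import argparse

parser = argparse.ArgumentParser()
try:
    parser.parse_known_args(["-headerpad_max_install_names"])
except SystemExit:
    # argparse reports: argument -h/--help: ignored explicit argument
    pass
```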
---------- components: Build messages: 401951 nosy: ux priority: normal severity: normal status: open title: Linker flags starting with -h breaks setup.py (regression) type: compile error versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 11:11:41 2021 From: report at bugs.python.org (tongxiaoge) Date: Thu, 16 Sep 2021 15:11:41 +0000 Subject: [New-bugs-announce] [issue45222] test_with_pip fail Message-ID: <1631805101.42.0.465576816757.issue45222@roundup.psfhosted.org> New submission from tongxiaoge : Today, I tried to upgrade Python 3 to version 3.8.12, the test case test_with_pip failed. The error message is as follows: [ 356s] test_with_pip (test.test_venv.EnsurePipTest) ... FAIL [ 356s] [ 356s] ====================================================================== [ 356s] FAIL: test_with_pip (test.test_venv.EnsurePipTest) [ 356s] ---------------------------------------------------------------------- [ 356s] Traceback (most recent call last): [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/test/test_venv.py", line 444, in do_test_with_pip [ 356s] self.run_with_capture(venv.create, self.env_dir, [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/test/test_venv.py", line 77, in run_with_capture [ 356s] func(*args, **kwargs) [ 356s] subprocess.CalledProcessError: Command '['/tmp/tmpm13zz0cn/bin/python', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. [ 356s] [ 356s] During handling of the above exception, another exception occurred: [ 356s] [ 356s] Traceback (most recent call last): [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/test/test_venv.py", line 504, in test_with_pip [ 356s] self.do_test_with_pip(False) [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/test/test_venv.py", line 452, in do_test_with_pip [ 356s] self.fail(msg.format(exc, details)) [ 356s] AssertionError: Command '['/tmp/tmpm13zz0cn/bin/python', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 1. 
[ 356s] [ 356s] **Subprocess Output** [ 356s] Looking in links: /tmp/tmp7usqy615 [ 356s] Processing /tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl [ 356s] Processing /tmp/tmp7usqy615/setuptools-54.2.0-py3-none-any.whl [ 356s] Installing collected packages: setuptools, pip [ 356s] ERROR: Exception: [ 356s] Traceback (most recent call last): [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/packaging/version.py", line 57, in parse [ 356s] return Version(version) [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/packaging/version.py", line 298, in __init__ [ 356s] raise InvalidVersion("Invalid version: '{0}'".format(version)) [ 356s] pip._vendor.packaging.version.InvalidVersion: Invalid version: 'setuptools' [ 356s] [ 356s] During handling of the above exception, another exception occurred: [ 356s] [ 356s] Traceback (most recent call last): [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_internal/cli/base_command.py", line 224, in _main [ 356s] status = self.run(options, args) [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_internal/cli/req_command.py", line 180, in wrapper [ 356s] return func(self, options, args) [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_internal/commands/install.py", line 439, in run [ 356s] working_set = pkg_resources.WorkingSet(lib_locations) [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/pkg_resources/__init__.py", line 567, in __init__ [ 356s] self.add_entry(entry) [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/pkg_resources/__init__.py", line 623, in add_entry [ 356s] for dist in find_distributions(entry, True): [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/pkg_resources/__init__.py", line 2061, in find_on_path [ 356s] path_item_entries = _by_version_descending(filtered) [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/pkg_resources/__init__.py", line 2034, in _by_version_descending [ 356s] return sorted(names, key=_by_version, reverse=True) [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/pkg_resources/__init__.py", line 2032, in _by_version [ 356s] return [packaging.version.parse(part) for part in parts] [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/pkg_resources/__init__.py", line 2032, in [ 356s] return [packaging.version.parse(part) for part in parts] [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/packaging/version.py", line 59, in parse [ 356s] return LegacyVersion(version) [ 356s] File "/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl/pip/_vendor/packaging/version.py", line 127, in __init__ [ 356s] warnings.warn( [ 356s] DeprecationWarning: Creating a LegacyVersion has been deprecated and will be removed in the next major release [ 356s] Traceback (most recent call last): [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/runpy.py", line 194, in _run_module_as_main [ 356s] return _run_code(code, main_globals, None, [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/runpy.py", line 87, in _run_code [ 356s] exec(code, run_globals) [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/ensurepip/__main__.py", line 5, in [ 356s] sys.exit(ensurepip._main()) [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/ensurepip/__init__.py", line 219, in _main [ 356s] return _bootstrap( [ 356s] File 
"/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/ensurepip/__init__.py", line 138, in _bootstrap [ 356s] return _run_pip(args + [p[0] for p in _PROJECTS], additional_paths) [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/ensurepip/__init__.py", line 50, in _run_pip [ 356s] return subprocess.run([sys.executable, "-c", code], check=True).returncode [ 356s] File "/home/abuild/rpmbuild/BUILD/Python-3.8.12/Lib/subprocess.py", line 516, in run [ 356s] raise CalledProcessError(retcode, process.args, [ 356s] subprocess.CalledProcessError: Command '['/tmp/tmpm13zz0cn/bin/python', '-c', '\nimport runpy\nimport sys\nsys.path = [\'/tmp/tmp7usqy615/setuptools-54.2.0-py3-none-any.whl\', \'/tmp/tmp7usqy615/pip-20.3.3-py2.py3-none-any.whl\'] + sys.path\nsys.argv[1:] = [\'install\', \'--no-cache-dir\', \'--no-index\', \'--find-links\', \'/tmp/tmp7usqy615\', \'--upgrade\', \'setuptools\', \'pip\']\nrunpy.run_module("pip", run_name="__main__", alter_sys=True)\n']' returned non-zero exit status 2. [ 356s] [ 356s] [ 356s] ---------------------------------------------------------------------- [ 356s] [ 356s] Ran 18 tests in 7.458s The current version of Python I use is Python 3.8.5. There is no such problem.What is the reason? ---------- messages: 401955 nosy: christian.heimes, sxt1001, thatiparthy, vstinner priority: normal severity: normal status: open title: test_with_pip fail versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 11:38:35 2021 From: report at bugs.python.org (Alexander Kanavin) Date: Thu, 16 Sep 2021 15:38:35 +0000 Subject: [New-bugs-announce] [issue45223] test_spawn_doesnt_hang (test.test_pty.PtyTest) fails when stdin isn't readable Message-ID: <1631806715.36.0.768648922349.issue45223@roundup.psfhosted.org> New submission from Alexander Kanavin : I am observing the following under yocto's test harness: ====================================================================== ERROR: test_spawn_doesnt_hang (test.test_pty.PtyTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3.10/test/test_pty.py", line 316, in test_spawn_doesnt_hang pty.spawn([sys.executable, '-c', 'print("hi there")']) File "/usr/lib/python3.10/pty.py", line 181, in spawn _copy(master_fd, master_read, stdin_read) File "/usr/lib/python3.10/pty.py", line 157, in _copy data = stdin_read(STDIN_FILENO) File "/usr/lib/python3.10/pty.py", line 132, in _read return os.read(fd, 1024) OSError: [Errno 5] Input/output error The same tests runs fine in a regular console. ---------- messages: 401961 nosy: Alexander Kanavin priority: normal severity: normal status: open title: test_spawn_doesnt_hang (test.test_pty.PtyTest) fails when stdin isn't readable _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 12:26:49 2021 From: report at bugs.python.org (Khalid Mammadov) Date: Thu, 16 Sep 2021 16:26:49 +0000 Subject: [New-bugs-announce] [issue45224] Argparse shows required arguments as optional Message-ID: <1631809609.68.0.875196244045.issue45224@roundup.psfhosted.org> New submission from Khalid Mammadov : Currently argparse module shows all optional arguments under "optional arguments" section of the help. It also includes those flags/arguments that are required as well. 
This add confusion to a user and does not properly show intention ---------- components: Library (Lib) messages: 401969 nosy: khalidmammadov priority: normal severity: normal status: open title: Argparse shows required arguments as optional type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 14:54:54 2021 From: report at bugs.python.org (speedrun-program) Date: Thu, 16 Sep 2021 18:54:54 +0000 Subject: [New-bugs-announce] [issue45225] use map function instead of genexpr in capwords Message-ID: <1631818494.19.0.506604137018.issue45225@roundup.psfhosted.org> New submission from speedrun-program : In string.py, the capwords function passes str.join a generator expression, but the map function could be used instead. This is how capwords is currently written: -------------------- ```py def capwords(s, sep=None): """capwords(s [,sep]) -> string Split the argument into words using split, capitalize each word using capitalize, and join the capitalized words using join. If the optional second argument sep is absent or None, runs of whitespace characters are replaced by a single space and leading and trailing whitespace are removed, otherwise sep is used to split and join the words. """ return (sep or ' ').join(x.capitalize() for x in s.split(sep)) ``` -------------------- This is how capwords could be written: -------------------- ```py def capwords(s, sep=None): """capwords(s [,sep]) -> string Split the argument into words using split, capitalize each word using capitalize, and join the capitalized words using join. If the optional second argument sep is absent or None, runs of whitespace characters are replaced by a single space and leading and trailing whitespace are removed, otherwise sep is used to split and join the words. """ return (sep or ' ').join(map(str.capitalize, s.split(sep))) ``` -------------------- These are the benefits: 1. Faster performance which increases with the number of times the str is split. 2. Very slightly smaller .py and .pyc file sizes. 3. Source code is slightly more concise. This is the performance test code in ipython: -------------------- ```py def capwords_current(s, sep=None): return (sep or ' ').join(x.capitalize() for x in s.split(sep)) ? def capwords_new(s, sep=None): return (sep or ' ').join(map(str.capitalize, s.split(sep))) ? tests = ["a " * 10**n for n in range(9)] tests.append("a " * (10**9 // 2)) # I only have 16GB of RAM ``` -------------------- These are the results of a performance test using %timeit in ipython: -------------------- %timeit x = capwords_current("") 835 ns ? 15.2 ns per loop (mean ? std. dev. of 7 runs, 1000000 loops each) %timeit x = capwords_new("") 758 ns ? 35.1 ns per loop (mean ? std. dev. of 7 runs, 1000000 loops each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[0]) 977 ns ? 16.9 ns per loop (mean ? std. dev. of 7 runs, 1000000 loops each) %timeit x = capwords_new(tests[0]) 822 ns ? 30 ns per loop (mean ? std. dev. of 7 runs, 1000000 loops each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[1]) 3.07 ?s ? 88.8 ns per loop (mean ? std. dev. of 7 runs, 100000 loops each) %timeit x = capwords_new(tests[1]) 2.17 ?s ? 194 ns per loop (mean ? std. dev. of 7 runs, 100000 loops each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[2]) 28 ?s ? 896 ns per loop (mean ? std. 
dev. of 7 runs, 10000 loops each) %timeit x = capwords_new(tests[2]) 19.4 ?s ? 352 ns per loop (mean ? std. dev. of 7 runs, 100000 loops each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[3]) 236 ?s ? 14.5 ?s per loop (mean ? std. dev. of 7 runs, 1000 loops each) %timeit x = capwords_new(tests[3]) 153 ?s ? 2 ?s per loop (mean ? std. dev. of 7 runs, 10000 loops each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[4]) 2.12 ms ? 106 ?s per loop (mean ? std. dev. of 7 runs, 100 loops each) %timeit x = capwords_new(tests[4]) 1.5 ms ? 9.61 ?s per loop (mean ? std. dev. of 7 runs, 1000 loops each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[5]) 23.8 ms ? 1.38 ms per loop (mean ? std. dev. of 7 runs, 10 loops each) %timeit x = capwords_new(tests[5]) 15.6 ms ? 355 ?s per loop (mean ? std. dev. of 7 runs, 100 loops each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[6]) 271 ms ? 10.6 ms per loop (mean ? std. dev. of 7 runs, 1 loop each) %timeit x = capwords_new(tests[6]) 192 ms ? 807 ?s per loop (mean ? std. dev. of 7 runs, 10 loops each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[7]) 2.66 s ? 14.3 ms per loop (mean ? std. dev. of 7 runs, 1 loop each) %timeit x = capwords_new(tests[7]) 1.95 s ? 26.7 ms per loop (mean ? std. dev. of 7 runs, 1 loop each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[8]) 25.9 s ? 80.2 ms per loop (mean ? std. dev. of 7 runs, 1 loop each) %timeit x = capwords_new(tests[8]) 18.4 s ? 123 ms per loop (mean ? std. dev. of 7 runs, 1 loop each) - - - - - - - - - - - - - - - - - - - - %timeit x = capwords_current(tests[9]) 6min 17s ? 29 s per loop (mean ? std. dev. of 7 runs, 1 loop each) %timeit x = capwords_new(tests[9]) 5min 36s ? 24.8 s per loop (mean ? std. dev. 
of 7 runs, 1 loop each) -------------------- ---------- components: Library (Lib) messages: 401981 nosy: speedrun-program priority: normal pull_requests: 26808 severity: normal status: open title: use map function instead of genexpr in capwords type: performance _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 16 21:06:02 2021 From: report at bugs.python.org (Maor Feldinger) Date: Fri, 17 Sep 2021 01:06:02 +0000 Subject: [New-bugs-announce] [issue45226] Misleading error message when pathlib.Path.symlink_to() fails with permissions error Message-ID: <1631840762.01.0.206152391874.issue45226@roundup.psfhosted.org> New submission from Maor Feldinger : Reproduction Steps: ------------------- Create a symlink in a directory without permissions: >>> from pathlib import Path >>> >>> target = Path('/tmp/tmp/target') >>> src = Path('/tmp/tmp/src') >>> src.symlink_to(target) Actual: ------- Permission error shows reversed order: Traceback (most recent call last): File "", line 1, in File "/opt/rh/rh-python38/root/usr/lib64/python3.8/pathlib.py", line 1382, in symlink_to if self._closed: File "/opt/rh/rh-python38/root/usr/lib64/python3.8/pathlib.py", line 445, in symlink def symlink(a, b, target_is_directory): PermissionError: [Errno 13] Permission denied: '/tmp/tmp/target' -> '/tmp/tmp/src' Expected: --------- Same as os.symlink the permission error should show the right symlink order: >>> import os >>> >>> os.symlink(str(src), str(target)) Traceback (most recent call last): File "", line 1, in PermissionError: [Errno 13] Permission denied: '/tmp/tmp/src' -> '/tmp/tmp/target' ---------- messages: 401996 nosy: dfntlymaybe priority: normal severity: normal status: open title: Misleading error message when pathlib.Path.symlink_to() fails with permissions error type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 01:57:31 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Sep 2021 05:57:31 +0000 Subject: [New-bugs-announce] [issue45227] Control reaches end of non-void function in specialize.c Message-ID: <1631858251.9.0.87496376949.issue45227@roundup.psfhosted.org> New submission from Serhiy Storchaka : Python/specialize.c: In function ?load_method_fail_kind?: Python/specialize.c:878:1: warning: control reaches end of non-void function [-Wreturn-type] 878 | } | ^ ---------- components: Interpreter Core messages: 402001 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Control reaches end of non-void function in specialize.c type: compile error versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 02:56:42 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Sep 2021 06:56:42 +0000 Subject: [New-bugs-announce] [issue45228] Stack buffer overflow in parsing J1939 network address Message-ID: <1631861802.59.0.0581666676393.issue45228@roundup.psfhosted.org> New submission from Serhiy Storchaka : It can be reproduced when run test.test_socket.J1939Test (omitted in regrtests now, see issue45187) with Address Sanitizer. See for example https://github.com/python/cpython/pull/28317/checks?check_run_id=3625390397. 
It can be reproduced when run test.test_socket.J1939Test with unittest: $ ./python -m unittest -v test.test_socket -k J1939Test See J1939Test.log for output. The cause is using PyArg_ParseTuple() with format unit "k" (unsigned long) and variable of type uint32_t. PyArg_ParseTuple() should only be used with native integer types (short, int, long, long long), it does not support types of fixed size (uint16_t, uint32_t, uint64_t). ---------- components: Extension Modules files: J1939Test.log messages: 402003 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Stack buffer overflow in parsing J1939 network address type: crash versions: Python 3.11 Added file: https://bugs.python.org/file50283/J1939Test.log _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 03:31:12 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 17 Sep 2021 07:31:12 +0000 Subject: [New-bugs-announce] [issue45229] Always use unittest for collecting tests in regrtests Message-ID: <1631863872.14.0.108756439349.issue45229@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently regrtest supports two ways of collecting and running tests in a module. 1. If the module contains the "test_main" function, regrtest just calls it. This function usually calls run_unittest() with a list of test classes, composed manually; it can also run doctests with run_doctest(), generate the list of test classes dynamically, and run some code before and after running tests. 2. Otherwise regrtest uses unittest for collecting tests. The disadvantage of the former way is that new test classes should be added manually to the list. If you forget to do this, new tests will not be run. See for example issue45185 and issue45187. Tests that are not run can hide bugs. So it would be better to eliminate the human factor and detect tests automatically. ---------- assignee: serhiy.storchaka components: Tests messages: 402006 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Always use unittest for collecting tests in regrtests type: enhancement versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 03:45:12 2021 From: report at bugs.python.org (Wang Bingchao) Date: Fri, 17 Sep 2021 07:45:12 +0000 Subject: [New-bugs-announce] [issue45230] Something wrong when Calculate "3==3 is not True" Message-ID: <1631864712.39.0.927723834147.issue45230@roundup.psfhosted.org> New submission from Wang Bingchao <819576257 at qq.com>: I use python3.7, python3.6 and python2.7, and run the following code: print(3==3 is not True) print(3==3 is True) print(3==2 is not True) print(3==2 is True) I got the same results as follows: True False False False but I don't think this is a reasonable result; it may be a bug?
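For context, the reported output matches Python's comparison chaining rules; a small illustration (not part of the original report):

```py
# "a == b is not True" is a chained comparison: it evaluates roughly as
# "(a == b) and (b is not True)", not as "(a == b) is not True".
a = b = 3
print(a == b is not True)    # True, same as the report's 3==3 is not True
print((a == b) is not True)  # False, the grouping the reporter expected
```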
---------- components: Subinterpreters messages: 402010 nosy: ET priority: normal severity: normal status: open title: Something wrong when Calculate "3==3 is not True" type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 05:22:43 2021 From: report at bugs.python.org (STINNER Victor) Date: Fri, 17 Sep 2021 09:22:43 +0000 Subject: [New-bugs-announce] [issue45231] make regen-all changes files on Linux Message-ID: <1631870563.98.0.72816558791.issue45231@roundup.psfhosted.org> New submission from STINNER Victor : "make regen-frozen" changes the end-of-line characters of PCbuild/_freeze_module.vcxproj and PCbuild/_freeze_module.vcxproj.filters on Linux. I'm working on a fix. When Python is built out of the source tree, "make regen-pegen" changes Parser/parser.c and Tools/peg_generator/pegen/grammar_parser.py: diff --git a/Parser/parser.c b/Parser/parser.c index 3cea370c5a..caf864d86c 100644 --- a/Parser/parser.c +++ b/Parser/parser.c @@ -1,4 +1,4 @@ -// @generated by pegen from ./Grammar/python.gram +// @generated by pegen from ../Grammar/python.gram #include "pegen.h" #if defined(Py_DEBUG) && defined(Py_BUILD_CORE) diff --git a/Tools/peg_generator/pegen/grammar_parser.py b/Tools/peg_generator/pegen/grammar_parser.py index 6e9f7d3d11..6e9885cef7 100644 --- a/Tools/peg_generator/pegen/grammar_parser.py +++ b/Tools/peg_generator/pegen/grammar_parser.py @@ -1,5 +1,5 @@ #!/usr/bin/env python3.8 -# @generated by pegen from ./Tools/peg_generator/pegen/metagrammar.gram +# @generated by pegen from ../Tools/peg_generator/pegen/metagrammar.gram import ast import sys ---------- components: Build messages: 402019 nosy: vstinner priority: normal severity: normal status: open title: make regen-all changes files on Linux versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 08:30:29 2021 From: report at bugs.python.org (Olivier Delhomme) Date: Fri, 17 Sep 2021 12:30:29 +0000 Subject: [New-bugs-announce] [issue45232] ascii codec is used by default when LANG is not set Message-ID: <1631881829.49.0.378694444331.issue45232@roundup.psfhosted.org> New submission from Olivier Delhomme : $ python3 --version Python 3.6.4 Setting LANG to en_US.UTF8 works like a charm: $ export LANG=en_US.UTF8 $ python3 Python 3.6.4 (default, Jan 11 2018, 16:45:55) [GCC 4.8.5] on linux Type "help", "copyright", "credits" or "license" for more information. >>> machaine='????help me if you can' >>> print('{}'.format(machaine)) ????help me if you can Unsetting the LANG shell variable fails the program: $ unset LANG $ python3 Python 3.6.4 (default, Jan 11 2018, 16:45:55) [GCC 4.8.5] on linux Type "help", "copyright", "credits" or "license" for more information. >>> machaine='????help me if you can' File "<stdin>", line 0 ^ SyntaxError: 'ascii' codec can't decode byte 0xc3 in position 10: ordinal not in range(128) Setting LANG inside the program does not change this behavior: $ unset LANG $ python3 Python 3.6.4 (default, Jan 11 2018, 16:45:55) [GCC 4.8.5] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.environ['LANG'] = 'en_US.UTF8' >>> machaine='????help me if you can' File "<stdin>", line 0 ^ SyntaxError: 'ascii' codec can't decode byte 0xc3 in position 10: ordinal not in range(128) Is this an expected behavior? How can I force a UTF-8 codec?
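A probable explanation, offered as an assumption rather than a confirmed diagnosis: the interpreter picks its encodings from the locale at startup, so changing os.environ afterwards is too late. A small sketch of what the interpreter sees:

```py
# Sketch only: with LANG/LC_ALL unset on a glibc system the C locale is used,
# and on 3.6 the preferred encoding falls back to ASCII ('ANSI_X3.4-1968').
# Python 3.7+ adds C locale coercion and the -X utf8 mode (PEP 538 / PEP 540).
import locale
import sys

print(locale.getpreferredencoding())  # e.g. 'ANSI_X3.4-1968' under the C locale
print(sys.stdin.encoding)             # encoding used for interactive input

# The locale (e.g. LC_ALL=en_US.UTF-8) or PYTHONIOENCODING must be exported
# before python3 starts; setting os.environ['LANG'] here has no effect.
```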
---------- components: Interpreter Core messages: 402046 nosy: od-cea priority: normal severity: normal status: open title: ascii codec is used by default when LANG is not set type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 08:47:38 2021 From: report at bugs.python.org (Mark Shannon) Date: Fri, 17 Sep 2021 12:47:38 +0000 Subject: [New-bugs-announce] [issue45233] Allow split key dictionaries with values owned by other objects. Message-ID: <1631882858.41.0.966804501884.issue45233@roundup.psfhosted.org> New submission from Mark Shannon : Currently, if a dictionary is split, then the dictionary owns the memory for the values. Unless the values is the unique empty-values array. In order to support lazily created dictionaries for objects (see https://github.com/faster-cpython/ideas/issues/72#issuecomment-886796360), we need to allow shared keys dicts that do not own the memory of the values. I propose the following changes to the internals of dict objects. Add 4 flag bits (these can go in the low bits of the version tag) 2 bit for ownership of values, the other 2 bits for the stride of the values (1 or 3). All dictionaries would then have a non-NULL values pointer. The value of index `ix` would be always located at `dict->ma_values[ix*stride]` The two ownership bits indicate whether the dictionary owns the references and whether it owns the memory. When a dictionary is freed, the items in the values array would be decref'd if the references are owned. The values array would be freed if the memory is owned. I don't think it is possible to own the memory, but not the references. Examples: A combined dict. Stride = 3, owns_refs = 1, owns_mem = 0. A split keys dict. Stride = 1, owns_refs = 1, owns_mem = 1. Empty dict (split). Stride = 1, owns_refs = 0, owns_mem = 0. Dictionary with values embedded in object (https://github.com/faster-cpython/ideas/issues/72#issuecomment-886796360, second diagram). Stride = 1, owns_refs = 0, owns_mem = 0. ---------- assignee: Mark.Shannon components: Interpreter Core messages: 402047 nosy: Mark.Shannon, methane priority: normal severity: normal status: open title: Allow split key dictionaries with values owned by other objects. _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 10:04:17 2021 From: report at bugs.python.org (Alex Grund) Date: Fri, 17 Sep 2021 14:04:17 +0000 Subject: [New-bugs-announce] [issue45234] copy_file raises FileNotFoundError when src is a directory Message-ID: <1631887457.05.0.0932118278478.issue45234@roundup.psfhosted.org> New submission from Alex Grund : After https://bugs.python.org/issue43219 was resolved the function now shows faulty behavior when the source is a directory: `copy_file('/path/to/dir', '/target')` throws a FileNotFoundError while previously it was a IsADirectoryError which is clearly correct. 
See https://github.com/python/cpython/pull/27049#issuecomment-921647431 ---------- components: Library (Lib) messages: 402057 nosy: Alex Grund priority: normal severity: normal status: open title: copy_file raises FileNotFoundError when src is a directory versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 10:58:54 2021 From: report at bugs.python.org (Adam Schwalm) Date: Fri, 17 Sep 2021 14:58:54 +0000 Subject: [New-bugs-announce] [issue45235] argparse does not preserve namespace with subparser defaults Message-ID: <1631890734.12.0.701463144819.issue45235@roundup.psfhosted.org> New submission from Adam Schwalm : The following snippet demonstrates the problem. If a subparser flag has a default set, argparse will override the existing value in the provided 'namespace' if the flag does not appear (e.g., if the default is used): import argparse parser = argparse.ArgumentParser() sub = parser.add_subparsers() example_subparser = sub.add_parser("example") example_subparser.add_argument("--flag", default=10) print(parser.parse_args(["example"], argparse.Namespace(flag=20))) This should return 'Namespace(flag=20)' because 'flag' already exists in the namespace, but instead it returns 'Namespace(flag=10)'. This intended behavior is described and demonstrated in the second example here: https://docs.python.org/3/library/argparse.html#default Lib's behavior is correct for the non-subparser cause. ---------- components: Library (Lib) messages: 402060 nosy: ALSchwalm priority: normal severity: normal status: open title: argparse does not preserve namespace with subparser defaults type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 16:36:52 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Fri, 17 Sep 2021 20:36:52 +0000 Subject: [New-bugs-announce] [issue45236] pyperformance fails to build master Message-ID: <1631911012.65.0.752307140026.issue45236@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : In the main branch, pyperformance fails to build due to something regarding the freeze module: 2021-09-17 00:03:46,170: /home/pablogsal/cpython_cron/Programs/_freeze_module importlib._bootstrap /home/pablogsal/cpython_cron/Lib/importlib/_bootstrap.py /home/pablogsal/cpython_cron/Python/frozen_modules/importlib__bootstrap.h 2021-09-17 00:03:46,170: make[3]: /home/pablogsal/cpython_cron/Programs/_freeze_module: Command not found 2021-09-17 00:03:46,170: Makefile:766: recipe for target 'Python/frozen_modules/importlib__bootstrap.h' failed 2021-09-17 00:03:46,171: make[3]: *** [Python/frozen_modules/importlib__bootstrap.h] Error 127 2021-09-17 00:03:46,171: make[3]: Leaving directory '/home/pablogsal/bench_tmpdir_cron/build' 2021-09-17 00:03:46,171: Makefile:535: recipe for target 'build_all_generate_profile' failed 2021-09-17 00:03:46,172: make[2]: *** [build_all_generate_profile] Error 2 2021-09-17 00:03:46,172: make[2]: Leaving directory '/home/pablogsal/bench_tmpdir_cron/build' 2021-09-17 00:03:46,172: Makefile:509: recipe for target 'profile-gen-stamp' failed 2021-09-17 00:03:46,172: make[1]: *** [profile-gen-stamp] Error 2 2021-09-17 00:03:46,172: make[1]: Leaving directory '/home/pablogsal/bench_tmpdir_cron/build' 2021-09-17 00:03:46,172: Makefile:520: recipe for 
target 'profile-run-stamp' failed 2021-09-17 00:03:46,172: make: *** [profile-run-stamp] Error 2 2021-09-17 00:03:46,173: Command make profile-opt failed with exit code 2 2021-09-17 00:03:46,188: Command /home/pablogsal/venv_cron/bin/python -m pyperformance compile /home/pablogsal/simple.conf fdc6b3d9316501d2f0068a1bf4334debc1949e62 master failed with exit code 11 2021-09-17 00:03:46,188: Benchmark exit code: 11 2021-09-17 00:03:46,188: FAILED: master-fdc6b3d9316501d2f0068a1bf4334debc1949e62 The performance server has been failing from some time due to this problem :( ---------- messages: 402089 nosy: eric.snow, pablogsal, vstinner priority: normal severity: normal status: open title: pyperformance fails to build master _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 17 17:11:43 2021 From: report at bugs.python.org (wolfgang kuehn) Date: Fri, 17 Sep 2021 21:11:43 +0000 Subject: [New-bugs-announce] [issue45237] Python subprocess not honoring append mode for stdout on Windows Message-ID: <1631913103.36.0.277836027655.issue45237@roundup.psfhosted.org> New submission from wolfgang kuehn : On Windows, if you pass an existing file object in append mode to a subprocess, the subprocess does **not** really append to the file: 1. A file object with `Hello World` content is passed to the subprocess 2. The content is erased 3. The subprocess writes to the file 4. The expected output does not contain `Hello World` Demo: import subprocess, time, pathlib, sys print(f'Caller {sys.platform=} {sys.version=}') pathlib.Path('sub.py').write_text("""import sys, time time.sleep(1) print(f'Callee {sys.stdout.buffer.mode=}')""") file = pathlib.Path('dummy.txt') file.write_text('Hello World') popen = subprocess.Popen([sys.executable, 'sub.py'], stdout=file.open(mode='a')) file.write_text('') time.sleep(2) print(file.read_text()) Expected output on Linux Caller sys.platform='linux' sys.version='3.8.6' Callee sys.stdout.buffer.mode='wb' Unexpected bad output on Windows Caller sys.platform='win32' sys.version='3.8.6' NULNULNULNULNULNULNULNULNULNULNULCallee sys.stdout.buffer.mode='wb' Note that the expected output is given on Windows if the file is opened in the subprocess via `sys.stdout = open('dummy.txt', 'a')`. So it is definitely a subprocess thing. ---------- components: IO messages: 402096 nosy: wolfgang-kuehn priority: normal severity: normal status: open title: Python subprocess not honoring append mode for stdout on Windows type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 18 09:54:37 2021 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 18 Sep 2021 13:54:37 +0000 Subject: [New-bugs-announce] [issue45238] Fix debug() in IsolatedAsyncioTestCase Message-ID: <1631973277.87.0.115937136988.issue45238@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently debug() does not work in IsolatedAsyncioTestCase subclasses. It runs synchronous code (setUp(), tearDown(), synchronous tests and cleanup functions), but does not run any asynchronous code (asyncSetUp(), asyncTearDown(), asynchronous tests and cleanup functions) and produces warnings "RuntimeWarning: coroutine 'xxx' was never awaited". 
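A minimal reproduction sketch of the described behaviour (assumed, not attached to the report):

```py
# Hypothetical reproduction: per the report, debug() runs the synchronous
# machinery but never awaits the async parts, so asyncSetUp() and the async
# test body are left as un-awaited coroutines.
import unittest


class Example(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        self.ready = True

    async def test_ready(self):
        self.assertTrue(self.ready)


if __name__ == "__main__":
    Example("test_ready").debug()  # expect "RuntimeWarning: coroutine ... was never awaited"
```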
---------- components: Library (Lib), asyncio messages: 402132 nosy: asvetlov, ezio.melotti, michael.foord, rbcollins, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Fix debug() in IsolatedAsyncioTestCase type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 18 18:35:53 2021 From: report at bugs.python.org (Ben Hoyt) Date: Sat, 18 Sep 2021 22:35:53 +0000 Subject: [New-bugs-announce] [issue45239] email.utils.parsedate_tz raises UnboundLocalError if time has more than 2 dots in it Message-ID: <1632004553.3.0.0473045390855.issue45239@roundup.psfhosted.org> New submission from Ben Hoyt : In going through some standard library code, I found that the email.utils.parsedate_tz() function (https://docs.python.org/3/library/email.utils.html#email.utils.parsedate_tz) has a bug if the time value is in dotted format and has more than 2 dots in it, for example: "12.34.56.78". Per the docs, it should return None in the case of invalid input. This is happening because in the case that handles the '.' separator (instead of the normal ':'), there's no else clause to return None when there's not 2 or 3 segments. >From https://github.com/python/cpython/blob/dea59cf88adf5d20812edda330e085a4695baba4/Lib/email/_parseaddr.py#L118-L132: if len(tm) == 2: [thh, tmm] = tm tss = '0' elif len(tm) == 3: [thh, tmm, tss] = tm elif len(tm) == 1 and '.' in tm[0]: # Some non-compliant MUAs use '.' to separate time elements. tm = tm[0].split('.') if len(tm) == 2: [thh, tmm] = tm tss = 0 elif len(tm) == 3: [thh, tmm, tss] = tm # HERE: need "else: return None" else: return None We simply need to include that additional "else: return None" block in the '.' handling case. I'll submit a pull request that fixes this soon (and adds a test for this case). ---------- components: Library (Lib) messages: 402140 nosy: barry, benhoyt priority: normal severity: normal status: open title: email.utils.parsedate_tz raises UnboundLocalError if time has more than 2 dots in it versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 18 20:58:51 2021 From: report at bugs.python.org (Andrei Kulakov) Date: Sun, 19 Sep 2021 00:58:51 +0000 Subject: [New-bugs-announce] [issue45240] Add +REPORT_NDIFF option to pdb tests that use doctest Message-ID: <1632013131.74.0.0677212065076.issue45240@roundup.psfhosted.org> New submission from Andrei Kulakov : It would be useful to have +REPORT_NDIFF option applied to pdb doctests because it makes it much more convenient to add / modify pdb tests, to fix pdb bugs, add pdb features. I've worked on two pdb issues in the last few months and I wasn't aware of +REPORT_NDIFF, and so I was just looking at the output of failing tests (which are very long for some pdb tests), and comparing them manually. This option, when enabled, automatically provides diff with in-line highlighted difference for a failed doctest. Otherwise two chunks of text are provided, 'EXPECTED' and 'GOT ...'. 
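For reference, a small standalone illustration of the flag being requested (assumed usage, not the actual test_pdb change):

```py
# doctest.REPORT_NDIFF replaces the separate "Expected:"/"Got:" blocks with a
# single difflib-style ndiff, marking differing lines with -/+.
import doctest

def sample():
    """
    >>> sorted(["pdb", "bdb", "cmd"])
    ['bdb', 'cmd', 'Pdb']
    """

# The deliberately wrong expected output above fails with an inline diff:
doctest.run_docstring_examples(sample, {}, optionflags=doctest.REPORT_NDIFF)
```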
---------- assignee: andrei.avk components: Tests messages: 402142 nosy: andrei.avk, kj priority: normal severity: normal status: open title: Add +REPORT_NDIFF option to pdb tests that use doctest type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 19 04:20:44 2021 From: report at bugs.python.org (Ian Henderson) Date: Sun, 19 Sep 2021 08:20:44 +0000 Subject: [New-bugs-announce] [issue45241] python REPL leaks local variables when an exception is thrown Message-ID: <1632039644.51.0.808669340461.issue45241@roundup.psfhosted.org> New submission from Ian Henderson : To reproduce, copy the following code: ---- import gc gc.collect() objs = gc.get_objects() for obj in objs: try: if isinstance(obj, X): print(obj) except NameError: class X: pass def f(): x = X() raise Exception() f() ---- then open a Python REPL and paste repeatedly at the prompt. Each time the code runs, another copy of the local variable x is leaked. This was originally discovered while using PyTorch -- tensors leaked this way tend to exhaust GPU memory pretty quickly. Version Info: Python 3.9.7 (default, Sep 3 2021, 04:31:11) [Clang 12.0.5 (clang-1205.0.22.9)] on darwin ---------- components: Interpreter Core messages: 402144 nosy: ianh2 priority: normal severity: normal status: open title: python REPL leaks local variables when an exception is thrown type: resource usage versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 19 15:20:19 2021 From: report at bugs.python.org (=?utf-8?q?L=C3=A9on_Planken?=) Date: Sun, 19 Sep 2021 19:20:19 +0000 Subject: [New-bugs-announce] [issue45242] test_pdb fails Message-ID: <1632079219.73.0.310736156405.issue45242@roundup.psfhosted.org> Change by L?on Planken : ---------- components: Tests nosy: oliphaunt priority: normal severity: normal status: open title: test_pdb fails versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 19 18:35:54 2021 From: report at bugs.python.org (Erlend E. Aasland) Date: Sun, 19 Sep 2021 22:35:54 +0000 Subject: [New-bugs-announce] [issue45243] [sqlite3] add support for changing connection limits Message-ID: <1632090954.41.0.8887678667.issue45243@roundup.psfhosted.org> New submission from Erlend E. Aasland : I propose to add wrappers for the SQLite sqlite3_limit() C API. Using this API, it is possible to query and set limits on a connection basis. This will make it easier (and faster) to test various corner cases in the test suite without relying on test.support.bigmemtest. Quoting from the SQLite sqlite3_limit() docs: Run-time limits are intended for use in applications that manage both their own internal database and also databases that are controlled by untrusted external sources. An example application might be a web browser that has its own databases for storing history and separate databases controlled by JavaScript applications downloaded off the Internet. The internal databases can be given the large, default limits. Databases managed by external sources can be given much smaller limits designed to prevent a denial of service attack. 
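Purely as a hypothetical sketch of the wrapper being proposed (the method and constant names below are assumptions for illustration, not an existing sqlite3 API):

```py
# None of these names exist yet; they only illustrate the kind of
# per-connection wrapper around sqlite3_limit() that the proposal describes.
import sqlite3

con = sqlite3.connect(":memory:")
# old = con.getlimit(sqlite3.SQLITE_LIMIT_SQL_LENGTH)       # query a limit
# con.setlimit(sqlite3.SQLITE_LIMIT_SQL_LENGTH, 1_000_000)  # tighten it for untrusted input
```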
See also: - https://sqlite.org/c3ref/limit.html - https://sqlite.org/c3ref/c_limit_attached.html - https://sqlite.org/limits.html Limit categories (C&P from SQLite docs) --------------------------------------- SQLITE_LIMIT_LENGTH The maximum size of any string or BLOB or table row, in bytes. SQLITE_LIMIT_SQL_LENGTH The maximum length of an SQL statement, in bytes. SQLITE_LIMIT_COLUMN The maximum number of columns in a table definition or in the result set of a SELECT or the maximum number of columns in an index or in an ORDER BY or GROUP BY clause. SQLITE_LIMIT_EXPR_DEPTH The maximum depth of the parse tree on any expression. SQLITE_LIMIT_COMPOUND_SELECT The maximum number of terms in a compound SELECT statement. SQLITE_LIMIT_VDBE_OP The maximum number of instructions in a virtual machine program used to implement an SQL statement. If sqlite3_prepare_v2() or the equivalent tries to allocate space for more than this many opcodes in a single prepared statement, an SQLITE_NOMEM error is returned. SQLITE_LIMIT_FUNCTION_ARG The maximum number of arguments on a function. SQLITE_LIMIT_ATTACHED The maximum number of attached databases. SQLITE_LIMIT_LIKE_PATTERN_LENGTH The maximum length of the pattern argument to the LIKE or GLOB operators. SQLITE_LIMIT_VARIABLE_NUMBER The maximum index number of any parameter in an SQL statement. SQLITE_LIMIT_TRIGGER_DEPTH The maximum depth of recursion for triggers. SQLITE_LIMIT_WORKER_THREADS The maximum number of auxiliary worker threads that a single prepared statement may start. ---------- assignee: erlendaasland components: Extension Modules messages: 402176 nosy: berker.peksag, erlendaasland, serhiy.storchaka priority: low severity: normal status: open title: [sqlite3] add support for changing connection limits type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 02:29:26 2021 From: report at bugs.python.org (Shreyans Jain) Date: Mon, 20 Sep 2021 06:29:26 +0000 Subject: [New-bugs-announce] [issue45244] pip not installed with fresh python3.8.10 installation Message-ID: <1632119366.31.0.964394896374.issue45244@roundup.psfhosted.org> New submission from Shreyans Jain : I have installed Python 3.8.10 on windows server 2019. When I try to run "pip list" or any other pip command, it throws error of "pip not found. unrecognized command". I checked Scripts directory where usually pip resides in the installation, and pip is missing. Note: while installing I made sure that pip should be installed along with the python installation. ---------- components: Installation messages: 402193 nosy: shreyanse081 priority: normal severity: normal status: open title: pip not installed with fresh python3.8.10 installation type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 02:40:35 2021 From: report at bugs.python.org (Shreyans Jain) Date: Mon, 20 Sep 2021 06:40:35 +0000 Subject: [New-bugs-announce] [issue45245] ValueError: check_hostname requires server_hostname while pip install command on windows server 2019 Message-ID: <1632120035.45.0.581135774584.issue45245@roundup.psfhosted.org> New submission from Shreyans Jain : I have installed Python 3.9.7 on windows server 2019. When I run "pip list" command, it works. When I try to install any library using "pip install ", I get "ValueError: check_hostname requires server_hostname" error. 
Please check the screenshot for further details. I am getting the same error on python 3.8.10 version. ---------- assignee: christian.heimes components: Build, Library (Lib), SSL, Windows files: Value Error check hostname requires server hostname.JPG messages: 402196 nosy: christian.heimes, paul.moore, shreyanse081, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: ValueError: check_hostname requires server_hostname while pip install command on windows server 2019 versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50287/Value Error check hostname requires server hostname.JPG _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 04:06:52 2021 From: report at bugs.python.org (Dimitri Papadopoulos Orfanos) Date: Mon, 20 Sep 2021 08:06:52 +0000 Subject: [New-bugs-announce] [issue45246] the sorted() documentation should refer to operator < Message-ID: <1632125212.88.0.211688526442.issue45246@roundup.psfhosted.org> New submission from Dimitri Papadopoulos Orfanos : The documentation of sorted() lacks any reference to the comparison mechanism between items. Compare with the documentation of list.sort(), which starts with: using only < comparisons between items This is mentioned in the "Sorting HOW TO", under "Odd and Ends": The sort routines are guaranteed to use __lt__() when making comparisons between two objects. However, the "Sorting HOW TO" is "a brief sorting tutorial", not the reference documentation. This property needs to be documented in the reference documentation of sorted(). ---------- assignee: docs at python components: Documentation messages: 402209 nosy: DimitriPapadopoulosOrfanos, docs at python priority: normal severity: normal status: open title: the sorted() documentation should refer to operator < versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 06:59:47 2021 From: report at bugs.python.org (Mark Shannon) Date: Mon, 20 Sep 2021 10:59:47 +0000 Subject: [New-bugs-announce] [issue45247] Add explicit support for Cython to the C API. Message-ID: <1632135587.14.0.29634943523.issue45247@roundup.psfhosted.org> New submission from Mark Shannon : As the C API has evolved it has grown features in an ad-hoc way, driven by the needs to whoever has bothered to add the code. Maybe we should be a bit more principled about this. Specifically we should make sure that there is a well defined interface between CPython and the other major components of the Python ecosystem. The obvious places to start are with Cython and Numpy. This issue deals specifically with Cython. I will leave it to someone who know more about Numpy to open an issue for Numpy. Matching Cython issue: https://github.com/cython/cython/issues/4382 This issue is an attempt to stop the annual occurrence of bugs like https://bugs.python.org/issue43760#msg393401 ---------- components: C API messages: 402224 nosy: Mark.Shannon, scoder priority: normal severity: normal status: open title: Add explicit support for Cython to the C API. 
type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 08:27:27 2021 From: report at bugs.python.org (Ciaran Welsh) Date: Mon, 20 Sep 2021 12:27:27 +0000 Subject: [New-bugs-announce] [issue45248] Documentation example in copyreg errors Message-ID: <1632140847.01.0.509914617617.issue45248@roundup.psfhosted.org> New submission from Ciaran Welsh : The example on https://docs.python.org/3/library/copyreg.html does not work: ``` import copyreg, copy, pickle class C: def __init__(self, a): self.a = a def pickle_c(c): print("pickling a C instance...") return C, (c.a,) copyreg.pickle(C, pickle_c) c = C(1) d = copy.copy(c) > p = pickle.dumps(c) E AttributeError: Can't pickle local object 'RoadRunnerPickleTests.test_ex..C' picklable_swig_tests.py:133: AttributeError ``` ---------- assignee: docs at python components: Documentation messages: 402227 nosy: CiaranWelsh, docs at python priority: normal severity: normal status: open title: Documentation example in copyreg errors type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 12:30:58 2021 From: report at bugs.python.org (Andrei Kulakov) Date: Mon, 20 Sep 2021 16:30:58 +0000 Subject: [New-bugs-announce] [issue45249] Fine grained error locations do not work in doctests Message-ID: <1632155458.61.0.570546761746.issue45249@roundup.psfhosted.org> New submission from Andrei Kulakov : It seems like fine grained error locations do not work in failed doctest traceback output: version 3.11.0a0 file contents: ------------------ def a(x): """ >>> 1 1 1 """ import doctest doctest.testmod() OUTPUT ------- Failed example: 1 1 Exception raised: Traceback (most recent call last): File "/Users/ak/opensource/cpython/Lib/doctest.py", line 1348, in __run exec(compile(example.source, filename, "single", ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "", line 1 1 1 ^ SyntaxError: invalid syntax. Perhaps you forgot a comma? The location in doctests that causes this: https://github.com/python/cpython/blob/5846c9b71ee9277fe866b1bdee4cc6702323fe7e/Lib/doctest.py#L1348 ---------- components: Interpreter Core, Tests messages: 402257 nosy: andrei.avk, kj, pablogsal priority: low severity: normal status: open title: Fine grained error locations do not work in doctests type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 16:03:02 2021 From: report at bugs.python.org (Brett Cannon) Date: Mon, 20 Sep 2021 20:03:02 +0000 Subject: [New-bugs-announce] [issue45250] Make sure documentation is accurate for what an (async) iterable and (async) iterator are Message-ID: <1632168182.73.0.0551762354894.issue45250@roundup.psfhosted.org> New submission from Brett Cannon : There's some inaccuracies when it comes to iterable and iterators (async and not). See https://mail.python.org/archives/list/python-dev at python.org/thread/3W7TDX5KNVQVGT5CUHBK33M7VNTP25DZ/#3W7TDX5KNVQVGT5CUHBK33M7VNTP25DZ for background. Should probably check: 1. The glossary. 2. Language reference. 3. Built-ins. This applies to iter()/__iter__/iterable, next()/__next__/iterator, and their async equivalents. 
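One concrete corner case that the glossary and reference wording need to get right, shown as a small self-contained example:

    from collections.abc import Iterable, Iterator

    class Squares:
        # No __iter__ here: only the old sequence protocol is implemented,
        # yet iter() and for-loops accept instances of this class.
        def __getitem__(self, index):
            if index >= 5:
                raise IndexError
            return index * index

    s = Squares()
    print(list(iter(s)))                  # [0, 1, 4, 9, 16]
    print(isinstance(s, Iterable))        # False: the ABC only checks __iter__
    print(isinstance(iter(s), Iterator))  # True: iter() returned an iterator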
---------- assignee: brett.cannon components: Documentation messages: 402276 nosy: brett.cannon priority: normal severity: normal status: open title: Make sure documentation is accurate for what an (async) iterable and (async) iterator are _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 18:27:22 2021 From: report at bugs.python.org (Thomas Caswell) Date: Mon, 20 Sep 2021 22:27:22 +0000 Subject: [New-bugs-announce] [issue45251] signal.SIGCLD alias is not available on OSX Message-ID: <1632176842.25.0.412458705067.issue45251@roundup.psfhosted.org> New submission from Thomas Caswell : The module attribute signal.SIGCLD (https://docs.python.org/3/library/signal.html#signal.SIGCLD) is an "archaic" (quoting from the GNU C Library source) alias for signal.SIGCHLD (https://docs.python.org/3/library/signal.html#signal.SIGCHLD). signal.SIGCHLD is documented as being available on unix, and signal.SIGCLD is documented as an alias of signal.SIGCHLD. However, it seems that clang does not define the SIGCLD back-compatibility name [1] so the SIGCLD alias is missing on OSX (all the way to at least 2.7) because the clang headers appear to not define the SIGCLD macro and hence the logic in modulesignal.c does not find it, and hence the rest of the tooling in signal.py does not find it. I am not sure if the correct fix is to document that SIGCLD in only available on linux (which I am not sure is completely correct, maybe "availability is platform dependent, but definitely not on darwin"?) or to add the macro if SIGCHLD is defined and SIGCLD is missing (see attached patch) [1] SIGCLD is not documented in https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/signal.3.html and not the signal.h that ships with xcode ---------- assignee: docs at python components: Documentation, Library (Lib) files: osx_signal_compat.patch keywords: patch messages: 402280 nosy: docs at python, tcaswell priority: normal severity: normal status: open title: signal.SIGCLD alias is not available on OSX versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50290/osx_signal_compat.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 20 20:26:04 2021 From: report at bugs.python.org (CireSnave) Date: Tue, 21 Sep 2021 00:26:04 +0000 Subject: [New-bugs-announce] [issue45252] Missing support for Source Specific Multicast Message-ID: <1632183964.66.0.836467478723.issue45252@roundup.psfhosted.org> New submission from CireSnave : It appears that Python's socket module is missing support for the IGMPv3 socket options needed to support Source Specific Multicast. Many developers appear to be adding the necessary constants through something like: if not hasattr(socket, "IP_UNBLOCK_SOURCE"): setattr(socket, "IP_UNBLOCK_SOURCE", 37) if not hasattr(socket, "IP_BLOCK_SOURCE"): setattr(socket, "IP_BLOCK_SOURCE", 38) if not hasattr(socket, "IP_ADD_SOURCE_MEMBERSHIP"): setattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39) if not hasattr(socket, "IP_DROP_SOURCE_MEMBERSHIP"): setattr(socket, "IP_DROP_SOURCE_MEMBERSHIP", 40) ...but it would be nice if these were added to the official module as they are supported under current versions of Windows, Linux, and BSD at the least. 
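For context, a hedged sketch of how one of these options is typically used once the constant exists; the packed field order follows the Linux struct ip_mreq_source (group, interface, source) and is known to differ on other platforms, so treat the layout as an assumption:

    import socket
    import struct

    GROUP = "232.1.1.1"     # example group in the SSM range 232.0.0.0/8
    SOURCE = "192.0.2.10"   # example permitted source address
    IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 5000))

    # struct ip_mreq_source, Linux layout: multicast group, local interface,
    # allowed source.
    mreq_source = struct.pack(
        "4s4s4s",
        socket.inet_aton(GROUP),
        socket.inet_aton("0.0.0.0"),
        socket.inet_aton(SOURCE),
    )
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq_source)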
---------- components: Library (Lib) messages: 402283 nosy: ciresnave priority: normal severity: normal status: open title: Missing support for Source Specific Multicast type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 21 03:49:08 2021 From: report at bugs.python.org (Manuj Chandra) Date: Tue, 21 Sep 2021 07:49:08 +0000 Subject: [New-bugs-announce] [issue45253] mimetypes cannot detect mime of mka files Message-ID: <1632210548.56.0.352982458661.issue45253@roundup.psfhosted.org> New submission from Manuj Chandra : The mimetypes library cannot detect mime of mka files. It returns None instead of audio. mp3 and mkv are working. Only mka is not working. ---------- components: Library (Lib) files: Screenshot from 2021-09-21 13-10-33.png messages: 402290 nosy: manujchandra priority: normal severity: normal status: open title: mimetypes cannot detect mime of mka files type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file50292/Screenshot from 2021-09-21 13-10-33.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 21 03:49:36 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Tue, 21 Sep 2021 07:49:36 +0000 Subject: [New-bugs-announce] [issue45254] HAS_SHMEM detection logic is duplicated in implementation and tests Message-ID: <1632210576.65.0.316213829461.issue45254@roundup.psfhosted.org> New submission from Nikita Sobolev : `HAS_SHMEM` is defined in `multiprocessing.managers` module https://github.com/python/cpython/blob/0bfa1106acfcddc03590e1f5d6789dbad3affe70/Lib/multiprocessing/managers.py#L35-L40 Later the same logic is duplicated in `_test_multiprocessing`: https://github.com/python/cpython/blob/0bfa1106acfcddc03590e1f5d6789dbad3affe70/Lib/test/_test_multiprocessing.py#L52-L56 We can just use `multiprocessing.managers.HAS_SHMEM` instead. I am going to send a PR with the fix. ---------- messages: 402291 nosy: sobolevn priority: normal severity: normal status: open title: HAS_SHMEM detection logic is duplicated in implementation and tests type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 21 06:10:04 2021 From: report at bugs.python.org (Isaac Boates) Date: Tue, 21 Sep 2021 10:10:04 +0000 Subject: [New-bugs-announce] [issue45255] sqlite3.connect() should check if the sqlite file exists and throw a FileNotFoundError if it doesn't Message-ID: <1632219004.38.0.384880031539.issue45255@roundup.psfhosted.org> New submission from Isaac Boates : I was just using the sqlite3 package and was very confused when trying to open an sqlite database from a relative path, because the only error provided was: File "/path/to/filepy", line 50, in __init__ self.connection = sqlite3.connect(path) sqlite3.OperationalError: unable to open database file It turns out I was just executing Python from the wrong location and therefore my relative path was broken. Not a big problem. But it was confusing because it only throws this generic OperationalError. Could it instead throw a FileNotFoundError if the db simply doesn't exist at the specified path? 
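A workaround that is already possible today, for reference: SQLite's URI syntax can refuse to open a missing file instead of silently creating it, although the exception is still sqlite3.OperationalError rather than FileNotFoundError:

    import os
    import sqlite3

    path = "relative/path/to/data.db"

    # Option 1: check up front and raise the clearer error manually.
    if not os.path.exists(path):
        raise FileNotFoundError(path)

    # Option 2: mode=rw tells SQLite to fail if the file does not exist,
    # so an empty database file is never created by accident.
    connection = sqlite3.connect(f"file:{path}?mode=rw", uri=True)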
---------- messages: 402309 nosy: iboates priority: normal severity: normal status: open title: sqlite3.connect() should check if the sqlite file exists and throw a FileNotFoundError if it doesn't versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 21 06:41:59 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Tue, 21 Sep 2021 10:41:59 +0000 Subject: [New-bugs-announce] [issue45256] Remove the usage of the cstack in Python to Python calls Message-ID: <1632220919.85.0.309558064668.issue45256@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : Removing the usage of the C stack in Python-to-Python calls will allow future optimizations in the eval loop to take place and can yield some speed ups given that we will be removing C function calls and preambles by inlining the callee in the same eval loop as the caller. ---------- messages: 402311 nosy: Mark.Shannon, pablogsal priority: normal severity: normal status: open title: Remove the usage of the cstack in Python to Python calls versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 21 09:56:48 2021 From: report at bugs.python.org (William Proffitt) Date: Tue, 21 Sep 2021 13:56:48 +0000 Subject: [New-bugs-announce] [issue45257] Compiling 3.8 branch on Windows attempts to use incompatible libffi version Message-ID: <1632232608.46.0.23021562846.issue45257@roundup.psfhosted.org> New submission from William Proffitt : Wasn't sure where to file this. I built Python 3.8.12 for Windows recently from the latest bugfix source release in the cpython repository. One tricky thing came up I wanted to write-up in case it matters to someone else. The version of libffi in the cpython-bin-deps repository seems to be too new for Python 3.8.12 now. The script for fetching the external dependencies (PCBuild\get_externals.bat) on the 3.8 branch fetches whatever is newest on the libffi branch, and this led to it downloading files starting with "libffi-8" and the build complaining about being unable to locate "libffi-7". I managed to resolve this by manually replacing the fetched libffi in the externals directory with the one from this commit, the latest commit I could find where the filenames started with "libffi-7": https://github.com/python/cpython-bin-deps/commit/1cf06233e3ceb49dc0a73c55e04b1174b436b632 After that, I was able to successfully run "build.bat -e -p x64" in PCBuild and "build.bat -x64" in "Tools\msi\" and end up with a working build and a working installer. (Side note that isn't that important for me but maybe worth mentioning while I'm here: the uninstaller on my newly minted installer didn't seem to work at all and I had to manually resort to deleting registry keys to overwrite my previous attempted install.) 
---------- components: Build, Installation, Windows, ctypes messages: 402318 nosy: paul.moore, proffitt, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Compiling 3.8 branch on Windows attempts to use incompatible libffi version type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 21 14:04:25 2021 From: report at bugs.python.org (Isuru Fernando) Date: Tue, 21 Sep 2021 18:04:25 +0000 Subject: [New-bugs-announce] [issue45258] sysroot_paths in setup.py does not consider -isysroot for macOS Message-ID: <1632247465.13.0.0994928564671.issue45258@roundup.psfhosted.org> New submission from Isuru Fernando : It only looks at --sysroot which is Linux specific ---------- components: Build messages: 402338 nosy: FFY00, isuruf priority: normal severity: normal status: open title: sysroot_paths in setup.py does not consider -isysroot for macOS versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 21 15:16:06 2021 From: report at bugs.python.org (Thomas) Date: Tue, 21 Sep 2021 19:16:06 +0000 Subject: [New-bugs-announce] [issue45259] No _heappush_max() Message-ID: <1632251766.89.0.378514161628.issue45259@roundup.psfhosted.org> New submission from Thomas : There is no heappush function for a max heap when the other supporting helper functions are already implemented (_siftdown_max()) ---------- components: Library (Lib) messages: 402351 nosy: ThomasLee94 priority: normal severity: normal status: open title: No _heappush_max() versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 22 02:14:37 2021 From: report at bugs.python.org (zcpara) Date: Wed, 22 Sep 2021 06:14:37 +0000 Subject: [New-bugs-announce] [issue45260] Implement superinstruction UNPACK_SEQUENCE_ST Message-ID: <1632291277.5.0.29382799945.issue45260@roundup.psfhosted.org> New submission from zcpara : PEP 659 quickening provides a mechanism for replacing instructions. We add another super-instruction UNPACK_SEQUENCE_ST to replace the original UNPACK_SEQUENCE and the following n STROE_FAST instructions. See https://github.com/faster-cpython/ideas/issues/16. ---------- messages: 402407 nosy: gvanrossum, zhangchaospecial priority: normal severity: normal status: open title: Implement superinstruction UNPACK_SEQUENCE_ST type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 22 04:15:55 2021 From: report at bugs.python.org (Tim Holy) Date: Wed, 22 Sep 2021 08:15:55 +0000 Subject: [New-bugs-announce] [issue45261] Unreliable (?) results from timeit (cache issue?) Message-ID: <1632298555.06.0.822450771823.issue45261@roundup.psfhosted.org> New submission from Tim Holy : This is a lightly-edited reposting of https://stackoverflow.com/questions/69164027/unreliable-results-from-timeit-cache-issue I am interested in timing certain operations in numpy and skimage, but I'm occasionally seeing surprising transitions (not entirely reproducible) in the reported times. Briefly, sometimes timeit returns results that differ by about 5-fold from earlier runs. 
Here's the setup: import skimage import numpy as np import timeit nrep = 16 def run_complement(img): def inner(): skimage.util.invert(img) return inner img = np.random.randint(0, 65535, (512, 512, 3), dtype='uint16') and here's an example session: In [1]: %run python_timing_bug.py In [2]: t = timeit.Timer(run_complement(img)) In [3]: t.repeat(nrep, number=1) Out[3]: [0.0024439050030196086, 0.0020311699918238446, 0.00033007100864779204, 0.0002889479947043583, 0.0002851780009223148, 0.0002851030003512278, 0.00028487699455581605, 0.00032116699730977416, 0.00030912700458429754, 0.0002877369988709688, 0.0002840430097421631, 0.00028515000303741544, 0.00030791999597568065, 0.00029302599432412535, 0.00030723700183443725, 0.0002916679950430989] In [4]: t = timeit.Timer(run_complement(img)) In [5]: t.repeat(nrep, number=1) Out[5]: [0.0006320849934127182, 0.0004014919977635145, 0.00030359599622897804, 0.00029224599711596966, 0.0002907510061049834, 0.0002920039987657219, 0.0002918920072261244, 0.0003095199936069548, 0.00029789700056426227, 0.0002885590074583888, 0.00040198900387622416, 0.00037131100543774664, 0.00040271600300911814, 0.0003492849937174469, 0.0003378120018169284, 0.00029762100894004107] In [6]: t = timeit.Timer(run_complement(img)) In [7]: t.repeat(nrep, number=1) Out[7]: [0.00026428700948599726, 0.00012682100350502878, 7.380900206044316e-05, 6.346100417431444e-05, 6.29679998382926e-05, 6.278700311668217e-05, 6.320899410638958e-05, 6.25409884378314e-05, 6.262199894990772e-05, 6.247499550227076e-05, 6.293901242315769e-05, 6.259800284169614e-05, 6.285199197009206e-05, 6.293600017670542e-05, 6.309800664894283e-05, 6.248900899663568e-05] Notice that in the final run, the minimum times were on the order of 0.6e-4 vs the previous minimum of ~3e-4, about 5x smaller than the times measured in previous runs. It's not entirely predictable when this "kicks in." The faster time corresponds to 0.08ns/element of the array, which given that the 2.6GHz clock on my i7-8850H CPU ticks every ~0.4ns, seems to be pushing the limits of credibility (though thanks to SIMD on my AVX2 CPU, this cannot be entirely ruled out). My understanding is that this operation is implemented as a subtraction and most likely gets reduced to a bitwise-not by the compiler. So you do indeed expect this to be fast, but it's not entirely certain it should be this fast, and in either event the non-reproducibility is problematic. It may be relevant to note that the total amount of data is In [15]: img.size * 2 Out[15]: 1572864 and lshw reports that I have 384KiB L1 cache and 1536KiB of L2 cache: In [16]: 384*1024 Out[16]: 393216 In [17]: 1536*1024 Out[17]: 1572864 So it seems possible that this result is being influenced by just fitting in L2 cache. (Note that even in the "fast block," the first run was not fast.) If I increase the size of the image: img = np.random.randint(0, 65535, (2048, 2048, 3), dtype='uint16') then my results seem more reproducible in the sense that I have not yet seen one of these transitions. ---------- components: Library (Lib) messages: 402411 nosy: timholy priority: normal severity: normal status: open title: Unreliable (?) results from timeit (cache issue?) 
type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 22 04:32:26 2021 From: report at bugs.python.org (Benjamin Schiller) Date: Wed, 22 Sep 2021 08:32:26 +0000 Subject: [New-bugs-announce] [issue45262] crash if asyncio is used before and after re-initialization if using python embedded in an application Message-ID: <1632299546.72.0.498896278525.issue45262@roundup.psfhosted.org> New submission from Benjamin Schiller : We have embedded Python in our application and we deinitialize/initialize the interpreter at some point of time. If a simple script with a thread that sleeps with asyncio.sleep is loaded before and after the re-initialization, then we get the following assertion in the second run of the python module: "Assertion failed: Py_IS_TYPE(rl, &PyRunningLoopHolder_Type), file D:\a\1\s\Modules_asynciomodule.c, line 261" Example to reproduce this crash: https://github.com/benjamin-sch/asyncio_crash_in_second_run ---------- components: asyncio messages: 402412 nosy: asvetlov, benjamin-sch, yselivanov priority: normal severity: normal status: open title: crash if asyncio is used before and after re-initialization if using python embedded in an application type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 22 05:51:41 2021 From: report at bugs.python.org (Kenneth Fossen) Date: Wed, 22 Sep 2021 09:51:41 +0000 Subject: [New-bugs-announce] [issue45263] round displays 2 instead of 3 digits Message-ID: <1632304301.35.0.921542504833.issue45263@roundup.psfhosted.org> New submission from Kenneth Fossen : When round is given 3 as argument for number of decimal points, the expected behaviour is to return a digit with 3 decimal points Example: ig1 = 0.4199730940219749 ig2 = 0.4189730940219749 print(round(ig1, 3)) # 0.42 expected to be 0.420 print(round(ig2, 3)) # 0.419 ---------- messages: 402413 nosy: kenfos priority: normal severity: normal status: open title: round displays 2 instead of 3 digits type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 22 11:23:41 2021 From: report at bugs.python.org (John Wodder) Date: Wed, 22 Sep 2021 15:23:41 +0000 Subject: [New-bugs-announce] [issue45264] venv: Make activate et al. export custom prompt prefix as an envvar Message-ID: <1632324221.63.0.136489331913.issue45264@roundup.psfhosted.org> New submission from John Wodder : I use a custom script (and I'm sure many others have similar scripts as well) for setting my prompt in Bash. It shows the name of the current venv (if any) by querying the `VIRTUAL_ENV` environment variable, but if the venv was created with a custom `--prompt`, it is unable to use this prompt prefix, as the `activate` script does not make this information available. I thus suggest that the `activate` et al. scripts should set and export an environment variable named something like `VIRTUAL_ENV_PROMPT_PREFIX` that contains the prompt prefix (either custom or default) that venv would prepend to the prompt. Ideally, this should be set even when `VIRTUAL_ENV_DISABLE_PROMPT` is set in case the user wants total control over their prompt. (This was originally posted as an issue for virtualenv at , and it was suggested to post it here for feedback.) 
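To make the intent concrete, a sketch of how a prompt script could consume such a variable if it existed; VIRTUAL_ENV_PROMPT_PREFIX is the proposed, currently hypothetical name, and the fallback mirrors what such scripts have to do today:

    import os

    def venv_prompt_prefix():
        # Proposed: the exact prefix activate would prepend, including any
        # custom value passed via --prompt at venv creation time.
        prefix = os.environ.get("VIRTUAL_ENV_PROMPT_PREFIX")
        if prefix is None and "VIRTUAL_ENV" in os.environ:
            # Current fallback: derive a name from the venv directory,
            # which loses any custom --prompt.
            prefix = "(%s) " % os.path.basename(os.environ["VIRTUAL_ENV"])
        return prefix or ""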
---------- components: Library (Lib) messages: 402444 nosy: jwodder priority: normal severity: normal status: open title: venv: Make activate et al. export custom prompt prefix as an envvar type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 22 14:01:39 2021 From: report at bugs.python.org (Nima) Date: Wed, 22 Sep 2021 18:01:39 +0000 Subject: [New-bugs-announce] [issue45265] Bug in append() method in order to appending a temporary list to a empty list using for loop Message-ID: <1632333699.99.0.749582001831.issue45265@roundup.psfhosted.org> New submission from Nima : I want to make an list consist on subsequences of another list, for example: -------------------------------------------------------------- input: array = [4, 1, 8, 2] output that i expect: [[4], [4,1], [4, 1, 8], [4, 1, 8, 2]] -------------------------------------------------------------- but output is: [[4, 1, 8, 2], [4, 1, 8, 2], [4, 1, 8, 2], [4, 1, 8, 2]] -------------------------------------------------------------- my code is: num = [4, 1, 8, 2] temp = [] sub = [] for item in num: temp.append(item) sub.append(temp) -------------------------------------------------------------- i think it's a bug because, append() must add an item to the end of the list, but here every time it add the temp to the sub it changes the previous item as well, so if it's not a bug please help me to write the correct code. ---------- components: Windows messages: 402458 nosy: nima_fl, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Bug in append() method in order to appending a temporary list to a empty list using for loop type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 22 15:16:19 2021 From: report at bugs.python.org (Victor Milovanov) Date: Wed, 22 Sep 2021 19:16:19 +0000 Subject: [New-bugs-announce] [issue45266] subtype_clear can not be called from derived types Message-ID: <1632338179.69.0.00198937271084.issue45266@roundup.psfhosted.org> New submission from Victor Milovanov : I am trying to define a type in C, that derives from PyTypeObject. I want to override tp_clear. To do so properly, I should call base type's tp_clear and have it perform its cleanup steps. PyTypeObject has a tp_clear implementation: subtype_clear. Problem is, it assumes the instance it gets is of a type, that does not override PyTypeObject's tp_clear, and behaves incorrectly in 2 ways: 1) it does not perform the usual cleanup, because in this code base = type; while ((baseclear = base->tp_clear) == subtype_clear) the loop condition is immediately false, as my types overrode tp_clear 2) later on it calls baseclear on the same object. But because of the loop above baseclear actually points to my type's custom tp_clear implementation, which leads to reentry to that function (basically a stack overflow, unless there's a guard against it). 
---------- components: C API messages: 402466 nosy: Victor Milovanov priority: normal severity: normal status: open title: subtype_clear can not be called from derived types type: behavior versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 22 18:21:45 2021 From: report at bugs.python.org (Paul Broe) Date: Wed, 22 Sep 2021 22:21:45 +0000 Subject: [New-bugs-announce] [issue45267] New install Python 3.9.7 install of Sphinx Document Generator fails Message-ID: <1632349305.85.0.928032648736.issue45267@roundup.psfhosted.org> New submission from Paul Broe : Brand new build of Python 3.9.7 on RHEL 7. Placed in /usr/local/python3 Created new python environment cd /usr/opt/oracle/ python3 -m venv py3-sphinx source /usr/opt/oracle/py3-sphinx/bin/activate Now verify that python is now linked to Python 3. In this virtual environment python ? python3 python -V Python 3.9.7 I installed all the pre-requisites correctly for Sphinx 4.2.0 See the output of the command in the attached file command: pip install -vvv --no-index --find-link=/usr/opt/oracle/downloads/python-addons sphinx ---------- components: Demos and Tools files: Sphinx install output.txt messages: 402472 nosy: pcbroe priority: normal severity: normal status: open title: New install Python 3.9.7 install of Sphinx Document Generator fails versions: Python 3.9 Added file: https://bugs.python.org/file50295/Sphinx install output.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 23 00:37:19 2021 From: report at bugs.python.org (zeroswan) Date: Thu, 23 Sep 2021 04:37:19 +0000 Subject: [New-bugs-announce] [issue45268] use multiple "in" in one expression? Message-ID: <1632371839.56.0.25067525558.issue45268@roundup.psfhosted.org> New submission from zeroswan : I find it's a valid expression: `1 in [1, 2, 3] in [4, 5, 6]` `a in b in c` is equivalent to `a in b and b in c` but this expression seems useless, and easy to confused with (a in b) in c . in this program, what I originally want is `if a in b and a in c` , But it was mistakenly written as `a in b in c` This expression is similar to `a _______________________________________ From report at bugs.python.org Thu Sep 23 05:32:38 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Thu, 23 Sep 2021 09:32:38 +0000 Subject: [New-bugs-announce] [issue45269] c_make_encoder() has uncovered error: "argument 1 must be dict or None" Message-ID: <1632389558.47.0.504504996183.issue45269@roundup.psfhosted.org> New submission from Nikita Sobolev : Looks like we never test error that `markers` are limited to `None` and `dict` types. Here: https://github.com/python/cpython/blob/main/Modules/_json.c#L1252-L1255 Coverage report: https://app.codecov.io/gh/python/cpython/blob/master/Modules/_json.c line: 1252 I will submit a unit test for it today. 
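A sketch of the kind of test that could cover that branch; the positional argument order is assumed from the keyword list in Modules/_json.c and should be checked against the branch being patched:

    import unittest
    from json import encoder

    class TestMakeEncoderMarkers(unittest.TestCase):
        def test_markers_must_be_dict_or_none(self):
            # Passing a non-dict, non-None value as 'markers' (argument 1)
            # should reach the currently uncovered TypeError branch.
            with self.assertRaises(TypeError):
                encoder.c_make_encoder(
                    True,   # markers: deliberately invalid
                    None,   # default
                    None,   # encoder
                    None,   # indent
                    ": ",   # key_separator
                    ", ",   # item_separator
                    False,  # sort_keys
                    False,  # skipkeys
                    False,  # allow_nan
                )

    if __name__ == "__main__":
        unittest.main()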
Related: https://bugs.python.org/issue6986 ---------- components: Tests messages: 402482 nosy: sobolevn priority: normal severity: normal status: open title: c_make_encoder() has uncovered error: "argument 1 must be dict or None" type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 23 11:01:31 2021 From: report at bugs.python.org (Pramod) Date: Thu, 23 Sep 2021 15:01:31 +0000 Subject: [New-bugs-announce] [issue45270] Clicking "Add to Custom C Message-ID: <1632409291.23.0.352696604895.issue45270@roundup.psfhosted.org> Change by Pramod : ---------- nosy: pramodsarathy17 priority: normal severity: normal status: open title: Clicking "Add to Custom C _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 23 11:05:06 2021 From: report at bugs.python.org (=?utf-8?q?D=C3=A1vid_Nemeskey?=) Date: Thu, 23 Sep 2021 15:05:06 +0000 Subject: [New-bugs-announce] [issue45271] Add a 'find' method (a'la str) to list Message-ID: <1632409506.27.0.197043194996.issue45271@roundup.psfhosted.org> New submission from D?vid Nemeskey : There is an unjustified asymmetry between `str` and `list`, as far as lookup goes. Both have an `index()` method that returns the first index of a value, or raises a `ValueError` if it doesn't exist. However, only `str` has the `find` method, which returns -1 if the value is not in the string. I think it would make sense to add `find` to `list` as well. For starters, it would make the API between the two sequence types more consistent. More importantly (though it depends on the use-case), `find` is usually more convenient than `index`, as one doesn't have to worry about handling an exception. As a bonus, since `list` is mutable, it allows one to write code such as if (idx := lst.find(value)) == -1: lst.append(value) call_some_function(lst[idx]) , making the method even more useful as it is in `str`. ---------- messages: 402497 nosy: nemeskeyd priority: normal severity: normal status: open title: Add a 'find' method (a'la str) to list _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 23 12:24:55 2021 From: report at bugs.python.org (Steve Dower) Date: Thu, 23 Sep 2021 16:24:55 +0000 Subject: [New-bugs-announce] [issue45272] 'os.path' should not be a frozen module Message-ID: <1632414295.46.0.740648553614.issue45272@roundup.psfhosted.org> New submission from Steve Dower : I noticed that Python/frozen.c includes posixpath as 'os.path'. This is not correct, and shouldn't be necessary anyway, because os.path is just an attribute in "os" and not a concrete module (see Lib/os.py#L95 for the bit that makes it importable, and Lib/os.py#L61 and Lib/os.py#L81 for the imports). ---------- assignee: eric.snow messages: 402506 nosy: eric.snow, steve.dower priority: normal severity: normal stage: needs patch status: open title: 'os.path' should not be a frozen module type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 23 13:31:45 2021 From: report at bugs.python.org (Eric Snow) Date: Thu, 23 Sep 2021 17:31:45 +0000 Subject: [New-bugs-announce] [issue45273] OS-specific frozen modules are built, even on other OSes. 
Message-ID: <1632418305.93.0.879771690458.issue45273@roundup.psfhosted.org> New submission from Eric Snow : The list of frozen modules in Python/frozen.c is generated by Tools/scripts/freeze_modules.py. Currently we freeze both posixpath and ntpath, even though for startup we only need one of the two (depending on the OS). In this case both modules are available on all platforms (and only os.path differs), so we might be okay to leave both frozen. The cost isn't high. However, we may need to accommodate freezing a module only on a subset of supported OSes: * if the extra cost (to the size of the executable) is too high * if there's an OS-specific module that is only available on that OS In that case, we'd need to generate the appropriate ifdefs in Python/frozen.c. We can't just exclude the module during generation since frozen.c is committed to the repo. ---------- assignee: eric.snow components: Interpreter Core messages: 402514 nosy: eric.snow, steve.dower priority: normal severity: normal stage: needs patch status: open title: OS-specific frozen modules are built, even on other OSes. type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 23 16:13:09 2021 From: report at bugs.python.org (STINNER Victor) Date: Thu, 23 Sep 2021 20:13:09 +0000 Subject: [New-bugs-announce] [issue45274] Race condition in Thread._wait_for_tstate_lock() Message-ID: <1632427989.48.0.0399900831112.issue45274@roundup.psfhosted.org> New submission from STINNER Victor : Bern?t G?bor found an interesting bug on Windows. Sometimes, when a process is interrupted on Windows with CTRL+C, its parent process hangs in thread.join(): https://twitter.com/gjbernat/status/1440949682759426050 Reproducer: * Install https://github.com/gaborbernat/tox/tree/2159 * Make an empty folder and put the above tox.ini in it * Invoke python -m tox and once that command is sleeping press CTRL+C (the app should lock there indefinitely). tox.ini: --- [testenv] skip_install = true commands = python -c 'import os; print(f"start {os.getpid()}"); import time; time.sleep(100); print("end");' --- Source: https://gist.github.com/gaborbernat/f1e1aff0f2ee514b50a92a4d019d4d13 I tried to attach the Python process in Python: there is a single thread, the main thread which is blocked in thread.join(). You can also see it in the faulthandler traceback. I did a long analysis of the _tstate_lock and I checked that thread really completed. 
Raw debug traces: == thread 6200 exit == thread_run[pid=3984, thread_id=6200]: clear PyThreadState_Clear[pid=3984, thread_id=6200]: on_delete() release_sentinel[pid=3984, thread_id=6200]: enter release_sentinel[pid=3984, thread_id=6200]: release(obj=000001C1122669C0, lock=000001C110BBDA00) release_sentinel[pid=3984, thread_id=6200]: exit PyThreadState_Clear[pid=3984, thread_id=6200]: on_delete()-- == main thread is calling join() but gets a KeyboardInterrupt == [pid=3984, thread_id=8000] Lock.acquire() -> ACQUIRED Current thread 0x00001f40 (most recent call first): File "C:\vstinner\python\3.10\lib\threading.py", line 1118 in _wait_for_tstate_lock File "C:\vstinner\python\3.10\lib\threading.py", line 1092 in join File "C:\vstinner\env\lib\site-packages\tox\session\cmthread_run[pid=3984, thread_id=6200]: exit d\run\common.py", line 203 in execute File "C:\vstinner\env\lib\site-packages\tox\session\cmd\run\sequential.py", line 20 in run_sequential File "C:\vstinner\env\lib\site-packages\tox\session\cmd\legacy.py", line 104 in legacy File "C:\vstinner\env\lib\site-packages\tox\run.py", line 49 in main File "C:\vstinner\env\lib\site-packages\tox\run.py", line 23 in run File "C:\vstinner\env\lib\site-packages\tox\__main__.py", line 4 in File "C:\vstinner\python\3.10\lib\runpy.py", line 86 in _run_code File "C:\vstinner\python\3.10\lib\runpy.py", line 196 in _run_module_as_main _wait_for_tstate_lock[pid=3984, current thread_id=8000, self thread_id=6200]: EXC: KeyboardInterrupt(); acquired? None == main thread calls repr(thread) == ROOT: [3984] KeyboardInterrupt - teardown started _wait_for_tstate_lock[pid=3984, current thread_id=8000, self thread_id=6200]: acquire(block=False, timeout=-1): lock obj= 0x1c1122669c0 File "C:\vstinner\python\3.10\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\vstinner\python\3.10\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "C:\vstinner\env\lib\site-packages\tox\__main__.py", line 4, in run() File "C:\vstinner\env\lib\site-packages\tox\run.py", line 23, in run result = main(sys.argv[1:] if args is None else args) File "C:\vstinner\env\lib\site-packages\tox\run.py", line 49, in main result = handler(state) File "C:\vstinner\env\lib\site-packages\tox\session\cmd\legacy.py", line 104, in legacy return run_sequential(state) File "C:\vstinner\env\lib\site-packages\tox\session\cmd\run\sequential.py", line 20, in run_sequential return execute(state, max_workers=1, has_spinner=False, live=True) File "C:\vstinner\env\lib\site-packages\tox\session\cmd\run\common.py", line 217, in execute print(f'join {thread}') File "C:\vstinner\python\3.10\lib\threading.py", line 901, in __repr__ self.is_alive() # easy way to get ._is_stopped set when appropriate File "C:\vstinner\python\3.10\lib\threading.py", line 1181, in is_alive self._wait_for_tstate_lock(False) File "C:\vstinner\python\3.10\lib\threading.py", line 1113, in _wait_for_tstate_lock traceback.print_stack() _wait_for_tstate_lock[pid=3984, current thread_id=8000, self thread_id=6200]: failed to acquire 0x1c1122669c0 _wait_for_tstate_lock[pid=3984, current thread_id=8000, self thread_id=6200]: exit 0x1c1122669c0 join == main thread calls thread.join()... 
which hangs == _wait_for_tstate_lock[pid=3984, current thread_id=8000, self thread_id=6200]: acquire(block=True, timeout=-1): lock obj= 0x1c1122669c0 File "C:\vstinner\python\3.10\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\vstinner\python\3.10\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "C:\vstinner\env\lib\site-packages\tox\__main__.py", line 4, in run() File "C:\vstinner\env\lib\site-packages\tox\run.py", line 23, in run result = main(sys.argv[1:] if args is None else args) File "C:\vstinner\env\lib\site-packages\tox\run.py", line 49, in main result = handler(state) File "C:\vstinner\env\lib\site-packages\tox\session\cmd\legacy.py", line 104, in legacy return run_sequential(state) File "C:\vstinner\env\lib\site-packages\tox\session\cmd\run\sequential.py", line 20, in run_sequential return execute(state, max_workers=1, has_spinner=False, live=True) File "C:\vstinner\env\lib\site-packages\tox\session\cmd\run\common.py", line 218, in execute thread.join() File "C:\vstinner\python\3.10\lib\threading.py", line 1092, in join self._wait_for_tstate_lock() File "C:\vstinner\python\3.10\lib\threading.py", line 1113, in _wait_for_tstate_lock traceback.print_stack() Explanation: * Context: The main thread is waiting on thread.join() * The thread 6200 completes: release_sentinel() is called to to release _tstate_lock * The main thread succeed to acquire _tstate_lock (of the thread 6200) since it was just release * Ooops oops oops, the main thread gets KeyboardInterrupt in Thread._wait_for_tstate_lock() before being able to release the lock. As if the function if interrupted here: def _wait_for_tstate_lock(self, block=True, timeout=-1): lock = self._tstate_lock if lock is None: assert self._is_stopped elif lock.acquire(block, timeout): # -- got KeyboardInterrupt here --- lock.release() self._stop() * (tox does something in the main thread) * (there are only one remaining thread: the main thread) * tox catchs KeyboardInterrupt and calls thread.join() again * thread.join() hangs because the _tstate_lock was already acquire, so lock.acquire() hangs forever => NOT GOOD You can reproduce the issue on Linux with attached patch and script: * Apply threading_bug.patch on Python * Run threading_bug.py * See that the script hangs forever Example: --- $ git apply threading_bug.patch $ ./python threading_bug.py join... join failed with: KeyboardInterrupt() join again... --- I'm now working on a PR to fix the race condition. ---------- components: Library (Lib) files: threading_bug.patch keywords: patch messages: 402523 nosy: vstinner priority: normal severity: normal status: open title: Race condition in Thread._wait_for_tstate_lock() versions: Python 3.11 Added file: https://bugs.python.org/file50299/threading_bug.patch _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 00:32:20 2021 From: report at bugs.python.org (Chuanlong Du) Date: Fri, 24 Sep 2021 04:32:20 +0000 Subject: [New-bugs-announce] [issue45275] Make argparse print description of subcommand when invoke help doc on subcommand Message-ID: <1632457940.29.0.512184786201.issue45275@roundup.psfhosted.org> New submission from Chuanlong Du : I have command-line script `blog` written using argparse. It supports subcommands. 
When I check the help doc of a subcommand, e.g., using `blog convert -h`, it prints the help doc of the subcommand `blog convert` but doesn't print the description of the subcommand `blog convert`. A screenshot is attached. It is quite often that I know a command-line application have certain subcommands but I forget exactly what they do. It would be very helpful if `blog subcmd -h` prints the description of the subcmd so that users do not have to call `blog -h` again to check exactly what the subcommand does. ---------- components: Library (Lib) files: Selection_011.png messages: 402541 nosy: longendu priority: normal severity: normal status: open title: Make argparse print description of subcommand when invoke help doc on subcommand type: enhancement versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50301/Selection_011.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 04:10:36 2021 From: report at bugs.python.org (Thomas Grainger) Date: Fri, 24 Sep 2021 08:10:36 +0000 Subject: [New-bugs-announce] [issue45276] avoid try 1000 in asyncio all_tasks by making weak collection .copy() atomic Message-ID: <1632471036.7.0.15153501951.issue45276@roundup.psfhosted.org> New submission from Thomas Grainger : the weak collections should have the same threadsafe/thread unsafe guarantees as their strong reference counterparts - eg dict.copy and set.copy are atomic and so the weak versions should be atomic also ---------- components: Interpreter Core, asyncio messages: 402544 nosy: asvetlov, graingert, yselivanov priority: normal severity: normal status: open title: avoid try 1000 in asyncio all_tasks by making weak collection .copy() atomic versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 04:52:55 2021 From: report at bugs.python.org (HugoThiolliere) Date: Fri, 24 Sep 2021 08:52:55 +0000 Subject: [New-bugs-announce] [issue45277] typo in codecs documentation Message-ID: <1632473575.85.0.482002410759.issue45277@roundup.psfhosted.org> New submission from HugoThiolliere : There is a typo in https://docs.python.org/3/library/codecs.html#encodings-and-unicode The first sentence in the last paragraph before the table reads : "There?s another encoding that is able to encoding the full range of Unicode characters" When it should read "There?s another encoding that is able to encode the full range of Unicode characters" ---------- assignee: docs at python components: Documentation messages: 402545 nosy: Gronahak, docs at python priority: normal severity: normal status: open title: typo in codecs documentation type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 07:03:08 2021 From: report at bugs.python.org (Ben) Date: Fri, 24 Sep 2021 11:03:08 +0000 Subject: [New-bugs-announce] [issue45278] RuntimeError on race on weakset concurrent iteration Message-ID: <1632481388.9.0.303284230953.issue45278@roundup.psfhosted.org> New submission from Ben : This is a very subtle race WeakSet uses _weakrefset.py's _IterationGuard structure to protect against the case where the elements the WeakSet refers to get cleaned up while a thread is iterating over the WeakSet. 
It defers the actual removal of any elements which get gc'd during iteration until the end of the iteration. The WeakSet keeps track of all the iterators, and waits until there are no more threads iterating over the set before it does the removal: https://github.com/python/cpython/blob/main/Lib/_weakrefset.py#L30 However, there is a race: if another thread begins iterating after the `if s:` check but before the _commit_removals call has ended, that iteration can get a RuntimeError. Attached is an example script that can generate such RuntimeErrors, although the race window here is very small, so to observe it yourself you may have to tweak the magic constants. As far as I'm aware nobody has reported seeing this bug happen in production, but some libraries (e.g. asyncio) do currently rely on concurrently iterating a weakset, so it's not implausible. ---------- files: weakset_concurrent_iter_runtimeerror.py messages: 402553 nosy: bjs priority: normal severity: normal status: open title: RuntimeError on race on weakset concurrent iteration type: behavior Added file: https://bugs.python.org/file50302/weakset_concurrent_iter_runtimeerror.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 07:07:20 2021 From: report at bugs.python.org (Thomas Grainger) Date: Fri, 24 Sep 2021 11:07:20 +0000 Subject: [New-bugs-announce] [issue45279] avoid redundant _commit_removals pending_removals guard Message-ID: <1632481640.03.0.970036840438.issue45279@roundup.psfhosted.org> New submission from Thomas Grainger : refactor to avoid redundant _commit_removals pending_removals guard ---------- components: Library (Lib) messages: 402554 nosy: graingert priority: normal severity: normal status: open title: avoid redundant _commit_removals pending_removals guard versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 07:23:41 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Fri, 24 Sep 2021 11:23:41 +0000 Subject: [New-bugs-announce] [issue45280] Empty typing.NamedTuple creation is not tested Message-ID: <1632482621.54.0.232905456931.issue45280@roundup.psfhosted.org> New submission from Nikita Sobolev : While working on `mypy` support of `NamedTuple`s (https://github.com/python/mypy/issues/11047), I've noticed that the `typing.NamedTuple` tests currently ignore cases like: 1. `N = NamedTuple('N')` and 2. ```python class N(NamedTuple): ... ``` However, these are important corner cases; we need to be sure that users can create empty named tuples if needed. Related position in code: https://github.com/python/cpython/blob/3f8b23f8ddab75d9b77a3997d54e663187e12cc8/Lib/test/test_typing.py#L4102-L4114 I will send a PR to add these two cases.
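For reference, a small self-contained check of the behaviour such tests would pin down (assumed to hold on current 3.9 through 3.11; worth re-verifying on each branch the tests target):

    from typing import NamedTuple

    # Functional syntax with no fields.
    Empty1 = NamedTuple("Empty1")

    # Class syntax with no fields.
    class Empty2(NamedTuple):
        ...

    assert Empty1()._fields == ()
    assert Empty2()._fields == ()
    assert isinstance(Empty2(), tuple)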
---------- components: Tests messages: 402557 nosy: sobolevn priority: normal severity: normal status: open title: Empty typing.NamedTuple creation is not tested type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 08:11:27 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Fri, 24 Sep 2021 12:11:27 +0000 Subject: [New-bugs-announce] [issue45281] Make `is_attribute` and `module` arguments to `ForwardRef` kw-only Message-ID: <1632485487.21.0.977020306746.issue45281@roundup.psfhosted.org> New submission from Nikita Sobolev : After https://github.com/python/cpython/pull/28279 and https://bugs.python.org/issue45166 `ForwardRef` and `_type_check` now have `is_class` kw-only argument. It is now inconsistent with `is_argument` and `module` arguments. It would be quite nice to make them all kw-only. Quoting @ambv: > _type_check we can just change in Python 3.11 without any further ado. ForwardRef() will need a deprecation period when the users is calling is_argument= and module= as non-keyword arguments ---------- components: Library (Lib) messages: 402561 nosy: sobolevn priority: normal severity: normal status: open title: Make `is_attribute` and `module` arguments to `ForwardRef` kw-only type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 09:25:14 2021 From: report at bugs.python.org (Daisuke Takahashi) Date: Fri, 24 Sep 2021 13:25:14 +0000 Subject: [New-bugs-announce] [issue45282] isinstance(x, typing.Protocol-class) unexpectedly evaluates properties Message-ID: <1632489914.89.0.766542444973.issue45282@roundup.psfhosted.org> New submission from Daisuke Takahashi : Because __instancecheck__ of _ProtocolMeta uses hasattr() and getattr(), both of which evaluate property attributes, calling isinstance() with an object and a class that inherits typing.Protocol evaluates the input object's properties in some cases. The attached testcases include three cases of checking subtype relationship of an instance (having a property "x" that may raise RuntimeError on evaluation) and following three protocol classes; (1) a protocol class having "x" as a property, (2) a protocol class having "x" as a data attribute, and (3) a protocol class having "x" as a class property that raises RuntimeError on evaluation (>= python 3.9 only). Expected behavior: 1. The isinstance(obj, Protocol_class) does not evaluate anything but just checks existence of attribute names. 2. All cases in the attached testcases run without any error Thank you very much in advance. 
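To reproduce the failure mode without the attachment, a minimal sketch; it assumes the protocol is decorated with @runtime_checkable, which the attached testcases presumably need for isinstance() to be permitted at all:

    from typing import Protocol, runtime_checkable

    @runtime_checkable
    class HasX(Protocol):
        x: int

    class Lazy:
        @property
        def x(self):
            raise RuntimeError("property evaluated during isinstance()")

    # Expected: True, because the attribute name exists on the instance.
    # Actual: the hasattr()/getattr() calls inside
    # _ProtocolMeta.__instancecheck__ evaluate the property, so the
    # RuntimeError escapes from isinstance().
    print(isinstance(Lazy(), HasX))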
---------- files: test_protocol_property.py messages: 402562 nosy: daitakahashi priority: normal severity: normal status: open title: isinstance(x, typing.Protocol-class) unexpectedly evaluates properties type: behavior versions: Python 3.8, Python 3.9 Added file: https://bugs.python.org/file50303/test_protocol_property.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 09:44:07 2021 From: report at bugs.python.org (Nikita Sobolev) Date: Fri, 24 Sep 2021 13:44:07 +0000 Subject: [New-bugs-announce] [issue45283] Top / function argument level ClassVar should not be allowed during `get_type_hints()` Message-ID: <1632491047.59.0.655973158516.issue45283@roundup.psfhosted.org> New submission from Nikita Sobolev : This code currently does not raise any issues: ```python # contents of `ex.py` from typing import ClassVar a: ClassVar[int] = 1 def b(c: ClassVar[int]): ... ``` And then: 1. Module: `python -c 'import ex; from typing import get_type_hints; print(get_type_hints(ex))'` 2. Function argument: `python -c 'import ex; from typing import get_type_hints; print(get_type_hints(ex.b))'` It should not be allowed. Currently, the same with `from __future__ import annotations` does correctly raise `TypeError: typing.ClassVar[int] is not valid as type argument` Related: https://github.com/python/cpython/pull/28279 I will send a PR with the fix shortly. ---------- components: Library (Lib) messages: 402563 nosy: sobolevn priority: normal severity: normal status: open title: Top / function argument level ClassVar should not be allowed during `get_type_hints()` type: behavior versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 12:33:24 2021 From: report at bugs.python.org (Ganesh Kumar) Date: Fri, 24 Sep 2021 16:33:24 +0000 Subject: [New-bugs-announce] [issue45284] Better `TypeError` message when a string is indexed using a string Message-ID: <1632501204.02.0.12612984159.issue45284@roundup.psfhosted.org> New submission from Ganesh Kumar : The `TypeError` message when a string is indexed using a string should be similar to the `TypeError` message when a list or tuple is indexed using a string. >>> my_str = 'Rin' >>> my_str[1:3] # works 'in' >>> my_str['no'] Traceback (most recent call last): File "", line 1, in TypeError: string indices must be integers >>> my_str[slice(1, 3)] # works with slices 'in' Certainly it does work with slice as intended but the error message should explain that, as seen in the following >>> my_list = [1, 2, 3] >>> my_list['no'] Traceback (most recent call last): File "", line 1, in TypeError: list indices must be integers or slices, not str >>> my_tuple = (1, 2, 3) >>> my_tuple['no'] Traceback (most recent call last): File "", line 1, in TypeError: tuple indices must be integers or slices, not str The error message shows `slices` are indeed an option to use when indexing a list or tuple. Would be happy to submit a documentation PR if this minor change would be accepted. 
---------- components: Library (Lib) messages: 402577 nosy: Rin priority: normal severity: normal status: open title: Better `TypeError` message when a string is indexed using a string type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Sep 24 15:23:45 2021 From: report at bugs.python.org (CireSnave) Date: Fri, 24 Sep 2021 19:23:45 +0000 Subject: [New-bugs-announce] [issue45285] c_char incorrectly treated as bytes in Structure Message-ID: <1632511425.07.0.917515072933.issue45285@roundup.psfhosted.org> New submission from CireSnave : When dealing with a Structure containing c_char variables, the variables are incorrectly being typed as bytes. As a result, a pointer to those c_char variables cannot be created because bytes is not a ctypes type. from ctypes import ( Structure, c_char, pointer, ) class MyStruct(Structure): _fields_ = [("a", c_char), ("b", c_char), ("c", c_char)] x: MyStruct = MyStruct(98, 99, 100) print(type(x.a)) # Prints ??? Both mypy and PyRight agree that x.a is a c_char. some_variable = pointer(x.a) # Traceback (most recent call last): # File "C:\Users\cires\ctypes_test.py", line 23, in # some_variable = pointer(x.a) # TypeError: _type_ must have storage info ---------- components: ctypes messages: 402582 nosy: ciresnave priority: normal severity: normal status: open title: c_char incorrectly treated as bytes in Structure type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 25 08:27:00 2021 From: report at bugs.python.org (mxmlnkn) Date: Sat, 25 Sep 2021 12:27:00 +0000 Subject: [New-bugs-announce] [issue45286] zipfile missing API for links Message-ID: <1632572820.24.0.376186028462.issue45286@roundup.psfhosted.org> New submission from mxmlnkn : When using zipfile as a library to get simple files, there is no method to determine whether a ZipInfo object is a link or not. Moreover, links can be simply opened and read like normal files and will contain the link path, which is unexpected in most cases, I think. ZipInfo already has an `is_dir` getter. It would be nice if there also was an `is_link` getter. Note that `__repr__` actually shows the filemode, which is `lrwxrwxrwx`. But there is not even a getter for the file mode. For now, I can try to use the code from `__repr__` to extract the file mode from the `external_attr` member, but the contents of that member are not documented in zipfile, and assuming it is the same as in the ZIP file format specification, it's OS-dependent. In addition to `is_link`, some getter like `linkname` would be nice. As to how it should behave when calling `open` or `read` on a link, I'm not sure.
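For reference, a hedged sketch of the workaround described above, relying on the undocumented, OS-dependent layout of `external_attr` ("archive.zip" is only a placeholder name):

```python
import stat
import zipfile

def is_link(zinfo: zipfile.ZipInfo) -> bool:
    # On archives created on Unix, the high 16 bits of external_attr
    # hold the st_mode bits; on other archives this may simply be 0.
    return stat.S_ISLNK(zinfo.external_attr >> 16)

def linkname(zf: zipfile.ZipFile, zinfo: zipfile.ZipInfo) -> str:
    # For symlink entries the member data is the link target path.
    return zf.read(zinfo).decode()

with zipfile.ZipFile("archive.zip") as zf:
    for zinfo in zf.infolist():
        if is_link(zinfo):
            print(zinfo.filename, "->", linkname(zf, zinfo))
```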
---------- messages: 402617 nosy: mxmlnkn priority: normal severity: normal status: open title: zipfile missing API for links _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 25 10:20:18 2021 From: report at bugs.python.org (mxmlnkn) Date: Sat, 25 Sep 2021 14:20:18 +0000 Subject: [New-bugs-announce] [issue45287] zipfile.is_zipfile returns true for a rar file containing zips Message-ID: <1632579618.64.0.52405699789.issue45287@roundup.psfhosted.org> New submission from mxmlnkn : I have created a RAR file containing two zip files like this: zip bag.zip README.md CHANGELOG.md zip bag1.zip CHANGELOG.md rar a zips.rar bag.zip bag1.zip And when calling `zipfile.is_zipfile` on zips.rar, it returns true even though it obviously is not a zip. The zips.rar file doesn't even begin with the magic bytes `PK` for zip but with `Rar!`. ---------- files: zips.rar messages: 402624 nosy: mxmlnkn priority: normal severity: normal status: open title: zipfile.is_zipfile returns true for a rar file containing zips Added file: https://bugs.python.org/file50305/zips.rar _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 25 10:28:24 2021 From: report at bugs.python.org (Cristobal Riaga) Date: Sat, 25 Sep 2021 14:28:24 +0000 Subject: [New-bugs-announce] [issue45288] Inspect - Added sort_result parameter on getmembers function. Message-ID: <1632580104.8.0.617180207375.issue45288@roundup.psfhosted.org> New submission from Cristobal Riaga : Added `sort_result` parameter (`bool=True`) on `getmembers` function inside `Lib/inspect.py` that, as its name says, allows you to get the `getmembers` result without sorting it. I need this, and it seems impossible to achieve because of [line 367](https://github.com/python/cpython/blob/3.9/Lib/inspect.py#L367): ```py results.sort(key=lambda pair: pair[0]) ``` Any other solution is very welcome. (I need it because I'm working on an [API Reference creator](https://github.com/Patitotective/PyAPIReference) and I think it would be better if the members are listed in the same order you define them.) ---------- components: Library (Lib) messages: 402626 nosy: Patitotective priority: normal pull_requests: 26947 severity: normal status: open title: Inspect - Added sort_result parameter on getmembers function.
versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 25 15:56:32 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 25 Sep 2021 19:56:32 +0000 Subject: [New-bugs-announce] [issue45289] test_gdbm segfaults in M1 Mac Message-ID: <1632599792.09.0.486120515403.issue45289@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : 0:04:14 load avg: 3.86 [141/427/1] test_dbm crashed (Exit code -11) Fatal Python error: Segmentation fault Current thread 0x0000000102e2bd40 (most recent call first): File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/test_dbm.py", line 58 in keys_helper File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/test_dbm.py", line 143 in read_helper File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/test_dbm.py", line 74 in test_anydbm_creation File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/case.py", line 547 in _callTestMethod File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/case.py", line 591 in run File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/case.py", line 646 in __call__ File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/suite.py", line 122 in run File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/suite.py", line 84 in __call__ File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/suite.py", line 122 in run File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/suite.py", line 84 in __call__ File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/suite.py", line 122 in run File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/suite.py", line 84 in __call__ File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/unittest/runner.py", line 197 in run File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/support/__init__.py", line 992 in _run_suite File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/support/__init__.py", line 1118 in run_unittest File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/runtest.py", line 261 in _test_module File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/runtest.py", line 297 in _runtest_inner2 File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/runtest.py", line 340 in _runtest_inner File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/runtest.py", line 202 in _runtest File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/runtest.py", line 245 in runtest File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/runtest_mp.py", line 83 in run_tests_worker File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/main.py", line 678 in _main File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/main.py", line 658 in main File 
"/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/libregrtest/main.py", line 736 in main File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/regrtest.py", line 43 in _main File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/test/regrtest.py", line 47 in File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/runpy.py", line 86 in _run_code File "/Users/buildbot/buildarea/3.x.pablogsal-macos-m1.macos-with-brew/build/Lib/runpy.py", line 196 in _run_module_as_main Extension modules: _testcapi (total: 1) 0:04:15 load avg: 3.87 [142/427/1] test_c_ The backtrace according to lldb: (lldb) bt * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x10365c000) * frame #0: 0x0000000182bc6994 libsystem_platform.dylib`_platform_memmove + 84 frame #1: 0x00000001000b71d0 python.exe`PyBytes_FromStringAndSize(str="c", size=4294967297) at bytesobject.c:165:5 frame #2: 0x000000010360b234 _dbm.cpython-311d-darwin.so`_dbm_dbm_keys_impl(self=0x00000001034b3ca0, cls=0x000000010088a580) at _dbmmodule.c:249:16 frame #3: 0x000000010360af34 _dbm.cpython-311d-darwin.so`_dbm_dbm_keys(self=0x00000001034b3ca0, cls=0x000000010088a580, args=0x00000001007991d8, nargs=0, kwnames=0x0000000000000000) at _dbmmodule.c.h:46:20 frame #4: 0x00000001000dca8c python.exe`method_vectorcall_FASTCALL_KEYWORDS_METHOD(func=0x00000001034db290, args=0x00000001007991d0, nargsf=9223372036854775809, kwnames=0x0000000000000000) at descrobject.c:367:24 frame #5: 0x0000000100252374 python.exe`_PyObject_VectorcallTstate(tstate=0x0000000100808b30, callable=0x00000001034db290, args=0x00000001007991d0, nargsf=9223372036854775809, kwnames=0x0000000000000000) at abstract.h:114:11 frame #6: 0x000000010024fc10 python.exe`PyObject_Vectorcall(callable=0x00000001034db290, args=0x00000001007991d0, nargsf=9223372036854775809, kwnames=0x0000000000000000) at abstract.h:123:12 frame #7: 0x000000010024fd70 python.exe`call_function(tstate=0x0000000100808b30, pp_stack=0x000000016fdc8aa0, oparg=1, kwnames=0x0000000000000000, use_tracing=0) at ceval.c:6418:13 frame #8: 0x000000010024a518 python.exe`_PyEval_EvalFrameDefault(tstate=0x0000000100808b30, frame=0x0000000100799150, throwflag=0) at ceval.c:4568:19 frame #9: 0x000000010023a0f0 python.exe`_PyEval_EvalFrame(tstate=0x0000000100808b30, frame=0x0000000100799150, throwflag=0) at pycore_ceval.h:46:12 frame #10: 0x000000010023a00c python.exe`_PyEval_Vector(tstate=0x0000000100808b30, con=0x00000001034e0320, locals=0x0000000000000000, args=0x0000000100799118, argcount=2, kwnames=0x0000000000000000) at ceval.c:5609:24 frame #11: 0x00000001000ccbe4 python.exe`_PyFunction_Vectorcall(func=0x00000001034e0310, stack=0x0000000100799118, nargsf=9223372036854775810, kwnames=0x0000000000000000) at call.c:342:16 frame #12: 0x0000000100252374 python.exe`_PyObject_VectorcallTstate(tstate=0x0000000100808b30, callable=0x00000001034e0310, args=0x0000000100799118, nargsf=9223372036854775810, kwnames=0x0000000000000000) at abstract.h:114:11 frame #13: 0x000000010024fc10 python.exe`PyObject_Vectorcall(callable=0x00000001034e0310, args=0x0000000100799118, nargsf=9223372036854775810, kwnames=0x0000000000000000) at abstract.h:123:12 frame #14: 0x000000010024fd70 python.exe`call_function(tstate=0x0000000100808b30, pp_stack=0x000000016fdca950, oparg=2, kwnames=0x0000000000000000, use_tracing=0) at ceval.c:6418:13 frame #15: 0x000000010024a518 
python.exe`_PyEval_EvalFrameDefault(tstate=0x0000000100808b30, frame=0x00000001007990a8, throwflag=0) at ceval.c:4568:19 frame #16: 0x000000010023a0f0 python.exe`_PyEval_EvalFrame(tstate=0x0000000100808b30, frame=0x00000001007990a8, throwflag=0) at pycore_ceval.h:46:12 frame #17: 0x000000010023a00c python.exe`_PyEval_Vector(tstate=0x0000000100808b30, con=0x00000001034e0c10, locals=0x0000000000000000, args=0x0000000100799080, argcount=2, kwnames=0x0000000000000000) at ceval.c:5609:24 frame #18: 0x00000001000ccbe4 python.exe`_PyFunction_Vectorcall(func=0x00000001034e0c00, stack=0x0000000100799080, nargsf=9223372036854775810, kwnames=0x0000000000000000) at call.c:342:16 frame #19: 0x0000000100252374 python.exe`_PyObject_VectorcallTstate(tstate=0x0000000100808b30, callable=0x00000001034e0c00, args=0x0000000100799080, nargsf=9223372036854775810, kwnames=0x0000000000000000) at abstract.h:114:11 frame #20: 0x000000010024fc10 python.exe`PyObject_Vectorcall(callable=0x00000001034e0c00, args=0x0000000100799080, nargsf=9223372036854775810, kwnames=0x0000000000000000) at abstract.h:123:12 frame #21: 0x000000010024fd70 python.exe`call_function(tstate=0x0000000100808b30, pp_stack=0x000000016fdcc800, oparg=2, kwnames=0x0000000000000000, use_tracing=0) at ceval.c:6418:13 frame #22: 0x000000010024a518 python.exe`_PyEval_EvalFrameDefault(tstate=0x0000000100808b30, frame=0x0000000100799018, throwflag=0) at ceval.c:4568:19 frame #23: 0x000000010023a0f0 python.exe`_PyEval_EvalFrame(tstate=0x0000000100808b30, frame=0x0000000100799018, throwflag=0) at pycore_ceval.h:46:12 frame #24: 0x000000010023a00c python.exe`_PyEval_Vector(tstate=0x0000000100808b30, con=0x00000001034e0530, locals=0x0000000000000000, args=0x0000000100798ff0, argcount=1, kwnames=0x0000000000000000) at ceval.c:5609:24 ...... 
The last 3 frames: frame #1: 0x00000001000b71d0 python.exe`PyBytes_FromStringAndSize(str="c", size=4294967297) at bytesobject.c:165:5 162 if (str == NULL) 163 return (PyObject *) op; 164 -> 165 memcpy(op->ob_sval, str, size); 166 /* share short strings */ 167 if (size == 1) { 168 struct _Py_bytes_state *state = get_bytes_state(); (lldb) frame #2: 0x000000010360b234 _dbm.cpython-311d-darwin.so`_dbm_dbm_keys_impl(self=0x00000001034b3ca0, cls=0x000000010088a580) at _dbmmodule.c:249:16 246 } 247 for (key = dbm_firstkey(self->di_dbm); key.dptr; 248 key = dbm_nextkey(self->di_dbm)) { -> 249 item = PyBytes_FromStringAndSize(key.dptr, key.dsize); 250 if (item == NULL) { 251 Py_DECREF(v); 252 return NULL; (lldb) frame #3: 0x000000010360af34 _dbm.cpython-311d-darwin.so`_dbm_dbm_keys(self=0x00000001034b3ca0, cls=0x000000010088a580, args=0x00000001007991d8, nargs=0, kwnames=0x0000000000000000) at _dbmmodule.c.h:46:20 43 )) { 44 goto exit; 45 } -> 46 return_value = _dbm_dbm_keys_impl(self, cls); 47 48 exit: 49 return return_value; The segfault happens here: frame #0: 0x0000000182bc6994 libsystem_platform.dylib`_platform_memmove + 84 libsystem_platform.dylib`_platform_memmove: -> 0x182bc6994 <+84>: ldnp q0, q1, [x1] 0x182bc6998 <+88>: add x1, x1, #0x20 ; =0x20 0x182bc699c <+92>: subs x2, x2, #0x20 ; =0x20 0x182bc69a0 <+96>: b.hi 0x182bc698c ; <+76> ---------- components: Tests messages: 402643 nosy: pablogsal priority: normal severity: normal status: open title: test_gdbm segfaults in M1 Mac type: crash versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Sep 25 17:27:51 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Sat, 25 Sep 2021 21:27:51 +0000 Subject: [New-bugs-announce] [issue45290] test_multiprocessing_pool_circular_import fails in M1 mac Message-ID: <1632605271.03.0.63095371582.issue45290@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : ====================================================================== FAIL: test_multiprocessing_pool_circular_import (test.test_importlib.test_threaded_import.ThreadedImportTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/pablogsal/github/cpython/Lib/test/test_importlib/test_threaded_import.py", line 258, in test_multiprocessing_pool_circular_import script_helper.assert_python_ok(fn) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/test/support/script_helper.py", line 160, in assert_python_ok return _assert_python(True, *args, **env_vars) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/test/support/script_helper.py", line 145, in _assert_python res.fail(cmd_line) ^^^^^^^^^^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/test/support/script_helper.py", line 72, in fail raise AssertionError("Process return code is %d\n" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: Process return code is 1 command line: ['/Users/pablogsal/github/cpython/python.exe', '-X', 'faulthandler', '-I', '/Users/pablogsal/github/cpython/Lib/test/test_importlib/partial/pool_in_threads.py'] stdout: --- --- stderr: --- Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback 
(most recent call last): File "/Users/pablogsal/github/cpython/Lib/test/test_importlib/partial/pool_in_threads.py", line 9, in t with multiprocessing.Pool(1): ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/multiprocessing/context.py", line 119, in Pool File "/Users/pablogsal/github/cpython/Lib/multiprocessing/pool.py", line 212, in __init__ File "/Users/pablogsal/github/cpython/Lib/multiprocessing/pool.py", line 303, in _repopulate_pool File "/Users/pablogsal/github/cpython/Lib/multiprocessing/pool.py", line 326, in _repopulate_pool_static File "/Users/pablogsal/github/cpython/Lib/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) ^^^^^^^^^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) ^^^^^^^^^^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/multiprocessing/popen_spawn_posix.py", line 53, in _launch parent_r, child_w = os.pipe() ^^^^^^^^^ File "/Users/pablogsal/github/cpython/Lib/test/test_importlib/partial/pool_in_threads.py", line 9, in t with multiprocessing.Pool(1): ^^^^^^^^^^^^^^^^^^^^^^^ OSError: [Errno 24] Too many open files /Users/pablogsal/github/cpython/Lib/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 65 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' /Users/pablogsal/github/cpython/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-m8lbung6': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/pablogsal/github/cpython/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-w8c_ci83': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/pablogsal/github/cpython/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-yzcaa23b': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) /Users/pablogsal/github/cpython/Lib/multiprocessing/resource_tracker.py:237: UserWarning: resource_tracker: '/mp-vgij30u5': [Errno 2] No such file or directory warnings.warn('resource_tracker: %r: %s' % (name, e)) --- ---------------------------------------------------------------------- Ran 1428 tests in 2.752s FAILED (failures=1, skipped=6, expected failures=1) test test_importlib failed test_importlib failed (1 failure) == Tests result: FAILURE == 6 tests OK. 
---------- components: Tests, macOS messages: 402648 nosy: ned.deily, pablogsal, ronaldoussoren priority: normal severity: normal status: open title: test_multiprocessing_pool_circular_import fails in M1 mac versions: Python 3.10, Python 3.11, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 26 06:58:45 2021 From: report at bugs.python.org (Yiyang Zhan) Date: Sun, 26 Sep 2021 10:58:45 +0000 Subject: [New-bugs-announce] [issue45291] Some instructions in the "Using Python on Unix platforms" document do not work on CentOS 7 Message-ID: <1632653925.67.0.37350677863.issue45291@roundup.psfhosted.org> New submission from Yiyang Zhan : The instructions in the "Custom OpenSSL" section of "Using Python on Unix platforms" do not work on CentOS 7: https://github.com/python/cpython/blob/v3.10.0rc2/Doc/using/unix.rst#custom-openssl. CPython's ./configure script assumes the OpenSSL library resides in "$ssldir/lib". This isn't guaranteed with the current instructions, because OpenSSL might create, for example, lib64 for the .so files. See https://github.com/openssl/openssl/blob/openssl-3.0.0/INSTALL.md#libdir: > Some build targets have a multilib postfix set in the build configuration. For these targets the default libdir is lib. Please use --libdir=lib to override the libdir if adding the postfix is undesirable. Therefore it's better to explicitly set --libdir=lib. ---------- assignee: docs at python components: Documentation messages: 402657 nosy: docs at python, zhanpon priority: normal severity: normal status: open title: Some instructions in the "Using Python on Unix platforms" document do not work on CentOS 7 type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 26 08:07:11 2021 From: report at bugs.python.org (Irit Katriel) Date: Sun, 26 Sep 2021 12:07:11 +0000 Subject: [New-bugs-announce] [issue45292] Implement PEP 654 Message-ID: <1632658031.68.0.457473173232.issue45292@roundup.psfhosted.org> Change by Irit Katriel : ---------- assignee: iritkatriel components: Documentation, Interpreter Core, Library (Lib) nosy: iritkatriel priority: normal severity: normal status: open title: Implement PEP 654 type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 26 08:13:59 2021 From: report at bugs.python.org (Kapil Bansal) Date: Sun, 26 Sep 2021 12:13:59 +0000 Subject: [New-bugs-announce] [issue45293] List inplace addition different from normal addition Message-ID: <1632658439.55.0.347636093536.issue45293@roundup.psfhosted.org> New submission from Kapil Bansal : Hi, I tried addition and in-place addition on a list. >>> l = ['a', 'b', 'c'] >>> l = l + 'de' (raises an error) >>> l += 'de' >>> print(l) ['a', 'b', 'c', 'd', 'e'] I want to ask why the behaviour of these two is different. If it is done intentionally, then it should be in the documentation, but I am not able to find any reference.
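For reference, a short sketch of what is going on: `+=` on a list behaves like `list.extend()` and accepts any iterable, while `+` requires another list:

```python
l = ['a', 'b', 'c']

try:
    l + 'de'            # list.__add__ only accepts another list
except TypeError as exc:
    print(exc)          # can only concatenate list (not "str") to list

l += 'de'               # list.__iadd__ behaves like l.extend('de')
print(l)                # ['a', 'b', 'c', 'd', 'e']

l2 = ['a', 'b', 'c']
l2.extend('de')         # same result as +=
print(l2)               # ['a', 'b', 'c', 'd', 'e']
```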
---------- messages: 402658 nosy: KapilBansal320 priority: normal severity: normal status: open title: List inplace addition different from normal addition type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 26 11:25:13 2021 From: report at bugs.python.org (arts stars) Date: Sun, 26 Sep 2021 15:25:13 +0000 Subject: [New-bugs-announce] [issue45294] Conditional import fails and produces UnboundLocalError if a variable matching the import name is used before Message-ID: <1632669913.29.0.580043897174.issue45294@roundup.psfhosted.org> New submission from arts stars : Hello, I have this situation: ---------------- def test(): if True : print("Exception"+DiaObjectFactoryHelper) else: from . import DiaObjectFactoryHelper pass test() --------------- Instead of "NameError: name 'DiaObjectFactoryHelper' is not defined" (which is what you get when the 'from . import' line is commented out), it generates this: UnboundLocalError: local variable 'DiaObjectFactoryHelper' referenced before assignment PS: The GitHub authentication did not work; it did not manage to grab my email even though it is set public ---------- messages: 402663 nosy: stars-of-stras priority: normal severity: normal status: open title: Conditional import fails and produces UnboundLocalError if a variable matching the import name is used before versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 26 12:29:41 2021 From: report at bugs.python.org (Ken Jin) Date: Sun, 26 Sep 2021 16:29:41 +0000 Subject: [New-bugs-announce] [issue45295] _PyObject_GetMethod/LOAD_METHOD for C classmethods Message-ID: <1632673781.3.0.384078877149.issue45295@roundup.psfhosted.org> New submission from Ken Jin : LOAD_METHOD + CALL_METHOD currently doesn't work for Python @classmethod and C classmethod (METH_CLASS). They still create bound classmethods, which are fairly expensive. I propose supporting classmethods. I have an implementation for C classmethods. It passes most of the test suite, and I've also got it to play along with PEP 659 specialization. Some numbers from a Windows release build (a PGO build will likely be less favorable): python.exe -m timeit "int.from_bytes(b'')" Main: 2000000 loops, best of 5: 107 nsec per loop Patched: 5000000 loops, best of 5: 72.4 nsec per loop Funnily enough, `(1).from_bytes()` still needs a bound classmethod, but I think people usually use the other form. A toy PR will be up for review. I will then split the change into two parts (one for _PyObject_GetMethod changes, another for PEP 659 specialization) to help decide if the maintenance-perf ratio is worth it. ---------- components: Interpreter Core messages: 402668 nosy: kj priority: normal severity: normal status: open title: _PyObject_GetMethod/LOAD_METHOD for C classmethods type: performance versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 26 16:52:27 2021 From: report at bugs.python.org (Terry J. Reedy) Date: Sun, 26 Sep 2021 20:52:27 +0000 Subject: [New-bugs-announce] [issue45296] IDLE: Change Ctrl-Z note in exit/quit repr on Windows Message-ID: <1632689547.92.0.46019643965.issue45296@roundup.psfhosted.org> New submission from Terry J.
Reedy : On Windows: >>> exit 'Use exit() or Ctrl-Z plus Return to exit' >>> quit 'Use quit() or Ctrl-Z plus Return to exit' >>> exit.eof 'Ctrl-Z plus Return' On *nix, 'Ctrl-Z plus Return' is 'Ctrl-D (i.e, EOF)' IDLE uses the latter even on Windows, and Ctrl-Z does not work. Both exit and quit are instances of _sitebuiltins.Quitter https://github.com/python/cpython/blob/e14d5ae5447ae28fc4828a9cee8e9007f9c30700/Lib/_sitebuiltins.py#L13-L26 class Quitter(object): def __init__(self, name, eof): self.name = name self.eof = eof def __repr__(self): return 'Use %s() or %s to exit' % (self.name, self.eof) def __call__ [not relevant here] We just need to replace current exit/quit.eof as indicated above on startup. ---------- messages: 402678 nosy: terry.reedy priority: normal severity: normal status: open title: IDLE: Change Ctrl-Z note in exit/quit repr on Windows _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 26 20:28:07 2021 From: report at bugs.python.org (Steven D'Aprano) Date: Mon, 27 Sep 2021 00:28:07 +0000 Subject: [New-bugs-announce] [issue45297] Improve the IDLE shell save command Message-ID: <1632702487.28.0.152974990593.issue45297@roundup.psfhosted.org> New submission from Steven D'Aprano : See this question on Discuss: https://discuss.python.org/t/what-is-this-syntax-i-dont-know-how-to-fix-it/10844 It seems that IDLE allows you to save the shell output, complete with welcome message and prompts, as a .py file, and then reopen it and attempt to run it, which obviously fails. When it does fail, it is confusing. Suggested enhancements: - When saving the complete shell session, save it to a text file, not .py. That would be useful for anyone wanting a full record of the interpreter session. - When saving the shell session as a .py file, strip out the welcome message, the prompts, and any output, leaving only what will hopefully be runnable code. I don't know what sort of UI this should have. Two different menu commands? A checkbox in the Save dialog? My thoughts are that the heuristic to reconstruct runnable code from the interpreter session may not be foolproof, but better than nothing. Something like the doctest rules might work reasonably well. - strip the welcome message; - any line (or block) starting with the prompt >>> that is followed by a traceback should be thrown out; - any other text not starting with the prompt >>> is interpreter output and should be thrown out; - you should be left with just blocks starting with the prompt, so remove the prompt, adjust the indents, and hopefully you're left with valid runnable code. 
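For illustration, a rough sketch of the prompt-stripping part of that heuristic (assuming doctest-style '>>> '/'... ' prompts; the traceback rule is left out):

```python
def extract_code(shell_text: str) -> str:
    # Keep only lines typed at the prompts; drop the banner,
    # interpreter output and tracebacks.
    kept = []
    for line in shell_text.splitlines():
        if line.startswith(">>> ") or line.startswith("... "):
            kept.append(line[4:])
    return "\n".join(kept) + "\n"
```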
---------- assignee: terry.reedy components: IDLE messages: 402686 nosy: steven.daprano, terry.reedy priority: normal severity: normal status: open title: Improve the IDLE shell save command type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Sep 26 21:33:14 2021 From: report at bugs.python.org (Keming) Date: Mon, 27 Sep 2021 01:33:14 +0000 Subject: [New-bugs-announce] [issue45298] SIGSEGV when access a fork Event in a spawn Process Message-ID: <1632706394.95.0.366136047801.issue45298@roundup.psfhosted.org> New submission from Keming : Code to trigger this problem: ```python import multiprocessing as mp from time import sleep def wait_for_event(event): while not event.is_set(): sleep(0.1) def trigger_segment_fault(): event = mp.get_context("fork").Event() p = mp.get_context("spawn").Process(target=wait_for_event, args=(event,)) p.start() # this will show the exitcode=-SIGSEGV sleep(1) print(p) event.set() p.terminate() if __name__ == "__main__": trigger_segment_fault() ``` Accessing this forked event in a spawned process will result in a segment fault. I have found a related report: https://bugs.python.org/issue43832. But I think it's not well documented in the Python 3 multiprocessing doc. Will it be better to explicit indicate that the event is related to the start method context in the documentation? ---------- assignee: docs at python components: Documentation messages: 402687 nosy: docs at python, kemingy priority: normal severity: normal status: open title: SIGSEGV when access a fork Event in a spawn Process type: crash _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 01:02:29 2021 From: report at bugs.python.org (Grant Edwards) Date: Mon, 27 Sep 2021 05:02:29 +0000 Subject: [New-bugs-announce] [issue45299] SMTP.send_message() does from mangling when it should not Message-ID: <1632718949.8.0.657238158075.issue45299@roundup.psfhosted.org> New submission from Grant Edwards : SMTP.send_message() does from mangling even when the message's policy has that disabled. The problem is in the send_messsage() function shown below: 912 def send_message(self, msg, from_addr=None, to_addrs=None, 913 mail_options=(), rcpt_options=()): 914 """Converts message to a bytestring and passes it to sendmail. ... 963 # Make a local copy so we can delete the bcc headers. 964 msg_copy = copy.copy(msg) ... 977 with io.BytesIO() as bytesmsg: 978 if international: 979 g = email.generator.BytesGenerator( 980 bytesmsg, policy=msg.policy.clone(utf8=True)) 981 mail_options = (*mail_options, 'SMTPUTF8', 'BODY=8BITMIME') 982 else: 983 g = email.generator.BytesGenerator(bytesmsg) 984 g.flatten(msg_copy, linesep='\r\n') 985 flatmsg = bytesmsg.getvalue() If 'international' is True, then the BytesGenerator is passed msg.policy with utf8 added, and from mangling is only done if the message's policy has from mangling enabled. This is correct behavior. If 'international' is False, then the generator always does from mangling regardless of the message'spolicy. From mangling is only used when writing message to mbox format files. When sending a message via SMTP It is wrong to do it when the message's policy says not to. This needs to be fixed by passing the message's policy to BytesGenerator() in the 'else' clause also. 
I would suggest changing 983 g = email.generator.BytesGenerator(bytesmsg) to 983 g = email.generator.BytesGenerator(bytesmsg, policy=msg.policy) The problem is that if neither mangle_from_ nor policy arguments are passed to email.generator.BytesGenerator(), then mangle_from_ defaults to True, and the mangle_from_ setting in the message is ignored. This is not really documented: https://docs.python.org/3/library/email.generator.html#email.generator.BytesGenerator If optional mangle_from_ is True, put a > character in front of any line in the body that starts with the exact string "From ", that is From followed by a space at the beginning of a line. mangle_from_ defaults to the value of the mangle_from_ setting of the policy. Where "the policy" refers to the one passed to BytesGenerator(). Note that this section does _not_ state what happens if neither mangle_from_ nor policy are passed to BytesGenerator(). What actually happens is that in that case mangle_from_ defaults to True. (Not a good choice for default, since that's only useful for the one case where you're writing to an mbox file.) However, there is also a misleading sentence later in that same section: If policy is None (the default), use the policy associated with the Message or EmailMessage object passed to flatten to control the message generation. That's only partially true. If you don't pass a policy to BytesGenerator(), only _some_ of the settings from the message's policy will be used. Some policy settings (e.g. mangle_from_) are obeyed when passed to BytesGenerator(), but ignored in the message's policy even if there was no policy passed to BytesGenerator(). The documentation needs to be changed to state that mangle_from_ defaults to True if neitehr mangle_from_ nor policy are passed to BytesGenerator(), and that last sentence needs to be changed to state that when no policy is passed to BytesGenerator() only _some_ of the fields in the message's policy are used to control the message generation. (An actual list of policy fields are obeyed from the message's policy would be nice.) 
---------- components: Library (Lib) messages: 402690 nosy: grant.b.edwards priority: normal severity: normal status: open title: SMTP.send_message() does from mangling when it should not versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 09:29:57 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 27 Sep 2021 13:29:57 +0000 Subject: [New-bugs-announce] [issue45300] Building Python documentation with doctest logs a ResourceWarning in Doc/library/nntplib.rst Message-ID: <1632749397.21.0.680115205063.issue45300@roundup.psfhosted.org> New submission from STINNER Victor : Build Python documentatioin with: make -C Doc/ PYTHON=../python venv LANG= PYTHONTRACEMALLOC=10 make -C Doc/ PYTHON=../python SPHINXOPTS="-q -W -j10" doctest 2>&1|tee log See the logs: /home/vstinner/python/main/Lib/socket.py:776: ResourceWarning: unclosed self._sock = None Object allocated at (most recent call last): File "/home/vstinner/python/main/Doc/venv/lib/python3.11/site-packages/sphinx/builders/__init__.py", lineno 361 self.write(docnames, list(updated_docnames), method) File "/home/vstinner/python/main/Doc/venv/lib/python3.11/site-packages/sphinx/ext/doctest.py", lineno 366 self.test_doc(docname, doctree) File "/home/vstinner/python/main/Doc/venv/lib/python3.11/site-packages/sphinx/ext/doctest.py", lineno 470 self.test_group(group) File "/home/vstinner/python/main/Doc/venv/lib/python3.11/site-packages/sphinx/ext/doctest.py", lineno 554 self.test_runner.run(test, out=self._warn_out, clear_globs=False) File "/home/vstinner/python/main/Lib/doctest.py", lineno 1495 return self.__run(test, compileflags, out) File "/home/vstinner/python/main/Lib/doctest.py", lineno 1348 exec(compile(example.source, filename, "single", File "", lineno 1 File "/home/vstinner/python/main/Lib/nntplib.py", lineno 334 self.sock = self._create_socket(timeout) File "/home/vstinner/python/main/Lib/nntplib.py", lineno 399 return socket.create_connection((self.host, self.port), timeout) File "/home/vstinner/python/main/Lib/socket.py", lineno 828 sock = socket(af, socktype, proto) ---------- assignee: docs at python components: Documentation messages: 402709 nosy: docs at python, vstinner priority: normal severity: normal status: open title: Building Python documentation with doctest logs a ResourceWarning in Doc/library/nntplib.rst versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 10:26:03 2021 From: report at bugs.python.org (STINNER Victor) Date: Mon, 27 Sep 2021 14:26:03 +0000 Subject: [New-bugs-announce] [issue45301] pycore_condvar.h: remove Windows conditonal variable emulation Message-ID: <1632752763.53.0.87370368586.issue45301@roundup.psfhosted.org> New submission from STINNER Victor : I recently worked on time.sleep() enhancement (bpo-21302) and threading bugfixes (bpo-45274, bpo-1596321). I saw one more time that Python emulates conditional variables to support Windows XP and older. But Python 3.11 requires Windows 8.1 or newer. IMO it's time to remove _PY_EMULATED_WIN_CV code path from pycore_condvar.h. 
---------- components: Library (Lib) messages: 402720 nosy: paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: pycore_condvar.h: remove Windows conditonal variable emulation versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 11:46:25 2021 From: report at bugs.python.org (xloem) Date: Mon, 27 Sep 2021 15:46:25 +0000 Subject: [New-bugs-announce] [issue45302] basic builtin functions missing __text_signature__ attributes Message-ID: <1632757585.5.0.650553887806.issue45302@roundup.psfhosted.org> New submission from xloem <0xloem at gmail.com>: As there is neither a __text_signature__ nor a __signature__ attribute on basic builtin functions such as print or open, inspect.signature() cannot enumerate their parameters. It seems adding these might not be a complex task for somebody familiar with the code. ---------- components: Argument Clinic, Interpreter Core messages: 402727 nosy: larry, xloem priority: normal severity: normal status: open title: basic builtin functions missing __text_signature__ attributes type: enhancement versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 11:48:43 2021 From: report at bugs.python.org (xloem) Date: Mon, 27 Sep 2021 15:48:43 +0000 Subject: [New-bugs-announce] [issue45303] ast module classes missing __text_signature__ attribute Message-ID: <1632757723.33.0.590596255222.issue45303@roundup.psfhosted.org> New submission from xloem <0xloem at gmail.com>: The ast module has no signature information on its types. The types are generated in a uniform way, so it should be reasonable to add __text_signature__ or __signature__ fields to all of them at once. ---------- components: Argument Clinic, Interpreter Core, Library (Lib) messages: 402728 nosy: larry, xloem priority: normal severity: normal status: open title: ast module classes missing __text_signature__ attribute type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 15:04:19 2021 From: report at bugs.python.org (jakirkham) Date: Mon, 27 Sep 2021 19:04:19 +0000 Subject: [New-bugs-announce] [issue45304] Supporting out-of-band buffers (pickle protocol 5) in multiprocessing Message-ID: <1632769459.32.0.297800730758.issue45304@roundup.psfhosted.org> New submission from jakirkham : In Python 3.8+, pickle protocol 5 (PEP 574) was added, which supports out-of-band buffer collection[1]. The idea is that when pickling an object with a large amount of data attached to it (like an array, dataframe, etc.) one could collect this large amount of data alongside the normal pickled data without causing a copy. This is important in particular when serializing data for communication between two Python instances. IOW this is quite valuable when using a `multiprocessing.pool.Pool`[2] or a `concurrent.futures.ProcessPoolExecutor`[3]. However, AFAICT neither of these leverages this functionality[4][5]. To ensure zero-copy processing of large data, it would be helpful for pickle protocol 5 to be used in both of these pools.
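For reference, a hedged standalone sketch of the PEP 574 mechanism referenced in [1] (the references are listed just below). This only shows the pickle-level API; multiprocessing's queues do not currently expose these hooks:

```python
import pickle

big = bytearray(b"x" * 10_000_000)   # stand-in for a large array/dataframe buffer

buffers = []
payload = pickle.dumps(
    pickle.PickleBuffer(big),        # wrap so the data is eligible for out-of-band transfer
    protocol=5,
    buffer_callback=buffers.append,  # the large buffer is collected here, not copied in-band
)
print(len(payload))                  # small: only metadata went into the pickle stream

restored = pickle.loads(payload, buffers=buffers)
assert bytes(restored) == bytes(big)
```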
[1] https://docs.python.org/3/library/pickle.html#pickle-oob [2] https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool [3] https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor [4] https://github.com/python/cpython/blob/16b5bc68964c6126845f4cdd54b24996e71ae0ba/Lib/multiprocessing/queues.py#L372 [5] https://github.com/python/cpython/blob/16b5bc68964c6126845f4cdd54b24996e71ae0ba/Lib/multiprocessing/queues.py#L245 ---------- components: IO, Library (Lib) messages: 402736 nosy: jakirkham priority: normal severity: normal status: open title: Supporting out-of-band buffers (pickle protocol 5) in multiprocessing type: performance versions: Python 3.10, Python 3.11, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 17:28:08 2021 From: report at bugs.python.org (Dave McNulla) Date: Mon, 27 Sep 2021 21:28:08 +0000 Subject: [New-bugs-announce] [issue45305] Incorrect record of call_args_list when using multiple side_effect in mock.patch Message-ID: <1632778088.54.0.273231089693.issue45305@roundup.psfhosted.org> New submission from Dave McNulla : https://gist.github.com/dmcnulla/ecec8fc96a2fd07082f240eeff6888d9 I'm trying to reproduce an error in a call to a method, forcing a second call to the method. In my test, the call_args_list is showing incorrectly (both when debugging and when running unittest normally). I am not certain under what other circumstances this happens. I was able to reduce the code to reproduce it quite a bit. Based on my debugging, I am certain that the first call to the method is not matching what I expect. I am using Python 3.7. I can reproduce it in IntelliJ or from the command line with `python3 -m pytest test_multiple_side_effect.py` ---------- components: Tests messages: 402743 nosy: dmcnulla priority: normal severity: normal status: open title: Incorrect record of call_args_list when using multiple side_effect in mock.patch versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 20:24:44 2021 From: report at bugs.python.org (Josh Haberman) Date: Tue, 28 Sep 2021 00:24:44 +0000 Subject: [New-bugs-announce] [issue45306] Docs are incorrect re: constant initialization in the C99 standard Message-ID: <1632788684.07.0.376752176258.issue45306@roundup.psfhosted.org> New submission from Josh Haberman : I believe the following excerpt from the docs is incorrect (https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_base): > Slot initialization is subject to the rules of initializing > globals. C99 requires the initializers to be "address > constants". Function designators like PyType_GenericNew(), > with implicit conversion to a pointer, are valid C99 address > constants. > > However, the unary "&" operator applied to a non-static > variable like PyBaseObject_Type() is not required to produce > an address constant. Compilers may support this (gcc does), > MSVC does not. Both compilers are strictly standard > conforming in this particular behavior. > > Consequently, tp_base should be set in the extension module's init function. I explained why in https://mail.python.org/archives/list/python-dev at python.org/thread/2WUFTVQA7SLEDEDYSRJ75XFIR3EUTKKO/ and on https://bugs.python.org/msg402738.
The short version: &foo is an "address constant" according to the standard whenever "foo" has static storage duration. Variables declared "extern" have static storage duration. Therefore strictly conforming implementations should accept &PyBaseObject_Type as a valid constant initializer. I believe the text above could be replaced by something like: > MSVC does not support constant initialization of of an address > that comes from another DLL, so extensions should be set in the > extension module's init function. ---------- assignee: docs at python components: Documentation messages: 402752 nosy: docs at python, jhaberman priority: normal severity: normal status: open title: Docs are incorrect re: constant initialization in the C99 standard _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Sep 27 21:01:15 2021 From: report at bugs.python.org (Gregory Szorc) Date: Tue, 28 Sep 2021 01:01:15 +0000 Subject: [New-bugs-announce] [issue45307] Removal of _PyImport_FindExtensionObject() in 3.10 limits custom extension module loaders Message-ID: <1632790875.67.0.121557732673.issue45307@roundup.psfhosted.org> New submission from Gregory Szorc : https://bugs.python.org/issue41994 / commit 4db8988420e0a122d617df741381b0c385af032c removed _PyImport_FindExtensionObject() from the C API and it is no longer available in CPython 3.10 after alpha 5. At least py2exe and PyOxidizer rely on this API for implementing a custom module loader for extension modules. Essentially, both want to implement a custom Loader.create_module() so they can use a custom mechanism for injecting a shared library into the process. While the details shouldn't be important beyond "they can't use imp.create_dynamic()," both use a similar technique that hooks LoadLibrary() on Windows to enable them to load a DLL from memory (as opposed to a file). While I don't have the extension module loading mechanism fully paged in to my head at the moment, I believe the reason that _PyImport_FindExtensionObject() (now effectively import_find_extension()) is important for py2exe and PyOxidizer is because they need to support at most once initialization, including honoring the multi-phase initialization semantics. Since the state of extension modules is stored in `static PyObject *extensions` and the thread state (which are opaque to the C API), the loss of _PyImport_FindExtensionObject() means there is no way to check for and use an existing extension module module object from the bowels of the importer machinery. And I think this means it isn't possible to implement well-behaved alternate extension module loaders any more. I'm aware the deleted API was "private" and probably shouldn't have been used in the first place. And what py2exe and PyOxidizer are doing here is rather unorthodox. In my mind the simplest path forward is restoring _PyImport_FindExtensionObject(). But a properly designed public API is probably a better solution. Until 3.10 makes equivalent functionality available or another workaround is supported, PyOxidizer won't be able to support loading extension modules from memory on Windows on Python 3.10. This is unfortunate. But probably not a deal breaker and I can probably go several months shipping PyOxidizer with this regression without too many complaints. 
---------- components: C API messages: 402754 nosy: indygreg, petr.viktorin, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Removal of _PyImport_FindExtensionObject() in 3.10 limits custom extension module loaders type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 28 03:57:03 2021 From: report at bugs.python.org (eleven xiang) Date: Tue, 28 Sep 2021 07:57:03 +0000 Subject: [New-bugs-announce] [issue45308] multiprocessing default fork child process will not free object, which is inherited from parent process Message-ID: <1632815823.67.0.320448602992.issue45308@roundup.psfhosted.org> New submission from eleven xiang : Hi, Here I did an experiment, and detail as below 1. There is library, which has global object and has its class method 2. Use multiprocessing module to fork the child process, parent process call the global object's method first, and then child process called the global object method again to modify it 3. After that when script exit, parent will free the object, but child process will not free it. 4. if we change the start method to 'spawn', the child process will free it ---------- components: Library (Lib) files: test.py messages: 402758 nosy: eleven.xiang priority: normal severity: normal status: open title: multiprocessing default fork child process will not free object, which is inherited from parent process type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file50307/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 28 05:54:32 2021 From: report at bugs.python.org (=?utf-8?b?56mG5YWw?=) Date: Tue, 28 Sep 2021 09:54:32 +0000 Subject: [New-bugs-announce] [issue45309] asyncio task can not be used to open_connection and read data. Message-ID: <1632822872.87.0.0190246614717.issue45309@roundup.psfhosted.org> New submission from ?? : The server code: import asyncio import struct counter = 0 async def on_connection(r: asyncio.StreamReader, w: asyncio.StreamWriter): msg = struct.pack("HB", 3, 0) w.write(msg) await w.drain() global counter counter += 1 print(counter, "client") async def main(): server = await asyncio.start_server(on_connection, '0.0.0.0', 12345) await server.serve_forever() if __name__ == "__main__": asyncio.run(main()) The client code: import asyncio loop = asyncio.get_event_loop() counter = 0 c_counter = 0 async def connection_to(): r, w = await asyncio.open_connection('192.168.3.2', 12345) global c_counter c_counter += 1 print(c_counter, "connected") await r.readexactly(3) global counter counter += 1 print(counter, "get_msg") async def main(): for i in range(7000): t = loop.create_task(connection_to()) try: loop.run_until_complete(main()) loop.run_forever() except Exception as e: print(e.with_traceback(None)) I open the server on wsl debian and run the client on host windows. Try start more client at once, Then you will find that counter is not equal to c_counter. It nearly always happend. Now I can not trust create_task any more. ---------- components: asyncio messages: 402762 nosy: asvetlov, whitestockingirl, yselivanov priority: normal severity: normal status: open title: asyncio task can not be used to open_connection and read data. 
type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 28 08:22:18 2021 From: report at bugs.python.org (STINNER Victor) Date: Tue, 28 Sep 2021 12:22:18 +0000 Subject: [New-bugs-announce] [issue45310] test_multiprocessing_forkserver: test_shared_memory_basics() failed with FileExistsError: [Errno 17] File exists: '/test01_tsmb' Message-ID: <1632831738.98.0.525271172358.issue45310@roundup.psfhosted.org> New submission from STINNER Victor : AMD64 Fedora Stable LTO + PGO 3.10 (build 357): https://buildbot.python.org/all/#/builders/651/builds/357 First test_multiprocessing_forkserver failed, then test_multiprocessing_spawn, then test_multiprocessing_fork. I confirm that these tests fail if the /test01_tsmb shared memory exists. The test is supposed to unlink the shared memory once the test completes: def test_shared_memory_basics(self): sms = shared_memory.SharedMemory('test01_tsmb', create=True, size=512) self.addCleanup(sms.unlink) Maybe the whole process was killed while the test was running. I removed it manually: sudo ./python -c "import _posixshmem; _posixshmem.shm_unlink('/test01_tsmb')" Logs: Traceback (most recent call last): File "/home/buildbot/buildarea/3.10.cstratak-fedora-stable-x86_64.lto-pgo/build/Lib/multiprocessing/resource_tracker.py", line 209, in main cache[rtype].remove(name) KeyError: '/psm_49a93592' (...) ====================================================================== ERROR: test_shared_memory_basics (test.test_multiprocessing_forkserver.WithProcessesTestSharedMemory) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.10.cstratak-fedora-stable-x86_64.lto-pgo/build/Lib/test/_test_multiprocessing.py", line 3777, in test_shared_memory_basics sms = shared_memory.SharedMemory('test01_tsmb', create=True, size=512) File "/home/buildbot/buildarea/3.10.cstratak-fedora-stable-x86_64.lto-pgo/build/Lib/multiprocessing/shared_memory.py", line 103, in __init__ self._fd = _posixshmem.shm_open( FileExistsError: [Errno 17] File exists: '/test01_tsmb' ---------- components: Tests messages: 402770 nosy: vstinner priority: normal severity: normal status: open title: test_multiprocessing_forkserver: test_shared_memory_basics() failed with FileExistsError: [Errno 17] File exists: '/test01_tsmb' versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 28 09:32:03 2021 From: report at bugs.python.org (Besart Dollma) Date: Tue, 28 Sep 2021 13:32:03 +0000 Subject: [New-bugs-announce] [issue45311] Threading Semaphore and BoundedSemaphore release method implementation improvement Message-ID: <1632835923.15.0.15993520004.issue45311@roundup.psfhosted.org> New submission from Besart Dollma : Hi, I was looking at the implementation of Semaphore and BoundedSemaphore in threading.py and I saw that `notify` is called on a loop n times while it supports an n parameter. https://github.com/python/cpython/blob/84975146a7ce64f1d50dcec8311b7f7188a5c962/Lib/threading.py#L479 Unless I am missing something, removing the loop and replacing it with self._cond.notify(n) will slightly improve performance by avoiding the function call overhead. I can prepare a Pool Request if the change is acceptable. 
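For reference, a sketch of the proposed simplification (assuming the current Lib/threading.py implementation and its private attributes; this is not the actual patch):

```python
import threading

class PatchedSemaphore(threading.Semaphore):
    """Sketch only: release() using a single notify(n) call."""
    def release(self, n=1):
        if n < 1:
            raise ValueError('n must be one or more')
        with self._cond:
            self._value += n
            # instead of: for i in range(n): self._cond.notify()
            self._cond.notify(n)

sem = PatchedSemaphore(0)
sem.release(3)   # wakes up to three waiters with one notify call
```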
Thanks, ---------- components: Library (Lib) messages: 402779 nosy: besi7dollma priority: normal severity: normal status: open title: Threading Semaphore and BoundedSemaphore release method implementation improvement type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 28 11:49:22 2021 From: report at bugs.python.org (=?utf-8?q?Dimitrije_Milovi=C4=87?=) Date: Tue, 28 Sep 2021 15:49:22 +0000 Subject: [New-bugs-announce] [issue45312] "MUPCA Root" Certificates - treated as invalid and cause error, but are walid and necessary Message-ID: <1632844162.48.0.609560888074.issue45312@roundup.psfhosted.org> New submission from Dimitrije Milović : I just commented on the issue here https://bugs.python.org/issue35665?@ok_message=issue%2035665%20files%20edited%20ok&@template=item, but noticed it is "closed", so it seemed better to start a new issue and to further stress the importance of those certificates... I came to this issue (still present with all Python versions since 3.6) while using yt-dlp: https://github.com/yt-dlp/yt-dlp/issues/1060 I obviously have the SAME problem as the guys in your link, since I am from Serbia too, and those "MUPCA Root" certificates (issued by the ministry of interior - police?, and unfortunately badly executed) are crucial for being able to read ID cards and use personal signing certificates, and they are all valid... So the option to remove the faulty certificates is a no-go for me (or anyone in Serbia using their ID card - individuals, companies and entrepreneurs like me)... Please help! ---------- components: Windows files: Untitled.png messages: 402784 nosy: MDM-1, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: "MUPCA Root" Certificates - treated as invalid and cause error, but are walid and necessary type: crash versions: Python 3.9 Added file: https://bugs.python.org/file50312/Untitled.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 28 13:26:47 2021 From: report at bugs.python.org (bytebites) Date: Tue, 28 Sep 2021 17:26:47 +0000 Subject: [New-bugs-announce] [issue45313] Counter.elements() wrong error message Message-ID: <1632850007.37.0.962387512154.issue45313@roundup.psfhosted.org> New submission from bytebites : aCounter.elements(1) gives a wrong error message: TypeError: Counter.elements() takes 1 positional argument but 2 were given ---------- messages: 402795 nosy: bytebites priority: normal severity: normal status: open title: Counter.elements() wrong error message type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 28 14:20:08 2021 From: report at bugs.python.org (Frans) Date: Tue, 28 Sep 2021 18:20:08 +0000 Subject: [New-bugs-announce] [issue45314] Using target python while cross-building Message-ID: <1632853208.74.0.704194556945.issue45314@roundup.psfhosted.org> New submission from Frans : While trying to cross-compile Python-3.9.7, I came across the following error report: i586-cross-linux-gcc -Xlinker -export-dynamic -o python Programs/python.o -L.
-lpython3.9 -lcrypt -ldl -lpthread -lm -lm _PYTHON_PROJECT_BASE=/mnt/lfs/sources-base/Python-3.9.7 _PYTHON_HOST_PLATFORM=linux-i586 PYTHONPATH=./Lib _PYTHON_SYSCONFIGDATA_NAME=_sys configdata__linux_i386-linux-gnu python -S -m sysconfig --generate-posix-vars ;\ if test $? -ne 0 ; then \ echo "generate-posix-vars failed" ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi /bin/sh: line 1: python: command not found generate-posix-vars failed make: *** [Makefile:619: pybuilddir.txt] Error 1 ----------------------------------------------------------- Strange, because Python's configure/Makefile knows that we are cross-compiling, but still tries to execute a program built for the target architecture, which is not the same as the architecture of the build machine. Of course it fails. ---------- components: Cross-Build messages: 402799 nosy: Alex.Willmer, fransdb priority: normal severity: normal status: open title: Using target python while cross-building versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Sep 28 16:45:06 2021 From: report at bugs.python.org (Sebastian Berg) Date: Tue, 28 Sep 2021 20:45:06 +0000 Subject: [New-bugs-announce] [issue45315] `PyType_FromSpec` does not copy the name Message-ID: <1632861906.82.0.496996210358.issue45315@roundup.psfhosted.org> New submission from Sebastian Berg : As noted in the issue https://bugs.python.org/issue15870#msg402800, `PyType_FromSpec` assumes that the `name` passed is persistent for the program lifetime. This seems wrong/unnecessary: we are creating a heap type, and a heap type's name is stored as a unicode object anyway! So there is no reason to require the string to be persistent, and a program that decides to build a custom name dynamically (for whatever reason) would run into a crash when the name is accessed. The code at https://github.com/python/cpython/blob/0c50b8c0b8274d54d6b71ed7bd21057d3642f138/Objects/typeobject.c#L3427 should be modified to point to the `ht_name` (utf8) data instead. My guess is that the simplest solution is calling `type_set_name`, even if that runs some unnecessary checks. I understand that the FromSpec API is mostly designed to replace the static type definition rather than extend it, but it seems an unintentional/strange requirement. ---------- components: Interpreter Core messages: 402804 nosy: seberg priority: normal severity: normal status: open title: `PyType_FromSpec` does not copy the name type: crash versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 04:18:04 2021 From: report at bugs.python.org (STINNER Victor) Date: Wed, 29 Sep 2021 08:18:04 +0000 Subject: [New-bugs-announce] [issue45316] [C API] Functions not exported with PyAPI_FUNC() Message-ID: <1632903484.87.0.875373224779.issue45316@roundup.psfhosted.org> New submission from STINNER Victor : The Python C API contains multiple functions which are not exported with PyAPI_FUNC() and so are not usable. Public functions: --- void PyLineTable_InitAddressRange(const char *linetable, Py_ssize_t length, int firstlineno, PyCodeAddressRange *range); int PyLineTable_NextAddressRange(PyCodeAddressRange *range); int PyLineTable_PreviousAddressRange(PyCodeAddressRange *range); --- => Either make these functions private ("_Py" prefix) and move them to the internal C API, or add PyAPI_FUNC().
Private functions: --- int _PyCode_InitAddressRange(PyCodeObject* co, PyCodeAddressRange *bounds); int _PyCode_InitEndAddressRange(PyCodeObject* co, PyCodeAddressRange* bounds); PyDictKeysObject *_PyDict_NewKeysForClass(void); Py_ssize_t _PyDict_KeysSize(PyDictKeysObject *keys); uint32_t _PyDictKeys_GetVersionForCurrentState(PyDictKeysObject *dictkeys); Py_ssize_t _PyDictKeys_StringLookup(PyDictKeysObject* dictkeys, PyObject *key); PyObject *_PyDict_Pop_KnownHash(PyObject *, PyObject *, Py_hash_t, PyObject *); PyObject *_PyDict_FromKeys(PyObject *, PyObject *, PyObject *); int _PyObjectDict_SetItem(PyTypeObject *tp, PyObject **dictptr, PyObject *name, PyObject *value); PyObject *_PyDict_LoadGlobal(PyDictObject *, PyDictObject *, PyObject *); Py_ssize_t _PyDict_GetItemHint(PyDictObject *, PyObject *, Py_ssize_t, PyObject **); PyFrameObject* _PyFrame_New_NoTrack(struct _interpreter_frame *, int); int PySignal_SetWakeupFd(int fd); uint32_t _PyFunction_GetVersionForCurrentState(PyFunctionObject *func); PyObject *_PyGen_yf(PyGenObject *); PyObject *_PyCoro_GetAwaitableIter(PyObject *o); PyObject *_PyAsyncGenValueWrapperNew(PyObject *); void _PyArg_Fini(void); int _Py_CheckPython3(void); --- => I suggest to move all these declarations to the internal C API. Moreover, Include/moduleobject.h contains: --- #ifdef Py_BUILD_CORE extern int _PyModule_IsExtension(PyObject *obj); #endif --- IMO this declaration should be moved to the internal C API. See also bpo-45201 about PySignal_SetWakeupFd(). ---------- components: C API messages: 402827 nosy: vstinner priority: normal severity: normal status: open title: [C API] Functions not exported with PyAPI_FUNC() versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 07:40:33 2021 From: report at bugs.python.org (Mark Shannon) Date: Wed, 29 Sep 2021 11:40:33 +0000 Subject: [New-bugs-announce] [issue45317] Document the removal the usage of the C stack in Python to Python calls Message-ID: <1632915633.23.0.578655230735.issue45317@roundup.psfhosted.org> New submission from Mark Shannon : Assuming that issue 45256 is implemented, we will need to document it. I'm opening a separate issue, so this doesn't get lost in the midst of 45256. We need to: Document the changes to gdb. Possibly at https://wiki.python.org/moin/DebuggingWithGdb, or in the main docs. Add a "what's new" entry explaining what the impact of this change is. ---------- assignee: docs at python components: Documentation messages: 402850 nosy: Mark.Shannon, docs at python priority: release blocker severity: normal status: open title: Document the removal the usage of the C stack in Python to Python calls type: enhancement versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 07:47:24 2021 From: report at bugs.python.org (=?utf-8?q?Bo=C5=A1tjan_Mejak?=) Date: Wed, 29 Sep 2021 11:47:24 +0000 Subject: [New-bugs-announce] [issue45318] Python 3.10: cyclomatic complexity of match-case syntax Message-ID: <1632916044.26.0.76023159461.issue45318@roundup.psfhosted.org> New submission from Bo?tjan Mejak : I am wondering about the cyclomatic complexity of using the new match-case syntax in Python 3.10, and later. What is the cyclomatic complexity difference (if any?) of a match-case code as opposed to code that uses the if-elif-else syntax? 
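For concreteness, a pair of equivalent snippets of the kind being compared in this question (purely illustrative, with made-up names):

```
def describe_match(value):
    match value:
        case 0:
            return "zero"
        case int():
            return "integer"
        case _:
            return "something else"

def describe_if(value):
    # The same three branches written with if-elif-else.
    if value == 0:
        return "zero"
    elif isinstance(value, int):
        return "integer"
    else:
        return "something else"
```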
So my question is this: Are there any benefits in using the match-case syntax in terms of cyclomatic complexity? ---------- components: Interpreter Core messages: 402854 nosy: PythonEnthusiast priority: normal severity: normal status: open title: Python 3.10: cyclomatic complexity of match-case syntax type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 07:51:39 2021 From: report at bugs.python.org (Christian Heimes) Date: Wed, 29 Sep 2021 11:51:39 +0000 Subject: [New-bugs-announce] [issue45319] Possible regression in __annotations__ descr for heap type subclasses Message-ID: <1632916299.97.0.729247269758.issue45319@roundup.psfhosted.org> New submission from Christian Heimes : While I was working on a patch to port wrapt to limited API and stable ABI, I noticed a possible regression in __annotations__ PyGetSetDef. It looks like C heap type subclasses no longer inherit the __annotations__ descriptor from a C heap type parent class. A gdb session confirmed that 3.10 no longer calls the WraptObjectProxy_set_annotations setter of the parent class while 3.9 does. I had to add { "__annotations__", (getter)WraptObjectProxy_get_annotations, (setter)WraptObjectProxy_set_annotations, 0}, to PyGetSetDef of the child class in order to fix the behavior. Python 3.9 and older work as expected. You can reproduce the behavior by disabling WRAPT_ANNOTATIONS_GETSET_WORKAROUND and run "tox -e py310-install-extensions". The PR is https://github.com/GrahamDumpleton/wrapt/pull/187. ---------- components: C API keywords: 3.10regression messages: 402855 nosy: christian.heimes, pablogsal priority: normal severity: normal stage: test needed status: open title: Possible regression in __annotations__ descr for heap type subclasses type: behavior versions: Python 3.10, Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 08:22:30 2021 From: report at bugs.python.org (Hugo van Kemenade) Date: Wed, 29 Sep 2021 12:22:30 +0000 Subject: [New-bugs-announce] [issue45320] Remove deprecated inspect functions Message-ID: <1632918150.03.0.388631628057.issue45320@roundup.psfhosted.org> New submission from Hugo van Kemenade : inspect.getargspec was deprecated in docs since 3.0 (https://docs.python.org/3.0/library/inspect.html?highlight=getargspec#inspect.getargspec), raising a DeprecationWarning since 3.5 (bpo-20438, https://github.com/python/cpython/commit/3cfec2e2fcab9f39121cec362b78ac235093ca1c). inspect.formatargspec was deprecated in docs since 3.5 (https://docs.python.org/3.5/library/inspect.html?highlight=getargspec#inspect.formatargspec), raising a DeprecationWarning since 3.8 (bpo-33582, https://github.com/python/cpython/commit/46c5cd0f6e22bdfbdd3f0b18f1d01eda754e7e11). Undocumented inspect.Signature.from_function and inspect.Signature.from_builtin in docs and by raising a DeprecationWarning since 3.5 (bpo-20438, https://github.com/python/cpython/commit/3cfec2e2fcab9f39121cec362b78ac235093ca1c). These can be removed in Python 3.11. 
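For readers migrating off the removed functions, a small sketch of the replacement API that has been available since Python 3.3 (the sample function below is made up for illustration):

```
import inspect

def greet(name, punctuation="!"):
    return f"Hello, {name}{punctuation}"

# Modern replacement for inspect.getargspec()/formatargspec():
sig = inspect.signature(greet)
print(sig)                    # (name, punctuation='!')
print(list(sig.parameters))   # ['name', 'punctuation']

# inspect.getfullargspec() also remains available for argspec-style access.
print(inspect.getfullargspec(greet).args)   # ['name', 'punctuation']
```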
---------- components: Library (Lib) messages: 402860 nosy: hugovk priority: normal severity: normal status: open title: Remove deprecated inspect functions versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 08:49:45 2021 From: report at bugs.python.org (sping) Date: Wed, 29 Sep 2021 12:49:45 +0000 Subject: [New-bugs-announce] [issue45321] Module xml.parsers.expat.errors misses error code constants of libexpat >=2.0 Message-ID: <1632919785.51.0.148853477081.issue45321@roundup.psfhosted.org> New submission from sping : (This has been mention at https://bugs.python.org/issue44394#msg395642 before, but issue 44394 has been closed as fixed despite that part being forgotten, hence the dedicated ticket...) Module `xml.parsers.expat.errors` and its docs need 6 more error code entries to be complete: /* Added in 2.0. */ 38 XML_ERROR_RESERVED_PREFIX_XML 39 XML_ERROR_RESERVED_PREFIX_XMLNS 40 XML_ERROR_RESERVED_NAMESPACE_URI /* Added in 2.2.1. */ 41 XML_ERROR_INVALID_ARGUMENT /* Added in 2.3.0. */ 42 XML_ERROR_NO_BUFFER /* Added in 2.4.0. */ 43 XML_ERROR_AMPLIFICATION_LIMIT_BREACH The source for this is: - https://github.com/libexpat/libexpat/blob/72d7ce953827fe08a56b8ea64092f208be6ffc5b/expat/lib/expat.h#L120-L129 The place where additions is needed is: - https://github.com/python/cpython/blob/f76889a88720b56c8174f26a20a8e676a460c7a6/Modules/pyexpat.c#L1748 - https://github.com/python/cpython/blame/f76889a88720b56c8174f26a20a8e676a460c7a6/Doc/library/pyexpat.rst#L867 Thanks in advance. ---------- components: Extension Modules messages: 402866 nosy: sping priority: normal severity: normal status: open title: Module xml.parsers.expat.errors misses error code constants of libexpat >=2.0 versions: Python 3.10, Python 3.11, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 10:31:32 2021 From: report at bugs.python.org (tongxiaoge) Date: Wed, 29 Sep 2021 14:31:32 +0000 Subject: [New-bugs-announce] [issue45322] [X86_64]test_concurrent_futures fail Message-ID: <1632925892.37.0.547180941783.issue45322@roundup.psfhosted.org> New submission from tongxiaoge : The error log is as follows: [ 1848s] 0:19:24 load avg: 22.28 [411/412/1] test_concurrent_futures failed (19 min 19 sec) -- running: test_capi (19 min 22 sec) [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 
1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File 
"/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File 
"/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 
1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File 
"/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File 
"/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File 
"/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File 
"/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File 
"/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", 
line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] [ 1848s] [ 1848s] Traceback: [ 1848s] Thread 0x00007fb68ffab700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 296 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/queues.py", line 224 in _feed [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Thread 0x00007fb6907ac700 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/selectors.py", line 415 in select [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/multiprocessing/connection.py", line 921 in wait [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/concurrent/futures/process.py", line 361 in _queue_management_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 885 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 942 in _bootstrap_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/threading.py", line 905 in _bootstrap [ 1848s] [ 1848s] Current thread 0x00007fb69e8f7740 (most recent call first): [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 972 in _fail_on_deadlock [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1033 in test_crash [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 645 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/case.py", line 693 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 122 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/suite.py", line 84 in __call__ [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/unittest/runner.py", line 176 in run [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 1919 in _run_suite [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2041 in run_unittest [ 1848s] 
File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/test_concurrent_futures.py", line 1300 in test_main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/support/__init__.py", line 2173 in decorator [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 234 in _runtest_inner2 [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 270 in _runtest_inner [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 141 in _runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest.py", line 193 in runtest [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/runtest_mp.py", line 80 in run_tests_worker [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 656 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 636 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/libregrtest/main.py", line 713 in main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 46 in _main [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/test/regrtest.py", line 50 in [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 85 in _run_code [ 1848s] File "/home/abuild/rpmbuild/BUILD/Python-3.7.9/Lib/runpy.py", line 193 in _run_module_as_main [ 1848s] ---------- components: Tests messages: 402879 nosy: christian.heimes, sxt1001, thatiparthy, vstinner priority: normal severity: normal status: open title: [X86_64]test_concurrent_futures fail versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 11:19:21 2021 From: report at bugs.python.org (=?utf-8?q?Jo=C3=ABl_Bourgault?=) Date: Wed, 29 Sep 2021 15:19:21 +0000 Subject: [New-bugs-announce] [issue45323] unexpected behavior on first match case _ Message-ID: <1632928761.88.0.894816549892.issue45323@roundup.psfhosted.org> New submission from Joël Bourgault : While testing the `match...case` construction, I get the following behavior with Docker image Python 3.10 rc2-slim:
```python
>>> match "robert":
...     case x if len(x) > 10:
...         print("long nom")
...     case [0, y]:
...         print("point à x nul")
...     case _:
...         print("nothing interesting")
...
nothing interesting
>>> x  # assigned, since matched, even if 'guarded-out'
'robert'
>>> y  # not assigned, because not matched
Traceback (most recent call last):
  ...
NameError: name 'y' is not defined
>>> _  # normally not assigned, but we get a value?? ?
'robert'
>>> del _  # but the variable does not even exist!?!?!? ???
Traceback (most recent call last):
  ...
NameError: name '_' is not defined
```
Moreover, if we continue working in the same session by assigning `_` explicitly and playing with `case _`, we don't get any weird behavior anymore, and `_` behaves as a normal variable. So it seems to me that there is some weird corner case here that should be addressed.
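A possibly relevant detail, added here as an observation rather than part of the original report: a wildcard `case _:` pattern never binds the name `_`, while the interactive interpreter's sys.displayhook separately stores the last displayed result in builtins._, which could explain the value seen above. A minimal script-level check:

```
match "robert":
    case _:
        print("nothing interesting")

# The wildcard pattern did not create any variable named "_":
print("_" in globals())   # False
```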
---------- messages: 402884 nosy: ojob priority: normal severity: normal status: open title: unexpected behavior on first match case _ type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 13:57:04 2021 From: report at bugs.python.org (Eric Snow) Date: Wed, 29 Sep 2021 17:57:04 +0000 Subject: [New-bugs-announce] [issue45324] The frozen importer should capture info in find_spec(). Message-ID: <1632938224.05.0.143549467143.issue45324@roundup.psfhosted.org> New submission from Eric Snow : Currently FrozenImporter (in Lib/importlib/_bootstrap.py) does minimal work in its find_spec() method. It only checks whether or not the module is frozen. However, in doing so it has gathered all the info we need to load the module. We end up repeating that work in exec_module(). Rather than duplicating that effort, we should preserve the info in spec.loader_state. This has the added benefit of aligning more closely with the division between finder and loader. Once we get to the loader, there shouldn't be a need to check if the module is frozen nor otherwise interact with the internal frozen module state (i.e. PyImport_FrozenModules). ---------- assignee: eric.snow components: Interpreter Core messages: 402895 nosy: eric.snow priority: normal severity: normal stage: needs patch status: open title: The frozen importer should capture info in find_spec(). type: behavior versions: Python 3.11 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 14:08:04 2021 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Wed, 29 Sep 2021 18:08:04 +0000 Subject: [New-bugs-announce] [issue45325] Allow "p" in Py_BuildValue Message-ID: <1632938884.37.0.882166671136.issue45325@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : We should allow Py_BuildValue to take a "p" argument that takes a C int and produces a Python Bool so it would be symmetric with the p format code for PyArg_Parse that takes a Python object and returns a C int containing PyObject_IsTrue(obj). ---------- messages: 402897 nosy: pablogsal priority: normal severity: normal status: open title: Allow "p" in Py_BuildValue _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Sep 29 17:14:09 2021 From: report at bugs.python.org (Dmitry Marakasov) Date: Wed, 29 Sep 2021 21:14:09 +0000 Subject: [New-bugs-announce] [issue45326] Unexpected TypeError with type alias+issubclass+ABC Message-ID: <1632950049.79.0.711339156878.issue45326@roundup.psfhosted.org> New submission from Dmitry Marakasov : Here's a curious problem. 
From report at bugs.python.org Wed Sep 29 14:08:04 2021
From: report at bugs.python.org (Pablo Galindo Salgado)
Date: Wed, 29 Sep 2021 18:08:04 +0000
Subject: [New-bugs-announce] [issue45325] Allow "p" in Py_BuildValue
Message-ID: <1632938884.37.0.882166671136.issue45325@roundup.psfhosted.org>

New submission from Pablo Galindo Salgado :

We should allow Py_BuildValue to take a "p" argument that takes a C int and produces a Python bool, so it would be symmetric with the "p" format code for PyArg_Parse, which takes a Python object and returns a C int containing PyObject_IsTrue(obj).

----------
messages: 402897
nosy: pablogsal
priority: normal
severity: normal
status: open
title: Allow "p" in Py_BuildValue

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Wed Sep 29 17:14:09 2021
From: report at bugs.python.org (Dmitry Marakasov)
Date: Wed, 29 Sep 2021 21:14:09 +0000
Subject: [New-bugs-announce] [issue45326] Unexpected TypeError with type alias+issubclass+ABC
Message-ID: <1632950049.79.0.711339156878.issue45326@roundup.psfhosted.org>

New submission from Dmitry Marakasov :

Here's a curious problem: an issubclass() check of a type against an ABC-derived class raises TypeError claiming that the type is not a class, even though inspect.isclass() says it is a class and an issubclass() check against a plain class works fine:

```
from abc import ABC

class C1: pass

issubclass(dict[str, str], C1)  # False

class C2(ABC): pass

issubclass(dict[str, str], C2)  # TypeError: issubclass() arg 1 must be a class
```

I've run into this problem while using the `inspect` module to look for subclasses of a specific ABC in a module which may also contain type aliases, and after converting a type alias from `Dict[str, str]` to the modern `dict[str, str]` I got an unexpected crash in this code:

    if inspect.isclass(member) and issubclass(member, superclass):

Not sure which is the culprit: ABC, or how dict[]-style type aliases are implemented.

----------
components: Library (Lib)
messages: 402914
nosy: AMDmi3
priority: normal
severity: normal
status: open
title: Unexpected TypeError with type alias+issubclass+ABC
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Sep 30 02:26:57 2021
From: report at bugs.python.org (Vinayak Hosamani)
Date: Thu, 30 Sep 2021 06:26:57 +0000
Subject: [New-bugs-announce] [issue45327] json loads is stuck infinitely when the file name is Boolean
Message-ID: <1632983217.04.0.955112259844.issue45327@roundup.psfhosted.org>

New submission from Vinayak Hosamani :

The snippet below works fine on Python 2:

    >>> import json
    >>> tc_report_file = False
    >>> tc_data = json.load(open(tc_report_file))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: coercing to Unicode: need string or buffer, bool found
    >>>

As we can see, it throws an exception. The same piece of code gets stuck on Python 3.8.10:

    vinayakh at ats-engine:~/stf_files$ python3
    Python 3.8.10 (default, Jun 2 2021, 10:49:15)
    [GCC 9.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
    >>> import json
    >>> tc_report_file = False
    >>> tc_data = json.load(open(tc_report_file))

----------
components: Library (Lib)
messages: 402933
nosy: vinayakuh
priority: normal
severity: normal
status: open
title: json loads is stuck infinitely when the file name is Boolean
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

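A short sketch of what is likely happening in issue45327 (illustrative, not from the report): bool is a subclass of int, so on Python 3 `open(False)` is `open(0)`, a stream over standard input, and `json.load()` then blocks waiting for stdin to reach end-of-file rather than hanging inside the json module.

```python
import sys

# Illustrative sketch: bool is an int subclass, so open(False) is treated as
# open(0), i.e. a text stream over standard input on Python 3.
print(isinstance(False, int))               # True
f = open(False, closefd=False)              # closefd=False: do not close fd 0
print(f.fileno() == sys.stdin.fileno())     # True
# json.load(f) would now block until standard input reaches EOF,
# which is why the call appears to be "stuck infinitely".
```
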
From report at bugs.python.org Thu Sep 30 04:06:56 2021
From: report at bugs.python.org (R)
Date: Thu, 30 Sep 2021 08:06:56 +0000
Subject: [New-bugs-announce] [issue45328] http.client.HTTPConnection doesn't work without TCP_NODELAY
Message-ID: <1632989216.99.0.864211743878.issue45328@roundup.psfhosted.org>

New submission from R :

I'm working on trying to run Python under SerenityOS. At the moment, SerenityOS doesn't implement the TCP_NODELAY socket option. This makes the HTTPConnection.connect() method raise an OSError for an operation that is otherwise optional.

Additionally, the connection object can be left in an intermediate state: the underlying socket is always created, but depending on what method was invoked (either connect() directly or a higher-level one such as putrequest()), the connection object can be in the IDLE or REQ_STARTED state.

I have a patch that works (attached), and I'll be working on submitting a PR now.

Usage of TCP_NODELAY was introduced in 3.5 (#23302), so even though I've been testing against 3.10rc2 for the time being, I'm sure it will affect all versions in between.

----------
components: Library (Lib)
files: http-client.patch
keywords: patch
messages: 402937
nosy: rtobar2
priority: normal
severity: normal
status: open
title: http.client.HTTPConnection doesn't work without TCP_NODELAY
type: behavior
versions: Python 3.10
Added file: https://bugs.python.org/file50316/http-client.patch

_______________________________________
Python tracker
_______________________________________

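A minimal sketch of the kind of tolerance issue45328 asks for, written without knowledge of the attached http-client.patch: treat TCP_NODELAY as a best-effort optimisation and ignore OSError when the platform rejects the option.

```python
import socket

# Minimal sketch (not the attached http-client.patch): set TCP_NODELAY as a
# best-effort optimisation and keep going if the platform rejects the option.
def set_nodelay_best_effort(sock: socket.socket) -> None:
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    except OSError:
        # Nagle's algorithm stays enabled; the connection still works.
        pass
```
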
From report at bugs.python.org Thu Sep 30 07:21:11 2021
From: report at bugs.python.org (TAGAMI Yukihiro)
Date: Thu, 30 Sep 2021 11:21:11 +0000
Subject: [New-bugs-announce] [issue45329] pyexpat: segmentation fault when `--with-system-expat` is specified
Message-ID: <1633000871.63.0.316839646856.issue45329@roundup.psfhosted.org>

New submission from TAGAMI Yukihiro :

Some tests related to pyexpat fail with `./configure --with-system-expat`.

```
11 tests failed:
    test_minidom test_multiprocessing_fork test_multiprocessing_forkserver
    test_multiprocessing_spawn test_plistlib test_pulldom test_pyexpat
    test_sax test_xml_etree test_xml_etree_c test_xmlrpc
```

Since 3.10.0b2, `Modules/pyexpat.c` has been broken. I guess this is caused by accessing freed memory. For more detail, please refer to the attached file.

----------
components: Extension Modules
files: pyexpat-log.txt
messages: 402949
nosy: y-tag
priority: normal
severity: normal
status: open
title: pyexpat: segmentation fault when `--with-system-expat` is specified
type: crash
versions: Python 3.10, Python 3.11
Added file: https://bugs.python.org/file50317/pyexpat-log.txt

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Sep 30 09:09:28 2021
From: report at bugs.python.org (Ken Jin)
Date: Thu, 30 Sep 2021 13:09:28 +0000
Subject: [New-bugs-announce] [issue45330] dulwich_log performance regression in 3.10
Message-ID: <1633007368.54.0.0773120735117.issue45330@roundup.psfhosted.org>

New submission from Ken Jin :

Somewhere between May 02 and May 11, the dulwich_log benchmark on pyperformance had a major performance regression on the 3.10 branch.

https://speed.python.org/timeline/?exe=12&base=&ben=dulwich_log&env=1&revs=200&equid=off&quarts=on&extr=on

For a start, we could git bisect with pyperformance. FWIW, I couldn't reproduce the perf regression on Windows, so it might be a Linux-only thing.

----------
messages: 402955
nosy: iritkatriel, kj
priority: normal
severity: normal
status: open
title: dulwich_log performance regression in 3.10
type: performance
versions: Python 3.10, Python 3.11

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Sep 30 09:57:43 2021
From: report at bugs.python.org (Sam Bishop)
Date: Thu, 30 Sep 2021 13:57:43 +0000
Subject: [New-bugs-announce] [issue45331] Can create enum of ranges, cannot create range enum. Range should be subclassable... or EnumMeta.__new__ should be smarter.
Message-ID: <1633010263.22.0.644959932757.issue45331@roundup.psfhosted.org>

New submission from Sam Bishop :

Range types are perfectly valid as values in an enum, like so:

    class EnumOfRanges(Enum):
        ZERO = range(0, 0)
        RANGE_A = range(1, 11)
        RANGE_B = range(11, 26)

However, unlike the other base types ('int', 'str', 'float', etc.), you cannot create a "range enum":

    class RangeEnum(range, Enum):
        ZERO = range(0, 0)
        RANGE_A = range(1, 11)
        RANGE_B = range(11, 26)

This produces `TypeError: type 'range' is not an acceptable base type` when you try to import `RangeEnum`.

The current documentation for `enum` implicitly says this should work by not mentioning anything special here: https://docs.python.org/3/library/enum.html#others
It also allows the use of range objects as value types, another implicit suggestion that we should be able to restrict an enum class to just range values like we can for other builtin class types.

Also, to keep this a concise issue:
- Yes, I'm aware not all of the base classes can be subclassed.
- Yes, I know that there are good reasons bool should not be subclassable.

So I'd like to suggest one of three things should be done to improve the situation:

A: Solve https://bugs.python.org/issue17279 by documenting the special base class objects that cannot be subclassed, and reference this in the documentation for Enums.

B: Make a decision as to which base class objects we should be able to subclass, and then improve their C implementations to allow subclassing. (It's also probably worth documenting the final list of special objects and solving https://bugs.python.org/issue17279 should this approach be selected.)

C: Make the __new__ method on EnumMeta smarter so that it either emits a more useful warning (I had to head to the CPython source code to work out what the error `TypeError: type 'range' is not an acceptable base type` meant), or is somehow smarter about how it handles the special classes which cannot be subclassed, allowing them to be used anyway. This again sort of involves solving https://bugs.python.org/issue17279, and in the case that it is just made magically smarter, I'll admit it could confuse some people as to why "Enum" is special and can subclass these while their own code can't just do `class MyRange(range):`.

Regardless of the outcome, it would be good to fill in this pitfall one way or the other for the sake of future developers. I'm a reasonably experienced Python developer and it caught me by surprise; I'm likely not the first and probably won't be the last if the behaviour remains as it currently is.

----------
components: Interpreter Core
messages: 402960
nosy: techdragon
priority: normal
severity: normal
status: open
title: Can create enum of ranges, cannot create range enum. Range should be subclassable... or EnumMeta.__new__ should be smarter.
type: enhancement
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

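A small sketch of the usual workaround for issue45331 while range cannot serve as an Enum mix-in type (the class and method names here are illustrative, not from the report): keep a plain Enum whose values are ranges and add a lookup helper.

```python
from enum import Enum

# Illustrative workaround while range cannot be an Enum mix-in: plain Enum
# members whose values are ranges, plus a helper for membership lookup.
class Band(Enum):
    ZERO = range(0, 1)
    LOW = range(1, 11)
    HIGH = range(11, 26)

    @classmethod
    def from_value(cls, value):
        for member in cls:
            if value in member.value:
                return member
        raise ValueError(f"{value!r} is not covered by any {cls.__name__}")

print(Band.from_value(5))    # Band.LOW
print(3 in Band.LOW.value)   # True
```
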
From report at bugs.python.org Thu Sep 30 11:26:12 2021
From: report at bugs.python.org (Serhiy Storchaka)
Date: Thu, 30 Sep 2021 15:26:12 +0000
Subject: [New-bugs-announce] [issue45332] Decimal test and benchmark are broken
Message-ID: <1633015572.09.0.109680774964.issue45332@roundup.psfhosted.org>

New submission from Serhiy Storchaka :

The test and the benchmark for the decimal module are broken in 3.10+.

$ ./python Modules/_decimal/tests/deccheck.py
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/Modules/_decimal/tests/deccheck.py", line 50, in <module>
    from test.support import import_fresh_module
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ImportError: cannot import name 'import_fresh_module' from 'test.support' (/home/serhiy/py/cpython/Lib/test/support/__init__.py)

$ ./python Modules/_decimal/tests/bench.py
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/Modules/_decimal/tests/bench.py", line 11, in <module>
    from test.support import import_fresh_module
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ImportError: cannot import name 'import_fresh_module' from 'test.support' (/home/serhiy/py/cpython/Lib/test/support/__init__.py)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/serhiy/py/cpython/Modules/_decimal/tests/bench.py", line 13, in <module>
    from test.test_support import import_fresh_module
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ImportError: cannot import name 'import_fresh_module' from 'test.test_support' (/home/serhiy/py/cpython/Lib/test/test_support.py)

Modules/_decimal/tests/bench.py

----------
components: Demos and Tools
messages: 402964
nosy: serhiy.storchaka, vstinner
priority: normal
severity: normal
status: open
title: Decimal test and benchmark are broken
type: behavior
versions: Python 3.10, Python 3.11

_______________________________________
Python tracker
_______________________________________

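A version-tolerant import along these lines would sidestep the failure in issue45332, assuming the helper moved into test.support.import_helper in 3.10; this sketch is not necessarily the fix that was applied.

```python
# Sketch of a version-tolerant import, assuming import_fresh_module moved to
# test.support.import_helper in 3.10; not necessarily the fix that was applied.
try:
    from test.support.import_helper import import_fresh_module
except ImportError:
    from test.support import import_fresh_module
```
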
From report at bugs.python.org Thu Sep 30 11:54:52 2021
From: report at bugs.python.org (chovey)
Date: Thu, 30 Sep 2021 15:54:52 +0000
Subject: [New-bugs-announce] [issue45333] += operator and accessors bug?
Message-ID: <1633017292.67.0.488480039709.issue45333@roundup.psfhosted.org>

New submission from chovey :

We used get/set attribute accessors with private data, and suspect the behaviour we see with the += operator contains a bug. Below is our original implementation, followed by our fix. But we feel the original implementation should have worked. Please advise. Thank you.

    @property
    def position(self) -> np.ndarray:
        return self._position

    @position.setter
    def position(self, val: np.ndarray):
        self._position = val

    def update(self, *, delta_t: float):
        # Nope!  Bug! (?)
        # Tried to combine the getter and setter for self.position
        # with the += operator, which will not work.
        # self.position += self.velocity * delta_t
        #
        # This fixes the behaviour.
        self._position = self.velocity * delta_t + self.position

----------
components: 2to3 (2.x to 3.x conversion tool)
messages: 402966
nosy: chovey
priority: normal
severity: normal
status: open
title: += operator and accessors bug?
type: behavior
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Sep 30 11:57:11 2021
From: report at bugs.python.org (Vigneshwar Elangovan)
Date: Thu, 30 Sep 2021 15:57:11 +0000
Subject: [New-bugs-announce] [issue45334] String Strip not working
Message-ID: <1633017431.24.0.598560965902.issue45334@roundup.psfhosted.org>

New submission from Vigneshwar Elangovan :

    string = "monitorScript_cpu.py"
    a = string.strip("monitorScript_")
    print(a)

----> getting output => u.py

It should strip only "monitorScript_", but it is stripping two additional characters, i.e. "monitorScript_cp".

The code below works fine:

    string = "monitorScript_send_user.py"
    a = string.strip("monitorScript_")
    print(a)

----------
components: Parser
files: capture.PNG
messages: 402967
nosy: lys.nikolaou, pablogsal, vigneshwar.e14
priority: normal
severity: normal
status: open
title: String Strip not working
type: behavior
versions: Python 3.6
Added file: https://bugs.python.org/file50318/capture.PNG

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org Thu Sep 30 15:37:28 2021
From: report at bugs.python.org (Ian Fisher)
Date: Thu, 30 Sep 2021 19:37:28 +0000
Subject: [New-bugs-announce] [issue45335] Default TIMESTAMP converter in sqlite3 ignores time zone
Message-ID: <1633030648.15.0.389354245942.issue45335@roundup.psfhosted.org>

New submission from Ian Fisher :

The SQLite converter that the sqlite3 library automatically registers for TIMESTAMP columns (https://github.com/python/cpython/blob/main/Lib/sqlite3/dbapi2.py#L66) ignores the time zone even if it is present and always returns a naive datetime object. I think that the converter should return an aware object if the time zone is present in the database.

As it is, round trips of TIMESTAMP values from the database to Python and back might erase the original time zone info.

Now that datetime.datetime.fromisoformat is in Python 3.7, this should be easy to implement.

----------
components: Library (Lib)
messages: 402979
nosy: iafisher
priority: normal
severity: normal
status: open
title: Default TIMESTAMP converter in sqlite3 ignores time zone
type: enhancement

_______________________________________
Python tracker
_______________________________________

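A small sketch of the kind of converter issue45335 suggests, registered from user code rather than changed in the stdlib; the converter name and table below are illustrative.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative sketch: a TIMESTAMP converter that keeps the UTC offset when one
# is stored, using datetime.fromisoformat (Python 3.7+). Registering it under
# the same type name overrides the default naive converter for this process.
def convert_timestamp_aware(val: bytes) -> datetime:
    return datetime.fromisoformat(val.decode())

sqlite3.register_converter("TIMESTAMP", convert_timestamp_aware)

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE t (ts TIMESTAMP)")
aware = datetime(2021, 9, 30, 12, 0, tzinfo=timezone.utc)
con.execute("INSERT INTO t VALUES (?)", (aware,))
print(con.execute("SELECT ts FROM t").fetchone()[0])  # offset preserved, not naive
```
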
write(file, encoding="us-ascii", xml_declaration=None, default_namespace=None, method="xml", *, short_empty_elements=True) Writes the element tree to a file, as XML. file is a file name, or a file object opened for writing. encoding 1 is the output encoding (default is US-ASCII). xml_declaration controls if an XML declaration should be added to the file. Use False for never, True for always, None for only if not US-ASCII or UTF-8 or Unicode (default is None). default_namespace sets the default XML namespace (for ?xmlns?). method is either "xml", "html" or "text" (default is "xml"). The keyword-only short_empty_elements parameter controls the formatting of elements that contain no content. If True (the default), they are emitted as a single self-closed tag, otherwise they are emitted as a pair of start/end tags. The output is either a string (str) or binary (bytes). This is controlled by the encoding argument. If encoding is "unicode", the output is a string; otherwise, it?s binary. Note that this may conflict with the type of file if it?s an open file object; make sure you do not try to write a string to a binary stream and vice versa. ---------- assignee: docs at python components: Documentation messages: 402981 nosy: docs at python, twowolfs priority: normal severity: normal status: open title: Issue with xml.tree.ElementTree.write type: performance versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Sep 30 16:39:58 2021 From: report at bugs.python.org (Adam Yoblick) Date: Thu, 30 Sep 2021 20:39:58 +0000 Subject: [New-bugs-announce] [issue45337] Create venv with pip fails when target dir is under userappdata using Microsoft Store python Message-ID: <1633034398.54.0.325301195524.issue45337@roundup.psfhosted.org> New submission from Adam Yoblick : Repro steps: 1. Install Python 3.9 from the Microsoft Store 2. Try to create a virtual environment under the userappdata folder, using a command line similar to the following: "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.2032.0_x64__qbz5n2kfra8p0\python3.9.exe" -m venv "C:\Users\advolker\AppData\Local\Microsoft\CookiecutterTools\env" 3. Observe the following error: Error: Command '['C:\\Users\\advolker\\AppData\\Local\\Microsoft\\CookiecutterTools\\env\\Scripts\\python.exe', '-Im', 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit status 106. Note that creating a venv without pip DOES work: "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.2032.0_x64__qbz5n2kfra8p0\python3.9.exe" -m venv "C:\Users\advolker\AppData\Local\Microsoft\CookiecutterTools\env" --without-pip BUT the venv is NOT at the specified location. This is because the Windows Store app creates a redirect when creating the venv, and that redirect is only visible from within the python executable. This means that python doesn't respect the redirect when trying to install pip into the newly created venv. ---------- components: Windows messages: 402983 nosy: AdamYoblick, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Create venv with pip fails when target dir is under userappdata using Microsoft Store python versions: Python 3.9 _______________________________________ Python tracker _______________________________________