From report at bugs.python.org Tue Dec 1 01:37:14 2020 From: report at bugs.python.org (Kshitish) Date: Tue, 01 Dec 2020 06:37:14 +0000 Subject: [New-bugs-announce] [issue42518] Error Message Message-ID: <1606804634.41.0.958530528001.issue42518@roundup.psfhosted.org> New submission from Kshitish : print (5 + 2 == 7 and 10 <= 1232 and 100 ^ 1000 >= 128) # Incorrect Output: True This argument should be false but it prints true. This is the logical error vulnerability. Try this code in another language too. You find out they print false because the argument is false but unfortunately Python prints true. Because Python does not understand this. ---------- components: Windows files: main.py messages: 382216 nosy: blue555, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Error Message versions: Python 3.8 Added file: https://bugs.python.org/file49640/main.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 1 03:55:35 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 01 Dec 2020 08:55:35 +0000 Subject: [New-bugs-announce] [issue42519] [C API] Upgrade the C API in extension modules Message-ID: <1606812935.61.0.224275559553.issue42519@roundup.psfhosted.org> New submission from STINNER Victor : Python has around 118 extension modules. Most of them are quite old and still use the old way to write C extensions. I created this issue as a placeholder to modernize the code using my new tool: https://github.com/pythoncapi/upgrade_pythoncapi For example, replace PyMem_MALLOC() with PyMem_Malloc(), since PyMem_MALLOC() is a deprecated alias. ---------- components: C API messages: 382220 nosy: vstinner priority: normal severity: normal status: open title: [C API] Upgrade the C API in extension modules versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 1 05:03:32 2020 From: report at bugs.python.org (Antony Lee) Date: Tue, 01 Dec 2020 10:03:32 +0000 Subject: [New-bugs-announce] [issue42520] add_dll_directory only accepts absolute paths Message-ID: <1606817012.16.0.476843949209.issue42520@roundup.psfhosted.org> New submission from Antony Lee : os.add_dll_directory appears to only accept absolute paths, inheriting this restriction from AddDllDirectory. That's absolutely fine, but could perhaps be mentioned in the docs (https://docs.python.org/3/library/os.html#os.add_dll_directory), especially considering that the docs do mention the parallel with `sys.path`, which *does* support relative paths. ---------- assignee: docs at python components: Documentation messages: 382228 nosy: Antony.Lee, docs at python priority: normal severity: normal status: open title: add_dll_directory only accepts absolute paths versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 1 06:15:39 2020 From: report at bugs.python.org (anthony shaw) Date: Tue, 01 Dec 2020 11:15:39 +0000 Subject: [New-bugs-announce] [issue42521] Debug (-d) mode not working in 3.9 Message-ID: <1606821339.69.0.708367922333.issue42521@roundup.psfhosted.org> New submission from anthony shaw : I noticed since the new parser implementation, the debug mode in the tokeniser is no longer working. This is the case for 3.9.0 and also 3.9.1rc1. 
Running python3.9 with a simple application like this:

# Demo application
def my_function():
    proceed

does not output anything:

> python3.9 -d ~/PycharmProjects/cpython-book-samples/13/test_tokens.py
> Produces no output

whereas python3.10 (the latest alpha) outputs the expected results.

----------
messages: 382236
nosy: anthony shaw, lys.nikolaou, pablogsal
priority: normal
severity: normal
status: open
title: Debug (-d) mode not working in 3.9
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Tue Dec  1 06:21:37 2020
From: report at bugs.python.org (STINNER Victor)
Date: Tue, 01 Dec 2020 11:21:37 +0000
Subject: [New-bugs-announce] [issue42522] [C API] Add Py_Borrow() function: call Py_XDECREF() and return the object
Message-ID: <1606821697.29.0.99350380249.issue42522@roundup.psfhosted.org>

New submission from STINNER Victor :

I'm working on a script to migrate old C extension modules to the latest flavor of the Python C API: https://github.com/pythoncapi/upgrade_pythoncapi

I would like to replace frame->f_code with PyFrame_GetCode(frame). The problem is that frame->f_code is a borrowed reference, whereas PyFrame_GetCode() returns a strong reference.

Having a Py_Borrow() function would help to automate migration to PyFrame_GetCode():

static inline PyObject* Py_Borrow(PyObject *obj)
{
    Py_XDECREF(obj);
    return obj;
}

So frame->f_code can be replaced with Py_Borrow(PyFrame_GetCode(frame)).

Py_Borrow() is similar to Py_XDECREF() but can be used as an expression:

PyObject *code = Py_Borrow(PyFrame_GetCode(frame));

Py_Borrow() is the opposite of Py_XNewRef(). For example, Py_Borrow(Py_XNewRef(obj)) leaves the reference count unchanged (+1 and then -1).

My practical problem is that it's not easy to add the Py_XDECREF() call when converting C code to the new C API.

Example 1:

PyObject *code = frame->f_code;

This one is easy and can be written as:

PyObject *code = PyFrame_GetCode(frame);
Py_XDECREF(code);

Example 2:

func(frame->f_code);

This one is more tricky. For example, the following code using a macro is wrong:

#define Py_BORROW(obj) (Py_XDECREF(obj), obj)
func(Py_BORROW(PyFrame_GetCode(frame)));

since it calls PyFrame_GetCode() twice when processed by the C preprocessor and so leaks a reference:

func((Py_XDECREF(PyFrame_GetCode(frame)), PyFrame_GetCode(frame)));

The attached PR implements Py_Borrow() as a static inline function.

----------
components: C API
messages: 382237
nosy: vstinner
priority: normal
severity: normal
status: open
title: [C API] Add Py_Borrow() function: call Py_XDECREF() and return the object
versions: Python 3.10

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Tue Dec  1 07:33:56 2020
From: report at bugs.python.org (Numerlor)
Date: Tue, 01 Dec 2020 12:33:56 +0000
Subject: [New-bugs-announce] [issue42523] using windows doc incorrectly states version support
Message-ID: <1606826036.29.0.083322027972.issue42523@roundup.psfhosted.org>

New submission from Numerlor :

In this paragraph, the "Using Python on Windows" doc page dynamically adjusts the version to the one being looked at by the user; however, the statement is no longer true for Python 3.9 and above, as support for Vista/7 was dropped:

> As specified in PEP 11, a Python release only supports a Windows platform while Microsoft considers the platform under extended support.
This means that Python |version| supports Windows Vista and newer. If you require Windows XP support then please install Python 3.4. ---------- assignee: docs at python components: Documentation messages: 382246 nosy: Numerlor, docs at python priority: normal severity: normal status: open title: using windows doc incorrectly states version support versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 1 10:13:18 2020 From: report at bugs.python.org (Romuald Brunet) Date: Tue, 01 Dec 2020 15:13:18 +0000 Subject: [New-bugs-announce] [issue42524] pdb access to return value Message-ID: <1606835598.54.0.959102851686.issue42524@roundup.psfhosted.org> New submission from Romuald Brunet : When using the pdb module, there is currently no way to easy access the current return value of the stack This return value is accessed by the 'retval command' I propose using the currently unused argument to allow storing the return value in the local variables, accessible via the debugger For example: def foo(): debugger() return ComplexObject() def bar(): return foo() (pdb) retval (pdb) retval zz (pdb) zz.attribute 'some value' ---------- components: Library (Lib) messages: 382259 nosy: Romuald priority: normal severity: normal status: open title: pdb access to return value type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 1 10:30:37 2020 From: report at bugs.python.org (Yurii Karabas) Date: Tue, 01 Dec 2020 15:30:37 +0000 Subject: [New-bugs-announce] [issue42525] Optimize class/module level annotation Message-ID: <1606836637.51.0.505503737654.issue42525@roundup.psfhosted.org> New submission from Yurii Karabas <1998uriyyo at gmail.com>: This issue is inspired by https://bugs.python.org/issue42202 We can do smth similar for class/module level annotations. Inada Naoki what do you think regarding that? ---------- components: Interpreter Core messages: 382263 nosy: methane, uriyyo priority: normal severity: normal status: open title: Optimize class/module level annotation type: resource usage versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 1 10:42:34 2020 From: report at bugs.python.org (Tom) Date: Tue, 01 Dec 2020 15:42:34 +0000 Subject: [New-bugs-announce] [issue42526] Exceptions in asyncio.Server callbacks are not retrievable Message-ID: <1606837354.76.0.196975943835.issue42526@roundup.psfhosted.org> New submission from Tom : Consider this program: import asyncio async def handler(r, w): raise RuntimeError async def main(): server = await asyncio.start_server(handler, host='localhost', port=1234) r, w = await asyncio.open_connection(host='localhost', port=1234) await server.serve_forever() server.close() asyncio.run(main()) The RuntimeError is not retrievable via the serve_forever coroutine. To my knowledge, there is no feature of the asyncio API which causes the server to stop on an exception and retrieve it. I have also tried wrapping serve_forever in a Task, and waiting on the coro with FIRST_EXCEPTION. This severely complicates testing asyncio servers, since failing tests hang forever if the failure occurs in a callback. It should be possible to configure the server to end if a callback fails, e.g. by a 'stop_on_error' kwarg to start_server (defaulting to False for compatibility). 
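In the meantime, a minimal workaround sketch that is possible with the current API (an illustration only, not the proposed 'stop_on_error' flag and not code from the original report): wrap the callback so the first exception is recorded on a Future, and await that Future instead of letting serve_forever() hang.

import asyncio

async def handler(r, w):
    raise RuntimeError

async def main():
    failed = asyncio.get_running_loop().create_future()

    async def guarded(r, w):
        try:
            await handler(r, w)
        except Exception as exc:
            # record the first callback failure so the test can see it
            if not failed.done():
                failed.set_exception(exc)

    server = await asyncio.start_server(guarded, host='localhost', port=1234)
    await asyncio.open_connection(host='localhost', port=1234)
    try:
        await failed  # re-raises the handler's RuntimeError instead of hanging
    finally:
        server.close()

asyncio.run(main())

This keeps a failing test from hanging, but it has to be repeated for every server under test, which is why a built-in option would be nicer.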
I know this isn't a technical limitation, since AnyIO, which uses asyncio, does this by default. This equivalent program ends after the exception:

import anyio

async def handler(client):
    raise RuntimeError

async def main():
    async with anyio.create_task_group() as tg:
        listener = await anyio.create_tcp_listener(local_host='localhost', local_port=1234)
        await tg.spawn(listener.serve, handler)
        async with await anyio.connect_tcp('localhost', 1234) as client:
            pass

anyio.run(main)

----------
components: asyncio
messages: 382265
nosy: asvetlov, tmewett, yselivanov
priority: normal
severity: normal
status: open
title: Exceptions in asyncio.Server callbacks are not retrievable
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Tue Dec  1 11:40:04 2020
From: report at bugs.python.org (Yanfeng Wang)
Date: Tue, 01 Dec 2020 16:40:04 +0000
Subject: [New-bugs-announce] [issue42527] UnicodeEncode error in BaseHttpRequestHandler class when send unicode data.
Message-ID: <1606840804.64.0.232983390658.issue42527@roundup.psfhosted.org>

New submission from Yanfeng Wang :

My environment:
system: MacOS 10.15.x
python: 3.9.0

When I inherit from the BaseHTTPRequestHandler class and call the send_error(error_id, msg) method in my own class's do_GET override, the msg argument can't accept a unicode string: a UnicodeEncodeError is raised because the 'latin-1' codec can't encode a character with an ordinal out of range(255).

I tracked the code and found that in the implementation of this class the data is sent through an encode call with 'latin-1' hard-coded in, in the file server.py near line 507.

BTW, I tried u'my string'.encode('utf-8') and it didn't work either: another TypeError is raised in module html/__init__.py, line 19.

If I comment out the original hard-coded 'latin-1' line and replace the hard-coded encoding with 'utf-8', everything works fine.

Thanks for reading, to anyone who cares about or takes care of this issue.

----------
components: Unicode
messages: 382268
nosy: ezio.melotti, vstinner, wangyanfeng.p
priority: normal
severity: normal
status: open
title: UnicodeEncode error in BaseHttpRequestHandler class when send unicode data.
type: behavior
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Tue Dec  1 11:47:45 2020
From: report at bugs.python.org (Antonio Cuni)
Date: Tue, 01 Dec 2020 16:47:45 +0000
Subject: [New-bugs-announce] [issue42528] Improve the docs of most Py*_Check{, Exact} API calls
Message-ID: <1606841265.25.0.130640674128.issue42528@roundup.psfhosted.org>

New submission from Antonio Cuni :

I think that none of these API calls can fail, but only a few of them are documented as such. E.g. PyNumber_Check contains the sentence "This function always succeeds" but PyBytes_Check does not.
----------
assignee: docs at python
components: Documentation
messages: 382269
nosy: antocuni, docs at python
priority: normal
severity: normal
status: open
title: Improve the docs of most Py*_Check{,Exact} API calls
type: enhancement

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Tue Dec  1 13:03:03 2020
From: report at bugs.python.org (Karl Nelson)
Date: Tue, 01 Dec 2020 18:03:03 +0000
Subject: [New-bugs-announce] [issue42529] CPython DLL initialization routine failed from PYC cache file
Message-ID: <1606845783.12.0.45647033223.issue42529@roundup.psfhosted.org>

New submission from Karl Nelson :

While trying to use JPype on Windows Python 3.9.0, we are running into a bizarre issue with loading the internal module, which is written in C. When running a python script the first time, the internal module loads correctly. However, the second time that script is run, the internal module reports "A dynamic link library (DLL) initialization routine failed." If you then erase the pyc cache file that is importing the internal module, it works again.

This only occurs on Windows and was not present using the same source on prior versions of Python (up to 3.8). We investigated the byte codes from both versions and they perform the same series of actions, so the problem appears to occur while calling the same opcode to execute an import. I made sure all required symbols were found in the libraries, and that the only copy of the internal DLL was the same with and without loading from a pyc. It may be a change in the requirements of module initialization, but I don't know how to proceed. There was one deprecation warning, but correcting that did not alter the outcome.

It appears that the execution path for importing a CPython module differs when the script was compiled on the fly as opposed to loaded from a pyc.

```
import: 'jpype'
Traceback (most recent call last):
  File "D:\bld\jpype1_1605785280189\test_tmp\run_test.py", line 2, in <module>
    import jpype
  File "D:\bld\jpype1_1605785280189\_test_env\lib\site-packages\jpype\__init__.py", line 18, in <module>
    import _jpype
ImportError: DLL load failed while importing _jpype: A dynamic link library (DLL) initialization routine failed.
```

----------
components: Windows
messages: 382275
nosy: Thrameos, paul.moore, steve.dower, tim.golden, zach.ware
priority: normal
severity: normal
status: open
title: CPython DLL initialization routine failed from PYC cache file
type: behavior
versions: Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Wed Dec  2 01:23:16 2020
From: report at bugs.python.org (Daniel Kostecki)
Date: Wed, 02 Dec 2020 06:23:16 +0000
Subject: [New-bugs-announce] [issue42530] Pickle Serialization Mangles NllLossBackward Objects in Tensor Objects
Message-ID: <1606890196.44.0.812849435025.issue42530@roundup.psfhosted.org>

New submission from Daniel Kostecki :

torch.nn.functional.nll_loss returns Tensor objects which contain a loss value as well as a grad_fn object. Pickle does not throw an exception when serializing (dumps) the Tensor object. When loading (loads) the serialized data, the grad_fn object is lost and it becomes a requires_grad object. However, if one attempts to serialize the grad_fn object encapsulated in the Tensor object, Pickle then throws a TypeError (TypeError: cannot pickle 'NllLossBackward' object). This behavior seems inconsistent.
Perhaps serialization of NllLossBackward objects should be supported or their encapsulating Tensors should also throw a TypeError. This behavior should be easily reproducible. ---------- components: Library (Lib) messages: 382294 nosy: dkostecki priority: normal severity: normal status: open title: Pickle Serialization Mangles NllLossBackward Objects in Tensor Objects type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 01:31:42 2020 From: report at bugs.python.org (William Schwartz) Date: Wed, 02 Dec 2020 06:31:42 +0000 Subject: [New-bugs-announce] [issue42531] importlib.resources.path() raises TypeError for packages without __file__ Message-ID: <1606890702.39.0.177441013455.issue42531@roundup.psfhosted.org> New submission from William Schwartz : Suppose pkg is a package, it contains a resource r, pkg.__spec__.origin is None, and p = importlib.resources.path(pkg, r). Then p.__enter__() raises a TypeError in Python 3.7 and 3.8. (The problem has been fixed in 3.9). The error can be demonstrated by running the attached path-test.py. The tracebacks in 3.7 and 3.8 are nearly identical, so I'll just show the 3.8 traceback. 3.8.6 (default, Nov 20 2020, 18:29:40) [Clang 12.0.0 (clang-1200.0.32.27)] Traceback (most recent call last): File "path-test.py", line 19, in p.__enter__() # Kaboom File "/usr/local/Cellar/python at 3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py", line 113, in __enter__ return next(self.gen) File "/usr/local/Cellar/python at 3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/importlib/resources.py", line 196, in path package_directory = Path(package.__spec__.origin).parent File "/usr/local/Cellar/python at 3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pathlib.py", line 1041, in __new__ self = cls._from_parts(args, init=False) File "/usr/local/Cellar/python at 3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pathlib.py", line 682, in _from_parts drv, root, parts = self._parse_args(args) File "/usr/local/Cellar/python at 3.8/3.8.6_2/Frameworks/Python.framework/Versions/3.8/lib/python3.8/pathlib.py", line 666, in _parse_args a = os.fspath(a) TypeError: expected str, bytes or os.PathLike object, not NoneType The fix is super simple, as shown below. I'll submit this as a PR as well. diff --git a/Lib/importlib/resources.py b/Lib/importlib/resources.py index fc3a1c9cab..8d37d52cb8 100644 --- a/Lib/importlib/resources.py +++ b/Lib/importlib/resources.py @@ -193,9 +193,11 @@ def path(package: Package, resource: Resource) -> Iterator[Path]: _check_location(package) # Fall-through for both the lack of resource_path() *and* if # resource_path() raises FileNotFoundError. 
- package_directory = Path(package.__spec__.origin).parent - file_path = package_directory / resource - if file_path.exists(): + file_path = None + if package.__spec__.origin is not None: + package_directory = Path(package.__spec__.origin).parent + file_path = package_directory / resource + if file_path is not None and file_path.exists(): yield file_path else: with open_binary(package, resource) as fp: ---------- components: Library (Lib) files: path-test.py messages: 382297 nosy: William.Schwartz, brett.cannon priority: normal severity: normal status: open title: importlib.resources.path() raises TypeError for packages without __file__ type: behavior versions: Python 3.7, Python 3.8 Added file: https://bugs.python.org/file49643/path-test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 02:29:30 2020 From: report at bugs.python.org (Idan Weiss) Date: Wed, 02 Dec 2020 07:29:30 +0000 Subject: [New-bugs-announce] [issue42532] spec_arg's __bool__ is called while initializing NonCallableMock Message-ID: <1606894170.3.0.122228066862.issue42532@roundup.psfhosted.org> Change by Idan Weiss : ---------- components: Library (Lib) nosy: idanweiss97 priority: normal severity: normal status: open title: spec_arg's __bool__ is called while initializing NonCallableMock type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 04:54:58 2020 From: report at bugs.python.org (Tobias Kunze) Date: Wed, 02 Dec 2020 09:54:58 +0000 Subject: [New-bugs-announce] [issue42533] Document encodings.idna limitations Message-ID: <1606902898.84.0.956500804562.issue42533@roundup.psfhosted.org> New submission from Tobias Kunze : The documentation for the encodings.idna module contains no indicator that the RFC it supports has been obsoleted by another RFC: https://docs.python.org/3.10/library/codecs.html#module-encodings.idna I'm sure this is obvious when you know your RFCs, but when just looking at the docs, it's easy to miss. In #msg379674, Marc-Andre suggested to fix the situation by deprecating or updating IDNA support. I'd like to propose to add a warning message in the meantime, pointing out the newer RFC and linking to the idna package on PyPI (if links to PyPI packages are alright in the docs?) ---------- assignee: docs at python components: Documentation, Unicode messages: 382300 nosy: docs at python, ezio.melotti, rixx, vstinner priority: normal severity: normal status: open title: Document encodings.idna limitations type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 05:18:16 2020 From: report at bugs.python.org (Jussi Enkovaara) Date: Wed, 02 Dec 2020 10:18:16 +0000 Subject: [New-bugs-announce] [issue42534] venv does not work correctly when imported from .zip Message-ID: <1606904296.18.0.0189453228301.issue42534@roundup.psfhosted.org> New submission from Jussi Enkovaara : In some cases it can be useful to provide standard library in a zip-file. However, when "venv" is imported from a zip-file, activate etc. scripts are not generated. 
The directory for script templates is determined in the function setup_scripts in venv/__init__.py as

path = os.path.abspath(os.path.dirname(__file__))
path = os.path.join(path, 'scripts')

which becomes .../python38.zip/venv/scripts when venv is imported from a zip-file. No scripts are now generated, and no error / warning messages are invoked either.

----------
components: Library (Lib)
messages: 382301
nosy: jussienko
priority: normal
severity: normal
status: open
title: venv does not work correctly when imported from .zip
type: behavior
versions: Python 3.10, Python 3.8, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Wed Dec  2 06:14:10 2020
From: report at bugs.python.org (=?utf-8?q?Tymek_Wo=C5=82od=C5=BAko?=)
Date: Wed, 02 Dec 2020 11:14:10 +0000
Subject: [New-bugs-announce] [issue42535] unittest.patch confuses modules with base modules
Message-ID: <1606907650.68.0.627501426895.issue42535@roundup.psfhosted.org>

New submission from Tymek Wołodźko :

Despite several attempts, I wasn't able to create a reproducible example for this bug, but I will try describing it in detail.

I have a package with multiple modules. One of the paths is like `mymodule.nestedmodule.io`; among other functions, this module contains `foo()` and `bar()`, where `bar()` calls `foo()`. The module *does not* import base Python's `io` module.

I have a unit test that patches:

with patch('mymodule.nestedmodule.io.foo'):
    bar()

The problem is, when running the test I get the following error: `AttributeError: <module 'io' ...> does not have the attribute 'foo'`.

The problem is solved when I rename `io` to `myio` and correct all the paths to use the new name.

----------
components: Library (Lib)
messages: 382303
nosy: twolodzko
priority: normal
severity: normal
status: open
title: unittest.patch confuses modules with base modules
type: behavior
versions: Python 3.7

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Wed Dec  2 06:48:35 2020
From: report at bugs.python.org (STINNER Victor)
Date: Wed, 02 Dec 2020 11:48:35 +0000
Subject: [New-bugs-announce] [issue42536] test_itertools leaks sometimes references
Message-ID: <1606909715.58.0.318966724927.issue42536@roundup.psfhosted.org>

New submission from STINNER Victor :

12:07:06 vstinner at apu$ ./python -m test test_itertools -R 3:3 -F -j4
0:00:00 load avg: 0.44 Run tests in parallel using 4 child processes
(...)
0:02:03 load avg: 4.24 [ 1] test_itertools passed (2 min 2 sec) -- running: test_itertools (2 min 3 sec), test_itertools (2 min 3 sec), test_itertools (2 min 3 sec)
beginning 6 repetitions
123456
......
0:02:03 load avg: 4.24 [ 2/1] test_itertools failed (2 min 3 sec) -- running: test_itertools (2 min 3 sec), test_itertools (2 min 3 sec)
beginning 6 repetitions
123456
......
test_itertools leaked [32, 32, 32] references, sum=96
test_itertools leaked [24, 22, 22] memory blocks, sum=68
Kill process group
Kill process group
Kill process group

== Tests result: FAILURE ==

1 test OK.
1 test failed: test_itertools Total duration: 2 min 3 sec Tests result: FAILURE ---------- components: Tests messages: 382304 nosy: vstinner priority: normal severity: normal status: open title: test_itertools leaks sometimes references versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 06:52:24 2020 From: report at bugs.python.org (Qingyao Sun) Date: Wed, 02 Dec 2020 11:52:24 +0000 Subject: [New-bugs-announce] [issue42537] Implement built-in shorthand b() for breakpoint() Message-ID: <1606909944.93.0.455067472723.issue42537@roundup.psfhosted.org> New submission from Qingyao Sun : Placeholder issue for discussion of the proposal of a built-in shorthand b() for breakpoint(). ---------- components: Interpreter Core messages: 382306 nosy: nalzok priority: normal severity: normal status: open title: Implement built-in shorthand b() for breakpoint() type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 07:17:27 2020 From: report at bugs.python.org (Alexander Greckov) Date: Wed, 02 Dec 2020 12:17:27 +0000 Subject: [New-bugs-announce] [issue42538] AsyncIO strange behaviour Message-ID: <1606911447.68.0.828780901691.issue42538@roundup.psfhosted.org> New submission from Alexander Greckov : Hi! I've met strange behaviour related to the coroutine execution. Probably it's somehow related to the asyncio.Queue get() method. I have the following lines of code: class WeightSource: def __init__(self): self._queue = asyncio.Queue(maxsize=1) def __aiter__(self): return self async def __anext__(self): return await self._queue.get() async def _preconfigure(app: web.Application) -> None: setup_db() # asyncio.create_task(WeightSource.listen_scales('/dev/ttyACM0', 9600) asyncio.create_task(handle_weight_event_creation()) # coroutine prints WEIGHT await create_track_tasks() async def create_track_tasks(): asyncio.create_task(track_scales_availability()) asyncio.create_task(track_crm_availability()) async def track_scales_availability(): async for weight in WeightSource(): print(weight) When I'm trying to run _preconfigure coroutine (automatically started by aiohttp), i see the message: ERROR:asyncio:Task was destroyed but it is pending. The strange things two: The process and loop remains alive (so by the logic error should be not shown) and the second thing is when I export PYTHONASYNCIODEBUG=1 everything works well without any error. On unset this variable the error returns. When I don't use asyncio.Queue and just place asyncio.sleep(), coroutine doesn't fall. Error happens inside the `async for weight in WeightSource()` statement. Why PYTHONASYNCIODEBUG changes behaviour? 
----------
components: asyncio
files: output.txt
messages: 382307
nosy: Alexander-Greckov, asvetlov, yselivanov
priority: normal
severity: normal
status: open
title: AsyncIO strange behaviour
type: behavior
versions: Python 3.7
Added file: https://bugs.python.org/file49644/output.txt

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Wed Dec  2 09:05:34 2020
From: report at bugs.python.org (Mingchii Suen)
Date: Wed, 02 Dec 2020 14:05:34 +0000
Subject: [New-bugs-announce] [issue42539] Missing parenthesis in `platform._sys_version_parser`
Message-ID: <1606917934.84.0.784006855646.issue42539@roundup.psfhosted.org>

New submission from Mingchii Suen :

In `platform.py`, the last part of the `_sys_version_parser` regex is `r'\[([^\]]+)\]?'`, which should be `r'(?:\[([^\]]+)\])?'`. Without this change, the question mark only makes the last `\]` optional, not the whole part optional.

This makes it impossible to detect the Python implementation in some custom-built Pythons if the `[compiler]` part in `sys.version` is missing.

Will make a pull request soon.

----------
components: Library (Lib)
messages: 382313
nosy: crazy95sun
priority: normal
severity: normal
status: open
title: Missing parenthesis in `platform._sys_version_parser`
type: behavior
versions: Python 3.8

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Wed Dec  2 09:22:16 2020
From: report at bugs.python.org (CendioOssman)
Date: Wed, 02 Dec 2020 14:22:16 +0000
Subject: [New-bugs-announce] [issue42540] Debug pymalloc crash when using os.fork() [regression]
Message-ID: <1606918936.38.0.790172378115.issue42540@roundup.psfhosted.org>

New submission from CendioOssman :

A python equivalent of the classical daemon() call started throwing an error from 3.8 when the debug hooks are active for pymalloc. If the tracing is also active it segfaults.

This simple example triggers it:

import os

def daemon():
    pid = os.fork()
    if pid != 0:
        os._exit(0)

daemon()

The error given is:

Debug memory block at address p=0xf013d0: API '1'
    0 bytes originally requested
    The 7 pad bytes at p-7 are not all FORBIDDENBYTE (0xfd):
        at p-7: 0x00 *** OUCH
        at p-6: 0x00 *** OUCH
        at p-5: 0x00 *** OUCH
        at p-4: 0x00 *** OUCH
        at p-3: 0x00 *** OUCH
        at p-2: 0x00 *** OUCH
        at p-1: 0x00 *** OUCH
    Because memory is corrupted at the start, the count of bytes requested
       may be bogus, and checking the trailing pad bytes may segfault.
    The 8 pad bytes at tail=0xf013d0 are not all FORBIDDENBYTE (0xfd):
        at tail+0: 0x01 *** OUCH
        at tail+1: 0x00 *** OUCH
        at tail+2: 0x00 *** OUCH
        at tail+3: 0x00 *** OUCH
        at tail+4: 0x00 *** OUCH
        at tail+5: 0x00 *** OUCH
        at tail+6: 0x00 *** OUCH
        at tail+7: 0x00 *** OUCH
Enable tracemalloc to get the memory block allocation traceback

Fatal Python error: bad ID: Allocated using API '1', verified using API 'r'
Python runtime state: finalizing (tstate=0xf023b0)

Tested on Fedora, Ubuntu and RHEL with the same behaviour everywhere. Everything before 3.8 works fine; 3.8 and 3.9 both exhibit the issue.

Since this is a very standard way of starting a daemon, it should affect quite a few users. At the very least it makes it annoying to use development mode to catch other issues.
---------- components: Interpreter Core messages: 382314 nosy: CendioOssman priority: normal severity: normal status: open title: Debug pymalloc crash when using os.fork() [regression] type: crash versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 09:49:55 2020 From: report at bugs.python.org (E. Paine) Date: Wed, 02 Dec 2020 14:49:55 +0000 Subject: [New-bugs-announce] [issue42541] Tkinter colours wrong on MacOS universal2 Message-ID: <1606920595.63.0.665509072103.issue42541@roundup.psfhosted.org> New submission from E. Paine : When using tkinter from the universal2 package, the colours are completely confused. Most text tries to white, even though the background is light and this makes text in Entries completely unreadable (where the background is white). This affects both Tk and Ttk. The first included screenshot is of a simple script which just creates a Tk label and Ttk label. The second screenshot is of the IDLE 'open module' interface in which I have typed 'test' (though, you cannot see this). The main problem is that I cannot check whether this is a Tkinter or Tk issue as ActiveTcl only offers 8.6.9 and I haven't yet compiled my own copy of Tk. This is not a problem on the standard installer. ---------- components: Tkinter, macOS files: Screenshot from 2020-12-02 14-37-02.png messages: 382315 nosy: epaine, ned.deily, ronaldoussoren, serhiy.storchaka priority: normal severity: normal status: open title: Tkinter colours wrong on MacOS universal2 versions: Python 3.10, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49645/Screenshot from 2020-12-02 14-37-02.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 10:18:59 2020 From: report at bugs.python.org (Ryan Govostes) Date: Wed, 02 Dec 2020 15:18:59 +0000 Subject: [New-bugs-announce] [issue42542] weakref documentation does not fully describe proxies Message-ID: <1606922339.94.0.698325525788.issue42542@roundup.psfhosted.org> New submission from Ryan Govostes : The documentation for weakref.proxy() does not describe how the proxy object behaves when the object it references is gone. The apparent behavior is that it raises a ReferenceError when an attribute of the proxy object is accessed. It would probably be a good idea to describe what the proxy object does in general, for those who are unfamiliar with the concept: attribute accesses on the proxy object are forwarded to the referenced object, if it exists. ---------- assignee: docs at python components: Documentation messages: 382319 nosy: docs at python, rgov priority: normal severity: normal status: open title: weakref documentation does not fully describe proxies versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 11:09:30 2020 From: report at bugs.python.org (Sean Grogan) Date: Wed, 02 Dec 2020 16:09:30 +0000 Subject: [New-bugs-announce] [issue42543] case sensitivity in open() arguments Message-ID: <1606925370.25.0.425660398835.issue42543@roundup.psfhosted.org> New submission from Sean Grogan : I was stuck on a problem today using an open statement where I was trying to open a file for writing e.g. 
with open("RESULTS.CSV", "W") as csvfile: csvwriter = csv.writer(csvfile) csvwriter.writerow(["X", "Y"]) csvwriter.writerows(data) I did not notice I had the mode W in upper case. I am not sure if there is a legacy reason for only allowing lower case arguments here but I think a quick note in the documentation that it's case sensitive or a check (or note) when throwing an error would be helpful? such as ValueError: invalid mode: 'W' -- your case appears to be an upper case, please ensure the case of the mode is correct or ValueError: invalid mode: 'W' -- note the mode is case sensitive could be helpful? ---------- assignee: docs at python components: Documentation messages: 382322 nosy: docs at python, sean.grogan priority: normal severity: normal status: open title: case sensitivity in open() arguments type: behavior versions: Python 3.6, Python 3.7, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 15:45:17 2020 From: report at bugs.python.org (Alex Sherman) Date: Wed, 02 Dec 2020 20:45:17 +0000 Subject: [New-bugs-announce] [issue42544] In windows, asyncio.run_in_executor strips logger class information from modified logging.Logger objects Message-ID: <1606941917.4.0.387310203547.issue42544@roundup.psfhosted.org> New submission from Alex Sherman : IN WINDOWS asyncio's loop.run_in_executor(pool, callback, logger, *args) strips the subclass information about logging.Loggers when passed into concurrent.futures.ProcessPoolExecutor. The logger behaves as a default logging.Logger object as far as I can tell. Run the attached file to see via print statements that the logger information (such as additional verbosity and file handling) is all removed from the logger but only inside the loop.run_in_executor call. This is a windows specific error. Tested on windows 10 (misbehaved) and ubuntu 18.04 (behaved as expected). ---------- components: IO, asyncio files: example_logger_behavior.py messages: 382335 nosy: adsherman09, asvetlov, yselivanov priority: normal severity: normal status: open title: In windows, asyncio.run_in_executor strips logger class information from modified logging.Logger objects type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49648/example_logger_behavior.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 15:52:19 2020 From: report at bugs.python.org (Pablo Galindo Salgado) Date: Wed, 02 Dec 2020 20:52:19 +0000 Subject: [New-bugs-announce] [issue42545] Check that all symbols in the limited ABI are exported Message-ID: <1606942339.51.0.833109264536.issue42545@roundup.psfhosted.org> New submission from Pablo Galindo Salgado : >From the discussion at the end of https://bugs.python.org/issue40939 is important to add a check to the CI that can tell us if we mistakenly remove a symbol of the limited ABI. 
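As a rough illustration of the kind of check meant here (a sketch with a hard-coded sample list, not the eventual CI script, which would have to derive the full symbol list from the headers or a manifest): resolve each limited-ABI symbol in the running interpreter and fail if any are missing.

import ctypes

# Illustrative subset only; a real check must generate this list, not hard-code it.
LIMITED_ABI_SYMBOLS = ["PyObject_Repr", "PyLong_FromLong", "PyErr_Occurred"]

# ctypes.pythonapi resolves symbols exported by the interpreter binary/DLL.
missing = [name for name in LIMITED_ABI_SYMBOLS
           if not hasattr(ctypes.pythonapi, name)]
assert not missing, f"limited ABI symbols not exported: {missing}"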
---------- components: Tests messages: 382336 nosy: pablogsal priority: normal severity: normal status: open title: Check that all symbols in the limited ABI are exported versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 15:57:34 2020 From: report at bugs.python.org (Daniel Rose) Date: Wed, 02 Dec 2020 20:57:34 +0000 Subject: [New-bugs-announce] [issue42546] copy - Allow .copy(deep=True) alongside .deepcopy() for easier usage and zen Message-ID: <1606942654.49.0.938602064992.issue42546@roundup.psfhosted.org> New submission from Daniel Rose : It would be convenient and cleaner/more in line with the Python zen to allow `.copy()` to allow a `deep=True` argument, to use `.deepcopy()` in a simpler way. This is easier to read, and brings some more usefulness to `.copy()`. `.deepcopy()` would be kept for backwards compatibility. ---------- components: Library (Lib) messages: 382338 nosy: TheCatster priority: normal severity: normal status: open title: copy - Allow .copy(deep=True) alongside .deepcopy() for easier usage and zen type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 16:15:50 2020 From: report at bugs.python.org (Mikhail Khvoinitsky) Date: Wed, 02 Dec 2020 21:15:50 +0000 Subject: [New-bugs-announce] [issue42547] argparse: add_argument(... nargs='+', metavar=) does not work with positional arguments Message-ID: <1606943750.61.0.237402460323.issue42547@roundup.psfhosted.org> New submission from Mikhail Khvoinitsky : Example which works: parser.add_argument('--test', nargs='+', metavar=('TEST', 'TEST2')) Example which doesn't work: parser.add_argument('test', nargs='+', metavar=('TEST', 'TEST2')) it raises: Traceback (most recent call last): File args = parser.parse_args() File "/usr/lib/python3.8/argparse.py", line 1768, in parse_args args, argv = self.parse_known_args(args, namespace) File "/usr/lib/python3.8/argparse.py", line 1800, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "/usr/lib/python3.8/argparse.py", line 2006, in _parse_known_args start_index = consume_optional(start_index) File "/usr/lib/python3.8/argparse.py", line 1946, in consume_optional take_action(action, args, option_string) File "/usr/lib/python3.8/argparse.py", line 1874, in take_action action(self, namespace, argument_values, option_string) File "/usr/lib/python3.8/argparse.py", line 1044, in __call__ parser.print_help() File "/usr/lib/python3.8/argparse.py", line 2494, in print_help self._print_message(self.format_help(), file) File "/usr/lib/python3.8/argparse.py", line 2471, in format_help formatter.add_arguments(action_group._group_actions) File "/usr/lib/python3.8/argparse.py", line 276, in add_arguments self.add_argument(action) File "/usr/lib/python3.8/argparse.py", line 261, in add_argument invocations = [get_invocation(action)] File "/usr/lib/python3.8/argparse.py", line 549, in _format_action_invocation metavar, = self._metavar_formatter(action, default)(1) ValueError: too many values to unpack (expected 1) Expected result: help message should look like this: usage: test_argparse [-h] TEST [TEST2 ...] 
positional arguments:
  TEST

optional arguments:
  -h, --help  show this help message and exit

----------
components: Library (Lib)
messages: 382341
nosy: m_khvoinitsky
priority: normal
severity: normal
status: open
title: argparse: add_argument(... nargs='+', metavar=) does not work with positional arguments
type: crash
versions: Python 3.10

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Wed Dec  2 17:36:25 2020
From: report at bugs.python.org (Andy S)
Date: Wed, 02 Dec 2020 22:36:25 +0000
Subject: [New-bugs-announce] [issue42548] debugger stops at breakpoint of `pass` that is not actually reached
Message-ID: <1606948585.89.0.450511023817.issue42548@roundup.psfhosted.org>

New submission from Andy S :

The python (3.6) doc states (https://docs.python.org/3/reference/simple_stmts.html#the-pass-statement):

    pass is a null operation...

So, since this is still an operation, one could expect that it can be used as an op to set a breakpoint on while debugging some scripts. Nevertheless:

$ pdb3.7 ./debug_bug.py
> /...play/debug_bug.py(1)<module>()
-> a = None
(Pdb) list
  1  ->  a = None
  2
  3
  4      def fun():
  5          b = False
  6          if a is None:
  7              b = True
  8              pass
  9          else:
 10              pass
 11
(Pdb)
 12
 13      fun()
 14      pass
[EOF]
(Pdb) b 10
Breakpoint 1 at /...play/debug_bug.py:10
(Pdb) run
Restarting ./debug_bug.py with arguments:
    ./debug_bug.py
> /...play/debug_bug.py(1)<module>()
-> a = None
(Pdb) continue
> /...play/debug_bug.py(10)fun()
-> pass
(Pdb) bt
  /usr/lib/python3.7/bdb.py(585)run()
-> exec(cmd, globals, locals)
  <string>(1)<module>()
  /...play/debug_bug.py(13)<module>()
-> fun()
> /...play/debug_bug.py(10)fun()
-> pass
(Pdb) p b
True
(Pdb)

----------
components: Interpreter Core, Library (Lib)
files: debug_bug.py
messages: 382351
nosy: gatekeeper.mail
priority: normal
severity: normal
status: open
title: debugger stops at breakpoint of `pass` that is not actually reached
type: behavior
versions: Python 3.7
Added file: https://bugs.python.org/file49649/debug_bug.py

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Wed Dec  2 18:11:21 2020
From: report at bugs.python.org (Dieter Maurer)
Date: Wed, 02 Dec 2020 23:11:21 +0000
Subject: [New-bugs-announce] [issue42549] "quopri" line endings not standard conform
Message-ID: <1606950681.46.0.919700601419.issue42549@roundup.psfhosted.org>

New submission from Dieter Maurer :

RFC 1521 and "https://tools.ietf.org/html/rfc2045#section-6.7" stipulate that the quoted-printable encoding uses CRLF line endings. Python's "quopri" implementation does not honor this.

----------
components: Library (Lib)
messages: 382354
nosy: dmaurer
priority: normal
severity: normal
status: open
title: "quopri" line endings not standard conform
type: behavior

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Wed Dec  2 21:38:33 2020
From: report at bugs.python.org (ye andy)
Date: Thu, 03 Dec 2020 02:38:33 +0000
Subject: [New-bugs-announce] [issue42550] re library matching problem
Message-ID: <1606963113.63.0.536238231027.issue42550@roundup.psfhosted.org>

New submission from ye andy :

import re

a = """0xd26935a5ee4cd542e8a3a7e74fb7a99855975b59\n"""
eth_re = re.compile(r'^0x[0-9a-fA-F]{40}$')
print(eth_re.match(a))
print(len(a))  # length is 43

----------
components: Library (Lib)
messages: 382367
nosy: andy.ye.jx
priority: normal
severity: normal
status: open
title: re library matching problem
type: security versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 2 23:29:39 2020 From: report at bugs.python.org (Matthew Suozzo) Date: Thu, 03 Dec 2020 04:29:39 +0000 Subject: [New-bugs-announce] [issue42551] Generator `yield`s counted as primitive calls by cProfile Message-ID: <1606969779.69.0.0674581941893.issue42551@roundup.psfhosted.org> New submission from Matthew Suozzo : # Issue When profiling a generator function, the initial call and all subsequent yields are aggregated into the same "ncalls" metric by cProfile. ## Example >>> cProfile.run(""" ... def foo(): ... yield 1 ... yield 2 ... assert tuple(foo()) == (1, 2) ... """) ncalls tottime percall cumtime percall filename:lineno(function) ... 3 0.000 0.000 0.000 0.000 :2(foo) ... This was unexpected behavior since it *looks* like a single call from the code. This also complicates basic analysis about the frequency of a codepath's execution where a generator might yield a variable number of times depending on the input. The profile interface can and does differentiate between call types: Normal calls and "primitive" calls, i.e. those that are not induced via recursion, are displayed as "all_calls/primitive_calls" e.g. "3/1" for a single initial calls with 2 recursed calls (comparable to the example above). This seems like an appropriate abstraction to apply in the generator case: Each yield is better modeled as an 'interior' call to the generator, not as a call on its own. # Possible fix I have two ideas that seem like they might address the problem: * Add a new PyTrace_YIELD constant (and handle it in [3]) * Track some state from the `frame->f_gen` in the ProfilerEntry (injected in [3], used in [4]) to determine whether this is the first or a subsequent call. I've not been poking around for long so I don't have an intuition about which would be less invasive (nor really whether either would work in practice). 
As background, this seems to be the call chain from trace invocation to callcount increment: [0]: https://github.com/python/cpython/blob/4e7a69bdb63a104587759d7784124492dcdd496e/Python/ceval.c#L4106-L4107 [1]: https://github.com/python/cpython/blob/4e7a69bdb63a104587759d7784124492dcdd496e/Python/ceval.c#L4937 [2]: https://github.com/python/cpython/blob/4e7a69bdb63a104587759d7784124492dcdd496e/Python/ceval.c#L4961 [3]: https://github.com/python/cpython/blob/4e7a69bdb63a104587759d7784124492dcdd496e/Modules/_lsprof.c#L419 [4]: https://github.com/python/cpython/blob/4e7a69bdb63a104587759d7784124492dcdd496e/Modules/_lsprof.c#L389 [5]: https://github.com/python/cpython/blob/4e7a69bdb63a104587759d7784124492dcdd496e/Modules/_lsprof.c#L311-L316 ---------- components: C API messages: 382373 nosy: matthew.suozzo priority: normal severity: normal status: open title: Generator `yield`s counted as primitive calls by cProfile type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 04:49:23 2020 From: report at bugs.python.org (gaborjbernat) Date: Thu, 03 Dec 2020 09:49:23 +0000 Subject: [New-bugs-announce] [issue42552] Automatically set parent thread idents on thread start Message-ID: <1606988963.46.0.590267410194.issue42552@roundup.psfhosted.org> New submission from gaborjbernat : I want to request automatically adding the current thread ident on the thread start as parent_ident. I would like this to be able to implement thread-local variables that inherit their values from the parent thread. See https://gist.github.com/gaborbernat/67b653f1d3ce4857a065a3bd81e424df#file-thread_inheritence_sol_1-py-L1 for such an example. 
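For context, a minimal sketch of what code has to do manually today (roughly the pattern the linked gist builds on), which the requested change would make automatic:

import threading

class ParentAwareThread(threading.Thread):
    def start(self):
        # The thread that calls start() is the parent; remember its ident.
        self.parent_ident = threading.get_ident()
        super().start()

def worker():
    print("parent ident:", threading.current_thread().parent_ident)

ParentAwareThread(target=worker).start()

With parent_ident set by the interpreter itself, a thread-local lookup could fall back to the parent's values without requiring every thread to be created through such a subclass.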
----------
messages: 382398
nosy: gaborjbernat
priority: normal
severity: normal
status: open
title: Automatically set parent thread idents on thread start
versions: Python 3.10, Python 3.9

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Thu Dec  3 07:30:02 2020
From: report at bugs.python.org (STINNER Victor)
Date: Thu, 03 Dec 2020 12:30:02 +0000
Subject: [New-bugs-announce] [issue42553] test_asyncio: test_call_later() fails on Windows x64 GitHub Action
Message-ID: <1606998602.75.0.120891089578.issue42553@roundup.psfhosted.org>

New submission from STINNER Victor :

FAIL: test_call_later (test.test_asyncio.test_events.SelectEventLoopTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\a\cpython\cpython\lib\test\test_asyncio\test_events.py", line 301, in test_call_later
    self.assertTrue(0.08 <= t1-t0 <= 0.8, t1-t0)
AssertionError: False is not true : 0.07799999999997453

https://github.com/python/cpython/pull/23626/checks?check_run_id=1492411421

----------
components: Tests
messages: 382404
nosy: vstinner
priority: normal
severity: normal
status: open
title: test_asyncio: test_call_later() fails on Windows x64 GitHub Action
versions: Python 3.10

_______________________________________
Python tracker
_______________________________________

From report at bugs.python.org  Thu Dec  3 08:11:48 2020
From: report at bugs.python.org (FX Coudert)
Date: Thu, 03 Dec 2020 13:11:48 +0000
Subject: [New-bugs-announce] [issue42554] distutils.util.get_platform() depends on minor version for macOS 11
Message-ID: <1607001108.52.0.506559643874.issue42554@roundup.psfhosted.org>

New submission from FX Coudert :

On macOS Big Sur (11.y.z), the return value of distutils.util.get_platform() records in some cases the minor version of the OS. For example:

- with a Python 3.9 built on macOS 11.0.1, distutils.util.get_platform() will return macosx-11.0-x86_64
- with a Python 3.9 built on macOS 11.1.0, distutils.util.get_platform() will return macosx-11.1-x86_64
- if MACOSX_DEPLOYMENT_TARGET=11.1 was set at build time, then distutils.util.get_platform() will return macosx-11.1-x86_64
- if MACOSX_DEPLOYMENT_TARGET=11 was set at build time, then distutils.util.get_platform() will return macosx-11-x86_64

This has important consequences for wheel and pip, which use the return value to determine platform tags: https://github.com/pypa/wheel/issues/385

From the API Reference (https://docs.python.org/3/distutils/apiref.html), it is not clear what the expected return value is. Given that previously the return value of distutils.util.get_platform() depended only on the macOS major version, I would expect this to remain the case.

Therefore, distutils.util.get_platform() should return macosx-11-x86_64 on all Big Sur (macOS 11.x.y) versions.
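For quick verification on any given build (a trivial check, not taken from the report):

from distutils.util import get_platform

print(get_platform())  # e.g. 'macosx-11.0-x86_64' vs. 'macosx-11-x86_64', per the cases above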
PS: This is not directly related to another issue with MACOSX_DEPLOYMENT_TARGET https://bugs.python.org/issue42504 ---------- components: Build messages: 382408 nosy: fxcoudert priority: normal severity: normal status: open title: distutils.util.get_platform() depends on minor version for macOS 11 type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 08:16:51 2020 From: report at bugs.python.org (mike dalrymple) Date: Thu, 03 Dec 2020 13:16:51 +0000 Subject: [New-bugs-announce] [issue42555] math function sqrt() not working in 3.9 Message-ID: <1607001411.31.0.73323583323.issue42555@roundup.psfhosted.org> New submission from mike dalrymple : Downloaded Python 3.9.0 Documentation indicates: math.sqrt(x) Return the square root of x. When I use in IDLE shell 3.9.0, I receive error: >>> sqrt(25) Traceback (most recent call last): File "", line 1, in sqrt(25) NameError: name 'sqrt' is not defined What is the problem? ---------- assignee: terry.reedy components: IDLE messages: 382411 nosy: mikeuser_01, terry.reedy priority: normal severity: normal status: open title: math function sqrt() not working in 3.9 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 09:41:06 2020 From: report at bugs.python.org (Pierre Ossman) Date: Thu, 03 Dec 2020 14:41:06 +0000 Subject: [New-bugs-announce] [issue42556] unittest.mock.patch() cannot properly mock methods Message-ID: <1607006466.6.0.505250389966.issue42556@roundup.psfhosted.org> New submission from Pierre Ossman : unittest.mock.patch() as it currently works cannot properly mock a method as it currently replaces it with something more mimicking a function. I.e. the descriptor magic that includes "self" isn't properly set up. In most cases this doesn't really matter, but there are a few use cases where this is important: 1. Calling base classes where you need to make sure it works regardless of super() or direct reference to the base class. 2. Multiple objects calling the same base class using super(). Without the self argument you can't tell the calls apart. 3. Setting up a side_effect that needs access to the object. In some cases you can pass the object using some side channel, but not all. E.g. not when mocking a base class' __init__(). (already reported as Issue35577). Right now you can work around this by using autospec, as that has the undocumented side-effect of properly setting up methods. So don't fix Issue41915 before this one or we lose that workaround. :) ---------- components: Library (Lib) messages: 382415 nosy: CendioOssman priority: normal severity: normal status: open title: unittest.mock.patch() cannot properly mock methods versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 10:34:45 2020 From: report at bugs.python.org (Berry Schoenmakers) Date: Thu, 03 Dec 2020 15:34:45 +0000 Subject: [New-bugs-announce] [issue42557] Make asyncio.__main__ reusable, also adding a preamble feature. Message-ID: <1607009685.51.0.588770537577.issue42557@roundup.psfhosted.org> New submission from Berry Schoenmakers : The async REPL introduced in Python 3.8 is very nice for quick tests and experiments, supporting top-level "await" (see https://bugs.python.org/issue37028). 
I'm using it often when developing code that heavily relies on Python's asyncio module. A drawback of the basic interface when launching it with "python -m asyncio" is that one usually still needs to enter a couple of statements to load the package you're working on, etc. To overcome this I've added a __main__.py to the mpyc package such that entering "python -m mpyc" will launch the async REPL like this: asyncio REPL 3.10.0a2 (tags/v3.10.0a2:114ee5d, Nov 3 2020, 00:37:42) [MSC v.1927 64 bit (AMD64)] on win32 Use "await" directly instead of "asyncio.run()". Type "help", "copyright", "credits" or "license" for more information. >>> import asyncio >>> from mpyc.runtime import mpc >>> secint = mpc.SecInt() >>> secfxp = mpc.SecFxp() >>> secfld256 = mpc.SecFld(256) >>> This enables the async REPL but also a "preamble" with some additional code is executed. To program mpyc.__main__.py, however, I basically had to copy asyncio.__main__.py and make a few changes throughout the code. It works alright, but to take advantage of future improvements to asyncio.__main__.py it would be very convenient if asyncio.__main__.py becomes reusable. With the added feature for specifying a "preamble", programming something like mpyc.__main__.py requires just a few lines of code: import asyncio.__main__ if __name__ == '__main__': preamble = ('from mpyc.runtime import mpc', 'secint = mpc.SecInt()', 'secfxp = mpc.SecFxp()', 'secfld256 = mpc.SecFld(256)') asyncio.__main__.main(preamble) The attachment contains the current version of mpyc.__main__.py, which to a large extent duplicates code from asyncio.__main__.py. A couple of, I think, minor changes are applied to make the code reusable, and to add the preamble feature. Would be nice if asyncio.__main__.py is updated in this manner in Python 3.10 such that package developers can tailor it to their needs? ---------- components: asyncio files: __main__.py messages: 382419 nosy: asvetlov, lschoe, vstinner, yselivanov priority: normal severity: normal status: open title: Make asyncio.__main__ reusable, also adding a preamble feature. type: enhancement versions: Python 3.10 Added file: https://bugs.python.org/file49652/__main__.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 12:05:29 2020 From: report at bugs.python.org (Jack O'Connor) Date: Thu, 03 Dec 2020 17:05:29 +0000 Subject: [New-bugs-announce] [issue42558] waitpid/waitid race caused by change to Popen.send_signal in Python 3.9 Message-ID: <1607015129.26.0.150065141179.issue42558@roundup.psfhosted.org> New submission from Jack O'Connor : In Python 3.9, Popen.send_signal() was changed to call Popen.poll() internally before signaling. (Tracking bug: https://bugs.python.org/issue38630.) This is a best-effort check for the famous kill/wait race condition. However, because this can now reap an already-exited child process as a side effect, it can cause previously working programs to crash. Here's a simple example: ``` import os import subprocess import time child = subprocess.Popen(["true"]) time.sleep(1) child.kill() os.waitpid(child.pid, 0) ``` The program above exits cleanly in Python 3.8 but crashes with ChildProcessError in Python 3.9. It's a race against child process exit, so in practice (without the sleep) it's a heisenbug. 
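For comparison, a variant of the same program that stays safe on 3.9 (a sketch of one mitigation, not taken from the report): let Popen do all the reaping, so its internal poll() cannot race a separate os.waitpid() call.

import subprocess
import time

child = subprocess.Popen(["true"])
time.sleep(1)
child.kill()
child.wait()  # Popen reaps the child itself; no os.waitpid() to conflict with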
There's a deeper race here that's harder to demonstrate in an example, but parallel to the original wait/kill issue: If the child PID happens to be reused by another parent thread in this same process, the call to waitpid might *succeed* and reap the unrelated child process. That would export the crash to that thread, and possibly create more kill/wait races. In short, the question of when exactly a child process is reaped is important for correct signaling on Unix, and changing that behavior can break programs in confusing ways. This change affected the Duct library, and I might not've caught it if not for a lucky failing doctest: https://github.com/oconnor663/duct.py/commit/5dfae70cc9481051c5e53da0c48d9efa8ff71507 I haven't searched for more instances of this bug in the wild, but one way to find them would be to look for code that calls both os.waitpid/waitid and also Popen.send_signal/kill/terminate. Duct found itself in this position because it was using waitid(WNOWAIT) on Unix only, to solve this same race condition, and also using Popen.kill on both Unix and Windows. ---------- messages: 382429 nosy: oconnor663 priority: normal severity: normal status: open title: waitpid/waitid race caused by change to Popen.send_signal in Python 3.9 type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 12:53:56 2020 From: report at bugs.python.org (Paul Sokolovsky) Date: Thu, 03 Dec 2020 17:53:56 +0000 Subject: [New-bugs-announce] [issue42559] random.getrandbits: Should it be explicit that it returns unsigned/non-negative integer? Message-ID: <1607018036.1.0.534167707073.issue42559@roundup.psfhosted.org> New submission from Paul Sokolovsky : Current docs for random.getrandbits() ( https://docs.python.org/3/library/random.html#random.getrandbits ) read (3.9.1rc1 at the top of the page): Returns a Python integer with k random bits. This method is supplied with the MersenneTwister generator and some other generators may also provide it as an optional part of the API. When available, getrandbits() enables randrange() to handle arbitrarily large ranges. So, given that it talks about "bits", it's easy to imagine that the result is unsigned. But it's interesting that it mentions "a Python integer", not even just "integer". And Python integers are known to be signed. So I'd say, there's enough ambiguity in there, and it would be nice to explicitly specify that it's an "unsigned (non-negative) Python integer". If there's interest in the background of this concern: some implementations may have "small" and "large" integers under the hood (which used to be exposed at the user level in Python 2). "Small" integers would be cheaper, but of course, they would be signed. So, at a certain value of the getrandbits() parameter, there may be a temptation to use up the sign bit of a small integer, instead of going straight to allocating an expensive large integer. It should be clear this is NOT a valid "optimization", and the result should always be non-negative. ---------- messages: 382434 nosy: pfalcon priority: normal severity: normal status: open title: random.getrandbits: Should it be explicit that it returns unsigned/non-negative integer? 
versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 13:35:37 2020 From: report at bugs.python.org (Mason Ginter) Date: Thu, 03 Dec 2020 18:35:37 +0000 Subject: [New-bugs-announce] [issue42560] Improve Tkinter Documentation Message-ID: <1607020537.02.0.420924180434.issue42560@roundup.psfhosted.org> New submission from Mason Ginter : Online python Tkinter documentation (https://docs.python.org/3/library/tkinter.html) lacks many features. The main part I find lacking is documentation on methods, as many of them either aren't listed in the documentation and/or take *args as the only parameter, rather than defined parameters. Not sure how in-depth docs from effbot were, but those are not currently available. Existing documentation in the source code (https://github.com/python/cpython/blob/3.9/Lib/tkinter/__init__.py) seems to be a good start to adding to the online documentation. ---------- assignee: docs at python components: Documentation messages: 382437 nosy: docs at python, mason2 priority: normal severity: normal status: open title: Improve Tkinter Documentation type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 15:24:18 2020 From: report at bugs.python.org (sds) Date: Thu, 03 Dec 2020 20:24:18 +0000 Subject: [New-bugs-announce] [issue42561] better error reporting in ast Message-ID: <1607027058.31.0.3436923935.issue42561@roundup.psfhosted.org> New submission from sds : ast parsing error locations are hard to pinpoint. See https://stackoverflow.com/q/46933995/850781. ---------- components: Library (Lib) files: ast.diff keywords: patch messages: 382449 nosy: sam-s priority: normal severity: normal status: open title: better error reporting in ast type: enhancement versions: Python 3.10 Added file: https://bugs.python.org/file49653/ast.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 15:38:17 2020 From: report at bugs.python.org (Yurii Karabas) Date: Thu, 03 Dec 2020 20:38:17 +0000 Subject: [New-bugs-announce] [issue42562] dis failed to parse function that has only annotations Message-ID: <1607027897.93.0.00454445674539.issue42562@roundup.psfhosted.org> New submission from Yurii Karabas <1998uriyyo at gmail.com>: `dis` module failed when trying to parse function that has only annotations at the function body: ``` def foo(): a: int ``` Failed with stacktrace: ``` 1 0 LOAD_CONST 0 () 2 LOAD_CONST 1 ('foo') 4 MAKE_FUNCTION 0 6 STORE_NAME 0 (foo) 8 LOAD_CONST 2 (None) 10 RETURN_VALUE Disassembly of : Traceback (most recent call last): File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/runpy.py", line 87, in _run_code exec(code, run_globals) File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/dis.py", line 536, in _test() File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/dis.py", line 533, in _test dis(code) File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/dis.py", line 79, in dis _disassemble_recursive(x, file=file, depth=depth) File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/dis.py", line 381, in _disassemble_recursive _disassemble_recursive(x, file=file, depth=depth) File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/dis.py", line 
373, in _disassemble_recursive disassemble(co, file=file) File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/dis.py", line 369, in disassemble _disassemble_bytes(co.co_code, lasti, co.co_varnames, co.co_names, File "/Users/yuriikarabas/my-projects/temp-cpython/Lib/dis.py", line 389, in _disassemble_bytes maxlineno = max(linestarts.values()) + line_offset ValueError: max() arg is an empty sequence ``` ---------- components: Library (Lib) messages: 382453 nosy: uriyyo priority: normal severity: normal status: open title: dis failed to parse function that has only annotations type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 19:57:36 2020 From: report at bugs.python.org (Nicholas Kobald) Date: Fri, 04 Dec 2020 00:57:36 +0000 Subject: [New-bugs-announce] [issue42563] max function reports type errors in incorrect order Message-ID: <1607043456.77.0.380335998391.issue42563@roundup.psfhosted.org> New submission from Nicholas Kobald : I'm not _sure_ this is a bug, but I thought the behaviour was a bit odd. If you run max with a str and an int in different orders, you get this behaviour: >>> max(123, 'hello') Traceback (most recent call last): File "", line 1, in TypeError: '>' not supported between instances of 'str' and 'int' >>> max(123, 'hello') >>> max('hello', 123) Traceback (most recent call last): File "", line 1, in TypeError: '>' not supported between instances of 'int' and 'str' Note that the order in the error message, 'int' and 'str', is the inverse of the order in which the str and int actually appear. I did a search for max and didn't find this. ---------- messages: 382463 nosy: nkobald priority: normal severity: normal status: open title: max function reports type errors in incorrect order type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 21:11:52 2020 From: report at bugs.python.org (Gregory Szorc) Date: Fri, 04 Dec 2020 02:11:52 +0000 Subject: [New-bugs-announce] [issue42564] "from .__init__ import ..." syntax imports a duplicate module Message-ID: <1607047912.32.0.654567376337.issue42564@roundup.psfhosted.org> New submission from Gregory Szorc : (Rereporting from https://github.com/indygreg/PyOxidizer/issues/317.) $ mkdir foo $ cat > foo/__init__.py < test = True > EOF $ cat > foo/bar.py < from .__init__ import test > EOF $ python3.9 Python 3.9.0 (default, Nov 1 2020, 22:40:00) [GCC 10.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import foo.bar >>> import sys >>> sys.modules['foo'] >>> sys.modules['foo.__init__'] I am surprised that `from .__init__` even works, as `__init__` isn't a valid module name. What appears to be happening is that the path-based importer doesn't recognize the `__init__` as special and it falls back to its regular file probing technique to locate a module derived from the path. It finds the `__init__.py[c]` file and imports it. A consequence of this is that the explicit `__init__` import/module exists as a separate module object under `sys.modules`. So you can effectively have the same file imported as 2 module objects living under 2 names. This could of course result in subtle software bugs, like module-level variables not updating when you expect them to. (This could also be a feature for code relying on this behavior, of course.) I only attempted to reproduce with 3.9. 
But this behavior has likely existed for years. ---------- components: Interpreter Core messages: 382464 nosy: indygreg priority: normal severity: normal status: open title: "from .__init__ import ..." syntax imports a duplicate module type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 3 21:51:26 2020 From: report at bugs.python.org (Siva Krishna Giri Babu Avvaru) Date: Fri, 04 Dec 2020 02:51:26 +0000 Subject: [New-bugs-announce] [issue42565] Traceback (most recent call last): File "", line 1, in NameError: name 'python' is not defined Message-ID: <1607050286.4.0.671884194813.issue42565@roundup.psfhosted.org> Change by Siva Krishna Giri Babu Avvaru : ---------- nosy: sivakrishnaavvaru priority: normal severity: normal status: open title: Traceback (most recent call last): File "", line 1, in NameError: name 'python' is not defined type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 4 03:46:49 2020 From: report at bugs.python.org (zerotypic) Date: Fri, 04 Dec 2020 08:46:49 +0000 Subject: [New-bugs-announce] [issue42566] Clearing stack frame of suspended coroutine causes coroutine to malfunction Message-ID: <1607071609.18.0.760361961347.issue42566@roundup.psfhosted.org> New submission from zerotypic : When a stack frame belonging to a coroutine that is currently suspended is cleared, via the frame.clear() function, the coroutine appears to stop working properly and hangs forever pending some callback. I've put up an example program that exhibits this problem here: https://gist.github.com/zerotypic/74ac1a7d6b4b946c14c1ebd86de6027b (and also attached it) In this example, the stack frame comes from the traceback object associated with an Exception that was raised inside the coroutine, which was passed to another coroutine. I've included some sample output from the program below. In this run, we do not trigger the bug: $ python3 async_bug.py foo: raising exception foo: waiting for ev bar: exn = TypeError('blah',) bar: setting ev foo: ev set, quitting now. bar: foo_task = result='quit successfully'> The result is that the task running the coroutine "foo" quits successfully. In this run, we trigger the bug by providing the "bad" commandline argument: $ python3 async_bug.py bad foo: raising exception foo: waiting for ev bar: exn = TypeError('blah',) bar: Clearing frame. bar: setting ev bar: foo_task = wait_for=()]>> Task was destroyed but it is pending! task: wait_for=()]>> The task running "foo" is still pending, waiting on some callback, even though it should have awakened and completed. This also happens with generators: >>> def gen(): ... try: ... raise TypeError("blah") ... except Exception as e: ... yield e ... print("Completing generator.") ... yield "done" ... >>> gen() >>> list(gen()) Completing generator. [TypeError('blah',), 'done'] >>> g = gen() >>> exn = next(g) >>> exn.__traceback__.tb_frame.clear() >>> next(g) Traceback (most recent call last): File "", line 1, in StopIteration >>> This isn't surprising since you shouldn't be able to clear frames of code that is still running. It seems to me that the frame.clear() function should check that the frame belongs to a suspended coroutine/generator, and refuse to clear the frame if so. 
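For contrast, clearing a frame that is still executing is already rejected; a minimal illustration of that existing check:
```
import sys

def f():
    sys._getframe().clear()   # this frame is currently executing

try:
    f()
except RuntimeError as exc:
    print(exc)                # RuntimeError: cannot clear an executing frame
```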
In frame_clear() (frameobject.c:676), the frame is checked to see if it is currently executing using _PyFrame_IsExecuting(), and a RuntimeError is raised if so. Perhaps this should be changed to use _PyFrameHasCompleted()? I am not familiar with the internals of CPython so I'm not sure if this is the right thing to do. I've been testing this on 3.6.9, but looking at the code for 3.9, this is likely to still be an issue. Also, I first discovered this due to a call to traceback.clear_frames() from unittest.TestCase.assertRaises(); so if the problem isn't fixed, perhaps unittest should be modified to optionally not clear frames. Thanks! ---------- components: Interpreter Core files: async_bug.py messages: 382472 nosy: zerotypic priority: normal severity: normal status: open title: Clearing stack frame of suspended coroutine causes coroutine to malfunction type: behavior versions: Python 3.6 Added file: https://bugs.python.org/file49654/async_bug.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 4 09:51:03 2020 From: report at bugs.python.org (Ethan Furman) Date: Fri, 04 Dec 2020 14:51:03 +0000 Subject: [New-bugs-announce] [issue42567] Enum: manually call __init_subclass__ after members are added Message-ID: <1607093463.71.0.994675423727.issue42567@roundup.psfhosted.org> New submission from Ethan Furman : __init_subclass__ is being automatically called when the initial Enum is created, but before the members have been added, greatly reducing that method's usefulness. ---------- assignee: ethan.furman components: Library (Lib) messages: 382489 nosy: ethan.furman priority: normal severity: normal stage: needs patch status: open title: Enum: manually call __init_subclass__ after members are added type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 4 10:03:14 2020 From: report at bugs.python.org (Alexey Izbyshev) Date: Fri, 04 Dec 2020 15:03:14 +0000 Subject: [New-bugs-announce] [issue42568] Python can't run .pyc files with non-ASCII path on Windows Message-ID: <1607094194.36.0.301843698023.issue42568@roundup.psfhosted.org> New submission from Alexey Izbyshev : > python ????.pyc python: Can't reopen .pyc file The issue is caused by _Py_fopen() being used as though it can deal with paths encoded in FS-default encoding (UTF-8 by default on Windows), but in fact it's just a simple wrapper around fopen() from the C runtime, so it uses the current ANSI code page, breaking if PYTHONLEGACYWINDOWSFSENCODING is not enabled. I could find only two callers of _Py_fopen() on Windows: * https://github.com/python/cpython/blob/db68544122f5/Python/pythonrun.c#L380 (which caused this issue) * https://github.com/python/cpython/blob/db68544122f5/Python/errors.c#L1708 PyErr_ProgramText() doesn't seem to be called in CPython, but https://github.com/python/cpython/blob/db68544122f5/Include/pyerrors.h#L243 claims that filename is "decoded from the filesystem encoding", which doesn't match the code. 
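A rough reproduction sketch of the report above (hypothetical file names; the failure only appears on Windows builds where the path bytes fall outside the active ANSI code page):
```
import os
import py_compile
import subprocess
import sys

os.makedirs("тест", exist_ok=True)                    # non-ASCII directory name
with open("тест/prog.py", "w", encoding="utf-8") as f:
    f.write("print('hello')\n")
pyc = py_compile.compile("тест/prog.py", cfile="тест/prog.pyc")
# Expected to print "hello"; on affected versions it instead fails with
# "Can't reopen .pyc file".
subprocess.run([sys.executable, pyc])
```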
---------- components: Interpreter Core, Windows messages: 382490 nosy: izbyshev, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: Python can't run .pyc files with non-ASCII path on Windows type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 4 11:12:33 2020 From: report at bugs.python.org (Alexey Izbyshev) Date: Fri, 04 Dec 2020 16:12:33 +0000 Subject: [New-bugs-announce] [issue42569] Callers of _Py_fopen/_Py_wfopen may be broken after addition of audit hooks Message-ID: <1607098353.32.0.624763724813.issue42569@roundup.psfhosted.org> New submission from Alexey Izbyshev : Before the addition of audit hooks in 3.8, _Py_fopen() and _Py_wfopen() were simple wrappers around the corresponding C runtime functions. They didn't require the GIL, reported errors via errno and could be safely called during early interpreter initialization. With the addition of PySys_Audit() calls, they can also raise an exception, which makes it unclear how they should be used. At least one caller[1] is confused, so an early-added hook (e.g. from sitecustomize.py) that raises a RuntimeError on an attempt to open the main file causes the following: $ ./python /home/test/test.py ./python: can't open file '/home/test/test.py': [Errno 22] Invalid argument Traceback (most recent call last): File "/home/test/.local/lib/python3.10/site-packages/sitecustomize.py", line 10, in hook raise RuntimeError("XXX") RuntimeError: XXX "Invalid argument" is reported by pymain_run_file() due to a bogus errno, and the real problem (exception from the hook) is noticed only later. Could somebody share the current intended status/role of these helpers? Understanding that seems to be required to deal with issue 32381. [1] https://github.com/python/cpython/blob/066394018a84/Modules/main.c#L314 ---------- components: Interpreter Core messages: 382499 nosy: izbyshev, steve.dower, vstinner priority: normal severity: normal status: open title: Callers of _Py_fopen/_Py_wfopen may be broken after addition of audit hooks type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 4 12:53:40 2020 From: report at bugs.python.org (Kshitish) Date: Fri, 04 Dec 2020 17:53:40 +0000 Subject: [New-bugs-announce] [issue42570] Try and Except doesn't work properly Message-ID: <1607104420.61.0.0601519357686.issue42570@roundup.psfhosted.org> New submission from Kshitish : Try & Except doesn't work properly. In the except block, the continue keyword doesn't work properly. ---------- files: main.py messages: 382515 nosy: blue555 priority: normal severity: normal status: open title: Try and Except doesn't work properly type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49656/main.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 4 16:11:08 2020 From: report at bugs.python.org (Frederic Gagnon) Date: Fri, 04 Dec 2020 21:11:08 +0000 Subject: [New-bugs-announce] [issue42571] [docs] add links to Glossary#parameter in libraries Message-ID: <1607116268.54.0.585771365964.issue42571@roundup.psfhosted.org> New submission from Frederic Gagnon : It could be helpful to make it so that, in libraries, / * * and ** (i.e. 
positional-only, keyword-only, var-positional and var-keyword indicators) link to https://docs.python.org/3/glossary.html#term-parameter This comes from someone relatively new to Python who had a somewhat hard time understanding what the ", *," stood for in https://docs.python.org/3/library/glob.html#glob.glob ---------- assignee: docs at python components: Documentation messages: 382530 nosy: docs at python, fredg1 priority: normal severity: normal status: open title: [docs] add links to Glossary#parameter in libraries type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 4 17:42:15 2020 From: report at bugs.python.org (Austin Scola) Date: Fri, 04 Dec 2020 22:42:15 +0000 Subject: [New-bugs-announce] [issue42572] Better path handling with argparse Message-ID: <1607121735.85.0.926277593776.issue42572@roundup.psfhosted.org> New submission from Austin Scola : One of the types of arguments that I find myself most often passing to `argparse.ArgumentParser` is paths. I think that I am probably not alone in frequent usage of paths as arguments. Given this, it would be extremely helpful to have an `argparse.Action` in `argparse` or a type similar to `FileType` that converts the string to a path. A path type factory could also have arguments to optionally check if the path exists, or is a directory, or other similar predicates. ---------- components: Library (Lib) messages: 382542 nosy: ascola priority: normal severity: normal status: open title: Better path handling with argparse type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 5 00:15:24 2020 From: report at bugs.python.org (tosunilmathew@yahoo.com) Date: Sat, 05 Dec 2020 05:15:24 +0000 Subject: [New-bugs-announce] [issue42573] Installation of Python 3.9 failing with message "User cancelled installation" Message-ID: <1607145324.41.0.931574881737.issue42573@roundup.psfhosted.org> Change by tosunilmathew at yahoo.com : ---------- components: Installation nosy: tosunilmathew priority: normal severity: normal status: open title: Installation of Python 3.9 failing with message "User cancelled installation" type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 5 01:30:39 2020 From: report at bugs.python.org (Brandt Bucher) Date: Sat, 05 Dec 2020 06:30:39 +0000 Subject: [New-bugs-announce] [issue42574] Travis can't build the 3.8 branch right now Message-ID: <1607149839.2.0.378125476647.issue42574@roundup.psfhosted.org> New submission from Brandt Bucher : Travis seems to be using the wrong Python executable for (at least) the "make -j4 regen-all" step on the 3.8 branch. I have a hunch it's using the system python3 executable (3.5?). It causes the following failure when building: ... 
python3 ./Tools/scripts/update_file.py ./Include/graminit.h ./Include/graminit.h.new File "./Tools/clinic/clinic.py", line 1772 filename_new = f"{filename}.new" ^ SyntaxError: invalid syntax Makefile:574: recipe for target 'clinic' failed make: *** [clinic] Error 1 Recent examples: https://travis-ci.com/github/python/cpython/jobs/454447280 https://travis-ci.com/github/python/cpython/jobs/454551266 https://travis-ci.com/github/python/cpython/jobs/454650029 https://travis-ci.com/github/python/cpython/jobs/454907763 I know Travis has been fairly problematic for us in the past. ---------- components: Build messages: 382555 nosy: brandtbucher priority: normal severity: normal status: open title: Travis can't build the 3.8 branch right now versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 5 01:35:39 2020 From: report at bugs.python.org (Sam Yan) Date: Sat, 05 Dec 2020 06:35:39 +0000 Subject: [New-bugs-announce] [issue42575] Suggest to add an LinkedList data structure to python Message-ID: <1607150139.15.0.868707735034.issue42575@roundup.psfhosted.org> New submission from Sam Yan : There has been no LinkedList data structure for Python. Therefore suggest adding a LinkedList data structure. ---------- components: C API files: LinkedList.py messages: 382557 nosy: SamUnimelb priority: normal severity: normal status: open title: Suggest to add an LinkedList data structure to python type: enhancement versions: Python 3.10 Added file: https://bugs.python.org/file49658/LinkedList.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 5 09:31:19 2020 From: report at bugs.python.org (Ken Jin) Date: Sat, 05 Dec 2020 14:31:19 +0000 Subject: [New-bugs-announce] [issue42576] Passing keyword arguments to types.GenericAlias causes a hard crash Message-ID: <1607178679.27.0.947659385701.issue42576@roundup.psfhosted.org> New submission from Ken Jin : I noticed that passing keyword arguments to types.GenericAlias's __new__ causes the interpreter to hard crash and exit due to an assertion failure: import types types.GenericAlias(bad=float) Result: Assertion failed: PyTuple_CheckExact(kwnames), file ....\cpython\Python\getargs.c, line 2767 A simple fix is to just use _PyArg_NoKeywords instead of _PyArg_NoKwnames. Looking through the rest of the C code, it seems that apart from GenericAlias, only vectorcalls for various builtins use _PyArg_NoKwnames. However, they don't seem to be affected by the bug. ---------- components: Interpreter Core messages: 382565 nosy: gvanrossum, kj priority: normal severity: normal status: open title: Passing keyword arguments to types.GenericAlias causes a hard crash versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 5 10:10:38 2020 From: report at bugs.python.org (Christoph Reiter) Date: Sat, 05 Dec 2020 15:10:38 +0000 Subject: [New-bugs-announce] [issue42577] Unhelpful syntax error when expression spans multiple lines Message-ID: <1607181038.72.0.910363231446.issue42577@roundup.psfhosted.org> New submission from Christoph Reiter : I don't know if the bug tracker is the right place for this, please point me to the right place if not. 
Someone ran into the following code (simplified example here) and asked for help: ``` if 3: if 1: print(((123)) if 2: print(123) ``` This results in the following error: ``` File "error.py", line 5 if 2: ^ SyntaxError: invalid syntax ``` which is very confusing to users not familiar with generator expressions. I'm wondering if Python could improve syntax errors in this case by adding more context, like pointing to where it started parsing the current expression and where it gave up, instead of just the latter. ---------- messages: 382566 nosy: lazka priority: normal severity: normal status: open title: Unhelpful syntax error when expression spans multiple lines versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 5 10:22:35 2020 From: report at bugs.python.org (wyz23x2) Date: Sat, 05 Dec 2020 15:22:35 +0000 Subject: [New-bugs-announce] [issue42578] Add tip when encountering UnicodeDecode/Encode Error in open() Message-ID: <1607181755.9.0.464136103954.issue42578@roundup.psfhosted.org> New submission from wyz23x2 : Programmers often stumble over UnicodeDecode/EncodeError during open(), and especially beginners don't know what to do. There are lots of questions on Stackoverflow: https://stackoverflow.com/questions/16528468/while-reading-file-on-python-i-got-a-unicodedecodeerror-what-can-i-do-to-resol https://stackoverflow.com/questions/38186847/python-line-replace-returns-unicodeencodeerror https://stackoverflow.com/questions/3224268/python-unicode-encode-error https://stackoverflow.com/questions/50331257/python3-unicodeencodingerror https://stackoverflow.com/questions/24717808/python-cant-write-to-file-unicodeencodeerror ...... Maybe a helpful tip can be added to the error message. We have done this before: >>> (1,)(2, 3) :1: SyntaxWarning: 'tuple' object is not callable; perhaps you missed a comma? >>> print 3 File "", line 1 print 3 ^ SyntaxError: Missing parentheses in call to 'print'. Did you mean print(3)? ---------- components: IO messages: 382567 nosy: wyz23x2 priority: normal severity: normal status: open title: Add tip when encountering UnicodeDecode/Encode Error in open() versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 5 16:47:14 2020 From: report at bugs.python.org (Matej Cepl) Date: Sat, 05 Dec 2020 21:47:14 +0000 Subject: [New-bugs-announce] [issue42579] Solution from gh#python/cpython#13236 unnecessarily binds building of documentation to the latest version of Sphinx Message-ID: <1607204834.85.0.284068397188.issue42579@roundup.psfhosted.org> New submission from Matej Cepl : I think the solution in gh#python/cpython#13236 is suboptimal, because it makes Python dependent on the latest version of Sphinx unnecessarily. There are many situations where Python is built on an older platform and it is too bothersome to require an update of Sphinx (and all its dependencies) as well. 
---------- assignee: docs at python components: Documentation messages: 382585 nosy: docs at python, mcepl, pablogsal priority: normal severity: normal status: open title: Solution from gh#python/cpython#13236 unnecessarily binds building of documentation to the latest version of Sphinx versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 6 05:50:33 2020 From: report at bugs.python.org (Matthias Klose) Date: Sun, 06 Dec 2020 10:50:33 +0000 Subject: [New-bugs-announce] [issue42580] ctypes.util.find_library("libc") fails Message-ID: <1607251833.68.0.530621101082.issue42580@roundup.psfhosted.org> New submission from Matthias Klose : regression compared to 3.8: $ python3.9 Python 3.9.1rc1 (default, Nov 27 2020, 19:38:39) [GCC 10.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> ctypes.util.find_library("libc") Traceback (most recent call last): File "", line 1, in NameError: name 'ctypes' is not defined >>> import ctypes.util >>> ctypes.util.find_library("libc") Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.9/ctypes/util.py", line 341, in find_library _get_soname(_findLib_gcc(name)) or _get_soname(_findLib_ld(name)) File "/usr/lib/python3.9/ctypes/util.py", line 147, in _findLib_gcc if not _is_elf(file): File "/usr/lib/python3.9/ctypes/util.py", line 99, in _is_elf with open(filename, 'br') as thefile: FileNotFoundError: [Errno 2] No such file or directory: b'liblibc.a' also note that the shared library is installed. ---------- components: ctypes keywords: 3.9regression messages: 382592 nosy: doko priority: normal severity: normal status: open title: ctypes.util.find_library("libc") fails versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 6 10:04:00 2020 From: report at bugs.python.org (kaxing) Date: Sun, 06 Dec 2020 15:04:00 +0000 Subject: [New-bugs-announce] [issue42581] Docs site redirection doesn't work for 3.9 Message-ID: <1607267040.29.0.292241641932.issue42581@roundup.psfhosted.org> New submission from kaxing : Following URL won't redirect(301) to /math.html https://docs.python.org/3.9/library/math Where the user sees 404 when they click from >>> help(math) However, works fine in: https://docs.python.org/3/library/math https://docs.python.org/3.5/library/math https://docs.python.org/2/library/math https://docs.python.org/2.7/library/math ---------- assignee: docs at python components: Documentation messages: 382596 nosy: docs at python, kaxing priority: normal severity: normal status: open title: Docs site redirection doesn't work for 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 6 10:30:01 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 06 Dec 2020 15:30:01 +0000 Subject: [New-bugs-announce] [issue42582] Remove asyncio._all_tasks_compat() Message-ID: <1607268601.54.0.121182893712.issue42582@roundup.psfhosted.org> New submission from Serhiy Storchaka : It was only used in _asynciomodule.c to implement now removed Task.all_tasks() method (see issue40967). 
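(For context, the supported public API that remains is the module-level asyncio.all_tasks(); a minimal usage sketch:)
```
import asyncio

async def main():
    # all_tasks() must be called while an event loop is running
    for task in asyncio.all_tasks():
        print(task.get_name())

asyncio.run(main())
```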
---------- components: asyncio messages: 382597 nosy: asvetlov, serhiy.storchaka, yselivanov priority: normal severity: normal status: open title: Remove asyncio._all_tasks_compat() type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 6 19:32:12 2020 From: report at bugs.python.org (Kjartan Hrafnkelsson) Date: Mon, 07 Dec 2020 00:32:12 +0000 Subject: [New-bugs-announce] [issue42583] JSON.dumps() creates invalid JSON with single quotes Message-ID: <1607301132.51.0.662814441179.issue42583@roundup.psfhosted.org> New submission from Kjartan Hrafnkelsson : the JSON.dumps() function should create valid JSON using double quotes only, however when the function is run with an object as its argument it results in invalid JSON using single quotes. As defined in the ECMA-404 JSON Data Interchange Syntax (http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf) JSON should use double quotes not single quotes. ---------- components: Library (Lib) messages: 382615 nosy: imeuropa priority: normal severity: normal status: open title: JSON.dumps() creates invalid JSON with single quotes type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 05:36:22 2020 From: report at bugs.python.org (Erlend Egeberg Aasland) Date: Mon, 07 Dec 2020 10:36:22 +0000 Subject: [New-bugs-announce] [issue42584] Upgrade macOS and Windows installers to use SQLite 3.34.0 Message-ID: <1607337382.55.0.591359272905.issue42584@roundup.psfhosted.org> New submission from Erlend Egeberg Aasland : SQLite 3.34.0 was released 2020-12-01: https://www.sqlite.org/releaselog/3_34_0.html Compiles fine on master, and tests are completing without error. ---------- components: Library (Lib) messages: 382625 nosy: erlendaasland priority: normal severity: normal status: open title: Upgrade macOS and Windows installers to use SQLite 3.34.0 versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 06:08:45 2020 From: report at bugs.python.org (Stegle, Julien) Date: Mon, 07 Dec 2020 11:08:45 +0000 Subject: [New-bugs-announce] [issue42585] Segmentation fault on Linux with multiprocess queue Message-ID: <1607339325.06.0.835049890466.issue42585@roundup.psfhosted.org> New submission from Stegle, Julien : Hi, I'm experiencing segmentation fault issues when running inside a Docker container (tested with python:3.8.6, python:3.8.6-slim, python:3.6.8 & python:3.6.8-slim). On windows everything works fine, but when running on Docker when I try to put a string into my queue I'm experiencing the following error: GNU gdb (Debian 8.2.1-2+b3) 8.2.1 Copyright (C) 2018 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: . Find the GDB manual and other documentation resources online at: . For help, type "help". Type "apropos word" to search for commands related to "word"... Reading symbols from python...(no debugging symbols found)...done. 
Starting program: /usr/local/bin/python -m ge.xxx.core.runners.exec basic reader /home/executor/xxx/exemple1.ini warning: Error disabling address space randomization: Operation not permitted [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". [Detaching after fork from child process 12] [New Thread 0x7f11f66c7700 (LWP 13)] [New Thread 0x7f11f3ec6700 (LWP 14)] [New Thread 0x7f11f16c5700 (LWP 15)] [New Thread 0x7f11eeec4700 (LWP 16)] [New Thread 0x7f11ec6c3700 (LWP 17)] [New Thread 0x7f11e9ec2700 (LWP 18)] [New Thread 0x7f11e76c1700 (LWP 19)] [New Thread 0x7f11e4ec0700 (LWP 20)] [New Thread 0x7f11e26bf700 (LWP 21)] [New Thread 0x7f11dfebe700 (LWP 22)] [New Thread 0x7f11df6bd700 (LWP 23)] [Thread 0x7f11e9ec2700 (LWP 18) exited] [Thread 0x7f11e76c1700 (LWP 19) exited] [Thread 0x7f11ec6c3700 (LWP 17) exited] [Thread 0x7f11eeec4700 (LWP 16) exited] [Thread 0x7f11f16c5700 (LWP 15) exited] [Thread 0x7f11f3ec6700 (LWP 14) exited] [Thread 0x7f11f66c7700 (LWP 13) exited] [Thread 0x7f11e26bf700 (LWP 21) exited] [Thread 0x7f11dfebe700 (LWP 22) exited] [Thread 0x7f11e4ec0700 (LWP 20) exited] [Thread 0x7f11df6bd700 (LWP 23) exited] [Detaching after fork from child process 24] [Detaching after fork from child process 25] [New Thread 0x7f11df6bd700 (LWP 26)] [New Thread 0x7f11dfebe700 (LWP 27)] [New Thread 0x7f11e26bf700 (LWP 28)] [New Thread 0x7f11e4ec0700 (LWP 29)] [New Thread 0x7f11cc893700 (LWP 30)] [New Thread 0x7f11ca092700 (LWP 31)] [New Thread 0x7f11c7891700 (LWP 32)] [New Thread 0x7f11c5090700 (LWP 33)] [New Thread 0x7f11c488f700 (LWP 34)] [New Thread 0x7f11c208e700 (LWP 35)] [New Thread 0x7f11bd88d700 (LWP 36)] [New Thread 0x7f11b5def700 (LWP 37)] [Thread 0x7f11e4ec0700 (LWP 29) exited] [Thread 0x7f11c7891700 (LWP 32) exited] [Thread 0x7f11ca092700 (LWP 31) exited] [Thread 0x7f11c488f700 (LWP 34) exited] [Thread 0x7f11c5090700 (LWP 33) exited] [Thread 0x7f11cc893700 (LWP 30) exited] [Thread 0x7f11e26bf700 (LWP 28) exited] [Thread 0x7f11dfebe700 (LWP 27) exited] [Thread 0x7f11df6bd700 (LWP 26) exited] [Thread 0x7f11bd88d700 (LWP 36) exited] [Thread 0x7f11c208e700 (LWP 35) exited] [Detaching after fork from child process 38] [Thread 0x7f11b5def700 (LWP 37) exited] 2020-12-07T09:16:12.760+0000 - ge.xxx.inputs - INFO - Filesystem input "ge.xxx.core.inputs.reader.dicom" initialized, target: /data/dicoms 2020-12-07T09:16:12.780+0000 - ge.xxx.parsers - INFO - Dicom data parser initialized 2020-12-07T09:16:12.918+0000 - alembic.env - INFO - Migrating database engine_data 2020-12-07T09:16:12.927+0000 - alembic.env - INFO - Migrating database engine_application 2020-12-07T09:16:12.933+0000 - alembic.env - INFO - Migrating database engine_enhancement 2020-12-07T09:16:12.956+0000 - ge.xxx.core - INFO - Duct 'default' initialized 2020-12-07T09:16:12.957+0000 - ge.xxx.inputs - INFO - Start filesystem input thread "ge.xxx.core.inputs.reader.dicom" [Detaching after fork from child process 39] [Detaching after fork from child process 40] 2020-12-07T09:16:13.639+0000 - ge.xxx.inputs - DEBUG - Start reading file system 2020-12-07T09:16:13.666+0000 - ge.xxx.inputs - DEBUG - File /data/dicoms/1/XXXXXXXXXXX.dcm did not match filters 2020-12-07T09:16:15.881+0000 - ge.xxx.inputs - DEBUG - ADDING DICOMREADER W/ TO QUEUE Fatal Python error: Segmentation fault Current thread 0x00007f0cb732f740 (most recent call first): File "/usr/local/lib/python3.6/multiprocessing/queues.py", line 82 in put File 
"/home/executor/ge.xxx.extractor/ge/xxx/extractor/input/filesystem/reader.py", line 36 in feed File "/home/executor/ge.xxx.duct/ge/xxx/duct/input/filesystem/reader.py", line 58 in scan File "/usr/local/lib/python3.6/multiprocessing/process.py", line 93 in run File "/usr/local/lib/python3.6/multiprocessing/process.py", line 258 in _bootstrap File "/usr/local/lib/python3.6/multiprocessing/spawn.py", line 118 in _main File "/usr/local/lib/python3.6/multiprocessing/spawn.py", line 105 in spawn_main File "", line 1 in ^C Thread 1 "python" received signal SIGINT, Interrupt. 0x00007f11fac97037 in __GI___select (nfds=0, readfds=0x0, writefds=0x0, exceptfds=0x0, timeout=0x7fff0c5d6970) at ../sysdeps/unix/sysv/linux/select.c:41 41 ../sysdeps/unix/sysv/linux/select.c: No such file or directory. (gdb) bt full #0 0x00007f11fac97037 in __GI___select (nfds=0, readfds=0x0, writefds=0x0, exceptfds=0x0, timeout=0x7fff0c5d6970) at ../sysdeps/unix/sysv/linux/select.c:41 resultvar = 18446744073709551102 sc_cancel_oldtype = 0 sc_ret = #1 0x00007f11fb11360e in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #2 0x00007f11fb03f079 in _PyCFunction_FastCallDict () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #3 0x00007f11fb08558d in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #4 0x00007f11fb07f49a in _PyEval_EvalFrameDefault () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #5 0x00007f11fb08585a in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #6 0x00007f11fb0856b1 in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #7 0x00007f11fb07f49a in _PyEval_EvalFrameDefault () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. --Type for more, q to quit, c to continue without paging--c #8 0x00007f11fb07eae5 in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #9 0x00007f11fb085c4b in _PyFunction_FastCallDict () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #10 0x00007f11fb0097be in _PyObject_FastCallDict () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #11 0x00007f11fb00a126 in _PyObject_Call_Prepend () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #12 0x00007f11fb009b07 in PyObject_Call () from /usr/local/lib/libpython3.6m.so.1.0 #8 0x00007f11fb07eae5 in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #9 0x00007f11fb085c4b in _PyFunction_FastCallDict () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #10 0x00007f11fb0097be in _PyObject_FastCallDict () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #11 0x00007f11fb00a126 in _PyObject_Call_Prepend () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #12 0x00007f11fb009b07 in PyObject_Call () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #13 0x00007f11fb0545ad in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #14 0x00007f11fb05236b in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #23 0x00007f11fb07e719 in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #24 0x00007f11fb0858e2 in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #25 0x00007f11fb0856b1 in ?? 
() from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #26 0x00007f11fb07f49a in _PyEval_EvalFrameDefault () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #27 0x00007f11fb07da81 in PyEval_EvalCodeEx () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #28 0x00007f11fb01fa23 in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #29 0x00007f11fb009b07 in PyObject_Call () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #30 0x00007f11fb1063a1 in ?? () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #31 0x00007f11fb10620b in Py_Main () from /usr/local/lib/libpython3.6m.so.1.0 No symbol table info available. #32 0x0000557b7762a37b in main () No symbol table info available. Faulty code: def feed(logger: logging.Logger, name: str, root: Path, filters: List[Filter], supported_extensions: List[str], queues: List[Queue]): if supported_extensions: for extension in supported_extensions: for filepath in root.rglob(f'*.{extension}'): if any([not x.validate(filepath) for x in filters]): logger.debug(f'File {filepath} did not match filters') continue for queue in queues: logger.debug(f'ADDING DICOMREADER W/ TO QUEUE {queue}') queue.put(str(filepath)) # CRASH HAPPENS HERE logger.debug('DONE ADDING TO QUEUE') logger.debug(f'[{name}] New file created and add to queues {filepath}') else: for filepath in root.rglob(f'*.*'): if any([not x.validate(filepath) for x in filters]): logger.debug(f'File {filepath} did not match filters') continue for queue in queues: logger.debug('ADDING DICOMREADER W/O TO QUEUE') queue.put(str(filepath)) logger.debug('DONE ADDING TO QUEUE') logger.debug(f'[{name}] New file created and add to queues {filepath}') def scan(logger_name: str, logging_configuration: str, name: str, feed: Callable[[logging.Logger, Path, List[Filter], List[str], List[Queue]], None], directory: str, filters: List[Filter], supported_extensions: List[str], queues: List[Queue], done_event: Event): if logging_configuration: setup_logging(logging_configuration) logger = logging.getLogger(logger_name) logger.debug('Start reading file system') root = Path(directory) feed(logger, name, root, filters, supported_extensions, queues) logger.info("All folder read. 
Reader function ends now") done_event.clear() Here's how the process is created: def get_executor(self, args: tuple = (), kwargs: dict = {}): return get_context("spawn").Process(target=self._executor_target, args=args, kwargs=kwargs, daemon=True) And here are the args used: from multiprocessing import Queue from ge.XXX import Filter class Filter(ABC): def __init__(self, logger_name: str, logging_configuration: str): self.logger_name = logger_name self.logging_configuration = logging_configuration def get_logger(self): return logging.getLogger(self.logger_name) def validate(self, data: Any) -> bool: raise NotImplementedError class Input(FileSystemBase): def __init__(self, logger_name: str, logging_configuration: str, name: str, output_queues: List[Queue], directory: str, filters: List[Filter] = [], supported_extensions: List[str] = [], feed: Callable[[logging.Logger, Path, List[Filter], List[str], List[Queue]], None] = feed): super().__init__(logger_name, logging_configuration, name, output_queues, directory, filters=filters) self._feed = feed self._executor_target = scan self._supported_extensions = supported_extensions self._executor = self.get_executor( args=(logger_name, logging_configuration, name, feed, directory, filters, supported_extensions, output_queues, self._running) ) After a quick lookup in the source code the issue seems to be related to this line in multiprocessing Queue: File "/usr/local/lib/python3.6/multiprocessing/queues.py", line 82 in put def put(self, obj, block=True, timeout=None): assert not self._closed if not self._sem.acquire(block, timeout): # THIS ONE raise Full with self._notempty: if self._thread is None: self._start_thread() self._buffer.append(obj) self._notempty.notify() Is there a specific Linux behavior that i'm missing here ? Just to be sure it wasn't coming from my consumer I didn't start the process consuming from the queue (in case it came from a Lock issue) so only this process runs and no other code is used for the test. Best regards, Julien Stegle ---------- messages: 382629 nosy: julien.stegle priority: normal severity: normal status: open title: Segmentation fault on Linux with multiprocess queue type: crash versions: Python 3.6, Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 08:31:34 2020 From: report at bugs.python.org (Valeriu Predoi) Date: Mon, 07 Dec 2020 13:31:34 +0000 Subject: [New-bugs-announce] [issue42586] unittest.mock.Mock spec can't be array/ndarray in Python 3.9 Message-ID: <1607347894.15.0.455977385218.issue42586@roundup.psfhosted.org> New submission from Valeriu Predoi : Hey guys, the new unittest.mock.Mock for Python 3.9 can not accept a spec arg if it is a numpy array, it'll accept all other types but not ndarray, have a look at this quick minimal code: import numpy as np from unittest import mock as mg ob = mg.Mock(spec=[4, 4]) print("spec is list, mock is", ob) ob = mg.Mock(spec=["4", "4"]) print("spec is list of str, mock is", ob) ob = mg.Mock(spec=(4, 4)) print("spec is tuple, mock is", ob) ob = mg.Mock(spec="cow") print("spec is string, mock is", ob) ob = mg.Mock(spec=22) print("spec is int, mock is", ob) ob = mg.Mock(spec=np.array([4, 4])) print("spec is ndarray, mock is", ob) versions: python 3.9.0 h2a148a8_4_cpython conda-forge pytest-mock 3.3.1 pypi_0 pypi Is this intended or it's a buggy-bug? Cheers muchly! 
V ---------- components: Tests messages: 382634 nosy: valeriupredoi priority: normal severity: normal status: open title: unittest.mock.Mock spec can't be array/ndarray in Python 3.9 type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 10:00:21 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 07 Dec 2020 15:00:21 +0000 Subject: [New-bugs-announce] [issue42587] test_buffer fails on Python built with GCC 11 Message-ID: <1607353221.74.0.500162368406.issue42587@roundup.psfhosted.org> New submission from STINNER Victor : PPC64LE Fedora Rawhide LTO 3.x: https://buildbot.python.org/all/#/builders/448/builds/433 FAIL: test_memoryview_cast (test.test_buffer.TestBufferProtocol) FAIL: test_memoryview_cast_1D_ND (test.test_buffer.TestBufferProtocol) FAIL: test_memoryview_compare_random_formats (test.test_buffer.TestBufferProtocol) FAIL: test_ndarray_format_shape (test.test_buffer.TestBufferProtocol) FAIL: test_ndarray_format_strides (test.test_buffer.TestBufferProtocol) FAIL: test_ndarray_getbuf (test.test_buffer.TestBufferProtocol) FAIL: test_ndarray_index_getitem_single (test.test_buffer.TestBufferProtocol) FAIL: test_ndarray_random (test.test_buffer.TestBufferProtocol) FAIL: test_ndarray_slice_assign_single (test.test_buffer.TestBufferProtocol) Example: FAIL: test_ndarray_slice_assign_single (test.test_buffer.TestBufferProtocol) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildbot/buildarea/3.x.cstratak-fedora-rawhide-ppc64le.lto/build/Lib/test/test_buffer.py", line 1836, in test_ndarray_slice_assign_single self.assertEqual(mv, nd) AssertionError: != ---------- components: Tests messages: 382643 nosy: vstinner priority: normal severity: normal status: open title: test_buffer fails on Python built with GCC 11 versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 10:13:10 2020 From: report at bugs.python.org (Ran Benita) Date: Mon, 07 Dec 2020 15:13:10 +0000 Subject: [New-bugs-announce] [issue42588] Improvements to graphlib.TopologicalSorter.static_order() documentation Message-ID: <1607353990.94.0.818492776719.issue42588@roundup.psfhosted.org> New submission from Ran Benita : One issue and one suggestion. Issue: The documentation of prepare() says: > If any cycle is detected, CycleError will be raised which is what happens. The documentation of static_order() says that static_order() is equivalent to: def static_order(self): self.prepare() while self.is_active(): node_group = self.get_ready() yield from node_group self.done(*node_group) specifically it is said to call self.prepare(), and also says > If any cycle is detected, CycleError will be raised. But, this only happens when the result of static_order is *iterated*, not when it's called, unlike what is suggested by the code and the comment. Ideally, I think the call should raise the CycleError already if possible; this way, only the call can be wrapped in a try/except instead of the entire iteration. But if not, it should be clarified in the documentation. Suggestion: The documentation of static_order() says > Returns an iterable of nodes in a topological order. Using this method does not require to call TopologicalSorter.prepare() or TopologicalSorter.done(). 
I think the wording "does not require" still implies that they *can* be called, but really they can't. If prepare() is called before static_order(), then when static_order() is iterated, "ValueError: cannot prepare() more than once" is raised. I suggest this wording: Returns an iterable of nodes in a topological order. When using this method, TopologicalSorter.prepare() and TopologicalSorter.done() should not be called. ---------- assignee: docs at python components: Documentation messages: 382647 nosy: bluetech, docs at python priority: normal severity: normal status: open title: Improvements to graphlib.TopologicalSorter.static_order() documentation versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 10:53:35 2020 From: report at bugs.python.org (Irit Katriel) Date: Mon, 07 Dec 2020 15:53:35 +0000 Subject: [New-bugs-announce] [issue42589] doc: Wrong "from" keyword link in Exceptions doc Message-ID: <1607356415.63.0.620558354489.issue42589@roundup.psfhosted.org> New submission from Irit Katriel : In the Exceptions doc: https://docs.python.org/3/library/exceptions.html#built-in-exceptions In the sentence "The expression following from must be an exception or None." "from" is a keyword which links to https://docs.python.org/3/reference/simple_stmts.html#from But that is related to the from in "from X import Y" rather than "raise X from Y". ---------- assignee: docs at python components: Documentation messages: 382649 nosy: docs at python, iritkatriel priority: normal severity: normal status: open title: doc: Wrong "from" keyword link in Exceptions doc type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 10:56:57 2020 From: report at bugs.python.org (Jakub Stasiak) Date: Mon, 07 Dec 2020 15:56:57 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue42590=5D_Something_like_Ru?= =?utf-8?q?st=27s_std=3A=3Async=3A=3AMutex_=E2=80=93_combining_a_mutex_pri?= =?utf-8?q?mitive_and_a_piece_of_data_it=27s_protecting?= Message-ID: <1607356617.55.0.274484392105.issue42590@roundup.psfhosted.org> New submission from Jakub Stasiak : I've been wondering if it's worth it to have something like Rust's std::sync::Mutex[1] which is used like this: let data = Mutex::new(0); { let mut unlocked = data.lock().unwrap(); *unlocked += 1; } // unlocked is no longer available here, we need to use data.lock() again Our (Python) [R]Lock is typically used like this: data_to_protect = whatever() lock = Lock() # ... with lock: data_to_protect.do_something() The inconvenience of this is obvious to me and it's more error prone if one forgets the lock when accessing data_to_protect. 
I wrote a quick prototype to get something like Mutex in Python: import threading from contextlib import contextmanager from typing import Any, cast, Dict, Generator, Generic, Optional, TypeVar T = TypeVar('T') class LockedData(Generic[T]): def __init__(self, data: T, lock: Any = None) -> None: self._data = data if lock is None: lock = threading.Lock() self._lock = lock @contextmanager def unlocked(self, timeout: float = -1.0) -> Generator[T, None, None]: acquired = None unlocked = None try: acquired = self._lock.acquire(timeout=timeout) if acquired is False: raise LockTimeout() unlocked = UnlockResult(self._data) yield unlocked finally: if acquired is True: if unlocked is not None: unlocked._unlocked = False self._data = unlocked._data unlocked._data = None self._lock.release() class UnlockResult(Generic[T]): _data: Optional[T] def __init__(self, data: T) -> None: self._data = data self._unlocked = True @property def data(self) -> T: assert self._unlocked return cast(T, self._data) @data.setter def data(self, data: T) -> None: assert self._unlocked self._data = data class LockTimeout(Exception): pass if __name__ == '__main__': locked_dict: LockedData[Dict[str, bool]] = LockedData({}) # Mutating the dictionary with locked_dict.unlocked() as result: result.data['hello'] = True with locked_dict.unlocked() as result: print(result.data) # Replacing the dictionary with locked_dict.unlocked() as result: result.data = {'a': True, 'b': False} with locked_dict.unlocked() as result: print(result.data) # Trying to access data after context closes print(result._data) print(result.data) Now this is obviously quite far from what Rust offers, as there's nothing to prevent a person from doing something like this: with locked_dict.unlocked() as result: data = result.data print('Oh no, look: %r' % (data,)) but it seems to me it's still an improvement. [1] https://doc.rust-lang.org/std/sync/struct.Mutex.html ---------- components: Library (Lib) messages: 382650 nosy: benjamin.peterson, jstasiak, pitrou, vstinner priority: normal severity: normal status: open title: Something like Rust's std::sync::Mutex ? combining a mutex primitive and a piece of data it's protecting type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 11:00:33 2020 From: report at bugs.python.org (Christian Bachmaier) Date: Mon, 07 Dec 2020 16:00:33 +0000 Subject: [New-bugs-announce] [issue42591] Method Py_FrozenMain missing in libpython3.9 Message-ID: <1607356833.89.0.498069278584.issue42591@roundup.psfhosted.org> New submission from Christian Bachmaier : In Python 3.9.0 and 3.9.1rc1 (Packages from Ubuntu Devel Branch or Fedora) the Method int Py_FrozenMain(int, char**) is missing in libpython3.9.so Thus, when trying the provided freeze example via freeze.py hello.py & make one gets the linker error /usr/bin/ld: frozen.o: in function `main': frozen/frozen.c:681: undefined reference to `Py_FrozenMain' . In previous Python 3.8.x the bug does not show. 
Thanks, Chris ---------- components: C API messages: 382651 nosy: chba priority: normal severity: normal status: open title: Method Py_FrozenMain missing in libpython3.9 versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 12:36:35 2020 From: report at bugs.python.org (Paul Bryan) Date: Mon, 07 Dec 2020 17:36:35 +0000 Subject: [New-bugs-announce] [issue42592] TypedDict: total=False but still key required Message-ID: <1607362595.41.0.679668494764.issue42592@roundup.psfhosted.org> New submission from Paul Bryan : I believe "a" below should be an optional key, not a required one. Python 3.9.0 (default, Oct 7 2020, 23:09:01) [GCC 10.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import typing >>> TD = typing.TypedDict("TD", {"a": str}, total=False) >>> TD.__total__ False >>> TD.__required_keys__ frozenset({'a'}) >>> TD.__optional_keys__ frozenset() >>> ---------- components: Library (Lib) messages: 382662 nosy: gvanrossum, pbryan priority: normal severity: normal status: open title: TypedDict: total=False but still key required type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 15:18:22 2020 From: report at bugs.python.org (syl-nktaylor) Date: Mon, 07 Dec 2020 20:18:22 +0000 Subject: [New-bugs-announce] [issue42593] Consistency in unicode string multiplication with an integer Message-ID: <1607372302.54.0.511899594666.issue42593@roundup.psfhosted.org> New submission from syl-nktaylor : In https://github.com/python/cpython/blob/master/Objects/unicodeobject.c#L12930, unicode_repeat does string multiplication with an integer in 3 different ways: 1) one memset call, for utf-8 when string size is 1 2) linear 'for' loops, for utf-16 and utf-32 when string size is 1 3) logarithmic 'while' loop with memcpy calls, for utf-8/utf-16/utf-32 when string size > 1 Is there a performance or correctness reason for which we can't also use the 3rd way for the second case? I realise depending on architecture, the second case could benefit from vectorization, but the memcpy calls will also be hardware optimised. 
An example of using just the 1st and 3rd methods: --- a/Objects/unicodeobject.c +++ b/Objects/unicodeobject.c @@ -12954,31 +12954,16 @@ unicode_repeat(PyObject *str, Py_ssize_t len) assert(PyUnicode_KIND(u) == PyUnicode_KIND(str)); - if (PyUnicode_GET_LENGTH(str) == 1) { - int kind = PyUnicode_KIND(str); - Py_UCS4 fill_char = PyUnicode_READ(kind, PyUnicode_DATA(str), 0); - if (kind == PyUnicode_1BYTE_KIND) { - void *to = PyUnicode_DATA(u); - memset(to, (unsigned char)fill_char, len); - } - else if (kind == PyUnicode_2BYTE_KIND) { - Py_UCS2 *ucs2 = PyUnicode_2BYTE_DATA(u); - for (n = 0; n < len; ++n) - ucs2[n] = fill_char; - } else { - Py_UCS4 *ucs4 = PyUnicode_4BYTE_DATA(u); - assert(kind == PyUnicode_4BYTE_KIND); - for (n = 0; n < len; ++n) - ucs4[n] = fill_char; - } - } - else { + Py_ssize_t char_size = PyUnicode_KIND(str); + char *to = (char *) PyUnicode_DATA(u); + if (PyUnicode_GET_LENGTH(str) == 1 && char_size == PyUnicode_1BYTE_KIND) { + Py_UCS4 fill_char = PyUnicode_READ(char_size, PyUnicode_DATA(str), 0); + memset(to, fill_char, len); + } else { /* number of characters copied this far */ Py_ssize_t done = PyUnicode_GET_LENGTH(str); - Py_ssize_t char_size = PyUnicode_KIND(str); - char *to = (char *) PyUnicode_DATA(u); memcpy(to, PyUnicode_DATA(str), PyUnicode_GET_LENGTH(str) * char_size); while (done < nchars) {... ---------- components: Unicode messages: 382681 nosy: ezio.melotti, syl-nktaylor, vstinner priority: normal severity: normal status: open title: Consistency in unicode string multiplication with an integer type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 17:40:05 2020 From: report at bugs.python.org (Bart Robinson) Date: Mon, 07 Dec 2020 22:40:05 +0000 Subject: [New-bugs-announce] [issue42594] Provide a way to skip magic-number search in ZipFile(mode='a') Message-ID: <1607380805.75.0.407394534884.issue42594@roundup.psfhosted.org> New submission from Bart Robinson : When a ZipFile object is created with mode='a', the existing file contents are checked for the magic number b"PK\005\006" near the end of the file. If a non-zipfile just happens to contain this magic number, it can confuse the library into assuming the file is a zipfile when it is not. It would be great if ZipFile.__init__() provided a way to skip the magic-number check and force a new central directory to be appended to the file. This could take the form of an additional named argument like ZipFile.__init__(force_append=True), or an additional character in the mode string, like 'a+'. Either of these options should be backward-compatible with existing code. Currently, my company has code that uses monkey-patching to work around the lack of this feature in the standard library. We use mode='a' to append metadata to files in existing formats that can contain arbitrary binary data and so occasionally include the magic number. 
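A self-contained illustration of the failure mode (hypothetical sketch; the scratch file name is made up):

```
# A plain binary file whose last 22 bytes happen to look like an empty
# end-of-central-directory record is accepted as an existing archive,
# instead of getting a fresh archive appended after the existing data.
import zipfile

with open("not_a_zip.bin", "wb") as f:                   # hypothetical scratch file
    f.write(b"arbitrary binary data" + b"PK\x05\x06" + b"\x00" * 18)

with zipfile.ZipFile("not_a_zip.bin", mode="a") as zf:    # no BadZipFile is raised
    print(zf.namelist())                                  # [] -- treated as a valid, empty zip
```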
---------- components: Library (Lib) messages: 382691 nosy: Bart Robinson priority: normal severity: normal status: open title: Provide a way to skip magic-number search in ZipFile(mode='a') type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 7 23:10:50 2020 From: report at bugs.python.org (NO_DANGER LacalBitcoins) Date: Tue, 08 Dec 2020 04:10:50 +0000 Subject: [New-bugs-announce] [issue42595] 5 * str Message-ID: <1607400650.61.0.129436529135.issue42595@roundup.psfhosted.org> New submission from NO_DANGER LacalBitcoins : 5 * 'abc' ??? ??????. ?????????! ---------- files: Screenshot_1.png messages: 382709 nosy: denisustinovweb priority: normal severity: normal status: open title: 5 * str type: crash Added file: https://bugs.python.org/file49659/Screenshot_1.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 04:22:04 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 08 Dec 2020 09:22:04 +0000 Subject: [New-bugs-announce] [issue42596] aarch64 Fedora Rawhide LTO + PGO 3.8 fails to build Python: /usr/bin/ld: error adding symbols: bad value Message-ID: <1607419324.76.0.598132356337.issue42596@roundup.psfhosted.org> New submission from STINNER Victor : aarch64 Fedora Rawhide LTO + PGO 3.8: https://buildbot.python.org/all/#/builders/528/builds/243 /usr/bin/ld: /lib64/libgcc_s.so.1: .gnu.version_r invalid entry /usr/bin/ld: /lib64/libgcc_s.so.1: error adding symbols: bad value collect2: error: ld returned 1 exit status make[3]: *** [Makefile:578: python] Error 1 make[3]: *** Waiting for unfinished jobs.... Is it another GCC 11 bug when using LTO and/or PGO? ---------- components: Build messages: 382718 nosy: vstinner priority: normal severity: normal status: open title: aarch64 Fedora Rawhide LTO + PGO 3.8 fails to build Python: /usr/bin/ld: error adding symbols: bad value versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 05:19:27 2020 From: report at bugs.python.org (Dominik V.) Date: Tue, 08 Dec 2020 10:19:27 +0000 Subject: [New-bugs-announce] [issue42597] Improve documentation of locals() w.r.t. "free variables" vs. global variables Message-ID: <1607422767.08.0.320752230073.issue42597@roundup.psfhosted.org> New submission from Dominik V. : The documentation of locals() mentions that: > Free variables are returned by locals() when it is called in function blocks [...] The term "free variable" is defined in the documentation about the execution model (https://docs.python.org/3/reference/executionmodel.html#binding-of-names): > If a variable is used in a code block but not defined there, it is a free variable. That definition includes global variables (and builtin ones), but these are not returned by locals(). For example compare the following: ``` x = 1 def foo(): # global x x = 1 def bar(): print(locals()) y = x bar() foo() ``` If the `global x` is commented then it prints {'x': 1}, and if it is uncommented it prints {}. The same holds for names of builtins. So the documentation of locals() could mention this in the following way (emphasis added): > Free variables *of enclosing functions* are returned by locals() when it is called in function blocks [...] 
----- There is also a StackOverflow question, that describes this confusion: https://stackoverflow.com/questions/12919278/how-to-define-free-variable-in-python By the way, would it be helpful to add the term "free variable" to the glossary (https://docs.python.org/3/glossary.html)? ---------- assignee: docs at python components: Documentation messages: 382721 nosy: Dominik V., docs at python priority: normal severity: normal status: open title: Improve documentation of locals() w.r.t. "free variables" vs. global variables type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 06:15:17 2020 From: report at bugs.python.org (Joshua Root) Date: Tue, 08 Dec 2020 11:15:17 +0000 Subject: [New-bugs-announce] [issue42598] Some configure checks rely on implicit function declaration Message-ID: <1607426117.25.0.975744797691.issue42598@roundup.psfhosted.org> New submission from Joshua Root : There are several cases in the configure script of exit being used without including stdlib.h, and one case of close being used without including unistd.h. A compiler that does not allow implicit function declaration, such as clang in Xcode 12, will error out due to this, which may cause the affected checks to give incorrect results. ---------- components: Build messages: 382726 nosy: jmr priority: normal severity: normal status: open title: Some configure checks rely on implicit function declaration type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 06:25:12 2020 From: report at bugs.python.org (hai shi) Date: Tue, 08 Dec 2020 11:25:12 +0000 Subject: [New-bugs-announce] [issue42599] Remove PyModule_GetWarningsModule() in pylifecycle.c Message-ID: <1607426712.85.0.240458410103.issue42599@roundup.psfhosted.org> New submission from hai shi : PyModule_GetWarningsModule() of pylifecycle.c is useless when _warning module was converted to a builtin module in 2.6. So move it directly now is OK. ---------- components: Interpreter Core messages: 382727 nosy: petr.viktorin, shihai1991, vstinner priority: normal severity: normal status: open title: Remove PyModule_GetWarningsModule() in pylifecycle.c versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 07:35:56 2020 From: report at bugs.python.org (Hynek Schlawack) Date: Tue, 08 Dec 2020 12:35:56 +0000 Subject: [New-bugs-announce] [issue42600] Cancelling tasks waiting for asyncio.Conditions crashes w/ RuntimeError: Lock is not acquired. Message-ID: <1607430956.17.0.934441920245.issue42600@roundup.psfhosted.org> New submission from Hynek Schlawack : This is something I've been procrastinating on for almost a year and working around it using my own version of asyncio.Condition because I wasn't sure how to describe it. So here's my best take: Consider the following code: ``` import asyncio async def tf(con): async with con: await asyncio.wait_for(con.wait(), 60) async def f(loop): con = asyncio.Condition() t = loop.create_task(tf(con)) await asyncio.sleep(1) t.cancel() async with con: con.notify_all() await t loop = asyncio.get_event_loop() loop.run_until_complete(f(loop)) ``` (I'm using old-school APIs because I wanted to verify whether it was a regression. 
I ran into the bug with new-style APIs: https://gist.github.com/hynek/387f44672722171c901b8422320e8f9b) `await t` will crash with: ``` Traceback (most recent call last): File "/Users/hynek/t.py", line 6, in tf await asyncio.wait_for(con.wait(), 60) File "/Users/hynek/.asdf/installs/python/3.9.0/lib/python3.9/asyncio/tasks.py", line 466, in wait_for await waiter asyncio.exceptions.CancelledError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hynek/t.py", line 24, in loop.run_until_complete(f(loop)) File "/Users/hynek/.asdf/installs/python/3.9.0/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete return future.result() File "/Users/hynek/t.py", line 20, in f await t File "/Users/hynek/t.py", line 6, in tf await asyncio.wait_for(con.wait(), 60) File "/Users/hynek/.asdf/installs/python/3.9.0/lib/python3.9/asyncio/locks.py", line 20, in __aexit__ self.release() File "/Users/hynek/.asdf/installs/python/3.9.0/lib/python3.9/asyncio/locks.py", line 146, in release raise RuntimeError('Lock is not acquired.') RuntimeError: Lock is not acquired. ``` If you replace wait_for with a simple await, it works and raises an asyncio.exceptions.CancelledError: ``` Traceback (most recent call last): File "/Users/hynek/t.py", line 6, in tf await con.wait() File "/Users/hynek/.asdf/installs/python/3.9.0/lib/python3.9/asyncio/locks.py", line 290, in wait await fut asyncio.exceptions.CancelledError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hynek/t.py", line 20, in f await t asyncio.exceptions.CancelledError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hynek/t.py", line 24, in loop.run_until_complete(f(loop)) File "/Users/hynek/.asdf/installs/python/3.9.0/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete return future.result() asyncio.exceptions.CancelledError ``` I have verified, that this has been broken at least since 3.5.10. The current 3.10.0a3 is affected too. ---------- components: asyncio messages: 382732 nosy: asvetlov, hynek, lukasz.langa, yselivanov priority: normal severity: normal status: open title: Cancelling tasks waiting for asyncio.Conditions crashes w/ RuntimeError: Lock is not acquired. versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 08:48:29 2020 From: report at bugs.python.org (Harvastum) Date: Tue, 08 Dec 2020 13:48:29 +0000 Subject: [New-bugs-announce] [issue42601] [doc] add more examples and additional explanation to re.sub Message-ID: <1607435309.34.0.33576150552.issue42601@roundup.psfhosted.org> New submission from Harvastum : This entire page: https://docs.python.org/3.10/library/re.html does not have a single occurrence of the word "lambda". In my humble opinion it's a pretty important trick to utilize capture groups in lambdas to e.g. use them to access value in a dictionary. Examples are available here: https://stackoverflow.com/a/18737927/6380791 and here: https://www.oreilly.com/library/view/python-cookbook/0596001673/ch03s15.html but somehow not in the documentation. There is a mention about referencing groups from different contexts, but the lambda is quite unique and although I think it does fall under "when processing match object m", I think it still is worth its own entry. Btw. 
it's my first contribution here, sorry for any faux pas I may have committed, please point it out if I did! ---------- assignee: docs at python components: Documentation messages: 382736 nosy: docs at python, harvastum priority: normal severity: normal status: open title: [doc] add more examples and additional explanation to re.sub type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 10:55:37 2020 From: report at bugs.python.org (Myungbae Son) Date: Tue, 08 Dec 2020 15:55:37 +0000 Subject: [New-bugs-announce] [issue42602] seekable() returns True on pipe objects in Windows Message-ID: <1607442937.17.0.755090753747.issue42602@roundup.psfhosted.org> New submission from Myungbae Son : >>> import os >>> r, w = os.pipe() >>> os.lseek(w, 10, 0) 10 >>> wf = open(w, 'w') >>> wf.seekable() True This happens on Windows. Consequently, seek() works on these objects, but it seems to be a no-op. This may confuse libraries that depend on seeking. The named pipe objects (via CreateNamedPipe -> open_osfhandle -> open()) exhibit the same behavior. ---------- components: IO messages: 382746 nosy: nedsociety priority: normal severity: normal status: open title: seekable() returns True on pipe objects in Windows type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 13:07:44 2020 From: report at bugs.python.org (Manolis Stamatogiannakis) Date: Tue, 08 Dec 2020 18:07:44 +0000 Subject: [New-bugs-announce] [issue42603] Tkinter: pkg-config is not used to get location of tcl and tk headers/libraries Message-ID: <1607450864.79.0.446942772761.issue42603@roundup.psfhosted.org> New submission from Manolis Stamatogiannakis : This is indirectly related to 42541: If pkg-config settings are honoured, it becomes easier to compile Python against a modern version of TCL/TK. ---------- components: Tkinter messages: 382756 nosy: m000 priority: normal severity: normal status: open title: Tkinter: pkg-config is not used to get location of tcl and tk headers/libraries type: compile error versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 16:53:22 2020 From: report at bugs.python.org (mattip) Date: Tue, 08 Dec 2020 21:53:22 +0000 Subject: [New-bugs-announce] [issue42604] EXT_SUFFIX too short on FreeBSD and AIX Message-ID: <1607464402.35.0.336960410788.issue42604@roundup.psfhosted.org> New submission from mattip : Continuation of bpo 39825, this time for FreeBSD and AIX. As commented there, the test added in the fix to 39825 fails on FreeBSD and AIX: FAIL: test_EXT_SUFFIX_in_vars (test.test_sysconfig.TestSysConfig) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/home/buildbot/python/3.x.koobs-freebsd-564d/build/Lib/test/test_sysconfig.py", line 368, in test_EXT_SUFFIX_in_vars self.assertEqual(vars['EXT_SUFFIX'], _imp.extension_suffixes()[0]) AssertionError: '.so' != '.cpython-310d.so' - .so + .cpython-310d.so So somehow EXT_SUFFIX is being set to .so rather than .cpython-310d.so.
It seems the difference in EXT_SUFFIX comes from this stanza in configure: case $ac_sys_system in Linux*|GNU*|Darwin|VxWorks) EXT_SUFFIX=.${SOABI}${SHLIB_SUFFIX};; *) EXT_SUFFIX=${SHLIB_SUFFIX};; esac where $ac_sys_system is `uname -s`. On FREEBSD, this is "FreeBSD", and I think on AIX it is "AIX". My preference would be to always set EXT_SUFFIX to ${SOABI}${SHLIB_SUFFIX}, with no option for setting it to a different value. Does that seem right? ---------- components: Build messages: 382767 nosy: mattip, vstinner priority: normal severity: normal status: open title: EXT_SUFFIX too short on FreeBSD and AIX versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 20:08:23 2020 From: report at bugs.python.org (Aleksey Vlasenko) Date: Wed, 09 Dec 2020 01:08:23 +0000 Subject: [New-bugs-announce] [issue42605] dir_util.copy_tree crashes if folder it previously created is removed Message-ID: <1607476103.38.0.0125735789387.issue42605@roundup.psfhosted.org> New submission from Aleksey Vlasenko : Minimal example: import os import shutil from distutils import dir_util shutil.rmtree('folder1') os.makedirs('folder1/folder2/folder3/') with open('folder1/folder2/folder3/data.txt', 'w') as fp: fp.write('hello') print(os.path.exists('folder1/new_folder2')) # -> prints false dir_util.copy_tree('folder1/folder2', 'folder1/new_folder2') # -> works shutil.rmtree('folder1/new_folder2') print(os.path.exists('folder1/new_folder2')) # -> prints false dir_util.copy_tree('folder1/folder2', 'folder1/new_folder2') # -> crashes --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /opt/conda/lib/python3.7/distutils/file_util.py in _copy_file_contents(src, dst, buffer_size) 40 try: ---> 41 fdst = open(dst, 'wb') 42 except OSError as e: FileNotFoundError: [Errno 2] No such file or directory: 'folder1/new_folder2/folder3/data.txt' dir_util caches folders it previously created in a static global variable _path_created which is a bad idea: https://github.com/python/cpython/blob/master/Lib/distutils/dir_util.py ---------- components: Distutils messages: 382782 nosy: dstufft, eric.araujo, vlasenkoalexey priority: normal severity: normal status: open title: dir_util.copy_tree crashes if folder it previously created is removed type: crash versions: Python 3.6, Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 22:12:46 2020 From: report at bugs.python.org (Alexey Izbyshev) Date: Wed, 09 Dec 2020 03:12:46 +0000 Subject: [New-bugs-announce] [issue42606] Support POSIX atomicity guarantee of O_APPEND on Windows Message-ID: <1607483566.95.0.1927829703.issue42606@roundup.psfhosted.org> New submission from Alexey Izbyshev : On POSIX-conforming systems, O_APPEND flag for open() must ensure that no intervening file modification occurs between changing the file offset and the write operation[1]. In effect, two processes that independently opened the same file with O_APPEND can't overwrite each other's data. On Windows, however, the Microsoft C runtime implements O_APPEND as an lseek(fd, 0, SEEK_END) followed by write(), which obviously doesn't provide the above guarantee. This affects both os.open() and the builtin open() Python functions, which rely on _wopen() from MSVCRT. A demo is attached. 
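(The attached demo is not reproduced in this digest; a minimal, hypothetical sketch of the kind of race it exercises might look like the following -- the scratch file name and record count are made up.)

```
# Two processes append to one file opened with O_APPEND. With the POSIX
# guarantee every record survives; with the lseek()+write() emulation
# described above, records from one process can overwrite the other's.
import os
from multiprocessing import Process

PATH = "append_race.txt"  # hypothetical scratch file

def writer(tag, count=1000):
    fd = os.open(PATH, os.O_WRONLY | os.O_APPEND | os.O_CREAT)
    try:
        for i in range(count):
            os.write(fd, f"{tag}:{i}\n".encode())
    finally:
        os.close(fd)

if __name__ == "__main__":
    if os.path.exists(PATH):
        os.unlink(PATH)
    procs = [Process(target=writer, args=(t,)) for t in ("A", "B")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    with open(PATH, "rb") as f:
        records = f.read().splitlines()
    print(len(records), "records survived; 2000 expected if appends are atomic")
```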
While POSIX O_APPEND doesn't guarantee the absence of partial writes, the guarantee of non-overlapping writes alone is still useful in cases such as debug logging from multiple processes without file locking or other synchronization. Moreover, for local filesystems, partial writes don't really occur in practice (barring conditions such as ENOSPC or EIO). Windows offers two ways to achieve non-overlapping appends: 1. WriteFile()[2] with OVERLAPPED structure with Offset and OffsetHigh set to -1. This is essentially per-write O_APPEND. 2. Using a file handle with FILE_APPEND_DATA access right but without FILE_WRITE_DATA access right. While (1) seems easy to add to FileIO, there are some problems: * os.write(fd) can't use it without caller's help because it has no way to know that the fd was opened with O_APPEND (there is no fcntl() in MSVCRT) * write() from MSVCRT (currently used by FileIO) performs some additional remapping of error codes and checks after it calls WriteFile(), so we'd have to emulate that behavior or risk breaking compatibility. I considered to go for (2) by reimplementing _wopen() via CreateFile(), which would also allow us to solve a long-standing issue of missing FILE_SHARE_DELETE on file handles, but hit several problems: * the most serious one is rather silly: we need to honor the current umask to possibly create a read-only file, but there is no way to query it without changing it, which is not thread-safe. Well, actually, I did discover a way: _umask_s(), when called with an invalid mask, returns both EINVAL error and the current umask. But this behavior directly contradicts MSDN, which claims that _umask_s() doesn't modify its second argument on failure[3]. So I'm not willing to rely on this until Microsoft fixes their docs. * os module exposes some MSVCRT-specific flags for use with os.open() (like O_TEMPORARY), which a reimplementation would have to support. It seems easy in most cases, but there is O_TEXT, which enables some obscure legacy behavior in MSVCRT such as removal of a trailing byte 26 (Ctrl-Z) when a file is opened with O_RDWR. More generally, it's unclear to me whether os.open() is explicitly intended to be a gateway to MSVCRT and thus support all current and future flags or is just expected to work similarly to MSVCRT in common cases. So in the end I decided to let _wopen() create the initial fd as usual, but then fix it up via DuplicateHandle() -- see the PR. [1] https://pubs.opengroup.org/onlinepubs/9699919799/functions/write.html [2] https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-writefile [3] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/umask-s?view=msvc-160 ---------- components: IO, Windows files: test.py messages: 382784 nosy: eryksun, izbyshev, paul.moore, steve.dower, tim.golden, vstinner, zach.ware priority: normal severity: normal status: open title: Support POSIX atomicity guarantee of O_APPEND on Windows type: enhancement versions: Python 3.10 Added file: https://bugs.python.org/file49661/test.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 8 22:18:11 2020 From: report at bugs.python.org (=?utf-8?b?5a6J6L+3?=) Date: Wed, 09 Dec 2020 03:18:11 +0000 Subject: [New-bugs-announce] [issue42607] raw strings SyntaxError Message-ID: <1607483891.66.0.214609891347.issue42607@roundup.psfhosted.org> New submission from ?? 
: [test at test ~]# python3 -c 'print(r"\n")' \n [test at test ~]# python3 -c 'print(r"n\")' File "<string>", line 1 print(r"n\") ^ SyntaxError: EOL while scanning string literal ---------- components: Windows messages: 382785 nosy: anmikf, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: raw strings SyntaxError type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 9 04:14:29 2020 From: report at bugs.python.org (ali) Date: Wed, 09 Dec 2020 09:14:29 +0000 Subject: [New-bugs-announce] [issue42608] Installation failed from source code on Debian ([307/416] test_socket) Message-ID: <1607505269.45.0.398750468222.issue42608@roundup.psfhosted.org> New submission from ali : I'm following the guide below to install Python 3.7 from source on 64-bit Debian. https://linuxize.com/post/how-to-install-python-3-7-on-ubuntu-18-04/ I'm installing Python 3.7.9 (final, 64-bit). But `make -j 8` hung for more than 2 hours on: 0:22:43 load avg: 1.29 [307/416] test_socket How can I fix this? ---------- components: Build messages: 382790 nosy: alimp5 priority: normal severity: normal status: open title: Installation failed from source code on Debian ([307/416] test_socket) type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 9 06:05:07 2020 From: report at bugs.python.org (Erik Lamers) Date: Wed, 09 Dec 2020 11:05:07 +0000 Subject: [New-bugs-announce] [issue42609] Eval with two high string multiplication crashes newer Python versions Message-ID: <1607511907.66.0.321451468972.issue42609@roundup.psfhosted.org> New submission from Erik Lamers : For Python versions 3.7 and above, the following statement ends in a segfault. eval("1 + 100"*1000000) Python versions 3.6 and below, by contrast, treat this as a RecursionError. ---------- components: Interpreter Core messages: 382791 nosy: Erik-Lamers1 priority: normal severity: normal status: open title: Eval with two high string multiplication crashes newer Python versions type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 9 08:39:46 2020 From: report at bugs.python.org (dimitri.wei) Date: Wed, 09 Dec 2020 13:39:46 +0000 Subject: [New-bugs-announce] [issue42610] Get the type of from a var Message-ID: <1607521186.12.0.817710588965.issue42610@roundup.psfhosted.org> New submission from dimitri.wei : **Feature** A similar feature exists in TypeScript: ```ts const foo: number = 1 type Foo = typeof foo // type Foo = number function bar(x: string): void { } type Bar = typeof bar // type Bar = (x: string) => void ``` **Pitch** The expected usage in a future version of Python: ```py from typing import Type foo: int = 1 Foo = Type[foo] # equivalent to Foo = int def bar(x: str) -> None: ...
Bar = Type[bar] # equivalent to Bar = Callable[[str], None] ``` ---------- components: Demos and Tools messages: 382792 nosy: wlf100220 priority: normal severity: normal status: open title: Get the type of from a var type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 9 23:51:12 2020 From: report at bugs.python.org (Kyle Stanley) Date: Thu, 10 Dec 2020 04:51:12 +0000 Subject: [New-bugs-announce] [issue42611] PEP 594 Message-ID: <1607575872.07.0.562391698279.issue42611@roundup.psfhosted.org> New submission from Kyle Stanley : This issue was created for the purpose of tracking any changes related to PEP 594 (Removing dead batteries from stdlib), and any relevant discussions about the modules being removed. ---------- components: Library (Lib) messages: 382815 nosy: aeros, christian.heimes priority: normal pull_requests: 22588 severity: normal status: open title: PEP 594 type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 10 02:20:03 2020 From: report at bugs.python.org (Deepanshu Garg) Date: Thu, 10 Dec 2020 07:20:03 +0000 Subject: [New-bugs-announce] [issue42612] Software Designer Message-ID: <1607584803.23.0.875978893674.issue42612@roundup.psfhosted.org> New submission from Deepanshu Garg : I am calling and executing a Python script from C++ code using "PyRun_SimpleFile". This API is being called from Windows threads, and every thread initializes and finalizes the Python interpreter. However, it works fine for a single thread, but if I initiate a request immediately after the first one returns its results, it does not work. I tried using PyGILState_STATE as well, but it caused an access violation and the application crashed ---------- components: C API messages: 382816 nosy: deepanshugarg09 priority: normal severity: normal status: open title: Software Designer type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 10 05:01:58 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 10 Dec 2020 10:01:58 +0000 Subject: [New-bugs-announce] [issue42613] freeze.py doesn't support sys.platlibdir different than lib nor multiarch Message-ID: <1607594518.8.0.0823964738983.issue42613@roundup.psfhosted.org> New submission from STINNER Victor : Tools/freeze/freeze.py supports neither a config directory that uses multiarch nor a sys.platlibdir different from "lib". In short, it doesn't work on Fedora 33, which uses: /usr/lib64/python3.10/config-3.10-x86_64-linux-gnu/ It might be possible to copy/paste the code creating the config-xxx path from Lib/sysconfig.py to Tools/freeze/freeze.py, but I would prefer to reuse code if possible, to make the code more sustainable. Maybe we can add a private function to get the path to the "config" directory. Or even a public function.
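For reference, sysconfig already exposes this directory on POSIX builds, so reusing it may be one option (a sketch; whether freeze.py can rely on it at build time is a separate question):

```
# LIBPL points at the config-X.Y[-<multiarch>] directory on POSIX builds,
# e.g. /usr/lib64/python3.10/config-3.10-x86_64-linux-gnu on Fedora 33.
import sysconfig

print(sysconfig.get_config_var("LIBPL"))
```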
---------- components: Unicode messages: 382821 nosy: ezio.melotti, vstinner priority: normal severity: normal status: open title: freeze.py doesn't support sys.platlibdir different than lib nor multiarch versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 10 06:32:57 2020 From: report at bugs.python.org (Mihail Kirilov) Date: Thu, 10 Dec 2020 11:32:57 +0000 Subject: [New-bugs-announce] =?utf-8?q?=5Bissue42614=5D_Pathlib_does_not_?= =?utf-8?q?support_a_Cyrillic_character_=27=D0=B9=27?= Message-ID: <1607599977.07.0.665981519829.issue42614@roundup.psfhosted.org> New submission from Mihail Kirilov : I have a file with a Cirilyc name - "???? ?? ?????????", which when I load with path.Path and call name on it behaves differently ``` (Pdb) pathlib.Path("/tmp/pytest-of-root/pytest-15/test_bulgarian_name0/data/encoding/???? ?? ?????????.ldr").name '???? ?? ?????????.ldr' (Pdb) pathlib.Path("/tmp/pytest-of-root/pytest-15/test_bulgarian_name0/data/encoding/???? ?? ?????????.ldr").name[2] '?' (Pdb) pathlib.Path("/tmp/pytest-of-root/pytest-15/test_bulgarian_name0/data/encoding/???? ?? ?????????.ldr").name == "???? ?? ?????????" False ``` ---------- components: Unicode messages: 382823 nosy: ezio.melotti, hidr0.frbg, vstinner priority: normal severity: normal status: open title: Pathlib does not support a Cyrillic character '?' type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 10 11:47:13 2020 From: report at bugs.python.org (Om G) Date: Thu, 10 Dec 2020 16:47:13 +0000 Subject: [New-bugs-announce] [issue42615] Redundant jump instructions due to deleted unreachable bytecode blocks Message-ID: <1607618833.17.0.897209602583.issue42615@roundup.psfhosted.org> New submission from Om G : During optimization, the compiler deletes blocks that are marked as unreachable. In doing so, it can render jump instructions that used to jump over the now-deleted blocks redundant, since simply falling through to the next non-empty block is now equivalent. An example of a place where this occurs is around "if condition: statement; else: break" style structures (see attached proof of concept code below), but this is a general case and could occur in other places. Tested on the latest 3.10 branch including all recent compile.c changes. ---------- files: jmptest.py messages: 382834 nosy: OmG priority: normal severity: normal status: open title: Redundant jump instructions due to deleted unreachable bytecode blocks type: performance versions: Python 3.10 Added file: https://bugs.python.org/file49664/jmptest.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 10 15:21:24 2020 From: report at bugs.python.org (Tom Birch) Date: Thu, 10 Dec 2020 20:21:24 +0000 Subject: [New-bugs-announce] [issue42616] C Extensions on Darwin that link against libpython are likely to crash Message-ID: <1607631684.88.0.776282381427.issue42616@roundup.psfhosted.org> New submission from Tom Birch : After https://github.com/python/cpython/pull/12946, there exists an issue on MacOS due to the two-level namespace for symbol resolution. If a C extension links against libpython.dylib, all symbols dependencies from the Python C API will be bound to libpython. 
When the C extension is loaded into a process launched by running the `python` binary, there will be two active copies of the python runtime. This can lead to crashes if objects from one runtime are used by the other. https://developer.apple.com/library/archive/documentation/Porting/Conceptual/PortingUnix/compiling/compiling.html#//apple_ref/doc/uid/TP40002850-BCIHJBBF See issue/test case here: https://github.com/PDAL/python/pull/76 ---------- components: C API, macOS messages: 382841 nosy: froody, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: C Extensions on Darwin that link against libpython are likely to crash type: crash versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 10 16:40:40 2020 From: report at bugs.python.org (Karl Nelson) Date: Thu, 10 Dec 2020 21:40:40 +0000 Subject: [New-bugs-announce] [issue42617] Enhancement request for PyType_FromSpecWIthBases add option for meta class Message-ID: <1607636440.03.0.352368649598.issue42617@roundup.psfhosted.org> New submission from Karl Nelson : PyType_FromSpecWithBases is missing an argument for taking a meta class. As a result it is necessary to replicate a large portion of Python code when I need to create a new heap type with a specified meta class. This is a maintenance issue as replicating Python code is likely to get broken in future. I have replicated to code from JPype so that it is clear how the meta class is coming into place. PyJPClass_Type is derived from a PyHeapType with additional slots for Java to use and overrides allocation slots to that it can add extra slots (needs true multiple inheritance as it needed to add Java slots to types with mismatching memory layouts object, long, float, exception, ...). This code could be made much safer if there were a PyType_FromSpecWithBasesMeta which used the meta class to allocate the memory (then I could safely override the slots after creation). ``` PyObject* PyJPClass_FromSpecWithBases(PyType_Spec *spec, PyObject *bases) { JP_PY_TRY("PyJPClass_FromSpecWithBases"); // Python lacks a FromSpecWithMeta so we are going to have to fake it here. 
PyTypeObject* type = (PyTypeObject*) PyJPClass_Type->tp_alloc(PyJPClass_Type, 0); // <== we need to use the meta class here PyHeapTypeObject* heap = (PyHeapTypeObject*) type; type->tp_flags = spec->flags | Py_TPFLAGS_HEAPTYPE | Py_TPFLAGS_HAVE_GC; type->tp_name = spec->name; const char *s = strrchr(spec->name, '.'); if (s == NULL) s = spec->name; else s++; heap->ht_qualname = PyUnicode_FromString(s); heap->ht_name = heap->ht_qualname; Py_INCREF(heap->ht_name); if (bases == NULL) type->tp_bases = PyTuple_Pack(1, (PyObject*) & PyBaseObject_Type); else { type->tp_bases = bases; Py_INCREF(bases); } type->tp_base = (PyTypeObject*) PyTuple_GetItem(type->tp_bases, 0); Py_INCREF(type->tp_base); type->tp_as_async = &heap->as_async; type->tp_as_buffer = &heap->as_buffer; type->tp_as_mapping = &heap->as_mapping; type->tp_as_number = &heap->as_number; type->tp_as_sequence = &heap->as_sequence; type->tp_basicsize = spec->basicsize; if (spec->basicsize == 0) type->tp_basicsize = type->tp_base->tp_basicsize; type->tp_itemsize = spec->itemsize; if (spec->itemsize == 0) type->tp_itemsize = type->tp_base->tp_itemsize; // <=== Replicated code from the meta class type->tp_alloc = PyJPValue_alloc; type->tp_free = PyJPValue_free; type->tp_finalize = (destructor) PyJPValue_finalize; // <= Replicated code from Python for (PyType_Slot* slot = spec->slots; slot->slot; slot++) { switch (slot->slot) { case Py_tp_free: type->tp_free = (freefunc) slot->pfunc; break; case Py_tp_new: type->tp_new = (newfunc) slot->pfunc; break; case Py_tp_init: type->tp_init = (initproc) slot->pfunc; break; case Py_tp_getattro: type->tp_getattro = (getattrofunc) slot->pfunc; break; case Py_tp_setattro: type->tp_setattro = (setattrofunc) slot->pfunc; break; case Py_tp_dealloc: type->tp_dealloc = (destructor) slot->pfunc; break; case Py_tp_str: type->tp_str = (reprfunc) slot->pfunc; break; case Py_tp_repr: type->tp_repr = (reprfunc) slot->pfunc; break; case Py_tp_methods: type->tp_methods = (PyMethodDef*) slot->pfunc; break; case Py_sq_item: heap->as_sequence.sq_item = (ssizeargfunc) slot->pfunc; break; case Py_sq_length: heap->as_sequence.sq_length = (lenfunc) slot->pfunc; break; case Py_mp_ass_subscript: heap->as_mapping.mp_ass_subscript = (objobjargproc) slot->pfunc; break; case Py_tp_hash: type->tp_hash = (hashfunc) slot->pfunc; break; case Py_nb_int: heap->as_number.nb_int = (unaryfunc) slot->pfunc; break; case Py_nb_float: heap->as_number.nb_float = (unaryfunc) slot->pfunc; break; case Py_tp_richcompare: type->tp_richcompare = (richcmpfunc) slot->pfunc; break; case Py_mp_subscript: heap->as_mapping.mp_subscript = (binaryfunc) slot->pfunc; break; case Py_nb_index: heap->as_number.nb_index = (unaryfunc) slot->pfunc; break; case Py_nb_absolute: heap->as_number.nb_absolute = (unaryfunc) slot->pfunc; break; case Py_nb_and: heap->as_number.nb_and = (binaryfunc) slot->pfunc; break; case Py_nb_or: heap->as_number.nb_or = (binaryfunc) slot->pfunc; break; case Py_nb_xor: heap->as_number.nb_xor = (binaryfunc) slot->pfunc; break; case Py_nb_add: heap->as_number.nb_add = (binaryfunc) slot->pfunc; break; case Py_nb_subtract: heap->as_number.nb_subtract = (binaryfunc) slot->pfunc; break; case Py_nb_multiply: heap->as_number.nb_multiply = (binaryfunc) slot->pfunc; break; case Py_nb_rshift: heap->as_number.nb_rshift = (binaryfunc) slot->pfunc; break; case Py_nb_lshift: heap->as_number.nb_lshift = (binaryfunc) slot->pfunc; break; case Py_nb_negative: heap->as_number.nb_negative = (unaryfunc) slot->pfunc; break; case Py_nb_bool: heap->as_number.nb_bool 
= (inquiry) slot->pfunc; break; case Py_nb_invert: heap->as_number.nb_invert = (unaryfunc) slot->pfunc; break; case Py_nb_positive: heap->as_number.nb_positive = (unaryfunc) slot->pfunc; break; case Py_nb_floor_divide: heap->as_number.nb_floor_divide = (binaryfunc) slot->pfunc; break; case Py_nb_divmod: heap->as_number.nb_divmod = (binaryfunc) slot->pfunc; break; case Py_tp_getset: type->tp_getset = (PyGetSetDef*) slot->pfunc; break; // GCOVR_EXCL_START default: PyErr_Format(PyExc_TypeError, "slot %d not implemented", slot->slot); JP_RAISE_PYTHON(); // GCOVR_EXCL_STOP } } PyType_Ready(type); PyDict_SetItemString(type->tp_dict, "__module__", PyUnicode_FromString("_jpype")); return (PyObject*) type; JP_PY_CATCH(NULL); // GCOVR_EXCL_LINE } ``` ---------- components: C API messages: 382850 nosy: Thrameos priority: normal severity: normal status: open title: Enhancement request for PyType_FromSpecWIthBases add option for meta class type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 10 16:49:26 2020 From: report at bugs.python.org (Karl Nelson) Date: Thu, 10 Dec 2020 21:49:26 +0000 Subject: [New-bugs-announce] [issue42618] Enhancement request for importing stacktraces from foreign sources Message-ID: <1607636966.94.0.276935236028.issue42618@roundup.psfhosted.org> New submission from Karl Nelson : In JPype, I am transfer stack information from Java into Python for diagnostics and display purposed. Unfortunately, as the exception system is directly accessing traceback structure elements this cannot be replicated without creating traceback structures in C. I have thus been forced to create new methods that create this using internal Python structures. It would be much better if there were C API method to allow importing a foreign stack trace into Python. Here is an example of the code used in JPype for reference. ``` // Transfer list of filenames, functions and lines to Python. PyObject* PyTrace_FromJPStackTrace(JPStackTrace& trace) { PyTracebackObject *last_traceback = NULL; PyObject *dict = PyModule_GetDict(PyJPModule); for (JPStackTrace::iterator iter = trace.begin(); iter != trace.end(); ++iter) { last_traceback = tb_create(last_traceback, dict, iter->getFile(), iter->getFunction(), iter->getLine()); } if (last_traceback == NULL) Py_RETURN_NONE; return (PyObject*) last_traceback; } PyTracebackObject *tb_create( PyTracebackObject *last_traceback, PyObject *dict, const char* filename, const char* funcname, int linenum) { // Create a code for this frame. PyCodeObject *code = PyCode_NewEmpty(filename, funcname, linenum); // Create a frame for the traceback. 
PyFrameObject *frame = (PyFrameObject*) PyFrame_Type.tp_alloc(&PyFrame_Type, 0); frame->f_back = NULL; if (last_traceback != NULL) { frame->f_back = last_traceback->tb_frame; Py_INCREF(frame->f_back); } frame->f_builtins = dict; Py_INCREF(frame->f_builtins); frame->f_code = (PyCodeObject*) code; frame->f_executing = 0; frame->f_gen = NULL; frame->f_globals = dict; Py_INCREF(frame->f_globals); frame->f_iblock = 0; frame->f_lasti = 0; frame->f_lineno = 0; frame->f_locals = NULL; frame->f_localsplus[0] = 0; frame->f_stacktop = NULL; frame->f_trace = NULL; frame->f_valuestack = 0; #if PY_VERSION_HEX>=0x03070000 frame->f_trace_lines = 0; frame->f_trace_opcodes = 0; #endif // Create a traceback PyTracebackObject *traceback = (PyTracebackObject*) PyTraceBack_Type.tp_alloc(&PyTraceBack_Type, 0); traceback->tb_frame = frame; traceback->tb_lasti = frame->f_lasti; traceback->tb_lineno = linenum; traceback->tb_next = last_traceback; return traceback; } ``` ---------- components: C API messages: 382852 nosy: Thrameos priority: normal severity: normal status: open title: Enhancement request for importing stacktraces from foreign sources type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 11 05:34:20 2020 From: report at bugs.python.org (xz_sophos) Date: Fri, 11 Dec 2020 10:34:20 +0000 Subject: [New-bugs-announce] [issue42619] python 3.9.1 failed to create .so files as universal2 (arm64, x86_64) on Mac Message-ID: <1607682860.38.0.598588089863.issue42619@roundup.psfhosted.org> New submission from xz_sophos : With 3.9.1 source code, I am trying to make univerisal2 build of python.framework from the source code. The problem I am seeing is the main python binary is universal, but all the .so files in lib-dynload (Python.framework/Versions/3.9/lib/python3.9/lib-dynload/) are only x86_64. The Mac is MacOS 10.15, with Xcode 12.2 installed. Xcode12.2 has SDKs to support universal build for both arm64 and x86_64 architecture. I have no problems making other universal applications and frameworks on the machine. Following the documentation. I ran the following build command: ./configure --enable-universalsdk --enable-framework=./tmp --with-universal-archs=universal2 --without-ensurepip make make install The resulting python was universal but the .so files are not. ---------- components: macOS messages: 382861 nosy: ned.deily, ronaldoussoren, xz_sophos priority: normal severity: normal status: open title: python 3.9.1 failed to create .so files as universal2 (arm64, x86_64) on Mac type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 11 06:49:59 2020 From: report at bugs.python.org (Rick van Rein) Date: Fri, 11 Dec 2020 11:49:59 +0000 Subject: [New-bugs-announce] [issue42620] documentation on `getsockname()` wrong for AF_INET6 Message-ID: <1607687399.41.0.179840129913.issue42620@roundup.psfhosted.org> New submission from Rick van Rein : Shown in the session below is unexpected output of a 4-tuple from an AF_INET6 socket along with documentation that *suggests* to expect a 2-tuple. The phrasing "IP" might have to be toned down to "IPv4" or "AF_INET" to be accurate enough to avoid confusion. Opinion: I think you should be explicit about the different behaviour for AF_INET6, so it is not reduced to a special/nut case for special interest groups. 
IPv6 has a hard enough time getting in; different formats for AF_INET and AF_INET6 should ideally be shown to all programmers, to at least avoid *uninformed* decisions to be incompatible with IPv6 while they develop on an IPv4 system (and the same in the opposite direction). Python 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> sox6 = socket.socket (socket.AF_INET6) >>> sox6.getsockname () ('::', 0, 0, 0) >>> sox6.getsockname.__doc__ 'getsockname() -> address info\n\nReturn the address of the local endpoint. For IP sockets, the address\ninfo is a pair (hostaddr, port).' ---------- components: Library (Lib) messages: 382863 nosy: vanrein priority: normal severity: normal status: open title: documentation on `getsockname()` wrong for AF_INET6 type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 11 07:30:59 2020 From: report at bugs.python.org (Avinash Gaur) Date: Fri, 11 Dec 2020 12:30:59 +0000 Subject: [New-bugs-announce] [issue42621] Python IDLE no longer opens after clicking on its icon Message-ID: <1607689859.93.0.151638298109.issue42621@roundup.psfhosted.org> New submission from Avinash Gaur <8962avi at gmail.com>: I was able to use python IDLE earlier. But when I tried to open now, I was unable to open Python 3.7 IDLE. I have tried uninstalling and reinstalling Python(different versions) and deleting the .idlerc folder. I am using Windows 10. ---------- assignee: terry.reedy components: IDLE messages: 382864 nosy: 8962avi, terry.reedy priority: normal severity: normal status: open title: Python IDLE no longer opens after clicking on its icon type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 11 09:06:00 2020 From: report at bugs.python.org (Bohdan Borkivskyi) Date: Fri, 11 Dec 2020 14:06:00 +0000 Subject: [New-bugs-announce] [issue42622] Add support to add existing parser to ArgumentParser.subparsers Message-ID: <1607695560.13.0.222854382376.issue42622@roundup.psfhosted.org> New submission from Bohdan Borkivskyi : Currently, there is only a possibility to create empty parser as subparser - argparse.py, line 1122 The purpose of issue is to add support for existing parser to be added as subparser ---------- components: Library (Lib) messages: 382867 nosy: borkivskyi priority: normal severity: normal status: open title: Add support to add existing parser to ArgumentParser.subparsers type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 11 21:31:43 2020 From: report at bugs.python.org (Pratik Palwai) Date: Sat, 12 Dec 2020 02:31:43 +0000 Subject: [New-bugs-announce] [issue42623] Syntax Error showing pointer in wrong location Message-ID: <1607740303.61.0.556483220394.issue42623@roundup.psfhosted.org> New submission from Pratik Palwai <1051371 at student.auhsd.us>: When I get a syntax error, it shows the line with the error, and then a little '^' on the next line, showing where the error is. However, the location of the '^' is a little misleading, since it isn't in the right spot. For example (not sure if it will format correctly though): elif self.left == None and self.right =! None: ^ The symbol should be under the +! 
instead of under the 'and.' Also, this problem doesn't occur with the font Courier. ---------- assignee: terry.reedy components: IDLE messages: 382900 nosy: 1051371, terry.reedy priority: normal severity: normal status: open title: Syntax Error showing pointer in wrong location type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 12 03:47:49 2020 From: report at bugs.python.org (LIU Qingyuan) Date: Sat, 12 Dec 2020 08:47:49 +0000 Subject: [New-bugs-announce] [issue42624] sqlite3 package document mistake Message-ID: <1607762869.89.0.42414804314.issue42624@roundup.psfhosted.org> New submission from LIU Qingyuan : In the documentation for the sqlite3 package, it is suggested that when users try to create a table that already exists, sqlite3.ProgrammingError is thrown. However, the actual exception thrown is sqlite3.OperationalError, which is inconsistent with the document. Doc: https://docs.python.org/3/library/sqlite3.html#exceptions ---------- assignee: docs at python components: Documentation messages: 382904 nosy: docs at python, seeker-Liu priority: normal severity: normal status: open title: sqlite3 package document mistake type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 12 10:09:41 2020 From: report at bugs.python.org (hai shi) Date: Sat, 12 Dec 2020 15:09:41 +0000 Subject: [New-bugs-announce] [issue42625] Segmentation fault of PyState_AddModule() Message-ID: <1607785781.22.0.676457210715.issue42625@roundup.psfhosted.org> New submission from hai shi : PyState_AddModule(NULL, &_testcapimodule) in the _testcapi module causes a segmentation fault. Since it is a public C API, a little hardening would be better. ---------- assignee: shihai1991 components: C API messages: 382915 nosy: petr.viktorin, shihai1991, vstinner priority: normal severity: normal status: open title: Segmentation fault of PyState_AddModule() type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 12 12:51:50 2020 From: report at bugs.python.org (Joakim Nilsson) Date: Sat, 12 Dec 2020 17:51:50 +0000 Subject: [New-bugs-announce] [issue42626] readline history, vi-editingmode and ANSI color codes bug Message-ID: <1607795510.65.0.341487246406.issue42626@roundup.psfhosted.org> New submission from Joakim Nilsson : Tested on Debian Bullseye with Python 3.9. If 'set editing-mode vi' is used in .inputrc and the attached program is run, a bug occurs when navigating the readline history. It seems to occur only when ANSI color escape sequences are passed to the 'input()' function. To reproduce the bug: run 'readline-example.py', enter '0123456789' at the prompt without quotes, and press Enter. Then press Escape and 'k' to go back in history in vi normal mode. The cursor is now placed between '2' and '3' and it is impossible to erase anything after the '2'. (To enter vi insert mode, press i and start editing the text normally.) This bug does not occur for shorter strings. If, for example, '012345678' is input, the program behaves normally. If the escape characters are not used in the 'input()' function, the program behaves normally.
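For what it's worth, GNU readline only excludes non-printing prompt bytes from its width calculation when they are bracketed with \001/\002 markers; a small sketch (assuming an ANSI-capable terminal) that may be worth testing against the vi-mode history bug:

```
# Bracketing the colour escapes with \001/\002 marks them as zero-width,
# which keeps readline's cursor positioning consistent when recalling
# history. Exit with Ctrl-C or Ctrl-D.
import readline  # noqa: F401  -- enables readline handling of input()

RED, RESET = "\033[31m", "\033[0m"

unmarked = f"{RED}prompt> {RESET}"                 # escape bytes counted as visible
marked = f"\001{RED}\002prompt> \001{RESET}\002"   # escape bytes marked invisible

while True:
    print(repr(input(marked)))
```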
---------- components: Library (Lib) files: readline-example.py messages: 382917 nosy: nijoakim priority: normal severity: normal status: open title: readline history, vi-editingmode and ANSI color codes bug type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49673/readline-example.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 12 15:27:52 2020 From: report at bugs.python.org (benrg) Date: Sat, 12 Dec 2020 20:27:52 +0000 Subject: [New-bugs-announce] [issue42627] urllib.request.getproxies() misparses Windows registry proxy settings Message-ID: <1607804872.9.0.5170746987.issue42627@roundup.psfhosted.org> New submission from benrg : If `HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyServer` contains the string `http=host:123;https=host:456;ftp=host:789`, then getproxies_registry() should return {'http': 'http://host:123', 'https': 'http://host:456', 'ftp': 'http://host:789'} for consistency with WinInet and Chromium, but it actually returns {'http': 'http://host:123', 'https': 'https://host:456', 'ftp': 'ftp://host:789'} This bug has existed for a very long time (since Python 2.0.1 if not earlier), but it was exposed recently when urllib3 added support for HTTPS-in-HTTPS proxies in version 1.26. Before that, an `https` prefix on the HTTPS proxy url was silently treated as `http`, accidentally resulting in the correct behavior. There are additional bugs in the treatment of single-proxy strings (the case when the string contains no `=` character). The Chromium code for parsing the ProxyServer string can be found here: https://source.chromium.org/chromium/chromium/src/+/refs/tags/89.0.4353.1:net/proxy_resolution/proxy_config.cc;l=86 Below is my attempt at modifying the code from `getproxies_registry` to approximately match Chromium's behavior. I could turn this into a patch, but I'd like feedback on the corner cases first. if '=' not in proxyServer and ';' not in proxyServer: # Use one setting for all protocols. # Chromium treats this as a separate category, and some software # uses the ALL_PROXY environment variable for a similar purpose, # so arguably this should be 'all={}'.format(proxyServer), # but this is more backward compatible. proxyServer = 'http={0};https={0};ftp={0}'.format(proxyServer) for p in proxyServer.split(';'): # Chromium and WinInet are inconsistent in their treatment of # invalid strings with the wrong number of = characters. It # probably doesn't matter. protocol, addresses = p.split('=', 1) protocol = protocol.strip() # Chromium supports more than one proxy per protocol. I don't # know how many clients support the same, but handling it is at # least no worse than leaving the commas uninterpreted. for address in addresses.split(','): if protocol in {'http', 'https', 'ftp', 'socks'}: # See if address has a type:// prefix if not re.match('(?:[^/:]+)://', address): if protocol == 'socks': # Chromium notes that the correct protocol here # is SOCKS4, but "socks://" is interpreted # as SOCKS5 elsewhere. I don't know whether # prepending socks4:// here would break code. address = 'socks://' + address else: address = 'http://' + address # A string like 'http=foo;http=bar' will produce a # comma-separated list, while previously 'bar' would # override 'foo'. That could potentially break something. 
if protocol not in proxies: proxies[protocol] = address else: proxies[protocol] += ',' + address ---------- components: Library (Lib), Windows messages: 382921 nosy: benrg, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: urllib.request.getproxies() misparses Windows registry proxy settings type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 12 16:23:18 2020 From: report at bugs.python.org (Kent Watsen) Date: Sat, 12 Dec 2020 21:23:18 +0000 Subject: [New-bugs-announce] [issue42628] binascii doesn't work on some base64 Message-ID: <1607808198.75.0.933631782561.issue42628@roundup.psfhosted.org> New submission from Kent Watsen : [Tested on 3.8.2 and 3.9.0, bug may manifest in other versions too] The IETF sometimes uses the dummy base64 value "base64encodedvalue==" in specifications in lieu of a block of otherwise meaningless b64. Even though it is a dummy value, the value should be convertible to binary and back again. This works using the built-in command `base64` as well as OpenSSL command line, but binascii is unable to do it. See below: $ echo "base64encodedvalue==" | base64 | base64 -d base64encodedvalue== $ echo "base64encodedvalue==" | openssl enc -base64 -A | openssl enc -d base64 -A base64encodedvalue== $ printf "import binascii\nprint(binascii.b2a_base64(binascii.a2b_base64('base64encodedvalue=='), newline=False).decode('ascii'))" | python - base64encodedvaluQ== After some investigation, it appears that almost any valid base64 matching the pattern "??==" fails. For instance: $ printf "import binascii\nprint(binascii.b2a_base64(binascii.a2b_base64('ue=='), newline=False).decode('ascii'))" | python - uQ== $ printf "import binascii\nprint(binascii.b2a_base64(binascii.a2b_base64('aa=='), newline=False).decode('ascii'))" | python - aQ== $ printf "import binascii\nprint(binascii.b2a_base64(binascii.a2b_base64('a0=='), newline=False).decode('ascii'))" | python - aw== Is this a bug? ---------- messages: 382922 nosy: kwatsen2 priority: normal severity: normal status: open title: binascii doesn't work on some base64 type: behavior versions: Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 12 19:21:25 2020 From: report at bugs.python.org (Max Bachmann) Date: Sun, 13 Dec 2020 00:21:25 +0000 Subject: [New-bugs-announce] [issue42629] PyObject_Call not behaving as documented Message-ID: <1607818885.74.0.707505524552.issue42629@roundup.psfhosted.org> New submission from Max Bachmann : The documentation of PyObject_Call here: https://docs.python.org/3/c-api/call.html#c.PyObject_Call states, that it is the equivalent of the Python expression: callable(*args, **kwargs). so I would expect: PyObject* args = PyTuple_New(0); PyObject* kwargs = PyDict_New(); PyObject_Call(funcObj, args, kwargs) to behave similar to args = [] kwargs = {} func(*args, **kwargs) however this is not the case since in this case when I edit kwargs inside PyObject* func(PyObject* /*self*/, PyObject* /*args*/, PyObject* keywds) { PyObject* str = PyUnicode_FromString("test_str"); PyDict_SetItemString(keywds, "test", str); } it changes the original dictionary passed into PyObject_Call. 
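For comparison, a minimal pure-Python sketch of the behaviour I expected from the "equivalent of the Python expression callable(*args, **kwargs)" wording, where the callee only ever sees a fresh dict:

def py_func(**keywds):
    keywds['test'] = 'test_str'   # mutate the dict the callee received

kwargs = {}
py_func(**kwargs)
print(kwargs)   # prints {} : the caller's dict is unchanged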
I was wondering, whether this means, that: a) it is not allowed to modify the keywds argument passed to a PyCFunctionWithKeywords b) when calling PyObject_Call it is required to copy the kwargs for the call using PyDict_Copy Neither the documentation of PyObject_Call nor the documentation of PyCFunctionWithKeywords (https://docs.python.org/3/c-api/structures.html#c.PyCFunctionWithKeywords) made this clear to me. ---------- components: C API messages: 382927 nosy: maxbachmann priority: normal severity: normal status: open title: PyObject_Call not behaving as documented type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 13 18:24:00 2020 From: report at bugs.python.org (Ivo Shipkaliev) Date: Sun, 13 Dec 2020 23:24:00 +0000 Subject: [New-bugs-announce] [issue42630] Variable.__init__ raise a RuntimeError instead of obscure AttributeError Message-ID: <1607901840.62.0.626631020476.issue42630@roundup.psfhosted.org> New submission from Ivo Shipkaliev : Hello. I think it would be nice to add: 335 > if not master: 336 > raise RuntimeError('a valid Tk instance is required.') to lib/tkinter/__init__.py, and not rely on this unclear AttributeError. Could it be assigned to me, please? Best Regards Ivo Shipkaliev ---------- components: Tkinter messages: 382944 nosy: shippo_ priority: normal severity: normal status: open title: Variable.__init__ raise a RuntimeError instead of obscure AttributeError type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 13 18:37:58 2020 From: report at bugs.python.org (to7m) Date: Sun, 13 Dec 2020 23:37:58 +0000 Subject: [New-bugs-announce] [issue42631] Multiprocessing module hangs on os.read() on Linux Message-ID: <1607902678.82.0.104927232585.issue42631@roundup.psfhosted.org> New submission from to7m : As best I can tell, sometimes when a Listener or Connection is not properly initialised, the Client fails to communicate properly with it. Instead of raising an exception, the Client hangs. receiver.py: from multiprocessing.connection import Listener while True: with Listener(("localhost", 10000), authkey=b"test") as listener: with listener.accept() as connection: print(connection.recv()) client.py (intended as a stress test): from multiprocessing.connection import Client for i in range(1000): successfully_sent = False while not successfully_sent: try: with Client(("localhost", 10000), authkey=b"test") as client: client.send(i) except (ConnectionRefusedError, ConnectionResetError): continue successfully_sent = True Also noteworthy: I posted on StackExchange (https://stackoverflow.com/questions/65276145/multiprocessing-receive-all-messages-from-multiple-runtimes) and it seems that the code there (only 1000 messages) took around an hour to run for a Windows user, whereas it would take less than a second to successfully run on Linux. 
---------- components: IO files: receive.py messages: 382945 nosy: to7m priority: normal severity: normal status: open title: Multiprocessing module hangs on os.read() on Linux type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49675/receive.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 01:02:52 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Mon, 14 Dec 2020 06:02:52 +0000 Subject: [New-bugs-announce] [issue42632] Reassgining ZeroDivisionError will lead to bug in Except clause Message-ID: <1607925772.13.0.935744871681.issue42632@roundup.psfhosted.org> New submission from Xinmeng Xia : Running the following program: ============================== def foo(): try: 1/0 except ZeroDivisionError as e: ZeroDivisionError = 1 foo() ============================== The expected output should be nothing. ZeroDivisionError is caught and then reassignment is executed. However, running this program in Python3.10 will lead to the following error: Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/error/1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/error/1.py", line 8, in foo() File "/home/xxm/Desktop/nameChanging/error/1.py", line 6, in foo except Exception as e: UnboundLocalError: local variable 'Exception' referenced before assignment ---------- components: Interpreter Core messages: 382953 nosy: xxm priority: normal severity: normal status: open title: Reassgining ZeroDivisionError will lead to bug in Except clause type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 02:03:20 2020 From: report at bugs.python.org (Matthew Walker) Date: Mon, 14 Dec 2020 07:03:20 +0000 Subject: [New-bugs-announce] [issue42633] Wave documentation doesn't mention signed/unsigned requirements Message-ID: <1607929400.79.0.578872374659.issue42633@roundup.psfhosted.org> New submission from Matthew Walker : It would be very useful if the documentation for Python's Wave module mentioned that 8-bit samples must be unsigned while 16-bit samples must be signed. See the Wikipedia article on the WAV format: "There are some inconsistencies in the WAV format: for example, 8-bit data is unsigned while 16-bit data is signed" https://en.wikipedia.org/wiki/WAV Although I haven't contributed previously, I would be pleased to make such a contribution if it would be appreciated. 
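To make the convention concrete, here is a rough sketch of writing the same tone as 8-bit unsigned and as 16-bit signed data with the wave module (the write_tone helper and file names are only illustrative):

import math
import struct
import wave

def write_tone(path, sampwidth):
    framerate = 8000
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(sampwidth)   # 1 -> 8-bit, 2 -> 16-bit
        w.setframerate(framerate)
        frames = bytearray()
        for n in range(framerate):
            x = math.sin(2 * math.pi * 440 * n / framerate)
            if sampwidth == 1:
                # 8-bit samples are unsigned: 0..255, silence at 128
                frames += struct.pack("<B", 128 + int(x * 127))
            else:
                # 16-bit samples are signed: -32768..32767, silence at 0
                frames += struct.pack("<h", int(x * 32767))
        w.writeframes(bytes(frames))

write_tone("tone8.wav", 1)
write_tone("tone16.wav", 2)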
---------- assignee: docs at python components: Documentation messages: 382957 nosy: docs at python, mattgwwalker priority: normal severity: normal status: open title: Wave documentation doesn't mention signed/unsigned requirements type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 02:24:56 2020 From: report at bugs.python.org (Mark Shannon) Date: Mon, 14 Dec 2020 07:24:56 +0000 Subject: [New-bugs-announce] [issue42634] Incorrect line number in bytecode for try-except-finally Message-ID: <1607930696.05.0.974303425196.issue42634@roundup.psfhosted.org> New submission from Mark Shannon : The following code, when traced, produces a spurious line event for line 5: a, b, c = 1, 1, 1 try: a = 3 except: b = 5 finally: c = 7 assert a == 3 and b == 1 and c == 7 Bug reported by Ned Batchelder https://gist.github.com/nedbat/6c5dedde9df8d2de13de8a6a39a5f112 ---------- assignee: Mark.Shannon messages: 382958 nosy: Mark.Shannon, nedbat priority: release blocker severity: normal stage: needs patch status: open title: Incorrect line number in bytecode for try-except-finally type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 04:24:33 2020 From: report at bugs.python.org (Mark Shannon) Date: Mon, 14 Dec 2020 09:24:33 +0000 Subject: [New-bugs-announce] [issue42635] Incorrect line number in bytecode for nested loops Message-ID: <1607937873.82.0.254975508091.issue42635@roundup.psfhosted.org> New submission from Mark Shannon : The following code, when traced, produces spurious line events at the end of the inner loop. def dloop(): for i in range(3): for j in range(3): a = i + j assert a == 4 Bug reported by Ned Batchelder https://gist.github.com/nedbat/6c5dedde9df8d2de13de8a6a39a5f112 ---------- assignee: Mark.Shannon messages: 382968 nosy: Mark.Shannon, nedbat priority: normal severity: normal stage: needs patch status: open title: Incorrect line number in bytecode for nested loops type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 05:36:21 2020 From: report at bugs.python.org (Martin Natano) Date: Mon, 14 Dec 2020 10:36:21 +0000 Subject: [New-bugs-announce] [issue42636] shielded task exception never retrieved when outer task cancelled Message-ID: <1607942181.31.0.646688674125.issue42636@roundup.psfhosted.org> New submission from Martin Natano : A task created with asyncio.shield() never retrieves the task exception, which results in a log message being generated. See the attached script for a minimal example. Output looks something like this: Task exception was never retrieved future: exception=Exception('foo')> Traceback (most recent call last): File "/Users/natano/python-bug/trigger-warning.py", line 6, in some_coroutine raise Exception('foo') Exception: foo I believe this behaviour is an unintended side-effect of commit b35acc5b3a0148c5fd4462968b310fb436726d5a (https://github.com/python/cpython/commit/b35acc5b3a0148c5fd4462968b310fb436726d5a). The _inner_done_callback has code to retrieve the exception when the outer task was cancelled, but since that commit _inner_done_callback is not called anymore when the outer task is cancelled. (Tested with 3.8, 3.9 and 3.10.
I don't have a python 3.7 version available for testing, but I think this issue also affects 3.7.) ---------- components: asyncio files: trigger-warning.py messages: 382975 nosy: asvetlov, natano, yselivanov priority: normal severity: normal status: open title: shielded task exception never retrieved when outer task cancelled versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49679/trigger-warning.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 05:38:59 2020 From: report at bugs.python.org (E. Paine) Date: Mon, 14 Dec 2020 10:38:59 +0000 Subject: [New-bugs-announce] [issue42637] Python release note tkinter problems Message-ID: <1607942339.24.0.547653054528.issue42637@roundup.psfhosted.org> New submission from E. Paine : Under the "Installer news" for the Python 3.9.1 release, it notes: "As we are waiting for an updated version of pip, please consider the macos11.0 installer experimental." Is it worth also noting that tkinter has serious known problems that are being resolved (issue42541)? ---------- messages: 382977 nosy: epaine, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Python release note tkinter problems _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 06:29:54 2020 From: report at bugs.python.org (Terry J. Reedy) Date: Mon, 14 Dec 2020 11:29:54 +0000 Subject: [New-bugs-announce] [issue42638] IDLE: Context lines not working correctly Message-ID: <1607945394.59.0.0452931693477.issue42638@roundup.psfhosted.org> New submission from Terry J. Reedy : Test: open editor.py in editor. Turn on Code context with large max lines. Scroll down to body of _sphinx_version (after line 37). Nothing appears. Scroll down to line 82 in EditorWindow.__init__. 'def _sphinx_version' appears. Scroll up (again with arrow key) to the 'def _sphinx_version' line and it diappears from the context. Usually repeats. Is it coincidence that 82 - 37 = 45, the lines in the window? No. Shrink window to 26 lines and context line appears when at line 37 + 26 = 63. ---------- assignee: terry.reedy components: IDLE messages: 382981 nosy: terry.reedy priority: normal severity: normal stage: test needed status: open title: IDLE: Context lines not working correctly type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 07:00:33 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 14 Dec 2020 12:00:33 +0000 Subject: [New-bugs-announce] [issue42639] Make atexit state per interpreter Message-ID: <1607947233.16.0.810683660249.issue42639@roundup.psfhosted.org> New submission from STINNER Victor : In Python 2.7, atexit was implemented in Python and registered itself using sys.exitfunc public attribute: https://docs.python.org/2.7/library/sys.html#sys.exitfunc https://docs.python.org/2.7/library/atexit.html#module-atexit In Python 3.0, the atexit module was rewritten in C. A new private _Py_PyAtExit() function was added to set a new private global "pyexitfunc" variable: variable used by call_py_exitfuncs() called by Py_Finalize(). In Python 3.7, the global "pyexitfunc" variable was moved int _PyRuntimeState (commit 2ebc5ce42a8a9e047e790aefbf9a94811569b2b6), and then into PyInterpreterState (commit 776407fe893fd42972c7e3f71423d9d86741d07c). 
In Python 3.7, the atexit module was upgrade to the multiphase initialization API (PEP 489): PyInit_atexit() uses PyModuleDef_Init(). Since Python 2.7, the atexit module has a limitation: if a second instance is created, the new instance overrides the old one, and old registered callbacks are newer called. One option is to disallow creating a second instance: see bpo-40600 and PR 23699 for that. Another option is to move the atexit state (callbacks) into PyInterpreterState. Two atexit module instances would modify the same list of callbacks. In this issue, I propose to investigate this option. ---------- components: Library (Lib) messages: 382982 nosy: vstinner priority: normal severity: normal status: open title: Make atexit state per interpreter versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 13:22:37 2020 From: report at bugs.python.org (Justin) Date: Mon, 14 Dec 2020 18:22:37 +0000 Subject: [New-bugs-announce] [issue42640] tkinter throws exception when key is pressed Message-ID: <1607970157.48.0.157059469848.issue42640@roundup.psfhosted.org> New submission from Justin : On macOs 10.14.6 laptop with a qwerty layout, when I switch my keyboard layout to `French - PC` and use a tkinter keyboard listener and press the '[' button which should be the '^' button on the azerty keyboard, this Error is thrown: ``` 2020-12-14 10:13:26.533 Python[14880:1492926] *** Terminating app due to uncaught exception 'NSRangeException', reason: '-[__NSCFConstantString characterAtIndex:]: Range or index out of bounds' *** First throw call stack: ( 0 CoreFoundation 0x00007fff41ce49ad __exceptionPreprocess + 256 1 libobjc.A.dylib 0x00007fff6c3dfa17 objc_exception_throw + 48 2 CoreFoundation 0x00007fff41ce47df +[NSException raise:format:] + 201 3 CoreFoundation 0x00007fff41c5a159 -[__NSCFString characterAtIndex:] + 102 4 Tk 0x00007fff4da8a806 TkpInitKeymapInfo + 719 5 Tk 0x00007fff4da905e9 Tk_MacOSXSetupTkNotifier + 793 6 Tcl 0x00007fff4d98c48e Tcl_DoOneEvent + 301 7 _tkinter.cpython-38-darwin.so 0x00000001090802de _tkinter_tkapp_mainloop + 342 8 Python 0x0000000108ac1b0a method_vectorcall_FASTCALL + 250 9 Python 0x0000000108b5a299 call_function + 346 10 Python 0x0000000108b51457 _PyEval_EvalFrameDefault + 3895 11 Python 0x0000000108b5ae5d _PyEval_EvalCodeWithName + 2107 12 Python 0x0000000108abad39 _PyFunction_Vectorcall + 217 13 Python 0x0000000108abcc7d method_vectorcall + 135 14 Python 0x0000000108b5a299 call_function + 346 15 Python 0x0000000108b51477 _PyEval_EvalFrameDefault + 3927 16 Python 0x0000000108b5ae5d _PyEval_EvalCodeWithName + 2107 17 Python 0x0000000108b5047d PyEval_EvalCode + 51 18 Python 0x0000000108b89025 run_eval_code_obj + 102 19 Python 0x0000000108b88473 run_mod + 82 20 Python 0x0000000108b87345 PyRun_FileExFlags + 160 21 Python 0x0000000108b86a29 PyRun_SimpleFileExFlags + 271 22 Python 0x0000000108b9e449 Py_RunMain + 1870 23 Python 0x0000000108b9e790 pymain_main + 306 24 Python 0x0000000108b9e7de Py_BytesMain + 42 25 libdyld.dylib 0x00007fff6dbae3d5 start + 1 26 ??? 0x0000000000000002 0x0 + 2 ) libc++abi.dylib: terminating with uncaught exception of type NSException Abort trap: 6 ``` One can verify this by running this program on a macOs laptop with a qwerty keyboard, switching the layout to French - PC and pressing the '[' key. 
``` import pygame pygame.init() pygame.display.set_mode((100, 100)) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit(); #sys.exit() if sys is imported if event.type == pygame.KEYDOWN: key_name = pygame.key.name(event.key) print(event, event.key.__class__, event.key, key_name) elif event.type == pygame.KEYUP: key_name = pygame.key.name(event.key) print(event, event.key.__class__, event.key, key_name) ``` ``` ---------- messages: 382998 nosy: spacether priority: normal severity: normal status: open title: tkinter throws exception when key is pressed _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 17:52:43 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 14 Dec 2020 22:52:43 +0000 Subject: [New-bugs-announce] [issue42641] Deprecate os.popen() function Message-ID: <1607986363.29.0.33477455927.issue42641@roundup.psfhosted.org> New submission from STINNER Victor : The os.popen() function uses a shell by default which usually leads to shell injection vulnerability. It also has a weird API: * closing the file waits until the process completes. * close() returns a "wait status" (*) not a "returncode" (*) see https://docs.python.org/dev/library/os.html#os.waitstatus_to_exitcode for the meaning of a "wait status". IMO the subprocess module provides better and safer alternatives with a clean API. The subprocess module already explains how to replace os.popen() with subprocess: https://docs.python.org/dev/library/subprocess.html#replacing-os-popen-os-popen2-os-popen3 In Python 2, os.popen() was deprecated since Python 2.6, but Python 3.0 removed the deprecation (commit dcf97b98ec5cad972b3a8b4989001c45da87d0ea, then commit f5a429295d855267c33c5ef110fbf05ee7a3013e extended os.popen() documentation again: bpo-6490). platform.popen() existed until Python 3.8 (bpo-35345). It was deprecated since Python 3.3 (bpo-11377). -- There is also the os.system() function which exposes the libc system() function. Should we deprecate this one as well? ---------- components: Library (Lib) messages: 383012 nosy: vstinner priority: normal severity: normal status: open title: Deprecate os.popen() function versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 21:20:21 2020 From: report at bugs.python.org (Nick Coghlan) Date: Tue, 15 Dec 2020 02:20:21 +0000 Subject: [New-bugs-announce] [issue42642] logging: add high priority log level for warnings being cleared Message-ID: <1607998821.36.0.619520119815.issue42642@roundup.psfhosted.org> New submission from Nick Coghlan : When using the logging module for long running services, there's one limitation of the predefined logging levels that I semi-regularly run into: the only entirely reliable log level for reporting that a WARNING state has been cleared is itself WARNING. Using INFO instead means that it is common to get error logs that show warning states being entered without any matching "error cleared" notification. My idea for resolving this would be to define a new ESSENTIAL log level between WARNING and ERROR. 
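Something along these lines can already be spelled with a custom level; a rough sketch of the registration (the numeric value 35 and the name are only assumptions about where such a level would sit):

import logging

ESSENTIAL = 35   # between WARNING (30) and ERROR (40)
logging.addLevelName(ESSENTIAL, "ESSENTIAL")

logger = logging.getLogger("my.service")
logger.log(ESSENTIAL, "service starting, version %s", "1.2.3")
logger.log(ESSENTIAL, "disk-space warning cleared")

The value of a predefined level is that libraries, handlers and operators could rely on it existing and being logged by default.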
The idea of the new log level would be to have a logged-by-default level for non-fault-but-essential messages like: * service startup (including version info) * service shutdown * warnings being cleared (NOTICE would be another possible name for such a level) ---------- messages: 383027 nosy: ncoghlan, vinay.sajip priority: low severity: normal stage: needs patch status: open title: logging: add high priority log level for warnings being cleared type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 14 21:24:06 2020 From: report at bugs.python.org (Decorater) Date: Tue, 15 Dec 2020 02:24:06 +0000 Subject: [New-bugs-announce] [issue42643] http.server does not support HTTP range requests Message-ID: <1607999046.16.0.658504644018.issue42643@roundup.psfhosted.org> New submission from Decorater : I have issues with range requests in the http.server module (run from the python -m http.server command). When a program written in C downloads files from it and uses range requests to drive a progress bar, the server ignores the Range header and simply forces the entire file to be downloaded at once, which means the progress bar in my program never updates to show the download progress to the user. https://tools.ietf.org/id/draft-ietf-httpbis-p5-range-09.html#range.units Range requests are part of HTTP/1.1, so I think it would be good to support partial requests when a client sends a Range header. For files served from directory listings, it would be helpful to be able to request specific byte ranges of a file at a time, to split up the bandwidth and avoid overwhelming a file server. ---------- components: Library (Lib) messages: 383028 nosy: Decorater priority: normal severity: normal status: open title: http.server does not support HTTP range requests type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 15 05:00:46 2020 From: report at bugs.python.org (OverLordGoldDragon) Date: Tue, 15 Dec 2020 10:00:46 +0000 Subject: [New-bugs-announce] [issue42644] logging.disable('WARN') breaks AsyncIO Message-ID: <1608026446.25.0.329181114873.issue42644@roundup.psfhosted.org> New submission from OverLordGoldDragon : import logging with logging.disable('WARN'): pass See: https://github.com/ipython/ipython/issues/12713 I'm not actually sure what's happening here, just reporting. ---------- components: asyncio messages: 383040 nosy: OverLordGoldDragon, asvetlov, yselivanov priority: normal severity: normal status: open title: logging.disable('WARN') breaks AsyncIO type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 15 07:21:32 2020 From: report at bugs.python.org (Mark Shannon) Date: Tue, 15 Dec 2020 12:21:32 +0000 Subject: [New-bugs-announce] [issue42645] break/continue or return in finally block occurs twice in trace. Message-ID: <1608034892.59.0.529132946554.issue42645@roundup.psfhosted.org> New submission from Mark Shannon : This function def f(): try: return 2 finally: 4 would generate a trace of [1, 2, 4, 2]. It should generate [1, 2, 4] and not trace the return twice.
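For anyone who wants to observe the trace without a coverage tool, a small sys.settrace sketch (the relative line-number bookkeeping is only there to match the [1, 2, 4, 2] notation above):

import sys

def f():
    try:
        return 2
    finally:
        4

lines = []

def tracer(frame, event, arg):
    if frame.f_code is f.__code__ and event == "line":
        lines.append(frame.f_lineno - f.__code__.co_firstlineno)
    return tracer

sys.settrace(tracer)
f()
sys.settrace(None)
print(lines)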
---------- assignee: Mark.Shannon messages: 383044 nosy: Mark.Shannon, nedbat priority: normal severity: normal stage: needs patch status: open title: break/continue or return in finally block occurs twice in trace. type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 15 08:07:08 2020 From: report at bugs.python.org (wyz23x2) Date: Tue, 15 Dec 2020 13:07:08 +0000 Subject: [New-bugs-announce] [issue42646] Add function that supports "applying" methods Message-ID: <1608037628.46.0.38282222902.issue42646@roundup.psfhosted.org> New submission from wyz23x2 : Doing this is generally very annoying: y = x.copy() y.some_method() Sometimes x doesn't have copy(), so: from copy import deepcopy y = deepcopy(x) y.some_method() So maybe a function could be added to help. For example: def apply(obj, function, /, args=(), kwargs={}): try: new = obj.copy() except AttributeError: from copy import copy new = copy(obj) function(new, *args, **kwargs) return new # implement reversed() for list lis = [1, 2, 3, 4, 5] arr = apply(lis, list.reverse) print(arr) # [5, 4, 3, 2, 1] apply() maybe isn't the best name because of the builtin apply() in Python 2, but that's EOL. It could be added in the standard library. ---------- components: Library (Lib) messages: 383050 nosy: wyz23x2 priority: normal severity: normal status: open title: Add function that supports "applying" methods versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 15 08:59:10 2020 From: report at bugs.python.org (Julien Danjou) Date: Tue, 15 Dec 2020 13:59:10 +0000 Subject: [New-bugs-announce] [issue42647] Unable to use concurrent.futures in atexit hook Message-ID: <1608040750.56.0.960217977665.issue42647@roundup.psfhosted.org> New submission from Julien Danjou : Python 3.9 introduced a regression with concurrent.futures. 
The following program works fine on Python < 3.8 but raises an error on 3.9: ``` import atexit import concurrent.futures def spawn(): with concurrent.futures.ThreadPoolExecutor() as t: pass atexit.register(spawn) ``` ``` $ python3.9 rep.py Error in atexit._run_exitfuncs: Traceback (most recent call last): File "", line 1007, in _find_and_load File "", line 986, in _find_and_load_unlocked File "", line 680, in _load_unlocked File "", line 790, in exec_module File "", line 228, in _call_with_frames_removed File "/usr/local/Cellar/python at 3.9/3.9.0_3/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 37, in threading._register_atexit(_python_exit) File "/usr/local/Cellar/python at 3.9/3.9.0_3/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 1370, in _register_atexit raise RuntimeError("can't register atexit after shutdown") RuntimeError: can't register atexit after shutdown ``` ---------- messages: 383058 nosy: jd priority: normal severity: normal status: open title: Unable to use concurrent.futures in atexit hook type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 15 12:05:42 2020 From: report at bugs.python.org (STINNER Victor) Date: Tue, 15 Dec 2020 17:05:42 +0000 Subject: [New-bugs-announce] [issue42648] subprocess: add a helper/parameter to catch exec() OSError exception Message-ID: <1608051942.35.0.527198192461.issue42648@roundup.psfhosted.org> New submission from STINNER Victor : While working on on bpo-42641 "Deprecate os.popen() function", I tried to replace os.popen() with a subprocess function, to avoid relying on an external shell. Example from test_posix: with os.popen('id -G 2>/dev/null') as idg: groups = idg.read().strip() ret = idg.close() os.popen() uses a shell, and so returns an non-zero exit status if the "id" program is not available: >>> import os; os.popen('nonexistent').close() /bin/sh: nonexistent : commande introuvable 32512 whereas the subprocess module raises an OSError: >>> import subprocess; proc=subprocess.run('nonexistent') FileNotFoundError: [Errno 2] No such file or directory: 'nonexistent' It's not convenient to have to write try/except OSError when replacing os.popen() with subprocess.run(). It would be convenient to have a subprocess API which avoid the need for try/except, if possible with keeping the ability to distinguish when exec() fails and when exec() completed but waitpid() returns a non-zero exit status (child process exit code is non-zero). This issue becomes more interesting when subprocess uses os.posix_spawn(). The subprocess module only uses os.posix_spawn() if the libc implementation is known to report exec() failure: if os.posix_spawn() raises an OSError exception if exec() fails. See subprocess._use_posix_spawn() which uses os.confstr('CS_GNU_LIBC_VERSION') to check if the glibc 2.24+ is used. Or maybe I simply missed a very obvious API in subprocess for this problem? 
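For context, this is roughly what the test_posix snippet above turns into after a direct conversion (a sketch only; the error handling is the point, not the exact arguments):

import subprocess

try:
    proc = subprocess.run(['id', '-G'], capture_output=True, text=True)
except OSError:
    groups = ''      # exec() failed: the "id" program is missing
else:
    if proc.returncode != 0:
        groups = ''  # the program ran but reported a failure
    else:
        groups = proc.stdout.strip()

The try/except OSError block is exactly the boilerplate I would like to avoid having to write at every call site.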
---------- components: Library (Lib) messages: 383078 nosy: gregory.p.smith, vstinner priority: normal severity: normal status: open title: subprocess: add a helper/parameter to catch exec() OSError exception versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 15 12:55:57 2020 From: report at bugs.python.org (Michael Orlitzky) Date: Tue, 15 Dec 2020 17:55:57 +0000 Subject: [New-bugs-announce] [issue42649] RecursionError when parsing unwieldy expression (regression from 2.7 -> 3.x) Message-ID: <1608054957.39.0.0382114401229.issue42649@roundup.psfhosted.org> New submission from Michael Orlitzky : The attached file contains a huge, ugly expression that can be parsed in python-2.7 but not python-3.x (I've tested 3.7, 3.8, and 3.9). With the 3.x versions, I get $ python3 DPT_defn RecursionError: maximum recursion depth exceeded during compilation Python-2.7, on the other hand, is able to parse & compile much larger expressions -- this example was whittled down until it was roughly minimal. ---------- components: Interpreter Core files: DPT_defn messages: 383084 nosy: mjo priority: normal severity: normal status: open title: RecursionError when parsing unwieldy expression (regression from 2.7 -> 3.x) versions: Python 3.9 Added file: https://bugs.python.org/file49683/DPT_defn _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 15 13:49:25 2020 From: report at bugs.python.org (Brad Warren) Date: Tue, 15 Dec 2020 18:49:25 +0000 Subject: [New-bugs-announce] [issue42650] Can people use dest=argparse.SUPPRESS in custom Action classes? Message-ID: <1608058165.01.0.505230302489.issue42650@roundup.psfhosted.org> New submission from Brad Warren : argparse internally sets dest=SUPPRESS in action classes like _HelpAction and _VersionAction to prevent an attribute from being created for that option on the resulting namespace. Can users creating custom Action classes also use this functionality without worrying about it suddenly breaking in a new version of Python? If so, can we document this functionality? I'd be happy to submit a PR. ---------- assignee: docs at python components: Documentation messages: 383089 nosy: bmw, docs at python priority: normal severity: normal status: open title: Can people use dest=argparse.SUPPRESS in custom Action classes? versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 03:02:15 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Wed, 16 Dec 2020 08:02:15 +0000 Subject: [New-bugs-announce] [issue42651] Title: Recursive traceback crashes Python Interpreter Message-ID: <1608105735.03.0.523665122471.issue42651@roundup.psfhosted.org> New submission from Xinmeng Xia : ================= import traceback def foo(): traceback.print_exc() foo() foo() ================ Try running the above program, the interpreter is crashed with the error message like the following: Fatal Python error: Cannot recover from stack overflow. NoneType: None NoneType: None NoneType: None NoneType: None ... NoneType: None NoneType: None NoneType: NoneType: None NoneType: None ... NoneType: None NoneType: None NoneType: None Fatal Python error: _Py_CheckRecursiveCall: Cannot recover from stack overflow. 
Python runtime state: initialized Current thread 0x00007fab0bdda700 (most recent call first): File "/usr/local/python310/lib/python3.10/traceback.py", line 155 in _some_str File "/usr/local/python310/lib/python3.10/traceback.py", line 515 in __init__ File "/usr/local/python310/lib/python3.10/traceback.py", line 103 in print_exception File "/usr/local/python310/lib/python3.10/traceback.py", line 163 in print_exc File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 4 in foo File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5 in foo File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5 in foo ... File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5 in foo File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5 in foo File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5 in foo File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5 in foo ... Aborted (core dumped) ---------- components: Interpreter Core messages: 383118 nosy: xxm priority: normal severity: normal status: open title: Title: Recursive traceback crashes Python Interpreter type: crash versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 03:48:19 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Wed, 16 Dec 2020 08:48:19 +0000 Subject: [New-bugs-announce] [issue42652] recursive in finally clause cause Python interpreter crash. Message-ID: <1608108499.51.0.334108279167.issue42652@roundup.psfhosted.org> New submission from Xinmeng Xia : Considering the following two program,running the program 1 will get expected output: RecursionError program 1 =========================== import traceback def foo(): try: 1/0 except Exception as e: traceback.print_exc() finally: a = 1 foo() foo() ========================== ----------------------------------------------------------------------------------- ZeroDivisionError: division by zero Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 12, in File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10, in foo foo() File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10, in foo ... 
foo() File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10, in foo foo() File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 7, in foo traceback.print_exc() File "/usr/lib/python3.5/traceback.py", line 159, in print_exc print_exception(*sys.exc_info(), limit=limit, file=file, chain=chain) File "/usr/lib/python3.5/traceback.py", line 100, in print_exception type(value), value, tb, limit=limit).format(chain=chain): File "/usr/lib/python3.5/traceback.py", line 474, in __init__ capture_locals=capture_locals) File "/usr/lib/python3.5/traceback.py", line 358, in extract f.line File "/usr/lib/python3.5/traceback.py", line 282, in line self._line = linecache.getline(self.filename, self.lineno).strip() File "/usr/lib/python3.5/linecache.py", line 16, in getline lines = getlines(filename, module_globals) File "/usr/lib/python3.5/linecache.py", line 43, in getlines if len(entry) != 1: RecursionError: maximum recursion depth exceeded in comparison ------------------------------------------------------------------------ However when moving foo() into finally clause, the interpreter crashes. program 2 ========================== import traceback def foo(): try: 1/0 except Exception as e: traceback.print_exc() finally: a = 1 foo() foo() ========================== ----------------------------------------------------------------------------- File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10 in foo Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 ZeroDivisionError: division by zero During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 7, in foo traceback.print_exc() File "/usr/lib/python3.5/traceback.py", line 159, in print_exc print_exception(*sys.exc_info(), limit=limit, file=file, chain=chain) File "/usr/lib/python3.5/traceback.py", line 100, in print_exception type(value), value, tb, limit=limit).format(chain=chain): File "/usr/lib/python3.5/traceback.py", line 474, in __init__ capture_locals=capture_locals) File "/usr/lib/python3.5/traceback.py", line 358, in extract f.line File "/usr/lib/python3.5/traceback.py", line 282, in line self._line = linecache.getline(self.filename, self.lineno).strip() File "/usr/lib/python3.5/linecache.py", line 16, in getline lines = getlines(filename, module_globals) File "/usr/lib/python3.5/linecache.py", line 43, in getlines if len(entry) != 1: RecursionError: maximum recursion depth exceeded in comparison During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 5, in foo 1/0 File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10 in foo File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10 in foo File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10 in foo ... 
File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10 in foo File "/home/xxm/Desktop/nameChanging/myerror/test1.py", line 10 in foo ... Aborted (core dumped) ------------------------------------------------------------------------- ---------- components: Interpreter Core messages: 383125 nosy: xxm priority: normal severity: normal status: open title: recursive in finally clause cause Python interpreter crash. type: crash versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 04:19:11 2020 From: report at bugs.python.org (Stefan Tatschner) Date: Wed, 16 Dec 2020 09:19:11 +0000 Subject: [New-bugs-announce] [issue42653] Expose ISO-TP Contants for Linux >= 5.10 Message-ID: <1608110351.71.0.562655830578.issue42653@roundup.psfhosted.org> New submission from Stefan Tatschner : Linux >= 5.10 gained a new SocketCAN module called ISOTP. This is an implementation of ISO 15765-2 which is mainly used in vehicle CAN networks. Expose the constants from linux/can/isotp.h in the socket module. These constants are not entirely new. Previously there was an out of tree kernel module available and a subset of these constants are already available in CPyton. [1]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/can/isotp.h [2]: https://github.com/hartkopp/can-isotp ---------- components: Library (Lib) messages: 383126 nosy: rumpelsepp priority: normal pull_requests: 22652 severity: normal status: open title: Expose ISO-TP Contants for Linux >= 5.10 type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 04:42:43 2020 From: report at bugs.python.org (Mario Corchero) Date: Wed, 16 Dec 2020 09:42:43 +0000 Subject: [New-bugs-announce] [issue42654] Add folder for python customizations: __sitecustomize__ Message-ID: <1608111763.27.0.89032945983.issue42654@roundup.psfhosted.org> New submission from Mario Corchero : Following the conversations in https://bugs.python.org/issue33944, wanted to discuss the support for a __sitecustomize__ folder that will hold python scripts to be executed at startup. Similar to `sitecustomize.py` but allowing different stakeholders of a Python installation to add themselves. This is basically a "supported way" of the current abuse of pth files to add startup code. How will this be useful? - Support administrators to add multiple "sitecustomize.py" files. As an example, today we are basically appending to sitecustomize when we need some additional behaviour. - Support for library owners that need some startup customization like betterexceptions. - Tools that include an interpreter like virtualenv or things like PyOxidizer by allowing them to customize the interpreter they expose to users. It basically offers a better alternative to the currently abused feature of code execution in pth. I this is something that is wanted in CPython, from the thread in https://bugs.python.org/issue33944 I see some open questions though: - Look for `__sitecustomize__` only in site paths or in PYTHONPATH? I'm honestly fine either way, but sightly incline more to @jaraco proposal to make this basically be a namespace package walking all its instances. - Should we have a custom way to disable this? Or are we happy with just `-S`. 
I think the `-S` is fine, it offers a similar behaviour to sitecustomize. If you want to see "how it feels", see https://github.com/mariocj89/cpython/tree/pu/__sitecustomize__ (It's not finished). If it seems interesting I would love to put a PR through. With this, we might be able to eventually remove code execution in pth files! ---------- components: Library (Lib) messages: 383129 nosy: jaraco, mariocj89, ncoghlan priority: normal severity: normal status: open title: Add folder for python customizations: __sitecustomize__ versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 05:05:22 2020 From: report at bugs.python.org (Jakub Kulik) Date: Wed, 16 Dec 2020 10:05:22 +0000 Subject: [New-bugs-announce] [issue42655] Fix subprocess extra_groups gid conversion Message-ID: <1608113122.79.0.244683118693.issue42655@roundup.psfhosted.org> New submission from Jakub Kulik : The C function `subprocess_fork_exec` incorrectly converts gids from the `extra_groups` argument because it passes an `unsigned long*` rather than a `gid_t*` into `_Py_Gid_Converter()`. Assuming that `gid_t` is 32 bit and `unsigned long` is 64 bit (which it often is), `*(gid_t *)p = gid;` then incorrectly overwrites only part of that variable, leaving the rest filled with previous garbage. I found this on Solaris, but I am pretty sure that this doesn't work correctly on Linux either, since both use `unsigned int` as `gid_t`. ---------- components: Extension Modules messages: 383132 nosy: kulikjak priority: normal severity: normal status: open title: Fix subprocess extra_groups gid conversion versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 05:16:23 2020 From: report at bugs.python.org (=?utf-8?b?0JLQsNC70LXRgNGW0Lkg0JzQsNGA0YbQuNGI0LjQvQ==?=) Date: Wed, 16 Dec 2020 10:16:23 +0000 Subject: [New-bugs-announce] [issue42656] prime numbers loop issue Message-ID: <1608113783.31.0.191794615025.issue42656@roundup.psfhosted.org> New submission from Валерій Марцишин : I have written a program that should print prime numbers. At the beginning it does, but at some point it stops working properly, even though it is the same loop running. I suspect there is a bug somewhere; if not, I would be thankful if you could tell me what the issue is. valera0639 at gmail.com ---------- components: Windows files: main.py messages: 383135 nosy: paul.moore, steve.dower, tim.golden, valeriymartsyshyn, zach.ware priority: normal severity: normal status: open title: prime numbers loop issue type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49685/main.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 05:47:58 2020 From: report at bugs.python.org (xz_sophos) Date: Wed, 16 Dec 2020 10:47:58 +0000 Subject: [New-bugs-announce] [issue42657] Python 3.9.1 building process could not use local standard library Message-ID: <1608115678.17.0.390785883077.issue42657@roundup.psfhosted.org> New submission from xz_sophos : When trying to build Python 3.9.1 on Mac OSX 10.15, one of the build steps run by make is this command: DYLD_FRAMEWORK_PATH=/Users/jenkins/BaseFolder/savmac-python ./python.exe -E -S -m sysconfig --generate-posix-vars ;\ if test $?
-ne 0 ; then \ echo "generate-posix-vars failed" ; \ rm -f ./pybuilddir.txt ; \ exit 1 ; \ fi It would produce this error message: Could not find platform independent libraries Could not find platform dependent libraries Consider setting $PYTHONHOME to [:] Python path configuration: PYTHONHOME = (not set) PYTHONPATH = (not set) program name = '/Library/Frameworks/Python.framework/Versions/3.7/bin/python3' isolated = 0 environment = 0 user site = 1 import site = 0 sys._base_executable = '/Library/Frameworks/Python.framework/Versions/3.7/bin/python3' sys.base_prefix = '/Users/jenkins/BaseFolder/savmac-python/sophos/tmp/Python.framework/Versions/3.9' sys.base_exec_prefix = '/Users/jenkins/BaseFolder/savmac-python/sophos/tmp/Python.framework/Versions/3.9' sys.platlibdir = 'lib' sys.executable = '/Library/Frameworks/Python.framework/Versions/3.7/bin/python3' sys.prefix = '/Users/jenkins/BaseFolder/savmac-python/sophos/tmp/Python.framework/Versions/3.9' sys.exec_prefix = '/Users/jenkins/BaseFolder/savmac-python/sophos/tmp/Python.framework/Versions/3.9' sys.path = [ '/Users/jenkins/BaseFolder/savmac-python/sophos/tmp/Python.framework/Versions/3.9/lib/python39.zip', '/Users/jenkins/BaseFolder/savmac-python/sophos/tmp/Python.framework/Versions/3.9/lib/python3.9', '/Users/jenkins/BaseFolder/savmac-python/sophos/tmp/Python.framework/Versions/3.9/lib/lib-dynload', ] Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding Python runtime state: core initialized ModuleNotFoundError: No module named 'encodings' Current thread 0x000000011668ddc0 (most recent call first): 15:08:54 stdout: generate-posix-vars failed make: *** [pybuilddir.txt] Error 1 Please note that current working directory is : /Users/jenkins/BaseFolder/savmac-python In the Mac, there is already a python 3.7 installed at: /Library/Frameworks/Python.framework/Versions/3.7/bin/python3 I wonder what's the root cause of the above ./python.exe error, as I don't have this error on a different MacOS 10.15 with an existing installation of "python launcher.app" (version 3.9.1) ---------- components: macOS messages: 383144 nosy: ned.deily, ronaldoussoren, xz_sophos priority: normal severity: normal status: open title: Python 3.9.1 building process could not use local standard library type: compile error versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 07:44:26 2020 From: report at bugs.python.org (sogom) Date: Wed, 16 Dec 2020 12:44:26 +0000 Subject: [New-bugs-announce] [issue42658] os.path.normcase() is inconsistent with Windows file system Message-ID: <1608122666.52.0.137113409131.issue42658@roundup.psfhosted.org> New submission from sogom : On Windows file system, U+03A9 (Greek capital letter Omega) and U+2126 (Ohm sign) are distinguished. In fact, two distinct files "\u03A9.txt" and "\u2126.txt" can exist side by side in the same folder. But os.path.normcase() transforms both U+03A9 and U+2126 to U+03C9 (Greek small letter omega). MSDN reads they use CompareStringOrdinal() to compare NTFS file names: https://docs.microsoft.com/en-us/windows/win32/intl/handling-sorting-in-your-applications#sort-strings-ordinally . This document also says "the function maps case using the operating system *uppercasing* table." 
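The collision is easy to see from Python. On Windows the following prints True for two names that NTFS treats as distinct (on POSIX systems normcase() leaves the strings unchanged, so it prints False there):

import os.path

print(os.path.normcase('\u03a9.txt') == os.path.normcase('\u2126.txt'))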
But I made an experiment and found that, at least in the Basic Multilingual Plane, "lowercase two strings by means of LCMapStringEx() and then wcscmp the two" always gives the same result as "compare the two strings with CompareStringOrdinal()". Though this fact is not explicitly mentioned in MSDN https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-lcmapstringex , the description of LCMAP_LINGUISTIC_CASING on this page implies that the casing rules conform to the file system's unless LCMAP_LINGUISTIC_CASING is used. Therefore, I believe that os.path.normcase() should probably call LCMapStringEx(), with the first argument LOCALE_NAME_INVARIANT and the second argument LCMAP_LOWERCASE. ---------- components: Windows messages: 383163 nosy: paul.moore, sogom, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: os.path.normcase() is inconsistent with Windows file system type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 10:12:58 2020 From: report at bugs.python.org (CrocoDuck) Date: Wed, 16 Dec 2020 15:12:58 +0000 Subject: [New-bugs-announce] [issue42659] Objects of uname_result Class Cannot be Successfully Pickled Message-ID: <1608131578.78.0.823168956156.issue42659@roundup.psfhosted.org> New submission from CrocoDuck : See the code example below. ```python import platform import pickle pack = { 'uname_result': platform.uname() } with open('test.pickle', 'wb') as f: pickle.dump(pack, f, protocol=pickle.HIGHEST_PROTOCOL) with open('test.pickle', 'rb') as f: data = pickle.load(f) ``` It works smoothly on Python 3.8.5. However, on Python 3.9.0, the last line produces this error: ``` Traceback (most recent call last): File "/Users/crocoduck/pickle/3.9.0/make_pickle.py", line 12, in data = pickle.load(f) TypeError: () takes 6 positional arguments but 7 were given ``` The files produced by the code snippet above are attached for reference. This was observed in macOS Catalina 10.15.7. ---------- files: pickles.zip messages: 383174 nosy: CrocoDuck priority: normal severity: normal status: open title: Objects of uname_result Class Cannot be Successfully Pickled type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49688/pickles.zip _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 10:59:28 2020 From: report at bugs.python.org (Paul Ganssle) Date: Wed, 16 Dec 2020 15:59:28 +0000 Subject: [New-bugs-announce] [issue42660] _zoneinfo.c incorrectly checks bounds of `day` variable in calenderrule_new Message-ID: <1608134368.02.0.941471966425.issue42660@roundup.psfhosted.org> New submission from Paul Ganssle : This is a code style issue: in https://github.com/python/cpython/pull/23614, a regression was deliberately introduced to satisfy an overzealous compiler. The `day` variable has logical bounds `0 <= day <= 6`. In the original code, both sides of this boundary condition were explicitly checked (since this logically documents the bounds of the variable). Some compilers complain about checking `day < 0`, because `day` is an unsigned type. It is not an immutable fact that `day` will always be an unsigned type, and implicitly relying on this fact makes the code both less readable and more fragile.
This was changed over my objections and despite the fact that I had a less fragile solution available that also satisfied the overzealous compiler. In the short term, my preferred solution would be to add a static assertion that `day` is an unsigned type; this does not have to work on every platform, it simply needs to serve as a notification to make the code less fragile and to document our assumptions to both readers and the compiler. In the long term, I think we need a way to solve the problem that it is apparently not possible to disable any compiler warnings even if they don't apply to the situation! ---------- components: Library (Lib) messages: 383180 nosy: p-ganssle priority: normal severity: normal stage: needs patch status: open title: _zoneinfo.c incorrectly checks bounds of `day` variable in calenderrule_new type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 13:11:52 2020 From: report at bugs.python.org (Samit Mohnot) Date: Wed, 16 Dec 2020 18:11:52 +0000 Subject: [New-bugs-announce] [issue42661] Hashlib Bug Message-ID: <1608142312.83.0.59041138794.issue42661@roundup.psfhosted.org> New submission from Samit Mohnot : Complete output (6 lines): Traceback (most recent call last): File "", line 1, in File "C:\Users\Username\AppData\Local\Temp\pip-install-uyjqlzx9\hashlib_973e0ee3f102447498d1d4dca94b7942\setup.py", line 68 print "unknown OS, please update setup.py" ^ SyntaxError: Missing parentheses in call to 'print'. Did you mean print("unknown OS, please update setup.py")? ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ---------- components: Library (Lib) messages: 383198 nosy: samit.mohnot.018 priority: normal severity: normal status: open title: Hashlib Bug type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 16 15:51:41 2020 From: report at bugs.python.org (Paul Bryan) Date: Wed, 16 Dec 2020 20:51:41 +0000 Subject: [New-bugs-announce] [issue42662] Propose: Data model explicit about __annotations__ key ordering. Message-ID: <1608151901.48.0.662364122955.issue42662@roundup.psfhosted.org> New submission from Paul Bryan : Currently the data model documentation does not specify the order of keys in the __annotations__ dictionary. It is currently in the order that arguments or attributes are declared. I propose to make this explicit. Rationale: Having order explicitly specified in the documentation makes it a documented feature that code can depend on. Current code cannot safely rely on this behavior. ---------- assignee: docs at python components: Documentation messages: 383204 nosy: docs at python, pbryan priority: normal severity: normal status: open title: Propose: Data model explicit about __annotations__ key ordering.
versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 16 16:33:53 2020 From: report at bugs.python.org (Paul Ganssle) Date: Wed, 16 Dec 2020 21:33:53 +0000 Subject: [New-bugs-announce] [issue42663] zoneinfo does not support full range of allowed transition hours in fallback string Message-ID: <1608154433.49.0.318883425805.issue42663@roundup.psfhosted.org> New submission from Paul Ganssle : TZif files consist of a list of transitions followed by a POSIX TZ var-style string of a form like "AAA3BBB,M3.2.0/01:30,M11.1.0/02:15:45", which decodes to "AAA (UTC-3) is the standard time and BBB (UTC-2) is the DST time, DST applies starting at 02:15:45 local time on the 1st Sunday in November and ending at 1:30:00 local time on the 2nd Sunday in March". After the last listed transition, the rule specified by the TZ variable applies. POSIX says that the "hours" part of the transition rule must be in the range ±24, but as mentioned in a TODO comment in _zoneinfo.c, RFC 8536 §3.3.1 specifies that the hours part of transition times may range from -167 to 167 (see: https://github.com/python/cpython/blob/master/Modules/_zoneinfo.c#L1844-L1847 ). Currently, zoneinfo does not support the full range of possible transitions, and a TZif file with a 3-digit transition hour would likely fail to parse. This isn't a terribly high priority at the moment, but if the tz project ever releases a TZif file with one of these TZ strings in it, it will suddenly become very critical to fix, so we should probably try to get it fixed before Python 3.9 is EOL, so that all versions of Python with `zoneinfo` can handle this properly.

---------- components: Library (Lib) messages: 383206 nosy: p-ganssle priority: normal severity: normal stage: needs patch status: open title: zoneinfo does not support full range of allowed transition hours in fallback string type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 16 18:45:29 2020 From: report at bugs.python.org (sds) Date: Wed, 16 Dec 2020 23:45:29 +0000 Subject: [New-bugs-announce] [issue42664] strptime(%c) fails to parse the output of strftime(%c) Message-ID: <1608162329.64.0.12903130366.issue42664@roundup.psfhosted.org> New submission from sds :

>>> import datetime, locale
>>> locale.getlocale()
('en_US', 'UTF-8')
>>> datetime.datetime.strptime("%c",datetime.datetime.now().strftime("%c"))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.8/_strptime.py", line 568, in _strptime_datetime
    tt, fraction, gmtoff_fraction = _strptime(data_string, format)
  File "/usr/lib/python3.8/_strptime.py", line 349, in _strptime
    raise ValueError("time data %r does not match format %r" %
ValueError: time data '%c' does not match format 'Wed Dec 16 18:44:27 2020'

---------- components: Library (Lib) messages: 383217 nosy: sam-s priority: normal severity: normal status: open title: strptime(%c) fails to parse the output of strftime(%c) versions: Python 3.8 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 17 03:52:53 2020 From: report at bugs.python.org (Ganesh Kathiresan) Date: Thu, 17 Dec 2020 08:52:53 +0000 Subject: [New-bugs-announce] [issue42665] Should PyLong_AsLongAndOverflow raise exception on overflow?
Message-ID: <1608195173.24.0.881331564131.issue42665@roundup.psfhosted.org> Change by Ganesh Kathiresan : ---------- components: C API nosy: ganesh3597 priority: normal severity: normal status: open title: Should PyLong_AsLongAndOverflow raise exception on overflow? type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 17 04:06:46 2020 From: report at bugs.python.org (Thomas Viehmann) Date: Thu, 17 Dec 2020 09:06:46 +0000 Subject: [New-bugs-announce] [issue42666] getting a class source regresses in Python 3.9 Message-ID: <1608196006.88.0.643559446867.issue42666@roundup.psfhosted.org> New submission from Thomas Viehmann : Getting a class source regresses from Python 3.9 onwards. The following worked in Python 3.8, but no longer works in 3.9.1 and 3.10.0a2 (save as foo.py):

import inspect

class Foo:
    def spam(self):
        global Bar
        class Bar:
            pass
        print(inspect.getsource(Bar))

Foo().spam()

It seems that getsource is very brittle for classes.

---------- components: Library (Lib) messages: 383223 nosy: t-vi priority: normal severity: normal status: open title: getting a class source regresses in Python 3.9 type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 17 06:05:17 2020 From: report at bugs.python.org (gardener.willy) Date: Thu, 17 Dec 2020 11:05:17 +0000 Subject: [New-bugs-announce] [issue42667] shelve module is not thread-safe during accessing different databases from different threads Message-ID: <1608203117.3.0.00404135488038.issue42667@roundup.psfhosted.org> New submission from gardener.willy : The shelve module imports dbm while opening a database. The dbm module has a global dictionary "_modules", and this dictionary is modified during the database opening operation. When different threads simultaneously try to open different databases, unexpected behavior occurs. In my case I got such a message:

  with shelve.open('some_file') as f:
  File "/usr/local/lib/python3.6/shelve.py", line 243, in open
    return DbfilenameShelf(filename, flag, protocol, writeback)
  File "/usr/local/lib/python3.6/shelve.py", line 227, in __init__
    Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
  File "/usr/local/lib/python3.6/dbm/__init__.py", line 94, in open
    return mod.open(file, flag, mode)
AttributeError: module 'dbm.ndbm' has no attribute 'open'

Behavior is the same on Python 3.6 and 3.7. The error is spontaneous.

---------- components: Library (Lib) messages: 383229 nosy: gardener.willy priority: normal severity: normal status: open title: shelve module is not thread-safe during accessing different databases from different threads type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 17 07:00:52 2020 From: report at bugs.python.org (Martin Altmayer) Date: Thu, 17 Dec 2020 12:00:52 +0000 Subject: [New-bugs-announce] [issue42668] re.escape does not correctly escape newlines Message-ID: <1608206452.14.0.809229064925.issue42668@roundup.psfhosted.org> New submission from Martin Altmayer : re.escape('\n') returns '\\\n', i.e. a string consisting of a backslash and a newline. I believe it should return '\\n', i.e. a backslash and an 'n'. If the escape result still contains a verbatim newline, why escape this character at all?
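For concreteness, the current behaviour can be checked interactively (a quick illustration using only the standard library; not part of any proposed fix):

import re

print(repr(re.escape('\n')))   # prints '\\\n' : a backslash followed by a real newline
print(repr(re.escape('\t')))   # prints '\\\t' : likewise a backslash followed by a real tab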
Note that Python's regular expressions engine allows newlines, so re.match(re.escape('\n'), '\n') gives a match. Thus, while this looks like an undesired behavior, it is not functionally broken. The same problem applies to some other characters: \t\r\v\f

---------- components: Regular Expressions files: test.py messages: 383237 nosy: MartinAltmayer, ezio.melotti, mrabarnett priority: normal severity: normal status: open title: re.escape does not correctly escape newlines type: behavior versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49689/test.py _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 17 08:27:57 2020 From: report at bugs.python.org (Colin Watson) Date: Thu, 17 Dec 2020 13:27:57 +0000 Subject: [New-bugs-announce] [issue42669] "except" documentation still suggests nested tuples are allowed Message-ID: <1608211677.22.0.0507603390583.issue42669@roundup.psfhosted.org> New submission from Colin Watson : In Python 2, it was possible to use `except` with a nested tuple, and occasionally natural. For example, `zope.formlib.interfaces.InputErrors` is a tuple of several exception classes, and one might reasonably think to do something like this (this is real code used in several places in https://git.launchpad.net/launchpad):

    try:
        self.getInputValue()
        return True
    except (InputErrors, SomethingElse):
        return False

As of Python 3.0, this raises "TypeError: catching classes that do not inherit from BaseException is not allowed" instead: one must either break it up into multiple "except" clauses or flatten the tuple. The change was mentioned in https://bugs.python.org/issue2380 and seems to have been intentional: I'm not requesting that the previous behaviour be restored, since it's a fairly rare porting issue and by now well-established in Python 3. However, the relevant sentences of documentation in https://docs.python.org/2/reference/compound_stmts.html#try and https://docs.python.org/3/reference/compound_stmts.html#the-try-statement are identical aside from punctuation, and they both read: For an except clause with an expression, that expression is evaluated, and the clause matches the exception if the resulting object is "compatible" with the exception. An object is compatible with an exception if it is the class or a base class of the exception object or a tuple containing an item compatible with the exception. I think this admits a recursive reading: I certainly read it that way. It should make it clear that nested tuples are not allowed.

---------- assignee: docs at python components: Documentation messages: 383243 nosy: cjwatson, docs at python priority: normal severity: normal status: open title: "except" documentation still suggests nested tuples are allowed versions: Python 3.10 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 17 12:49:53 2020 From: report at bugs.python.org (Scott Noyes) Date: Thu, 17 Dec 2020 17:49:53 +0000 Subject: [New-bugs-announce] [issue42670] Missing word in itertools.product Message-ID: <1608227393.01.0.305034051579.issue42670@roundup.psfhosted.org> New submission from Scott Noyes :

-Accordingly, it only useful with finite inputs.
+Accordingly, it is only useful with finite inputs.
---------- assignee: docs at python components: Documentation messages: 383257 nosy: docs at python, snoyes priority: normal severity: normal status: open title: Missing word in itertools.product type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 17 16:13:20 2020 From: report at bugs.python.org (STINNER Victor) Date: Thu, 17 Dec 2020 21:13:20 +0000 Subject: [New-bugs-announce] [issue42671] Make the Python finalization more deterministic Message-ID: <1608239600.62.0.0456277879965.issue42671@roundup.psfhosted.org> New submission from STINNER Victor : At exit, Python calls Py_Finalize() which tries to clear every single Python object. The order in which Python objects are cleared is not fully deterministic. Py_Finalize() uses a heuristic to attempt to clear the modules in sys.modules in the "best" order. The current code creates a weak reference to a module, sets sys.modules[name] to None, and then clears the module's attributes if and only if the module object was not destroyed (if the weak reference still points to the module). The problem is that even if a module object is destroyed, the module dictionary can remain alive thanks to various kinds of strong references to it.

Worst case example:
---
class VerboseDel:
    def __del__(self):
        print("Goodbye Cruel World")

obj = VerboseDel()

def func():
    pass

import os
os.register_at_fork(after_in_child=func)
del os
del func
print("exit")
---
Output:
---
$ python3.9 script.py
exit
---
=> The VerboseDel object is never destroyed :-( BUG!

Explanation:
* os.register_at_fork(after_in_child=func) stores func in PyInterpreterState.after_forkers_child -> func() is kept alive until interpreter_clear() calls Py_CLEAR(interp->after_forkers_child);
* func() has a reference to the module dictionary

I'm not sure why the VerboseDel object is not destroyed. I propose to rewrite finalize_modules() to clear modules in a more deterministic order:
* start by clearing the __main__ module variables
* then iterate on reversed(sys.modules.values()) and clear the module variables
* module attributes are cleared by _PyModule_ClearDict(): iterate on reversed(module.__dict__) and set dict values to None

Drawback: it is a backward incompatible change. Code which worked by luck previously no longer works. I'm talking about applications which rely on __del__() methods being called in an exact order and expect Python to be in a specific state.

Example:
---
class VerboseDel:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print(self.name)

a = VerboseDel("a")
b = VerboseDel("b")
c = VerboseDel("c")
---
Output:
---
c
b
a
---
=> Module attributes are deleted in the reverse order of their definition: the most recent object is deleted first, the oldest is deleted last.

Example 2 with 3 modules (4 files):
---
$ cat a.py
class VerboseDel:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print(self.name)
a = VerboseDel("a")

$ cat b.py
class VerboseDel:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print(self.name)
b = VerboseDel("b")

$ cat c.py
class VerboseDel:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        print(self.name)
c = VerboseDel("c")

$ cat z.py
import a
import b
import c
---
Output:
---
$ ./python z.py
c
b
a
---
=> Modules are deleted from the most recently imported (import c) to the least recently imported module (import a).
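A rough pure-Python sketch of the proposed ordering (only an illustration of the idea; the real change would live in C, in finalize_modules() and _PyModule_ClearDict(), and skipping dunder names here is just a simplification for the sketch):
---
import sys

def clear_module_dict(d):
    # Set values to None from the most recently defined name to the oldest.
    for key in reversed(list(d)):
        if not (key.startswith('__') and key.endswith('__')):
            d[key] = None

def finalize_modules_sketch():
    # Clear __main__ first, then the other modules in reverse import order.
    main = sys.modules.get('__main__')
    if main is not None:
        clear_module_dict(main.__dict__)
    for module in reversed(list(sys.modules.values())):
        if module is not main and hasattr(module, '__dict__'):
            clear_module_dict(module.__dict__)
---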
---------- components: Interpreter Core messages: 383265 nosy: vstinner priority: normal severity: normal status: open title: Make the Python finalization more deterministic versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 17 16:16:50 2020 From: report at bugs.python.org (Ivo Shipkaliev) Date: Thu, 17 Dec 2020 21:16:50 +0000 Subject: [New-bugs-announce] [issue42672] tkinter/__init__.py raises a NameError if NoDefaultRoot() Message-ID: <1608239810.27.0.625507020812.issue42672@roundup.psfhosted.org> Change by Ivo Shipkaliev : ---------- components: Tkinter files: default_root.diff keywords: patch nosy: shippo_ priority: normal severity: normal status: open title: tkinter/__init__.py raises a NameError if NoDefaultRoot() type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file49690/default_root.diff _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 04:35:27 2020 From: report at bugs.python.org (Jurjen N.E. Bos) Date: Fri, 18 Dec 2020 09:35:27 +0000 Subject: [New-bugs-announce] [issue42673] Optimize round_size for rehashing Message-ID: <1608284127.41.0.53520654621.issue42673@roundup.psfhosted.org> New submission from Jurjen N.E. Bos : There's a trivial optimization in the round_size in hashtable.c: a loop is used to compute the lowest power of two >= s, while this can be done in one step with bit_length. I am making a pull request for this. ---------- components: Interpreter Core messages: 383291 nosy: jneb priority: normal severity: normal status: open title: Optimize round_size for rehashing type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 06:17:49 2020 From: report at bugs.python.org (Chris Gahagan) Date: Fri, 18 Dec 2020 11:17:49 +0000 Subject: [New-bugs-announce] [issue42674] __init_subclass__ only called for first subclass when class has multiple inheritance Message-ID: <1608290269.17.0.332705663862.issue42674@roundup.psfhosted.org> Change by Chris Gahagan : ---------- files: SubClass.py nosy: ccgahagan priority: normal severity: normal status: open title: __init_subclass__ only called for first subclass when class has multiple inheritance type: behavior versions: Python 3.8 Added file: https://bugs.python.org/file49691/SubClass.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 11:38:34 2020 From: report at bugs.python.org (Ken Jin) Date: Fri, 18 Dec 2020 16:38:34 +0000 Subject: [New-bugs-announce] [issue42675] Document changes made in bpo-42195 Message-ID: <1608309514.56.0.184586625269.issue42675@roundup.psfhosted.org> New submission from Ken Jin : A whatsnew is probably needed as this change causes backwards incompatibility in some code working with Python 3.9.0 and 3.9.1. I think the patch for Python 3.9.2 should mention that a DeprecationWarning is emitted for some invalid use cases, which will eventually become a TypeError. 
---------- assignee: docs at python components: Documentation messages: 383304 nosy: docs at python, gvanrossum, kj priority: normal severity: normal status: open title: Document changes made in bpo-42195 versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 13:09:32 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 18 Dec 2020 18:09:32 +0000 Subject: [New-bugs-announce] [issue42676] zoneinfo uses locale depending functions for parsing Message-ID: <1608314972.2.0.724558479843.issue42676@roundup.psfhosted.org> New submission from Serhiy Storchaka : zoneinfo uses locale depending functions isalpha(), isdigit(), isalnum() to parse data. It may be correct when parse the TZ environment variable (although they do not work with multibytes locale encodings like UTF-8), I think that parsing the content of data files should not rely on current locale. Later the parsed data is decoded implying UTF-8 (for abbr) or ASCII (for numbers). ---------- components: Library (Lib) messages: 383313 nosy: p-ganssle, serhiy.storchaka priority: normal severity: normal status: open title: zoneinfo uses locale depending functions for parsing type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 14:15:45 2020 From: report at bugs.python.org (Thomas Nabelek) Date: Fri, 18 Dec 2020 19:15:45 +0000 Subject: [New-bugs-announce] [issue42677] Support comments in argparse fromfile_prefix_chars files Message-ID: <1608318945.24.0.0639471792025.issue42677@roundup.psfhosted.org> New submission from Thomas Nabelek : For input argument files, specified with the fromfile_prefix_chars argument to argparse.ArgumentParser, argparse should ignore lines beginning with '#' so that comments can be used in those files. 
---------- components: Library (Lib) messages: 383321 nosy: nabelekt priority: normal severity: normal status: open title: Support comments in argparse fromfile_prefix_chars files type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 15:07:22 2020 From: report at bugs.python.org (Ethan Furman) Date: Fri, 18 Dec 2020 20:07:22 +0000 Subject: [New-bugs-announce] [issue42678] [Enum] _sunder_ methods only looked up in the last Enum class in the mro Message-ID: <1608322042.6.0.824129277027.issue42678@roundup.psfhosted.org> Change by Ethan Furman : ---------- assignee: ethan.furman components: Library (Lib) nosy: ethan.furman priority: normal severity: normal stage: needs patch status: open title: [Enum] _sunder_ methods only looked up in the last Enum class in the mro type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 16:36:55 2020 From: report at bugs.python.org (Olvin) Date: Fri, 18 Dec 2020 21:36:55 +0000 Subject: [New-bugs-announce] [issue42679] Minor improvement in datetime.timestamp() docs Message-ID: <1608327415.79.0.704757539161.issue42679@roundup.psfhosted.org> New submission from Olvin : Answering question on StackOverflow I've found next example in docs of datetime.timestamp() ( https://docs.python.org/3/library/datetime.html#datetime.datetime.timestamp ) which returns UTC timestamp: timestamp = (dt - datetime(1970, 1, 1)) / timedelta(seconds=1) While it works I think there's more explicit way using timedelta.total_seconds() : timestamp = (dt - datetime(1970, 1, 1)).total_seconds() In same article few lines above there's example using total_seconds() so I think it will be good to use same method in both examples. ---------- assignee: docs at python components: Documentation messages: 383328 nosy: docs at python, olvinroght priority: normal severity: normal status: open title: Minor improvement in datetime.timestamp() docs type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 19:27:18 2020 From: report at bugs.python.org (Martin Chase) Date: Sat, 19 Dec 2020 00:27:18 +0000 Subject: [New-bugs-announce] [issue42680] unicode identifiers not accessible or assignable through globals() Message-ID: <1608337638.31.0.282266859185.issue42680@roundup.psfhosted.org> New submission from Martin Chase : This behavior is best described by the code below: ``` >>> meow = 1 >>> 'meow' in globals() True >>> ?meow = 1e-6 >>> '?meow' in globals() False >>> globals()['woof'] = 1 >>> woof 1 >>> globals()['?woof'] = 1e-6 >>> ?woof Traceback (most recent call last): File "", line 1, in NameError: name '?woof' is not defined >>> import sys >>> sys.getdefaultencoding() 'utf-8' >>> [(k, bytes(k, 'utf-8')) for k in globals()] ..., ('?meow', b'\xce\xbcmeow'), ('?woof', b'\xc2\xb5woof')] >>> '?'.encode('utf-8') b'\xc2\xb5' ``` Testing was done on linux and windows, variously using 3.6.12, 3.7.6, 3.8.6 and 3.9.0+. 
---------- components: Unicode messages: 383336 nosy: ezio.melotti, outofculture, vstinner priority: normal severity: normal status: open title: unicode identifiers not accessible or assignable through globals() type: behavior versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 18 19:58:57 2020 From: report at bugs.python.org (Robert T McQuaid) Date: Sat, 19 Dec 2020 00:58:57 +0000 Subject: [New-bugs-announce] [issue42681] mistake in curses documentation Message-ID: <1608339537.48.0.496910027234.issue42681@roundup.psfhosted.org> New submission from Robert T McQuaid : The description of color_pair starts with curses.color_pair(color_number) It should be curses.color_pair(pair_number) ---------- assignee: docs at python components: Documentation messages: 383344 nosy: arbor, docs at python priority: normal severity: normal status: open title: mistake in curses documentation versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 19 03:09:24 2020 From: report at bugs.python.org (lilydjwg) Date: Sat, 19 Dec 2020 08:09:24 +0000 Subject: [New-bugs-announce] [issue42682] awaiting a wrapped asyncio.Task multiple times gives long, repeative tracebacks Message-ID: <1608365364.67.0.620283086401.issue42682@roundup.psfhosted.org> New submission from lilydjwg : import asyncio async def crash(key): raise Exception('crash!') async def wait(fu): await fu async def main(): crasher = asyncio.create_task(crash(())) fs = [wait(crasher) for _ in range(10)] for fu in asyncio.as_completed(fs): try: await fu except Exception: import traceback traceback.print_exc() if __name__ == '__main__': asyncio.run(main()) This code will give a very long traceback 10 times. I expect it to be short. I'm caching the result of an async function like this: async def get( self, key: Hashable, func: Callable[[Hashable], Coroutine[Any, Any, Any]], ) -> Any: async with self.lock: cached = self.cache.get(key) if cached is None: coro = func(key) fu = asyncio.create_task(coro) self.cache[key] = fu if asyncio.isfuture(cached): # pending return await cached # type: ignore elif cached is not None: # cached return cached else: # not cached r = await fu self.cache[key] = r return r It works fine, except that when there is an exception the traceback is very long. ---------- components: asyncio messages: 383364 nosy: asvetlov, lilydjwg, yselivanov priority: normal severity: normal status: open title: awaiting a wrapped asyncio.Task multiple times gives long, repeative tracebacks type: enhancement versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 19 06:26:28 2020 From: report at bugs.python.org (Paul Moore) Date: Sat, 19 Dec 2020 11:26:28 +0000 Subject: [New-bugs-announce] [issue42683] asyncio should handle keyboard interrupt while the event loop is running Message-ID: <1608377188.51.0.0510456786918.issue42683@roundup.psfhosted.org> New submission from Paul Moore : See the comment on Discourse here: https://discuss.python.org/t/feeding-data-generated-via-asyncio-into-a-synchronous-main-loop/5436/28 (and the thread leading up to this comment). 
In the thread, @njs states that if the user hits Ctrl-C while the asyncio event loop is running, it's possible for internal asyncio data structures to end up in an inconsistent state. If that's the case, then this would make asyncio-based code unreliable in real-world use. I don't have a way to reproduce this - from the Discourse thread, I had assumed that ctrl-C was safe to use on an asyncio-based program, but was told otherwise, and I can't find anything definitive either way. At a minimum, the asyncio documentation should confirm that it is exception-safe (specifically against Ctrl-C, but in general I'd assume that asyncio is safe in the face of uncaught exceptions in user-written async code). ---------- components: asyncio messages: 383370 nosy: asvetlov, paul.moore, yselivanov priority: normal severity: normal status: open title: asyncio should handle keyboard interrupt while the event loop is running type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 19 07:51:17 2020 From: report at bugs.python.org (Antony Lee) Date: Sat, 19 Dec 2020 12:51:17 +0000 Subject: [New-bugs-announce] [issue42684] Improvements to documentation for PyUnicode_FS{Converter, Decoder} Message-ID: <1608382277.1.0.605007079577.issue42684@roundup.psfhosted.org> New submission from Antony Lee : The docs for PyUnicode_FSConverter and PyUnicode_FSDecoder could be improved on two points: - The functions also reject str/bytes that contain null bytes (one can easily verify that there's a specific check for them in the C implementations). Currently the docs only say that the converters use PyUnicode_EncodeFSDefault/PyUnicode_DecodeFSDefaultAndSize, but those don't check for null bytes. - The functions only ever return 1 or 0 (indicating success or failures), which means that one can just use e.g. `if (!PyUnicode_FSConverter(foo, &bar)) { goto error; } ...` (this pattern occurs repeatedly in the CPython codebase). In theory, given that the functions are only documented as being "O&"-converters, they could also be returning Py_CLEANUP_SUPPORTED in which case they'd need to be called a second time on failure to release allocated memory. ---------- assignee: docs at python components: C API, Documentation messages: 383378 nosy: Antony.Lee, docs at python priority: normal severity: normal status: open title: Improvements to documentation for PyUnicode_FS{Converter,Decoder} _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 19 10:29:47 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sat, 19 Dec 2020 15:29:47 +0000 Subject: [New-bugs-announce] [issue42685] Improve placing of simple query windows. Message-ID: <1608391787.71.0.662938082611.issue42685@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently simple query windows in Tkinter (such as tkinter.simpledialog.askinteger()) are placed at position 50 pixels right and 50 pixels below of the top left corner of the parent widget (even if it is not visible). If the parent is not specified, the initial position was determined by a windows manager before issue1538878, after issue1538878 it was placed at position 50 pixels right and 50 pixels below the default root widget (even if it is not visible). Issue42630 restored the pre-issue1538878 behavior, but it is still has many quirks. 
The proposed patch makes the placing algorithm similar to native Tk dialogs. * If parent is specified and mapped, the query widget is centered at the center of parent. Its position and size can be corrected so that it fits in the virtual root window. * Otherwise it is centered at the center of the screen. ---------- components: Tkinter messages: 383382 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Improve placing of simple query windows. type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 19 10:32:41 2020 From: report at bugs.python.org (Big Stone) Date: Sat, 19 Dec 2020 15:32:41 +0000 Subject: [New-bugs-announce] [issue42686] include built-in Math functions in SQLite to 3.35.0 of march 2021 Message-ID: <1608391961.58.0.717857253057.issue42686@roundup.psfhosted.org> New submission from Big Stone : SQlite-3.35.0 of mach 2021 will have Built-In Math Functions option. https://sqlite.org/draft/releaselog/3_35_0.html Would it be possible to have it activated in the following versions update of Python ? It's pretty usefull for some basic statistics ---------- messages: 383383 nosy: Big Stone priority: normal severity: normal status: open title: include built-in Math functions in SQLite to 3.35.0 of march 2021 type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sat Dec 19 10:46:27 2020 From: report at bugs.python.org (Erik Soma) Date: Sat, 19 Dec 2020 15:46:27 +0000 Subject: [New-bugs-announce] [issue42687] tokenize module does not recognize Barry as FLUFL Message-ID: <1608392787.93.0.861284496438.issue42687@roundup.psfhosted.org> New submission from Erik Soma : '<>' is not recognized by the tokenize module as a single token, instead it is two tokens. ``` $ python -c "import tokenize; import io; import pprint; pprint.pprint(list(tokenize.tokenize(io.BytesIO(b'<>').readline)))" [TokenInfo(type=62 (ENCODING), string='utf-8', start=(0, 0), end=(0, 0), line=''), TokenInfo(type=54 (OP), string='<', start=(1, 0), end=(1, 1), line='<>'), TokenInfo(type=54 (OP), string='>', start=(1, 1), end=(1, 2), line='<>'), TokenInfo(type=4 (NEWLINE), string='', start=(1, 2), end=(1, 3), line=''), TokenInfo(type=0 (ENDMARKER), string='', start=(2, 0), end=(2, 0), line='')] ``` I would expect: ``` [TokenInfo(type=62 (ENCODING), string='utf-8', start=(0, 0), end=(0, 0), line=''), TokenInfo(type=54 (OP), string='<>', start=(1, 0), end=(1, 2), line='<>'), TokenInfo(type=4 (NEWLINE), string='', start=(1, 2), end=(1, 3), line=''), TokenInfo(type=0 (ENDMARKER), string='', start=(2, 0), end=(2, 0), line='')] ``` This is the behavior of the CPython tokenizer which the tokenizer module tries "to match the working of". 
---------- messages: 383384 nosy: esoma priority: normal severity: normal status: open title: tokenize module does not recognize Barry as FLUFL versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 00:52:54 2020 From: report at bugs.python.org (Eli Rykoff) Date: Sun, 20 Dec 2020 05:52:54 +0000 Subject: [New-bugs-announce] [issue42688] ctypes memory error on Apple Silicon with external libffi Message-ID: <1608443574.61.0.0166938419217.issue42688@roundup.psfhosted.org> New submission from Eli Rykoff : Building python 3.9.1 on Apple Silicon compiled against a external (non-os-provided) libffi makes the following code return a MemoryError: Test: import ctypes @ctypes.CFUNCTYPE(None, ctypes.c_int, ctypes.c_char_p) def error_handler(fif, message): pass I have tracked this down to the following code in malloc_closure.c: #if USING_APPLE_OS_LIBFFI && HAVE_FFI_CLOSURE_ALLOC if (__builtin_available(macos 10.15, ios 13, watchos 6, tvos 13, *)) { ffi_closure_free(p); return; } #endif and #if USING_APPLE_OS_LIBFFI && HAVE_FFI_CLOSURE_ALLOC if (__builtin_available(macos 10.15, ios 13, watchos 6, tvos 13, *)) { return ffi_closure_alloc(size, codeloc); } #endif In fact, while the __builtin_available() call should be guarded by USING_APPLE_OS_LIBFFI, the call to ffi_closure_alloc() should only be guarded by HAVE_FFI_CLOSURE_ALLOC, as this is set as the result of an independent check in setup.py and should be used with external libffi when supported. The following code does work instead: #if HAVE_FFI_CLOSURE_ALLOC #if USING_APPLE_OS_LIBFFI if (__builtin_available(macos 10.15, ios 13, watchos 6, tvos 13, *)) { #endif ffi_closure_free(p); return; #if USING_APPLE_OS_LIBFFI } #endif #endif #if HAVE_FFI_CLOSURE_ALLOC #if USING_APPLE_OS_LIBFFI if (__builtin_available(macos 10.15, ios 13, watchos 6, tvos 13, *)) { #endif return ffi_closure_alloc(size, codeloc); return; #if USING_APPLE_OS_LIBFFI } #endif #endif ---------- components: ctypes messages: 383419 nosy: erykoff priority: normal severity: normal status: open title: ctypes memory error on Apple Silicon with external libffi type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 02:31:59 2020 From: report at bugs.python.org (Rafael) Date: Sun, 20 Dec 2020 07:31:59 +0000 Subject: [New-bugs-announce] [issue42689] Installation Message-ID: <1608449519.45.0.28337310335.issue42689@roundup.psfhosted.org> New submission from Rafael : I'm very new to python. I went to download 3.9.1 and it said "There was an error creating a temporary file that is needed to complete this installation." 
I was wondering how I can fix this problem so I can start learning ---------- components: Windows messages: 383422 nosy: Gimmesomo, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Installation type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 03:16:56 2020 From: report at bugs.python.org (JasperTecHK) Date: Sun, 20 Dec 2020 08:16:56 +0000 Subject: [New-bugs-announce] [issue42690] Aiohttp fails when using intel ax200 wireless card Message-ID: <1608452216.56.0.63748155919.issue42690@roundup.psfhosted.org> New submission from JasperTecHK : Not sure what the protocol is for reporting the bug, but based on my, albeit limited, testing, on two separate machines running the same model of wireless card(but different cards), the Intel AX200 wireless card, aiohttp fails to finish scraping from webpages. The issue does not occur when both machines are using ethernet. The issue was tested on a AWS instance as well to try and eliminate the issue as well. Further information is here, but I only got around to checking on a second machine a few minutes ago. https://www.reddit.com/r/learnpython/comments/jiu452/asyncio_erroring_on_one_machine/ Things done to eliminate variables: Tested on different network access methods (wlan0 vs eth0) Tested on two different machines Tested on two different OSes (Win 10, Ubuntu 20.04) Interestingly, I did test with three different networks. First one was AWS, which worked as expected. The two others exhibited strange behavior. Network A is a cable modem network from xfinity. We use our own router. When connected via wifi on either machine and run, the error as shown in my reddit thread above occurs, but if I run an ethernet line to it(though through a wifi-to-ethernet bridge on a rpi4 due to my room being unable to run a direct line of ethernet there, which is connected to the exact same wifi source as the two machines mentioned above), it will complete without errors. Network B is my mobile tethering. Windows being Windows, it has a hissy fit and doesn't want to play nice. So I was only able to run a test on my Ubuntu machine. This time, I get an error that stated the server disconnected, but if I add in a wg vpn through a EC2 server, it completes, albeit slowly, but that's understandable. Tldr: Network A was tested two ways. wifi ap > Win10/ubuntu = fail. Error message is what is shown in the reddit thread. wifi ap > rpi wlan-eth bridge > win10/ubuntu = pass. Note: The vpn mentioned in Network B was also tested on Network A but did not make a difference. Network B mobile tethering > ubuntu = fail. Error is a Server Disconnect. Full error is attached. mobile tethering > ubuntu + wg to EC2 = pass. I understand that since this doesn't appear to be a widespread issue, not a lot of help would be available for this. But it would be nice to at least be able to figure out what the problem is. 
---------- components: asyncio files: tetherror.txt messages: 383425 nosy: JasperTecHK, asvetlov, yselivanov priority: normal severity: normal status: open title: Aiohttp fails when using intel ax200 wireless card versions: Python 3.8 Added file: https://bugs.python.org/file49693/tetherror.txt _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 13:27:12 2020 From: report at bugs.python.org (=?utf-8?b?0J/RkdGC0YAg0KHQuNGA0LXQvdC60L4=?=) Date: Sun, 20 Dec 2020 18:27:12 +0000 Subject: [New-bugs-announce] [issue42691] macOS 11.1 + Homebrew 2.6.2 + python 3.9.1 = idle crash Message-ID: <1608488832.09.0.347546053982.issue42691@roundup.psfhosted.org> New submission from ???? ??????? : Process: Python [3355] Path: /usr/local/Cellar/python at 3.9/3.9.1_1/IDLE 3.app/Contents/MacOS/Python Identifier: org.python.IDLE Version: 3.9.1 (3.9.1) Code Type: X86-64 (Native) Parent Process: ??? [1] Responsible: Python [3355] User ID: 502 Date/Time: 2020-12-20 21:22:43.045 +0300 OS Version: macOS 11.1 (20C69) Report Version: 12 Bridge OS Version: 5.1 (18P3030) Anonymous UUID: 0607359F-0422-4E24-D0EC-3FDFB0D5A17C Time Awake Since Boot: 940 seconds System Integrity Protection: enabled Crashed Thread: 0 Dispatch queue: com.apple.main-thread Exception Type: EXC_CRASH (SIGABRT) Exception Codes: 0x0000000000000000, 0x0000000000000000 Exception Note: EXC_CORPSE_NOTIFY Application Specific Information: abort() called Thread 0 Crashed:: Dispatch queue: com.apple.main-thread 0 libsystem_kernel.dylib 0x00007fff20307462 __pthread_kill + 10 1 libsystem_pthread.dylib 0x00007fff20335610 pthread_kill + 263 2 libsystem_c.dylib 0x00007fff20288720 abort + 120 3 Tcl 0x00007fff6fe06b55 Tcl_PanicVA + 398 4 Tcl 0x00007fff6fe06bd5 Tcl_Panic + 128 5 Tk 0x00007fff6ff06ad5 TkpInit + 385 6 Tk 0x00007fff6fe86788 0x7fff6fe55000 + 202632 7 _tkinter.cpython-39-darwin.so 0x0000000101dbe701 Tcl_AppInit + 84 8 _tkinter.cpython-39-darwin.so 0x0000000101db99fa _tkinter_create + 975 9 org.python.python 0x00000001018b11bb cfunction_vectorcall_FASTCALL + 203 10 org.python.python 0x0000000101928ff3 call_function + 403 11 org.python.python 0x0000000101926184 _PyEval_EvalFrameDefault + 27452 12 org.python.python 0x0000000101929b3b _PyEval_EvalCode + 1998 13 org.python.python 0x00000001018815d8 _PyFunction_Vectorcall + 248 14 org.python.python 0x0000000101880e28 _PyObject_FastCallDictTstate + 149 15 org.python.python 0x0000000101881891 _PyObject_Call_Prepend + 139 16 org.python.python 0x00000001018c9e96 slot_tp_init + 87 17 org.python.python 0x00000001018c36f7 type_call + 150 18 org.python.python 0x0000000101880fa3 _PyObject_MakeTpCall + 266 19 org.python.python 0x0000000101929027 call_function + 455 20 org.python.python 0x00000001019262ee _PyEval_EvalFrameDefault + 27814 21 org.python.python 0x0000000101929b3b _PyEval_EvalCode + 1998 22 org.python.python 0x00000001018815d8 _PyFunction_Vectorcall + 248 23 org.python.python 0x0000000101928ff3 call_function + 403 24 org.python.python 0x0000000101926230 _PyEval_EvalFrameDefault + 27624 25 org.python.python 0x0000000101929b3b _PyEval_EvalCode + 1998 26 org.python.python 0x000000010191f56d PyEval_EvalCode + 79 27 org.python.python 0x000000010195a5b5 run_eval_code_obj + 110 28 org.python.python 0x00000001019599ad run_mod + 103 29 org.python.python 0x0000000101958871 PyRun_FileExFlags + 241 30 org.python.python 0x0000000101957e61 PyRun_SimpleFileExFlags + 271 31 org.python.python 0x000000010196fd88 Py_RunMain + 1839 32 
org.python.python 0x00000001019700c1 pymain_main + 306 33 org.python.python 0x000000010197010f Py_BytesMain + 42 34 libdyld.dylib 0x00007fff20350621 start + 1 Thread 1: 0 libsystem_pthread.dylib 0x00007fff20331458 start_wqthread + 0 Thread 2: 0 libsystem_pthread.dylib 0x00007fff20331458 start_wqthread + 0 Thread 3: 0 libsystem_pthread.dylib 0x00007fff20331458 start_wqthread + 0 Thread 4: 0 libsystem_pthread.dylib 0x00007fff20331458 start_wqthread + 0 Thread 5: 0 libsystem_pthread.dylib 0x00007fff20331458 start_wqthread + 0 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x0000000000000000 rbx: 0x000000010a905e00 rcx: 0x00007ffeee3c1068 rdx: 0x0000000000000000 rdi: 0x0000000000000307 rsi: 0x0000000000000006 rbp: 0x00007ffeee3c1090 rsp: 0x00007ffeee3c1068 r8: 0x00000000000130a8 r9: 0x00007fff889950e8 r10: 0x000000010a905e00 r11: 0x0000000000000246 r12: 0x0000000000000307 r13: 0x00007fb37d04b790 r14: 0x0000000000000006 r15: 0x0000000000000016 rip: 0x00007fff20307462 rfl: 0x0000000000000246 cr2: 0x00007fff8d152f70 Logical CPU: 0 Error Code: 0x02000148 Trap Number: 133 Thread 0 instruction stream not available. Thread 0 last branch register state not available. Binary Images: 0x10183d000 - 0x101840fff +Python (0) <4C3C8936-2066-3937-B7A8-51F29B2BA552> /usr/local/Cellar/python at 3.9/3.9.1_1/IDLE 3.app/Contents/MacOS/Python 0x10184c000 - 0x101a4bfff +org.python.python (3.9.1, [c] 2001-2019 Python Software Foundation. - 3.9.1) <0DBC6564-AB58-3938-9A54-89BAC0129512> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/Python 0x101b1b000 - 0x101b1efff libCyrillicConverter.dylib (90) <046BCE34-B5CD-363F-AB24-2DFD3B85D593> /System/Library/CoreServices/Encodings/libCyrillicConverter.dylib 0x101d28000 - 0x101d2bfff +_heapq.cpython-39-darwin.so (0) <08F4CDC9-229B-3576-BE1E-A3E3DB380BB5> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_heapq.cpython-39-darwin.so 0x101db8000 - 0x101dbffff +_tkinter.cpython-39-darwin.so (0) /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_tkinter.cpython-39-darwin.so 0x101e90000 - 0x101e93fff +grp.cpython-39-darwin.so (0) /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/grp.cpython-39-darwin.so 0x101ea0000 - 0x101ea3fff +_posixsubprocess.cpython-39-darwin.so (0) <72040909-A8B8-31A4-BB34-FCEEBE259B10> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_posixsubprocess.cpython-39-darwin.so 0x101eb0000 - 0x101eb7fff +select.cpython-39-darwin.so (0) <8E448349-CA8B-33AA-8CA1-0EC7B8263EBD> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/select.cpython-39-darwin.so 0x101ec4000 - 0x101ecbfff +math.cpython-39-darwin.so (0) <0100A69F-B468-3D6C-B4F9-B2A29E275750> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/math.cpython-39-darwin.so 0x101f18000 - 0x101f27fff +_socket.cpython-39-darwin.so (0) <65641E29-D67A-34BC-9C5C-5AAF7218FB2D> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_socket.cpython-39-darwin.so 0x101f34000 - 0x101f3bfff +array.cpython-39-darwin.so (0) <4832969C-4CAD-38DD-BAB8-57B700B67BA8> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/array.cpython-39-darwin.so 0x101fc8000 - 
0x101fcbfff +_opcode.cpython-39-darwin.so (0) /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_opcode.cpython-39-darwin.so 0x101fd8000 - 0x101fdffff +binascii.cpython-39-darwin.so (0) <1089A30F-88D2-398C-91DC-481A4FC355E9> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/binascii.cpython-39-darwin.so 0x101fec000 - 0x101ff3fff +_struct.cpython-39-darwin.so (0) /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_struct.cpython-39-darwin.so 0x104100000 - 0x10410ffff +_datetime.cpython-39-darwin.so (0) <9829C364-CE52-31D9-AF5F-A07924B8C04E> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_datetime.cpython-39-darwin.so 0x104120000 - 0x104143fff +pyexpat.cpython-39-darwin.so (0) <260AD849-DC46-3255-8EEB-FB72BA56A081> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/pyexpat.cpython-39-darwin.so 0x1041d4000 - 0x1041dbfff +zlib.cpython-39-darwin.so (0) <21AB7C8C-37F8-3307-B9E3-678F62551826> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/zlib.cpython-39-darwin.so 0x1041e8000 - 0x1041ebfff +_bz2.cpython-39-darwin.so (0) <1B3842B8-378B-3616-A0BA-0BDC38B59005> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_bz2.cpython-39-darwin.so 0x1041f8000 - 0x1041fffff +_lzma.cpython-39-darwin.so (0) /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_lzma.cpython-39-darwin.so 0x10420c000 - 0x104227fff +liblzma.5.dylib (0) /usr/local/opt/xz/lib/liblzma.5.dylib 0x1043b4000 - 0x1043b7fff +_bisect.cpython-39-darwin.so (0) <4B2DACBB-2688-3B15-964F-199DE11351C0> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_bisect.cpython-39-darwin.so 0x1043c4000 - 0x1043c7fff +_random.cpython-39-darwin.so (0) <1D4681A8-C095-3EBC-98CE-D95EC276FA83> /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_random.cpython-39-darwin.so 0x1043d4000 - 0x1043dbfff +_sha512.cpython-39-darwin.so (0) /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_sha512.cpython-39-darwin.so 0x104428000 - 0x104437fff +_pickle.cpython-39-darwin.so (0) /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_pickle.cpython-39-darwin.so 0x104448000 - 0x10444bfff +_queue.cpython-39-darwin.so (0) /usr/local/Cellar/python at 3.9/3.9.1_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload/_queue.cpython-39-darwin.so 0x10a82e000 - 0x10a8c9fff dyld (832.7.1) /usr/lib/dyld 0x7fff2006b000 - 0x7fff2006cfff libsystem_blocks.dylib (78) <9CF131C6-16FB-3DD0-B046-9E0B6AB99935> /usr/lib/system/libsystem_blocks.dylib 0x7fff2006d000 - 0x7fff200a2fff libxpc.dylib (2038.40.38) <003A027D-9CE3-3794-A319-88495844662D> /usr/lib/system/libxpc.dylib 0x7fff200a3000 - 0x7fff200bafff libsystem_trace.dylib (1277.50.1) <48C14376-626E-3C81-B0F5-7416E64580C7> /usr/lib/system/libsystem_trace.dylib 0x7fff200bb000 - 0x7fff20159fff libcorecrypto.dylib (1000.60.19) <92F0211E-506E-3760-A3C2-808BF3905C07> /usr/lib/system/libcorecrypto.dylib 0x7fff2015a000 - 0x7fff20186fff libsystem_malloc.dylib (317.40.8) 
<2EF43B96-90FB-3C50-B73E-035238504E33> /usr/lib/system/libsystem_malloc.dylib 0x7fff20187000 - 0x7fff201cbfff libdispatch.dylib (1271.40.12) /usr/lib/system/libdispatch.dylib 0x7fff201cc000 - 0x7fff20204fff libobjc.A.dylib (818.2) <45EA2DE2-B612-3486-B156-2359CE279159> /usr/lib/libobjc.A.dylib 0x7fff20205000 - 0x7fff20207fff libsystem_featureflags.dylib (28.60.1) <7B4EBDDB-244E-3F78-8895-566FE22288F3> /usr/lib/system/libsystem_featureflags.dylib 0x7fff20208000 - 0x7fff20290fff libsystem_c.dylib (1439.40.11) <06D9F593-C815-385D-957F-2B5BCC223A8A> /usr/lib/system/libsystem_c.dylib 0x7fff20291000 - 0x7fff202e6fff libc++.1.dylib (904.4) /usr/lib/libc++.1.dylib 0x7fff202e7000 - 0x7fff202fffff libc++abi.dylib (904.4) /usr/lib/libc++abi.dylib 0x7fff20300000 - 0x7fff2032efff libsystem_kernel.dylib (7195.60.75) <4BD61365-29AF-3234-8002-D989D295FDBB> /usr/lib/system/libsystem_kernel.dylib 0x7fff2032f000 - 0x7fff2033afff libsystem_pthread.dylib (454.60.1) <8DD3A0BC-2C92-31E3-BBAB-CE923A4342E4> /usr/lib/system/libsystem_pthread.dylib 0x7fff2033b000 - 0x7fff20375fff libdyld.dylib (832.7.1) <2F8A14F5-7CB8-3EDD-85EA-7FA960BBC04E> /usr/lib/system/libdyld.dylib 0x7fff20376000 - 0x7fff2037ffff libsystem_platform.dylib (254.60.1) <3F7F6461-7B5C-3197-ACD7-C8A0CFCC6F55> /usr/lib/system/libsystem_platform.dylib 0x7fff20380000 - 0x7fff203abfff libsystem_info.dylib (542.40.3) <0979757C-5F0D-3F5A-9E0E-EBF234B310AF> /usr/lib/system/libsystem_info.dylib 0x7fff203ac000 - 0x7fff20847fff com.apple.CoreFoundation (6.9 - 1770.300) /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff20848000 - 0x7fff20a77fff com.apple.LaunchServices (1122.11 - 1122.11) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices 0x7fff20a78000 - 0x7fff20b4bfff com.apple.gpusw.MetalTools (1.0 - 1) /System/Library/PrivateFrameworks/MetalTools.framework/Versions/A/MetalTools 0x7fff20b4c000 - 0x7fff20daffff libBLAS.dylib (1336.40.1) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 0x7fff20db0000 - 0x7fff20dfdfff com.apple.Lexicon-framework (1.0 - 86.1) /System/Library/PrivateFrameworks/Lexicon.framework/Versions/A/Lexicon 0x7fff20dfe000 - 0x7fff20e6cfff libSparse.dylib (106) <60559226-6E4B-3601-B6CA-E3B85B5EB27B> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparse.dylib 0x7fff20e6d000 - 0x7fff20eeafff com.apple.SystemConfiguration (1.20 - 1.20) <8524EE4C-628F-315A-9531-44DD83CE275E> /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration 0x7fff20eeb000 - 0x7fff20f20fff libCRFSuite.dylib (50) <6CA29EAA-0585-3682-9AD2-DFD3D87A74D4> /usr/lib/libCRFSuite.dylib 0x7fff20f21000 - 0x7fff21158fff libmecabra.dylib (929.1.1) <39F5AD50-3AF2-3CFB-BD21-2DC45AA92A91> /usr/lib/libmecabra.dylib 0x7fff21159000 - 0x7fff214bcfff com.apple.Foundation (6.9 - 1770.300) <44A7115B-7FF0-3300-B61B-0FA71B63C715> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x7fff214bd000 - 0x7fff215a9fff com.apple.LanguageModeling (1.0 - 247.1) /System/Library/PrivateFrameworks/LanguageModeling.framework/Versions/A/LanguageModeling 0x7fff215aa000 - 0x7fff216e0fff com.apple.CoreDisplay (231.3 - 231.3) <229BF97A-1D56-3CB4-8338-E0D464F73A33> /System/Library/Frameworks/CoreDisplay.framework/Versions/A/CoreDisplay 0x7fff216e1000 - 0x7fff21956fff com.apple.audio.AudioToolboxCore (1.0 - 1180.23) 
<56821802-07B9-3FA9-AF73-D943BAE0DE57> /System/Library/PrivateFrameworks/AudioToolboxCore.framework/Versions/A/AudioToolboxCore 0x7fff21957000 - 0x7fff21b3ffff com.apple.CoreText (677.2.0.5 - 677.2.0.5) /System/Library/Frameworks/CoreText.framework/Versions/A/CoreText 0x7fff21b40000 - 0x7fff221e3fff com.apple.audio.CoreAudio (5.0 - 5.0) /System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio 0x7fff221e4000 - 0x7fff22535fff com.apple.security (7.0 - 59754.60.13) /System/Library/Frameworks/Security.framework/Versions/A/Security 0x7fff22536000 - 0x7fff22797fff libicucore.A.dylib (66109) <6C0A0196-2778-3035-81CE-7CA48D6C0628> /usr/lib/libicucore.A.dylib 0x7fff22798000 - 0x7fff227a1fff libsystem_darwin.dylib (1439.40.11) /usr/lib/system/libsystem_darwin.dylib 0x7fff227a2000 - 0x7fff22a89fff com.apple.CoreServices.CarbonCore (1307 - 1307) <9C615967-6D8E-307F-B028-6278A4FA7C8C> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x7fff22a8a000 - 0x7fff22ac8fff com.apple.CoreServicesInternal (476 - 476) /System/Library/PrivateFrameworks/CoreServicesInternal.framework/Versions/A/CoreServicesInternal 0x7fff22ac9000 - 0x7fff22b03fff com.apple.CSStore (1122.11 - 1122.11) <088D0108-AA14-3610-86A0-89D0C605384F> /System/Library/PrivateFrameworks/CoreServicesStore.framework/Versions/A/CoreServicesStore 0x7fff22b04000 - 0x7fff22bb1fff com.apple.framework.IOKit (2.0.2 - 1845.60.2) /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit 0x7fff22bb2000 - 0x7fff22bbdfff libsystem_notify.dylib (279.40.4) <98D74EEF-60D9-3665-B877-7BE1558BA83E> /usr/lib/system/libsystem_notify.dylib 0x7fff22c0a000 - 0x7fff2396cfff com.apple.AppKit (6.9 - 2022.20.119) <4CB42914-672D-3AF0-A0A5-2209088A3DA0> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit 0x7fff2396d000 - 0x7fff23bc0fff com.apple.UIFoundation (1.0 - 726.11) <71C63CE5-094D-34AF-B538-8DCAB3B66DE9> /System/Library/PrivateFrameworks/UIFoundation.framework/Versions/A/UIFoundation 0x7fff23bc1000 - 0x7fff23bd3fff com.apple.UniformTypeIdentifiers (633.0.2 - 633.0.2) <7BEC7DDC-2B7A-3B5D-B994-5FA352FC485A> /System/Library/Frameworks/UniformTypeIdentifiers.framework/Versions/A/UniformTypeIdentifiers 0x7fff2402b000 - 0x7fff2466efff libnetwork.dylib (2288.60.5) <180FE916-8DD6-3385-B231-0C423B7D2BD3> /usr/lib/libnetwork.dylib 0x7fff2466f000 - 0x7fff24b0cfff com.apple.CFNetwork (1209.1 - 1209.1) <60DE4CD6-B5AF-3E0E-8AF1-39ECFC1B8C98> /System/Library/Frameworks/CFNetwork.framework/Versions/A/CFNetwork 0x7fff24b0d000 - 0x7fff24b1bfff libsystem_networkextension.dylib (1295.60.5) /usr/lib/system/libsystem_networkextension.dylib 0x7fff24b1c000 - 0x7fff24b1cfff libenergytrace.dylib (22) <9BE5E51A-F531-3D59-BBBC-486FFF97BD30> /usr/lib/libenergytrace.dylib 0x7fff24b1d000 - 0x7fff24b78fff libMobileGestalt.dylib (978.60.2) /usr/lib/libMobileGestalt.dylib 0x7fff24b79000 - 0x7fff24b8ffff libsystem_asl.dylib (385) <940C5BB9-4928-3A63-97F2-132797C8B7E5> /usr/lib/system/libsystem_asl.dylib 0x7fff24b90000 - 0x7fff24ba7fff com.apple.TCC (1.0 - 1) <457D5F24-A346-38FC-8FA1-43B0C835E035> /System/Library/PrivateFrameworks/TCC.framework/Versions/A/TCC 0x7fff24ba8000 - 0x7fff24f0dfff com.apple.SkyLight (1.600.0 - 569.6) <35876384-45F9-3C62-995B-38EC31BE75D7> /System/Library/PrivateFrameworks/SkyLight.framework/Versions/A/SkyLight 0x7fff24f0e000 - 0x7fff255a1fff com.apple.CoreGraphics (2.0 - 1463.2.2) <323F725F-CB03-3AAD-AFBC-37B430B3FD4E> /System/Library/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics 
0x7fff255a2000 - 0x7fff25698fff com.apple.ColorSync (4.13.0 - 3472) <7387EBC7-CBD9-34FE-B4A3-345E4750FD81> /System/Library/Frameworks/ColorSync.framework/Versions/A/ColorSync 0x7fff25699000 - 0x7fff256f4fff com.apple.HIServices (1.22 - 713) <9AF2CDD9-8B68-3606-8C9E-1842420ACDA7> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/HIServices.framework/Versions/A/HIServices 0x7fff25aa0000 - 0x7fff25ebefff com.apple.CoreData (120 - 1044.3) <76179A55-CA89-3967-A0A7-C419DB735983> /System/Library/Frameworks/CoreData.framework/Versions/A/CoreData 0x7fff25ebf000 - 0x7fff25ed5fff com.apple.ProtocolBuffer (1 - 285.20.8.8.1) <8EE538E7-2BB1-3E29-8FC3-938335998B22> /System/Library/PrivateFrameworks/ProtocolBuffer.framework/Versions/A/ProtocolBuffer 0x7fff25ed6000 - 0x7fff26095fff libsqlite3.dylib (321.1) /usr/lib/libsqlite3.dylib 0x7fff26113000 - 0x7fff2612bfff com.apple.commonutilities (8.0 - 900) <76711775-FF46-38CA-88F3-B4201C285C7F> /System/Library/PrivateFrameworks/CommonUtilities.framework/Versions/A/CommonUtilities 0x7fff2612c000 - 0x7fff261adfff com.apple.BaseBoard (526 - 526) <38C24B3A-8226-3FD5-8C28-B11D02747B56> /System/Library/PrivateFrameworks/BaseBoard.framework/Versions/A/BaseBoard 0x7fff261ae000 - 0x7fff261f9fff com.apple.RunningBoardServices (1.0 - 505.60.2) /System/Library/PrivateFrameworks/RunningBoardServices.framework/Versions/A/RunningBoardServices 0x7fff261fa000 - 0x7fff2626ffff com.apple.AE (918.0.1 - 918.0.1) <3A298716-A130-345E-B8FF-74194849015E> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE 0x7fff26270000 - 0x7fff26276fff libdns_services.dylib (1310.60.4) <61EB26AD-C09E-3140-955E-16BF7DD2D6E3> /usr/lib/libdns_services.dylib 0x7fff26277000 - 0x7fff2627efff libsystem_symptoms.dylib (1431.60.1) <88F35AAC-746F-3176-81DF-49CE3D285636> /usr/lib/system/libsystem_symptoms.dylib 0x7fff2627f000 - 0x7fff26403fff com.apple.Network (1.0 - 1) /System/Library/Frameworks/Network.framework/Versions/A/Network 0x7fff26404000 - 0x7fff26428fff com.apple.analyticsd (1.0 - 1) <99FE0234-454F-36FF-9DE9-36B94D8753F9> /System/Library/PrivateFrameworks/CoreAnalytics.framework/Versions/A/CoreAnalytics 0x7fff26429000 - 0x7fff2642bfff libDiagnosticMessagesClient.dylib (112) <1014A32B-89EE-3ADD-971F-9CB973172F69> /usr/lib/libDiagnosticMessagesClient.dylib 0x7fff2642c000 - 0x7fff26478fff com.apple.spotlight.metadata.utilities (1.0 - 2150.7.2) <37A1E760-2006-366C-9FAC-FB70227393FB> /System/Library/PrivateFrameworks/MetadataUtilities.framework/Versions/A/MetadataUtilities 0x7fff26479000 - 0x7fff26513fff com.apple.Metadata (10.7.0 - 2150.7.2) <509C6597-ABB2-3B81-8E09-C51A755CCDA2> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata 0x7fff26514000 - 0x7fff2651afff com.apple.DiskArbitration (2.7 - 2.7) <83DED679-BE65-3475-8AFF-D664BBAFA60A> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration 0x7fff2651b000 - 0x7fff26bc1fff com.apple.vImage (8.1 - 544) <305D97CC-B47C-32FD-9EC5-43259A469A14> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage 0x7fff26bc2000 - 0x7fff26e8ffff com.apple.QuartzCore (1.11 - 925.5) /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore 0x7fff26e90000 - 0x7fff26ed1fff libFontRegistry.dylib (309) <790676A3-2B74-3239-A60D-429069933542> 
/System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontRegistry.dylib 0x7fff26ed2000 - 0x7fff27013fff com.apple.coreui (2.1 - 689.4) <0DA8F4E0-9473-374E-8B48-F0A40AEC63CE> /System/Library/PrivateFrameworks/CoreUI.framework/Versions/A/CoreUI 0x7fff27100000 - 0x7fff2710bfff com.apple.PerformanceAnalysis (1.275 - 275) <2F811EE6-D4D4-347E-B4A0-961F0DF050E5> /System/Library/PrivateFrameworks/PerformanceAnalysis.framework/Versions/A/PerformanceAnalysis 0x7fff2710c000 - 0x7fff2711bfff com.apple.OpenDirectory (11.1 - 230.40.1) <7710743E-6F55-342E-88FA-18796CF83700> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/OpenDirectory 0x7fff2711c000 - 0x7fff2713bfff com.apple.CFOpenDirectory (11.1 - 230.40.1) <32ECCB06-56D8-3704-935B-7D5363B2988E> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/Frameworks/CFOpenDirectory.framework/Versions/A/CFOpenDirectory 0x7fff2713c000 - 0x7fff27144fff com.apple.CoreServices.FSEvents (1290.40.2 - 1290.40.2) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/FSEvents.framework/Versions/A/FSEvents 0x7fff27145000 - 0x7fff27169fff com.apple.coreservices.SharedFileList (144 - 144) <93D2192D-7A27-3FD4-B3AB-A4DCBF8419B7> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SharedFileList.framework/Versions/A/SharedFileList 0x7fff2716a000 - 0x7fff2716cfff libapp_launch_measurement.dylib (14.1) <9E2700C3-E993-3695-988E-FEF798B75E34> /usr/lib/libapp_launch_measurement.dylib 0x7fff2716d000 - 0x7fff271b5fff com.apple.CoreAutoLayout (1.0 - 21.10.1) <998BC461-F4F5-396E-9798-1C8126AD61DA> /System/Library/PrivateFrameworks/CoreAutoLayout.framework/Versions/A/CoreAutoLayout 0x7fff271b6000 - 0x7fff27298fff libxml2.2.dylib (34.8) <68396181-8100-390C-8886-EFB79F5B484C> /usr/lib/libxml2.2.dylib 0x7fff27299000 - 0x7fff272e5fff com.apple.CoreVideo (1.8 - 408.4) <0D5AD16E-A871-3ACB-B910-39B87928E937> /System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo 0x7fff272e6000 - 0x7fff272e8fff com.apple.loginsupport (1.0 - 1) <4F860927-F6F5-3A99-A103-744CF365634F> /System/Library/PrivateFrameworks/login.framework/Versions/A/Frameworks/loginsupport.framework/Versions/A/loginsupport 0x7fff282c9000 - 0x7fff282d9fff libsystem_containermanager.dylib (318.60.1) <4ED09A19-04CC-3464-9EFB-F674932020B5> /usr/lib/system/libsystem_containermanager.dylib 0x7fff282da000 - 0x7fff282ebfff com.apple.IOSurface (289.3 - 289.3) /System/Library/Frameworks/IOSurface.framework/Versions/A/IOSurface 0x7fff282ec000 - 0x7fff282f4fff com.apple.IOAccelerator (439.52 - 439.52) <3944C92D-7838-3D2F-A453-9DB15C815D7B> /System/Library/PrivateFrameworks/IOAccelerator.framework/Versions/A/IOAccelerator 0x7fff282f5000 - 0x7fff2841afff com.apple.Metal (244.32.7 - 244.32.7) <413B81AE-653F-3CF7-B5A4-A4391436E6D1> /System/Library/Frameworks/Metal.framework/Versions/A/Metal 0x7fff2841b000 - 0x7fff28437fff com.apple.audio.caulk (1.0 - 70) <952BA9D4-BAD3-3319-8C17-F7BB2655F80C> /System/Library/PrivateFrameworks/caulk.framework/Versions/A/caulk 0x7fff28438000 - 0x7fff28521fff com.apple.CoreMedia (1.0 - 2760.6.4.6) /System/Library/Frameworks/CoreMedia.framework/Versions/A/CoreMedia 0x7fff28522000 - 0x7fff2867efff libFontParser.dylib (305.2.0.6) <76C6C92A-1B16-3FB7-9EA2-7227D379C20F> /System/Library/PrivateFrameworks/FontServices.framework/libFontParser.dylib 0x7fff2867f000 - 0x7fff2897efff com.apple.HIToolbox (2.1.1 - 1060.4) <93518490-429F-3E31-8344-15D479C2F4CE> 
/System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox 0x7fff2897f000 - 0x7fff28992fff com.apple.framework.DFRFoundation (1.0 - 265) /System/Library/PrivateFrameworks/DFRFoundation.framework/Versions/A/DFRFoundation 0x7fff28993000 - 0x7fff28996fff com.apple.dt.XCTTargetBootstrap (1.0 - 17500) <13ADD312-F6F5-3C03-BD3B-9331B3851285> /System/Library/PrivateFrameworks/XCTTargetBootstrap.framework/Versions/A/XCTTargetBootstrap 0x7fff28997000 - 0x7fff289c0fff com.apple.CoreSVG (1.0 - 149) /System/Library/PrivateFrameworks/CoreSVG.framework/Versions/A/CoreSVG 0x7fff289c1000 - 0x7fff28bfafff com.apple.ImageIO (3.3.0 - 2130.2.7) <0FE3D51B-EC76-3558-BD56-7BFF61A6793D> /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO 0x7fff28bfb000 - 0x7fff28f78fff com.apple.CoreImage (16.1.0 - 1120.10) <46F1E4F5-DF8F-32D4-8D0C-6FCF2C27A5CD> /System/Library/Frameworks/CoreImage.framework/Versions/A/CoreImage 0x7fff28f79000 - 0x7fff28fd4fff com.apple.MetalPerformanceShaders.MPSCore (1.0 - 1) /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSCore.framework/Versions/A/MPSCore 0x7fff28fd5000 - 0x7fff28fd8fff libsystem_configuration.dylib (1109.60.2) /usr/lib/system/libsystem_configuration.dylib 0x7fff28fd9000 - 0x7fff28fddfff libsystem_sandbox.dylib (1441.60.4) <8CE27199-D633-31D2-AB08-56380A1DA9FB> /usr/lib/system/libsystem_sandbox.dylib 0x7fff28fde000 - 0x7fff28fdffff com.apple.AggregateDictionary (1.0 - 1) <7F2AFEBB-FF06-3194-B691-B411F3456962> /System/Library/PrivateFrameworks/AggregateDictionary.framework/Versions/A/AggregateDictionary 0x7fff28fe0000 - 0x7fff28fe3fff com.apple.AppleSystemInfo (3.1.5 - 3.1.5) <250CD2CA-E796-3CB0-9ADD-054998903B1D> /System/Library/PrivateFrameworks/AppleSystemInfo.framework/Versions/A/AppleSystemInfo 0x7fff28fe4000 - 0x7fff28fe5fff liblangid.dylib (136) <224DC045-2B60-39AF-B89E-E524175667F5> /usr/lib/liblangid.dylib 0x7fff28fe6000 - 0x7fff29086fff com.apple.CoreNLP (1.0 - 245) /System/Library/PrivateFrameworks/CoreNLP.framework/Versions/A/CoreNLP 0x7fff29087000 - 0x7fff2908dfff com.apple.LinguisticData (1.0 - 399) /System/Library/PrivateFrameworks/LinguisticData.framework/Versions/A/LinguisticData 0x7fff2908e000 - 0x7fff2974afff libBNNS.dylib (288.60.2) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBNNS.dylib 0x7fff2974b000 - 0x7fff2991efff libvDSP.dylib (760.40.6) <9434101D-E001-357F-9503-9896C6011F52> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib 0x7fff2991f000 - 0x7fff29931fff com.apple.CoreEmoji (1.0 - 128) <7CCFC59A-8746-3E52-AF1D-1B67798E940C> /System/Library/PrivateFrameworks/CoreEmoji.framework/Versions/A/CoreEmoji 0x7fff29932000 - 0x7fff2993cfff com.apple.IOMobileFramebuffer (343.0.0 - 343.0.0) <9A6F913C-EC79-3FC1-A92C-3A1BA96D8DFB> /System/Library/PrivateFrameworks/IOMobileFramebuffer.framework/Versions/A/IOMobileFramebuffer 0x7fff29c33000 - 0x7fff29c43fff com.apple.AssertionServices (1.0 - 505.60.2) <9F8620BD-A58D-3A42-9B9E-DEC21517EF1A> /System/Library/PrivateFrameworks/AssertionServices.framework/Versions/A/AssertionServices 0x7fff29c44000 - 0x7fff29cd0fff com.apple.securityfoundation (6.0 - 55240.40.4) <5F06D141-62F4-3405-BA72-24673B170A16> /System/Library/Frameworks/SecurityFoundation.framework/Versions/A/SecurityFoundation 0x7fff29cd1000 - 0x7fff29cdafff com.apple.coreservices.BackgroundTaskManagement (1.0 - 104) 
/System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement 0x7fff29cdb000 - 0x7fff29cdffff com.apple.xpc.ServiceManagement (1.0 - 1) <2C03BEB7-915C-3A3A-A44F-A77775E1BFD5> /System/Library/Frameworks/ServiceManagement.framework/Versions/A/ServiceManagement 0x7fff29ce0000 - 0x7fff29ce2fff libquarantine.dylib (119.40.2) <19D42B9D-3336-3543-AF75-6E605EA31599> /usr/lib/system/libquarantine.dylib 0x7fff29ce3000 - 0x7fff29ceefff libCheckFix.dylib (31) <3381FC93-F188-348C-9345-5567A7116CEF> /usr/lib/libCheckFix.dylib 0x7fff29cef000 - 0x7fff29d06fff libcoretls.dylib (169) <9C244029-6B45-3583-B27F-BB7BBF84D814> /usr/lib/libcoretls.dylib 0x7fff29d07000 - 0x7fff29d17fff libbsm.0.dylib (68.40.1) /usr/lib/libbsm.0.dylib 0x7fff29d18000 - 0x7fff29d61fff libmecab.dylib (929.1.1) /usr/lib/libmecab.dylib 0x7fff29d62000 - 0x7fff29d67fff libgermantok.dylib (24) /usr/lib/libgermantok.dylib 0x7fff29d68000 - 0x7fff29d7dfff libLinearAlgebra.dylib (1336.40.1) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLinearAlgebra.dylib 0x7fff29d7e000 - 0x7fff29fa5fff com.apple.MetalPerformanceShaders.MPSNeuralNetwork (1.0 - 1) <231CF580-952A-32BC-A423-9B9756AC9744> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSNeuralNetwork.framework/Versions/A/MPSNeuralNetwork 0x7fff29fa6000 - 0x7fff29ff5fff com.apple.MetalPerformanceShaders.MPSRayIntersector (1.0 - 1) <65A993E4-3DC2-3152-98D5-A1DF3DB4573F> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSRayIntersector.framework/Versions/A/MPSRayIntersector 0x7fff29ff6000 - 0x7fff2a13cfff com.apple.MLCompute (1.0 - 1) /System/Library/Frameworks/MLCompute.framework/Versions/A/MLCompute 0x7fff2a13d000 - 0x7fff2a173fff com.apple.MetalPerformanceShaders.MPSMatrix (1.0 - 1) /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSMatrix.framework/Versions/A/MPSMatrix 0x7fff2a174000 - 0x7fff2a1b1fff com.apple.MetalPerformanceShaders.MPSNDArray (1.0 - 1) /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSNDArray.framework/Versions/A/MPSNDArray 0x7fff2a1b2000 - 0x7fff2a242fff com.apple.MetalPerformanceShaders.MPSImage (1.0 - 1) <21527A17-2D6F-3BDF-9A74-F90FA6E26BB3> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/Frameworks/MPSImage.framework/Versions/A/MPSImage 0x7fff2a243000 - 0x7fff2a252fff com.apple.AppleFSCompression (125 - 1.0) /System/Library/PrivateFrameworks/AppleFSCompression.framework/Versions/A/AppleFSCompression 0x7fff2a253000 - 0x7fff2a260fff libbz2.1.0.dylib (44) <0575C0D0-B107-3E53-857F-DEC55998197B> /usr/lib/libbz2.1.0.dylib 0x7fff2a261000 - 0x7fff2a265fff libsystem_coreservices.dylib (127) /usr/lib/system/libsystem_coreservices.dylib 0x7fff2a266000 - 0x7fff2a293fff com.apple.CoreServices.OSServices (1122.11 - 1122.11) <870F34BE-C0ED-318B-858D-5F1E4757D552> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices 0x7fff2a469000 - 0x7fff2a47bfff libz.1.dylib (76) <9F89FD60-03F7-3175-AB34-5112B99E2B8A> /usr/lib/libz.1.dylib 0x7fff2a47c000 - 0x7fff2a4c3fff libsystem_m.dylib (3186.40.2) <79820D9E-0FF1-3F20-AF4F-F87EE20CE8C9> /usr/lib/system/libsystem_m.dylib 0x7fff2a4c4000 - 0x7fff2a4c4fff libcharset.1.dylib (59) <414F6A1C-1EBC-3956-AC2D-CCB0458F31AF> /usr/lib/libcharset.1.dylib 0x7fff2a4c5000 - 0x7fff2a4cafff libmacho.dylib (973.4) 
<28AE1649-22ED-3C4D-A232-29D37F821C39> /usr/lib/system/libmacho.dylib 0x7fff2a4cb000 - 0x7fff2a4e6fff libkxld.dylib (7195.60.75) <3600A314-332A-343D-B45D-D9D8B302545D> /usr/lib/system/libkxld.dylib 0x7fff2a4e7000 - 0x7fff2a4f2fff libcommonCrypto.dylib (60178.40.2) <1D0A75A5-DEC5-39C6-AB3D-E789B8866712> /usr/lib/system/libcommonCrypto.dylib 0x7fff2a4f3000 - 0x7fff2a4fdfff libunwind.dylib (200.10) /usr/lib/system/libunwind.dylib 0x7fff2a4fe000 - 0x7fff2a505fff liboah.dylib (203.13.2) /usr/lib/liboah.dylib 0x7fff2a506000 - 0x7fff2a510fff libcopyfile.dylib (173.40.2) <89483CD4-DA46-3AF2-AE78-FC37CED05ACC> /usr/lib/system/libcopyfile.dylib 0x7fff2a511000 - 0x7fff2a518fff libcompiler_rt.dylib (102.2) <0DB26EC8-B4CD-3268-B865-C2FC07E4D2AA> /usr/lib/system/libcompiler_rt.dylib 0x7fff2a519000 - 0x7fff2a51bfff libsystem_collections.dylib (1439.40.11) /usr/lib/system/libsystem_collections.dylib 0x7fff2a51c000 - 0x7fff2a51efff libsystem_secinit.dylib (87.60.1) <99B5FD99-1A8B-37C1-BD70-04990FA33B1C> /usr/lib/system/libsystem_secinit.dylib 0x7fff2a51f000 - 0x7fff2a521fff libremovefile.dylib (49.40.3) <750012C2-7097-33C3-B796-2766E6CDE8C1> /usr/lib/system/libremovefile.dylib 0x7fff2a522000 - 0x7fff2a522fff libkeymgr.dylib (31) <2C7B58B0-BE54-3A50-B399-AA49C19083A9> /usr/lib/system/libkeymgr.dylib 0x7fff2a523000 - 0x7fff2a52afff libsystem_dnssd.dylib (1310.60.4) <81EFC44D-450E-3AA3-AC8F-D7EF68F464B4> /usr/lib/system/libsystem_dnssd.dylib 0x7fff2a52b000 - 0x7fff2a530fff libcache.dylib (83) <2F7F7303-DB23-359E-85CD-8B2F93223E2A> /usr/lib/system/libcache.dylib 0x7fff2a531000 - 0x7fff2a532fff libSystem.B.dylib (1292.60.1) /usr/lib/libSystem.B.dylib 0x7fff2a533000 - 0x7fff2a536fff libfakelink.dylib (3) <34B6DC95-E19A-37C0-B9D0-558F692D85F5> /usr/lib/libfakelink.dylib 0x7fff2a537000 - 0x7fff2a537fff com.apple.SoftLinking (1.0 - 1) <90D679B3-DFFD-3604-B89F-1BCF70B3EBA4> /System/Library/PrivateFrameworks/SoftLinking.framework/Versions/A/SoftLinking 0x7fff2a538000 - 0x7fff2a56ffff libpcap.A.dylib (98.40.1) /usr/lib/libpcap.A.dylib 0x7fff2a570000 - 0x7fff2a660fff libiconv.2.dylib (59) <3E53F735-1D7E-3ABB-BC45-AAA37F535830> /usr/lib/libiconv.2.dylib 0x7fff2a661000 - 0x7fff2a672fff libcmph.dylib (8) <865FA425-831D-3E49-BD1B-14188D2A98AA> /usr/lib/libcmph.dylib 0x7fff2a673000 - 0x7fff2a6e4fff libarchive.2.dylib (83.40.4) <76B2F421-5335-37FB-9CD5-1018878B9E74> /usr/lib/libarchive.2.dylib 0x7fff2a6e5000 - 0x7fff2a74cfff com.apple.SearchKit (1.4.1 - 1.4.1) <7BDD2800-BDDC-3DE0-A4A8-B1E855130E3B> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit 0x7fff2a74d000 - 0x7fff2a74efff libThaiTokenizer.dylib (3) <513547CD-5C7F-37BE-A2AD-55A22F279588> /usr/lib/libThaiTokenizer.dylib 0x7fff2a74f000 - 0x7fff2a776fff com.apple.applesauce (1.0 - 16.26) /System/Library/PrivateFrameworks/AppleSauce.framework/Versions/A/AppleSauce 0x7fff2a777000 - 0x7fff2a78efff libapple_nghttp2.dylib (1.41) /usr/lib/libapple_nghttp2.dylib 0x7fff2a78f000 - 0x7fff2a7a1fff libSparseBLAS.dylib (1336.40.1) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libSparseBLAS.dylib 0x7fff2a7a2000 - 0x7fff2a7a3fff com.apple.MetalPerformanceShaders.MetalPerformanceShaders (1.0 - 1) <1BFEB124-CF05-342F-BC65-B233EAB661D9> /System/Library/Frameworks/MetalPerformanceShaders.framework/Versions/A/MetalPerformanceShaders 0x7fff2a7a4000 - 0x7fff2a7a8fff libpam.2.dylib (28.40.1) /usr/lib/libpam.2.dylib 0x7fff2a7a9000 - 0x7fff2a7c1fff libcompression.dylib (96.40.6) 
<45B8B821-8EB6-34FE-92E9-5CBA474499E2> /usr/lib/libcompression.dylib 0x7fff2a7c2000 - 0x7fff2a7c7fff libQuadrature.dylib (7) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libQuadrature.dylib 0x7fff2a7c8000 - 0x7fff2ab64fff libLAPACK.dylib (1336.40.1) <509FBCC6-4ECB-3192-98A6-D0C030E4E9D8> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib 0x7fff2ab65000 - 0x7fff2abb3fff com.apple.DictionaryServices (1.2 - 341) <83CDCE83-6B48-35F1-BACF-83240D940777> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/DictionaryServices.framework/Versions/A/DictionaryServices 0x7fff2abb4000 - 0x7fff2abccfff liblzma.5.dylib (16) /usr/lib/liblzma.5.dylib 0x7fff2abcd000 - 0x7fff2abcefff libcoretls_cfhelpers.dylib (169) /usr/lib/libcoretls_cfhelpers.dylib 0x7fff2abcf000 - 0x7fff2acc8fff com.apple.APFS (1677.60.23 - 1677.60.23) <8271EE40-CDF5-3E0B-9F42-B49DC7C46C98> /System/Library/PrivateFrameworks/APFS.framework/Versions/A/APFS 0x7fff2acc9000 - 0x7fff2acd6fff libxar.1.dylib (452) <3F3DA942-DC7B-31EF-BCF1-38F99F59A660> /usr/lib/libxar.1.dylib 0x7fff2acd7000 - 0x7fff2acdafff libutil.dylib (58.40.2) <85CF2B3B-6BEB-381D-8683-1DE2B0167ECC> /usr/lib/libutil.dylib 0x7fff2acdb000 - 0x7fff2ad03fff libxslt.1.dylib (17.2) <2C881E82-6E2C-3E92-8DC5-3C2D05FE7C95> /usr/lib/libxslt.1.dylib 0x7fff2ad04000 - 0x7fff2ad0efff libChineseTokenizer.dylib (37) <36891BB5-4A83-33A3-9995-CC5DB2AB53CE> /usr/lib/libChineseTokenizer.dylib 0x7fff2ad0f000 - 0x7fff2adcdfff libvMisc.dylib (760.40.6) <219319E1-BDBD-34D1-97B7-E46256785D3C> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib 0x7fff2adce000 - 0x7fff2ae66fff libate.dylib (3.0.4) <51D50D08-F614-3929-AFB1-BF4ED9BE4751> /usr/lib/libate.dylib 0x7fff2ae67000 - 0x7fff2ae6efff libIOReport.dylib (64) <3C26FBDC-931E-3318-8225-C10849CF1D60> /usr/lib/libIOReport.dylib 0x7fff2b01e000 - 0x7fff2b071fff com.apple.AppleVAFramework (6.1.3 - 6.1.3) <8A5B1C42-DD83-303B-85DE-754FB6C10E1A> /System/Library/PrivateFrameworks/AppleVA.framework/Versions/A/AppleVA 0x7fff2b072000 - 0x7fff2b08bfff libexpat.1.dylib (26) <4408FC72-BDAA-33AE-BE14-4008642794ED> /usr/lib/libexpat.1.dylib 0x7fff2b08c000 - 0x7fff2b095fff libheimdal-asn1.dylib (597.40.10) <032931C8-B042-3B3D-93D3-5B3E27431FEA> /usr/lib/libheimdal-asn1.dylib 0x7fff2b096000 - 0x7fff2b0aafff com.apple.IconFoundation (479.3 - 479.3) <650C91C9-D6A1-3FF7-964B-DE1065F2243C> /System/Library/PrivateFrameworks/IconFoundation.framework/Versions/A/IconFoundation 0x7fff2b0ab000 - 0x7fff2b118fff com.apple.IconServices (479.3 - 479.3) <63CAB1AB-C485-382A-9088-F6E3937BB8E9> /System/Library/PrivateFrameworks/IconServices.framework/Versions/A/IconServices 0x7fff2b119000 - 0x7fff2b1b6fff com.apple.MediaExperience (1.0 - 1) /System/Library/PrivateFrameworks/MediaExperience.framework/Versions/A/MediaExperience 0x7fff2b1b7000 - 0x7fff2b1e0fff com.apple.persistentconnection (1.0 - 1.0) /System/Library/PrivateFrameworks/PersistentConnection.framework/Versions/A/PersistentConnection 0x7fff2b1e1000 - 0x7fff2b1effff com.apple.GraphVisualizer (1.0 - 100.1) <7035CCDF-5B9D-365C-A1FA-1D961EBEE44D> /System/Library/PrivateFrameworks/GraphVisualizer.framework/Versions/A/GraphVisualizer 0x7fff2b1f0000 - 0x7fff2b60bfff com.apple.vision.FaceCore (4.3.2 - 4.3.2) /System/Library/PrivateFrameworks/FaceCore.framework/Versions/A/FaceCore 0x7fff2b60c000 - 0x7fff2b656fff com.apple.OTSVG (1.0 - 
677.2.0.5) /System/Library/PrivateFrameworks/OTSVG.framework/Versions/A/OTSVG 0x7fff2b657000 - 0x7fff2b65dfff com.apple.xpc.AppServerSupport (1.0 - 2038.40.38) <27B96AA0-421E-3E5A-B9D8-9BA3F0D133E9> /System/Library/PrivateFrameworks/AppServerSupport.framework/Versions/A/AppServerSupport 0x7fff2b65e000 - 0x7fff2b66ffff libhvf.dylib (1.0 - $[CURRENT_PROJECT_VERSION]) /System/Library/PrivateFrameworks/FontServices.framework/libhvf.dylib 0x7fff2b670000 - 0x7fff2b672fff libspindump.dylib (295) /usr/lib/libspindump.dylib 0x7fff2b673000 - 0x7fff2b733fff com.apple.Heimdal (4.0 - 2.0) <8BB18335-5DD3-3154-85C8-0145C64556A2> /System/Library/PrivateFrameworks/Heimdal.framework/Versions/A/Heimdal 0x7fff2b8d2000 - 0x7fff2b93cfff com.apple.bom (14.0 - 233) /System/Library/PrivateFrameworks/Bom.framework/Versions/A/Bom 0x7fff2b93d000 - 0x7fff2b987fff com.apple.AppleJPEG (1.0 - 1) /System/Library/PrivateFrameworks/AppleJPEG.framework/Versions/A/AppleJPEG 0x7fff2b988000 - 0x7fff2ba66fff libJP2.dylib (2130.2.7) <9D837C01-3D6C-3D71-8E92-3673CE06A21F> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libJP2.dylib 0x7fff2ba67000 - 0x7fff2ba6afff com.apple.WatchdogClient.framework (1.0 - 98.60.1) <8374BBBB-65CB-3D46-9AD6-0DD1FB99AD88> /System/Library/PrivateFrameworks/WatchdogClient.framework/Versions/A/WatchdogClient 0x7fff2ba6b000 - 0x7fff2ba9efff com.apple.MultitouchSupport.framework (4400.28 - 4400.28) /System/Library/PrivateFrameworks/MultitouchSupport.framework/Versions/A/MultitouchSupport 0x7fff2ba9f000 - 0x7fff2bbf1fff com.apple.VideoToolbox (1.0 - 2760.6.4.6) <35098775-A188-3BE0-B0B1-7CE0027BA295> /System/Library/Frameworks/VideoToolbox.framework/Versions/A/VideoToolbox 0x7fff2bbf2000 - 0x7fff2bc24fff libAudioToolboxUtility.dylib (1180.23) <58B4505B-F0EA-37FC-9F5A-6F9F05B0F2A5> /usr/lib/libAudioToolboxUtility.dylib 0x7fff2bc25000 - 0x7fff2bc4bfff libPng.dylib (2130.2.7) <1F3FED3B-FB07-3F43-8EAD-6100017FBAB5> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libPng.dylib 0x7fff2bc4c000 - 0x7fff2bca9fff libTIFF.dylib (2130.2.7) <27E9A2D3-003D-3D97-AD85-BE595EA0516F> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib 0x7fff2bcaa000 - 0x7fff2bcc4fff com.apple.IOPresentment (53 - 37) <070919DC-978E-3DB3-80FD-FB0C1BAAE80A> /System/Library/PrivateFrameworks/IOPresentment.framework/Versions/A/IOPresentment 0x7fff2bcc5000 - 0x7fff2bccbfff com.apple.GPUWrangler (6.2.2 - 6.2.2) /System/Library/PrivateFrameworks/GPUWrangler.framework/Versions/A/GPUWrangler 0x7fff2bccc000 - 0x7fff2bccffff libRadiance.dylib (2130.2.7) <7ABF94D2-5281-363F-A613-9C945D77AAE8> /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libRadiance.dylib 0x7fff2bcd0000 - 0x7fff2bcd5fff com.apple.DSExternalDisplay (3.1 - 380) /System/Library/PrivateFrameworks/DSExternalDisplay.framework/Versions/A/DSExternalDisplay 0x7fff2bcd6000 - 0x7fff2bcfafff libJPEG.dylib (2130.2.7) /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib 0x7fff2bcfb000 - 0x7fff2bd2afff com.apple.ATSUI (1.0 - 1) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATSUI.framework/Versions/A/ATSUI 0x7fff2bd2b000 - 0x7fff2bd2ffff libGIF.dylib (2130.2.7) /System/Library/Frameworks/ImageIO.framework/Versions/A/Resources/libGIF.dylib 0x7fff2bd30000 - 0x7fff2bd39fff com.apple.CMCaptureCore (1.0 - 80.17.1.1) /System/Library/PrivateFrameworks/CMCaptureCore.framework/Versions/A/CMCaptureCore 0x7fff2bd3a000 - 0x7fff2bd81fff com.apple.print.framework.PrintCore (16 
- 531) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/PrintCore.framework/Versions/A/PrintCore 0x7fff2bd82000 - 0x7fff2be4efff com.apple.TextureIO (3.10.9 - 3.10.9) <0AC15003-4B6A-3FB3-9B41-3EF61A2BD430> /System/Library/PrivateFrameworks/TextureIO.framework/Versions/A/TextureIO 0x7fff2be4f000 - 0x7fff2be57fff com.apple.InternationalSupport (1.0 - 60) <5485FFDC-CE44-37F4-865F-91B2EFBC6CAF> /System/Library/PrivateFrameworks/InternationalSupport.framework/Versions/A/InternationalSupport 0x7fff2be58000 - 0x7fff2bed3fff com.apple.datadetectorscore (8.0 - 674) /System/Library/PrivateFrameworks/DataDetectorsCore.framework/Versions/A/DataDetectorsCore 0x7fff2bed4000 - 0x7fff2bf32fff com.apple.UserActivity (435 - 435) <075FD354-28FD-3A13-881C-955FA9106D5C> /System/Library/PrivateFrameworks/UserActivity.framework/Versions/A/UserActivity 0x7fff2cb80000 - 0x7fff2cbb1fff libSessionUtility.dylib (76.7) <95615EDE-46B9-32AE-96EC-7F6E5EB6A932> /System/Library/PrivateFrameworks/AudioSession.framework/libSessionUtility.dylib 0x7fff2cbb2000 - 0x7fff2cce2fff com.apple.audio.toolbox.AudioToolbox (1.14 - 1.14) /System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox 0x7fff2cce3000 - 0x7fff2cd4afff com.apple.audio.AudioSession (1.0 - 76.7) /System/Library/PrivateFrameworks/AudioSession.framework/Versions/A/AudioSession 0x7fff2cd4b000 - 0x7fff2cd5dfff libAudioStatistics.dylib (25.1) <1D07EA54-BE7C-37C4-AA73-5224D402F0C3> /usr/lib/libAudioStatistics.dylib 0x7fff2cd5e000 - 0x7fff2cd6dfff com.apple.speech.synthesis.framework (9.0.51 - 9.0.51) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/SpeechSynthesis.framework/Versions/A/SpeechSynthesis 0x7fff2cd6e000 - 0x7fff2cdd9fff com.apple.ApplicationServices.ATS (377 - 516) <3A435648-CC5F-387E-AB37-391AAEABE314> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/ATS 0x7fff2cdda000 - 0x7fff2cdf2fff libresolv.9.dylib (68) <9957A6F4-8B66-3429-86CD-6DF4993EB6F5> /usr/lib/libresolv.9.dylib 0x7fff2cf25000 - 0x7fff2d004fff libSMC.dylib (20) /usr/lib/libSMC.dylib 0x7fff2d005000 - 0x7fff2d064fff libcups.2.dylib (494.1) <04A4801E-E1B5-3919-9F14-100F0C2D049B> /usr/lib/libcups.2.dylib 0x7fff2d065000 - 0x7fff2d074fff com.apple.LangAnalysis (1.7.0 - 254) <120945D9-B74D-3A6F-B160-2678E6B6481D> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/LangAnalysis.framework/Versions/A/LangAnalysis 0x7fff2d075000 - 0x7fff2d07ffff com.apple.NetAuth (6.2 - 6.2) /System/Library/PrivateFrameworks/NetAuth.framework/Versions/A/NetAuth 0x7fff2d080000 - 0x7fff2d087fff com.apple.ColorSyncLegacy (4.13.0 - 1) <33DA9348-EADF-36D2-B999-56854481D272> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ColorSyncLegacy.framework/Versions/A/ColorSyncLegacy 0x7fff2d088000 - 0x7fff2d093fff com.apple.QD (4.0 - 416) <7FFC9049-7E42-372B-9105-1C4C94DE0110> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/QD.framework/Versions/A/QD 0x7fff2d094000 - 0x7fff2d702fff com.apple.audio.AudioResourceArbitration (1.0 - 1) <098FD431-D302-3DD5-9AD1-453615A73E68> /System/Library/PrivateFrameworks/AudioResourceArbitration.framework/Versions/A/AudioResourceArbitration 0x7fff2d703000 - 0x7fff2d70ffff com.apple.perfdata (1.0 - 67.40.1) <85A57A67-8721-3035-BCEE-D4AC98332D2C> /System/Library/PrivateFrameworks/perfdata.framework/Versions/A/perfdata 0x7fff2d710000 - 0x7fff2d71efff libperfcheck.dylib (41) 
<67113817-A463-360A-B321-9286DC50FEDA> /usr/lib/libperfcheck.dylib 0x7fff2d71f000 - 0x7fff2d72efff com.apple.Kerberos (3.0 - 1) <2E872705-0841-3695-AF79-4160D2A436AB> /System/Library/Frameworks/Kerberos.framework/Versions/A/Kerberos 0x7fff2d72f000 - 0x7fff2d77efff com.apple.GSS (4.0 - 2.0) <2A38D59F-5F3A-3779-A421-2F8128F22B95> /System/Library/Frameworks/GSS.framework/Versions/A/GSS 0x7fff2d77f000 - 0x7fff2d78ffff com.apple.CommonAuth (4.0 - 2.0) /System/Library/PrivateFrameworks/CommonAuth.framework/Versions/A/CommonAuth 0x7fff2d964000 - 0x7fff2d964fff liblaunch.dylib (2038.40.38) <05A7EFDD-4111-3E4D-B668-239B69DE3D0F> /usr/lib/system/liblaunch.dylib 0x7fff2fb8a000 - 0x7fff2fbb5fff com.apple.RemoteViewServices (2.0 - 163) /System/Library/PrivateFrameworks/RemoteViewServices.framework/Versions/A/RemoteViewServices 0x7fff2fbb6000 - 0x7fff2fbc5fff com.apple.SpeechRecognitionCore (6.1.12 - 6.1.12) /System/Library/PrivateFrameworks/SpeechRecognitionCore.framework/Versions/A/SpeechRecognitionCore 0x7fff2fbc6000 - 0x7fff2fbcdfff com.apple.speech.recognition.framework (6.0.3 - 6.0.3) <9C14FA0A-D905-375B-8C32-E311ED59B6AD> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SpeechRecognition.framework/Versions/A/SpeechRecognition 0x7fff2fe11000 - 0x7fff2fe11fff libsystem_product_info_filter.dylib (8.40.1) <7CCAF1A8-F570-341E-B275-0C80B092F8E0> /usr/lib/system/libsystem_product_info_filter.dylib 0x7fff2feec000 - 0x7fff2feecfff com.apple.Accelerate.vecLib (3.11 - vecLib 3.11) <510A463F-5CA5-3585-969F-2D44583B71C8> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib 0x7fff2ff13000 - 0x7fff2ff13fff com.apple.CoreServices (1122.11 - 1122.11) <5DDB040C-6E92-3DBE-9049-873F510F26E2> /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices 0x7fff301e1000 - 0x7fff301e1fff com.apple.Accelerate (1.11 - Accelerate 1.11) /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate 0x7fff32f61000 - 0x7fff32f64fff com.apple.help (1.3.8 - 71) <599F7E42-DEF1-3B70-83AB-C3BDF727CF93> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Help.framework/Versions/A/Help 0x7fff331a6000 - 0x7fff331a6fff com.apple.ApplicationServices (48 - 50) <7B536871-3F10-3138-B06B-9C2A3C07EC1E> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices 0x7fff334a6000 - 0x7fff334a6fff libHeimdalProxy.dylib (79) <1BD94BF6-8E63-3B21-95DC-E5EEEBFB8AE8> /System/Library/Frameworks/Kerberos.framework/Versions/A/Libraries/libHeimdalProxy.dylib 0x7fff34fa7000 - 0x7fff34faafff com.apple.Cocoa (6.11 - 23) /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa 0x7fff363ea000 - 0x7fff36405fff com.apple.openscripting (1.7 - 190) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/OpenScripting.framework/Versions/A/OpenScripting 0x7fff36406000 - 0x7fff36409fff com.apple.securityhi (9.0 - 55008) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SecurityHI.framework/Versions/A/SecurityHI 0x7fff3640a000 - 0x7fff3640dfff com.apple.ink.framework (10.15 - 227) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Ink.framework/Versions/A/Ink 0x7fff3640e000 - 0x7fff36411fff com.apple.CommonPanels (1.2.6 - 101) <101582BA-E64F-391A-BD23-50DCC3CF8939> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CommonPanels.framework/Versions/A/CommonPanels 0x7fff36412000 - 0x7fff36419fff com.apple.ImageCapture (1708 - 1708) 
/System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/ImageCapture.framework/Versions/A/ImageCapture 0x7fff3d104000 - 0x7fff3d107fff com.apple.print.framework.Print (15 - 271) <8411879F-7E3E-3882-BD06-68E797A3B9D6> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/A/Print 0x7fff3d108000 - 0x7fff3d10bfff com.apple.Carbon (160 - 164) <5683716A-5610-3B97-B473-B4652067E7A6> /System/Library/Frameworks/Carbon.framework/Versions/A/Carbon 0x7fff3d390000 - 0x7fff3d3affff com.apple.private.SystemPolicy (1.0 - 1) /System/Library/PrivateFrameworks/SystemPolicy.framework/Versions/A/SystemPolicy 0x7fff3dcfa000 - 0x7fff3dd0cfff libmis.dylib (274.60.2) <54387457-A60B-3390-AD6D-3B380792CD79> /usr/lib/libmis.dylib 0x7fff6c804000 - 0x7fff6c80afff libCoreFSCache.dylib (177.22) <4ECE128D-5E79-3ADF-8FE7-4FE8F565F8AA> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCoreFSCache.dylib 0x7fff6c80b000 - 0x7fff6c80ffff libCoreVMClient.dylib (177.22) /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCoreVMClient.dylib 0x7fff6c810000 - 0x7fff6c81ffff com.apple.opengl (18.1.1 - 18.1.1) /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL 0x7fff6c820000 - 0x7fff6c822fff libCVMSPluginSupport.dylib (18.1.1) <5F020D32-8663-3CB8-A50C-F939D4D4C31F> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCVMSPluginSupport.dylib 0x7fff6c823000 - 0x7fff6c82bfff libGFXShared.dylib (18.1.1) <2271532D-E2B3-3D4D-ADF0-0935F8DCE89B> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.dylib 0x7fff6c82c000 - 0x7fff6c85ffff libGLImage.dylib (18.1.1) <528E53A3-33E1-34C7-8EE3-C42AE5255553> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib 0x7fff6c860000 - 0x7fff6c89cfff libGLU.dylib (18.1.1) <15CBDF20-8A87-3D84-90F8-D19F4A2B06E2> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib 0x7fff6ca32000 - 0x7fff6ca3cfff libGL.dylib (18.1.1) <157B74E1-F30D-3F9D-9AF8-AAA333D2812D> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib 0x7fff6de73000 - 0x7fff6decbfff com.apple.opencl (4.5 - 4.5) <8A3D06D5-4E82-355C-AE1B-E2C91DB58233> /System/Library/Frameworks/OpenCL.framework/Versions/A/OpenCL 0x7fff6fd89000 - 0x7fff6fe54fff Tcl (8.5.9 - 8.5.9) <51600BFF-7BDA-3446-9BAA-5800C815A91E> /System/Library/Frameworks/Tcl.framework/Versions/8.5/Tcl 0x7fff6fe55000 - 0x7fff6ff39fff Tk (8.5.9 - 8.5.9) /System/Library/Frameworks/Tk.framework/Versions/8.5/Tk External Modification Summary: Calls made by other processes targeting this process: task_for_pid: 0 thread_create: 0 thread_set_state: 0 Calls made by this process: task_for_pid: 0 thread_create: 0 thread_set_state: 0 Calls made by all processes on this machine: task_for_pid: 790 thread_create: 0 thread_set_state: 0 VM Region Summary: ReadOnly portion of Libraries: Total=664.8M resident=0K(0%) swapped_out_or_unallocated=664.8M(100%) Writable regions: Total=96.8M written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=96.8M(100%) VIRTUAL REGION REGION TYPE SIZE COUNT (non-coalesced) =========== ======= ======= Activity Tracing 256K 1 Dispatch continuations 32.0M 1 Kernel Alloc Once 8K 1 MALLOC 37.6M 21 MALLOC guard page 16K 4 MALLOC_LARGE (reserved) 384K 1 reserved VM address space (unallocated) STACK GUARD 24K 6 Stack 18.5M 6 VM_ALLOCATE 8404K 43 __DATA 11.2M 281 __DATA_CONST 11.3M 175 __DATA_DIRTY 509K 86 __FONT_DATA 4K 1 __LINKEDIT 490.1M 48 __OBJC_RO 60.5M 1 __OBJC_RW 2452K 2 __TEXT 
174.9M 289 __UNICODE 588K 1 mapped file 48.8M 9 shared memory 40K 4 =========== ======= ======= TOTAL 897.3M 981 TOTAL, minus reserved VM space 896.9M 981 Model: MacBookAir9,1, BootROM 1554.60.15.0.0 (iBridge: 18.16.13030.0.0,0), 2 processors, Dual-Core Intel Core i3, 1,1 GHz, 8 GB, SMC Graphics: kHW_IntelIrisPlusGraphicsItem, Intel Iris Plus Graphics, spdisplays_builtin Memory Module: BANK 0/ChannelA-DIMM0, 4 GB, LPDDR4X, 3733 MHz, SK Hynix, H9HCNNNCRMALPR-NEE Memory Module: BANK 2/ChannelB-DIMM0, 4 GB, LPDDR4X, 3733 MHz, SK Hynix, H9HCNNNCRMALPR-NEE AirPort: spairport_wireless_card_type_airport_extreme, wl0: Sep 11 2020 17:38:16 version 16.20.293.5.3.6.95 FWID 01-e5bd2163 Bluetooth: Version 8.0.2f9, 3 services, 27 devices, 1 incoming serial ports Network Service: Wi-Fi, AirPort, en0 USB Device: USB 3.1 Bus USB Device: USB 3.1 Bus USB Device: Apple T2 Bus USB Device: Touch Bar Backlight USB Device: Apple Internal Keyboard / Trackpad USB Device: Headset USB Device: Ambient Light Sensor USB Device: FaceTime HD Camera (Built-in) USB Device: Apple T2 Controller Thunderbolt Bus: MacBook Air, Apple Inc., 85.0 ---------- assignee: terry.reedy components: IDLE messages: 383436 nosy: p.sirenko.94, terry.reedy priority: normal severity: normal status: open title: macOS 11.1 + Homebrew 2.6.2 + python 3.9.1 = idle crash type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 13:27:40 2020 From: report at bugs.python.org (Joshua Root) Date: Sun, 20 Dec 2020 18:27:40 +0000 Subject: [New-bugs-announce] [issue42692] Build fails on macOS when compiler doesn't define __has_builtin Message-ID: <1608488860.71.0.844167332987.issue42692@roundup.psfhosted.org> New submission from Joshua Root : The line in posixmodule.c that checks for __builtin_available is rejected by compilers that don't have __has_builtin. The second check needs to be in a nested #if. ---------- components: Build, macOS messages: 383437 nosy: jmr, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: Build fails on macOS when compiler doesn't define __has_builtin type: compile error versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 16:06:08 2020 From: report at bugs.python.org (Ned Batchelder) Date: Sun, 20 Dec 2020 21:06:08 +0000 Subject: [New-bugs-announce] [issue42693] "if 0:" lines are traced; they didn't use to be Message-ID: <1608498368.47.0.692310939381.issue42693@roundup.psfhosted.org> New submission from Ned Batchelder : (Using CPython commit c95f8bc270.) This program has an "if 0:" line that becomes a NOP bytecode. It didn't used to in Python 3.9 print(1) if 0: # line 2 print(3) print(4) Using a simple trace program (https://github.com/nedbat/coveragepy/blob/master/lab/run_trace.py), it produces this output: call 1 @-1 line 1 @0 1 line 2 @8 line 4 @10 4 return 4 @20 Using Python3.9 gives this output: call 1 @-1 line 1 @0 1 line 4 @8 4 return 4 @18 Is this change intentional? 
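As an aside, the observation can be reproduced without the linked run_trace.py helper with a minimal sys.settrace hook along these lines (a sketch, not the script used above; the line numbers refer to the small compiled snippet):

```
import sys

def tracer(frame, event, arg):
    # report every call/line/return event with the line number it carries
    if event in ("call", "line", "return"):
        print(event, frame.f_lineno)
    return tracer

SRC = "print(1)\nif 0:  # line 2\n    print(3)\nprint(4)\n"
code = compile(SRC, "<demo>", "exec")

sys.settrace(tracer)
exec(code)
sys.settrace(None)
```

On the commit described above a `line` event for line 2 shows up; on 3.9 it does not.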
---------- messages: 383452 nosy: Mark.Shannon, nedbat priority: normal severity: normal status: open title: "if 0:" lines are traced; they didn't use to be type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 16:10:00 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 20 Dec 2020 21:10:00 +0000 Subject: [New-bugs-announce] [issue42694] Failed test_new_curses_panel in test_curses Message-ID: <1608498600.94.0.0759010269747.issue42694@roundup.psfhosted.org> New submission from Serhiy Storchaka :
======================================================================
FAIL: test_new_curses_panel (test.test_curses.TestCurses)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/Lib/test/test_curses.py", line 425, in test_new_curses_panel
    self.assertRaises(TypeError, type(panel))
AssertionError: TypeError not raised by panel
----------------------------------------------------------------------
The regression was introduced in 1baf030a902392fe92d934ed0fb6a385cf7d8869 (issue1635741). It can lead to a crash because creation of a non-initialized object is allowed now. See issue23815 for details. ---------- messages: 383453 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: Failed test_new_curses_panel in test_curses versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 16:22:53 2020 From: report at bugs.python.org (Justin) Date: Sun, 20 Dec 2020 21:22:53 +0000 Subject: [New-bugs-announce] [issue42695] tkinter keysym_num value is incorrect Message-ID: <1608499373.07.0.718041853104.issue42695@roundup.psfhosted.org> New submission from Justin : Hi there. On my MacOS 10.14.16 laptop with a qwerty keyboard I was testing tkinter keyboard listening for an azerty keyboard layout by switching the layout to `French - PC`. When I press qwerty keys Shift + \ I expect to see 'µ' printed. When looking at the tkinter events for that key press we see:
```
down {'keysym': 'Shift_L', 'keysym_num': 65505, 'keycode': 131072, 'char': ''}
down {'keysym': 'tslash', 'keysym_num': 956, 'keycode': 956, 'char': 'µ'}
up {'keysym': 'asterisk', 'keysym_num': 42, 'keycode': 2753468, 'char': 'µ'}
up {'keysym': 'Shift_L', 'keysym_num': 65505, 'keycode': 131072, 'char': ''}
```
So the char value is correct but the keysym_num is not the expected 181 for mu. Comparing this to pressing Shift + / to generate the section symbol (§) we see:
```
down {'keysym': 'Shift_L', 'keysym_num': 65505, 'keycode': 131072, 'char': ''}
down {'keysym': 'section', 'keysym_num': 167, 'keycode': 167, 'char': '§'}
up {'keysym': 'section', 'keysym_num': 167, 'keycode': 2883751, 'char': '§'}
up {'keysym': 'Shift_L', 'keysym_num': 65505, 'keycode': 131072, 'char': ''}
```
Which produces the expected keysym_num of 167. TL;DR: the keysym_num value when writing the mu character is incorrect. It should be 181, but the logging shows values of 956 and 42. Can this be fixed? 
Here is the keyboard listener program which can be used for verification: ``` from tkinter import * params = ['keysym', 'keysym_num', 'keycode', 'char'] def keyup(e): d = {p: getattr(e, p) for p in params} print('up', d) # print('up', e.__dict__) def keydown(e): d = {p: getattr(e, p) for p in params} print('down', d) # print('down', e.__dict__) pass root = Tk() frame = Frame(root, width=100, height=100) frame.bind("", keydown) frame.bind("", keyup) frame.pack() frame.focus_set() root.mainloop() ``` Note: my python version was installed from python.org and is: ``` Python 3.9.1 (v3.9.1:1e5d33e9b9, Dec 7 2020, 12:44:01) [Clang 12.0.0 (clang-1200.0.32.27)] on darwin ``` ---------- messages: 383458 nosy: spacether priority: normal severity: normal status: open title: tkinter keysym_num value is incorrect _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 16:29:18 2020 From: report at bugs.python.org (Ned Batchelder) Date: Sun, 20 Dec 2020 21:29:18 +0000 Subject: [New-bugs-announce] [issue42696] Duplicated unused bytecodes at end of function Message-ID: <1608499758.44.0.239199538725.issue42696@roundup.psfhosted.org> New submission from Ned Batchelder : (Using CPython commit c95f8bc270.) This program has extra bytecodes: def f(): for i in range(10): break return 17 The dis output is: 1 0 LOAD_CONST 0 () 2 LOAD_CONST 1 ('f') 4 MAKE_FUNCTION 0 6 STORE_NAME 0 (f) 8 LOAD_CONST 2 (None) 10 RETURN_VALUE Disassembly of : 2 0 LOAD_GLOBAL 0 (range) 2 LOAD_CONST 1 (10) 4 CALL_FUNCTION 1 6 GET_ITER 8 FOR_ITER 8 (to 18) 10 STORE_FAST 0 (i) 3 12 POP_TOP 4 14 LOAD_CONST 2 (17) 16 RETURN_VALUE >> 18 LOAD_CONST 2 (17) 20 RETURN_VALUE The break has something to do with it, because if I change the Python to: def f(): for i in range(10): a = 1 return 17 then the dis output is: 1 0 LOAD_CONST 0 () 2 LOAD_CONST 1 ('f') 4 MAKE_FUNCTION 0 6 STORE_NAME 0 (f) 8 LOAD_CONST 2 (None) 10 RETURN_VALUE Disassembly of : 2 0 LOAD_GLOBAL 0 (range) 2 LOAD_CONST 1 (10) 4 CALL_FUNCTION 1 6 GET_ITER >> 8 FOR_ITER 8 (to 18) 10 STORE_FAST 0 (i) 3 12 LOAD_CONST 2 (1) 14 STORE_FAST 1 (a) 16 JUMP_ABSOLUTE 8 4 >> 18 LOAD_CONST 3 (17) 20 RETURN_VALUE ---------- messages: 383460 nosy: Mark.Shannon, nedbat priority: normal severity: normal status: open title: Duplicated unused bytecodes at end of function versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 16:56:08 2020 From: report at bugs.python.org (=?utf-8?b?TWljaGHFgiBHw7Nybnk=?=) Date: Sun, 20 Dec 2020 21:56:08 +0000 Subject: [New-bugs-announce] [issue42697] 3.8.7rc1 regression: 'free(): invalid pointer' after running backports-zoneinfo test suite Message-ID: <1608501368.18.0.633201368751.issue42697@roundup.psfhosted.org> New submission from Micha? G?rny : I'm still investigating the problem and I will include more information shortly. However, I'm filing the bug early, as I'd like to prevent this regression from hitting 3.8.7 release. 
When running backports-zoneinfo-0.2.1 test suite using cpython 3.8.7rc1, all tests pass, then python segfaults: ``` ---------------------------------------------------------------------- Ran 233 tests in 2.200s OK (skipped=27) free(): invalid pointer /var/tmp/portage/dev-python/backports-zoneinfo-0.2.1/temp/environment: line 3054: 167 Aborted (core dumped) "${EPYTHON}" -m unittest discover -v ``` The backtrace I got doesn't seem very useful: #0 __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49 #1 0x00007fd4b6c79536 in __GI_abort () at abort.c:79 #2 0x00007fd4b6cd2bf7 in __libc_message (action=action at entry=do_abort, fmt=fmt at entry=0x7fd4b6de53b5 "%s\n") at ../sysdeps/posix/libc_fatal.c:155 #3 0x00007fd4b6cdaa7a in malloc_printerr (str=str at entry=0x7fd4b6de3593 "free(): invalid pointer") at malloc.c:5389 #4 0x00007fd4b6cdbe5c in _int_free (av=, p=, have_lock=0) at malloc.c:4201 #5 0x00007fd4b6f00aaa in ?? () from /usr/lib64/libpython3.8.so.1.0 #6 0x00007fd4b6eb8745 in ?? () from /usr/lib64/libpython3.8.so.1.0 #7 0x00007fd4b6ece115 in ?? () from /usr/lib64/libpython3.8.so.1.0 #8 0x00007fd4b6ece2f2 in ?? () from /usr/lib64/libpython3.8.so.1.0 #9 0x0000562239cd1a60 in ?? () #10 0x00007fd4b7086967 in ?? () from /usr/lib64/libpython3.8.so.1.0 #11 0x00007fd4b7167e20 in ?? () from /usr/lib64/libpython3.8.so.1.0 #12 0x0000562239cd1a60 in ?? () #13 0x00007fd4b6f05d26 in ?? () from /usr/lib64/libpython3.8.so.1.0 #14 0x00007fd4b6fccf0d in ?? () from /usr/lib64/libpython3.8.so.1.0 #15 0x00007fd4b6fcdc1d in PyGC_Collect () from /usr/lib64/libpython3.8.so.1.0 #16 0x000056223996c670 in ?? () #17 0x00007fd4b6f93e8a in PyImport_Cleanup () from /usr/lib64/libpython3.8.so.1.0 #18 0x00007fd4b6faa55c in Py_NewInterpreter () from /usr/lib64/libpython3.8.so.1.0 #19 0x0000000000000000 in ?? () I'm going to start by trying to bisect this, and let you know the results when I manage them. ---------- components: Interpreter Core messages: 383464 nosy: mgorny priority: normal severity: normal status: open title: 3.8.7rc1 regression: 'free(): invalid pointer' after running backports-zoneinfo test suite type: crash versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 19:44:44 2020 From: report at bugs.python.org (hydroflask) Date: Mon, 21 Dec 2020 00:44:44 +0000 Subject: [New-bugs-announce] [issue42698] Deadlock in pysqlite_connection_dealloc() Message-ID: <1608511484.37.0.765804145287.issue42698@roundup.psfhosted.org> New submission from hydroflask : pysqlite_connection_dealloc() calls sqlite3_close{,_v2}(). This can cause a deadlock if sqlite3_close() tries to acquire a lock that another thread holds, due to a deadlock between the GIL and an internal sqlite3 lock. This is especially common with sqlite3's "shared cache mode." Since the GIL should not be released during a tp_dealloc function and python has no control over the behavior of sqlite3_close(), it is incorrect to call sqlite3_close() in pysqlite_connection_dealloc(). 
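Until the dealloc path changes, the usual way to keep user code away from this is to close connections explicitly instead of relying on garbage collection, so the potentially blocking sqlite3_close() never has to run inside tp_dealloc. A minimal sketch of that pattern, using a shared-cache in-memory database purely as an example (this is a workaround for user code, not a fix for the dealloc code itself):

```
import sqlite3

# Shared-cache databases are where sqlite3_close() is most likely to block on
# sqlite3's internal locks, so close the connection explicitly instead of
# leaving it to __del__/dealloc.
conn = sqlite3.connect("file:demo?mode=memory&cache=shared", uri=True)
try:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    conn.commit()
finally:
    conn.close()
```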
---------- components: Library (Lib) messages: 383471 nosy: hydroflask priority: normal severity: normal status: open title: Deadlock in pysqlite_connection_dealloc() type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 20:39:34 2020 From: report at bugs.python.org (Samuel Marks) Date: Mon, 21 Dec 2020 01:39:34 +0000 Subject: [New-bugs-announce] [issue42699] Use `.join(k for k in g)` instead of `.join([k for k in g])` Message-ID: <1608514774.35.0.908581836458.issue42699@roundup.psfhosted.org> New submission from Samuel Marks : This is an extremely minor improvement. Rather than create a `list` (using a comprehension) and then have it consumed by `.join`, one can skip the list construction entirely. (I remember this working from at least Python 2.7, probably earlier also) ---------- messages: 383474 nosy: samuelmarks priority: normal pull_requests: 22737 severity: normal status: open title: Use `.join(k for k in g)` instead of `.join([k for k in g])` type: performance versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Sun Dec 20 21:11:48 2020 From: report at bugs.python.org (Michael Wayne Goodman) Date: Mon, 21 Dec 2020 02:11:48 +0000 Subject: [New-bugs-announce] [issue42700] xml.parsers.expat.errors description of codes/messages is flipped Message-ID: <1608516708.54.0.399014748226.issue42700@roundup.psfhosted.org> New submission from Michael Wayne Goodman : The documentation for xml.parsers.expat.errors.codes says: A dictionary mapping numeric error codes to their string descriptions. But this is backwards. It should say it maps the string descriptions to the error codes. Likewise, the docs for xml.parsers.expat.errors.messages is backwards. The other references to these dictionaries appear correct. For instance, under ExpatError.code: The :mod:`~xml.parsers.expat.errors` module also provides error message constants and a dictionary :data:`~xml.parsers.expat.errors.codes` mapping these messages back to the error codes, see below. This issue appears to be present in the docs for all available versions. ---------- assignee: docs at python components: Documentation messages: 383481 nosy: docs at python, goodmami priority: normal severity: normal status: open title: xml.parsers.expat.errors description of codes/messages is flipped versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 08:07:57 2020 From: report at bugs.python.org (Michael Felt) Date: Mon, 21 Dec 2020 13:07:57 +0000 Subject: [New-bugs-announce] [issue42701] Discrepency between configure.ac and configure Message-ID: <1608556077.87.0.922310073391.issue42701@roundup.psfhosted.org> New submission from Michael Felt : While working on a PR for issue42323 I get an error with the generated ./configure even without my patch. 
Working from commit 37a6d5f8027f969418fe53d0a73a21003a8e370d aixtools at gcc119:[/home/aixtools/cpython/cpython-master]git log --oneline | head 37a6d5f (HEAD -> master, upstream/master, upstream/HEAD, bpo-42323) [WIP/RFC] bpo-15872: tests: remove oddity from test_rmtree_errors (GH-22967) ab74c01 bpo-31904: Fix site and sysconfig modules for VxWorks RTOS (GH-21821) c95f8bc bpo-42669: Document that `except` rejects nested tuples (GH-23822) b0398a4 bpo-42572: Improve argparse docs for the type parameter. (GH-23849) a44ce6c bpo-42604: always set EXT_SUFFIX=${SOABI}${SHLIB_SUFFIX} when using configure (GH-23708) aixtools at gcc119:[/home/aixtools/cpython/cpython-master]rpm -qa autoconf autoconf-2.69-2.noarch aixtools at gcc119:[/home/aixtools/cpython/cpython-master]autoreconf -v autoreconf: Entering directory `.' autoreconf: configure.ac: not using Gettext autoreconf: running: aclocal autoreconf: configure.ac: tracing autoreconf: configure.ac: not using Libtool autoreconf: running: /opt/freeware/bin/autoconf autoreconf: running: /opt/freeware/bin/autoheader autoreconf: configure.ac: not using Automake autoreconf: Leaving directory `.' Your branch is up-to-date with 'upstream/master'. Changes not staged for commit: (use "git add ..." to update what will be committed) (use "git checkout -- ..." to discard changes in working directory) modified: aclocal.m4 modified: configure modified: pyconfig.h.in aixtools at gcc119:[/home/aixtools/cpython/cpython-master]git diff configure diff --git a/configure b/configure index f07edff..6f0d51f 100755 --- a/configure +++ b/configure @@ -6471,44 +6471,10 @@ if test "$Py_OPT" = 'true' ; then DEF_MAKE_RULE="build_all" case $CC in *gcc*) - { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts -fno-semantic-interposition" >&5 -$as_echo_n "checking whether C compiler accepts -fno-semantic-interposition... " >&6; } -if ${ax_cv_check_cflags___fno_semantic_interposition+:} false; then : - $as_echo_n "(cached) " >&6 -else - - ax_check_save_flags=$CFLAGS - CFLAGS="$CFLAGS -fno-semantic-interposition" - cat confdefs.h - <<_ACEOF >conftest.$ac_ext -/* end confdefs.h. */ - -int -main () -{ - - ; - return 0; -} -_ACEOF -if ac_fn_c_try_compile "$LINENO"; then : - ax_cv_check_cflags___fno_semantic_interposition=yes -else - ax_cv_check_cflags___fno_semantic_interposition=no -fi -rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext - CFLAGS=$ax_check_save_flags -fi -{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ax_cv_check_cflags___fno_semantic_interposition" >&5 -$as_echo "$ax_cv_check_cflags___fno_semantic_interposition" >&6; } -if test "x$ax_cv_check_cflags___fno_semantic_interposition" = xyes; then : - + AX_CHECK_COMPILE_FLAG(-fno-semantic-interposition, CFLAGS_NODIST="$CFLAGS_NODIST -fno-semantic-interposition" aixtools at gcc119:[/home/aixtools/cpython/cpython-master]./configure checking for git... found checking build system type... powerpc-ibm-aix7.2.4.0 checking host system type... powerpc-ibm-aix7.2.4.0 checking for python3.10... no checking for python3... python3 checking for --enable-universalsdk... no checking for --with-universal-archs... no checking MACHDEP... "aix" checking for gcc... gcc checking whether the C compiler works... yes ... checking for --with-pydebug... no checking for --with-trace-refs... no checking for --with-assertions... no checking for --enable-optimizations... 
no ./configure[6464]: syntax error at line 6475 : `(' unexpected aixtools at gcc119:[/home/aixtools/cpython/cpython-master] Unclear - to me - what is wrong with configure.ac or autoconf that is installed. aixtools at gcc119:[/home/aixtools/cpython/cpython-master]autoconf --version autoconf (GNU Autoconf) 2.69 Copyright (C) 2012 Free Software Foundation, Inc. License GPLv3+/Autoconf: GNU GPL version 3 or later , This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Written by David J. MacKenzie and Akim Demaille. ---------- components: Build messages: 383515 nosy: Michael.Felt priority: normal severity: normal status: open title: Discrepency between configure.ac and configure versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 08:14:59 2020 From: report at bugs.python.org (agbaroni) Date: Mon, 21 Dec 2020 13:14:59 +0000 Subject: [New-bugs-announce] [issue42702] Inconsistent state after autoreconf Message-ID: <1608556499.13.0.313130118619.issue42702@roundup.psfhosted.org> New submission from agbaroni : git clone https://github.com/python/cpython && cd cpython && autoreconf && ./configure --with-pydebug gives the following error: checking for --enable-optimizations... no ./configure: line 6443: syntax error near unexpected token `-fno-semantic-interposition,' ./configure: line 6443: ` AX_CHECK_COMPILE_FLAG(-fno-semantic-interposition,' Here (section 1.5): https://cpython-devguide.readthedocs.io/setup/ speaks about only autoreconf. ---------- messages: 383517 nosy: agbaroni priority: normal severity: normal status: open title: Inconsistent state after autoreconf type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 09:30:35 2020 From: report at bugs.python.org (Matt Fowler) Date: Mon, 21 Dec 2020 14:30:35 +0000 Subject: [New-bugs-announce] [issue42703] Asyncio Event Documentation Links Incorrect Message-ID: <1608561035.31.0.374875188875.issue42703@roundup.psfhosted.org> New submission from Matt Fowler : The documentation for `asyncio.Event` has incorrect links. The `wait` coroutine incorrectly links to the docs for the `asyncio.wait` waiting primitive, and the `set` method incorrectly links to the docs for the `set` class constructor. 
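For context, the two methods whose links are wrong are Event.wait() and Event.set(); a short, self-contained illustration of what they do (an example written for this report, not taken from the docs page itself):

```
import asyncio

async def waiter(event: asyncio.Event) -> None:
    await event.wait()   # Event.wait(), not the module-level asyncio.wait()
    print("event was set")

async def main() -> None:
    event = asyncio.Event()
    task = asyncio.create_task(waiter(event))
    await asyncio.sleep(0.1)
    event.set()          # Event.set(), not the built-in set() type
    await task

asyncio.run(main())
```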
---------- assignee: docs at python components: Documentation messages: 383524 nosy: docs at python, mattfowler priority: normal severity: normal status: open title: Asyncio Event Documentation Links Incorrect versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 09:33:43 2020 From: report at bugs.python.org (Malay Shah) Date: Mon, 21 Dec 2020 14:33:43 +0000 Subject: [New-bugs-announce] [issue42704] [macOS] [python3.7] : platform.machine() returns x86_64 on ARM machine Message-ID: <1608561223.52.0.215688337013.issue42704@roundup.psfhosted.org> New submission from Malay Shah : there is a discrepancy in platform.machine() behaviour on mac ARM: In python2.7.16 , platform.machine() returns : "arm64" In python3.7.9 , platform.machine() returns : "x86_64" ---------- components: macOS messages: 383525 nosy: ned.deily, ronaldoussoren, shah.malay04 priority: normal severity: normal status: open title: [macOS] [python3.7] : platform.machine() returns x86_64 on ARM machine type: behavior versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 10:11:17 2020 From: report at bugs.python.org (Mohamad Kanj) Date: Mon, 21 Dec 2020 15:11:17 +0000 Subject: [New-bugs-announce] [issue42705] Intercepting thread lock objects not working under context managers Message-ID: <1608563477.42.0.0709881246927.issue42705@roundup.psfhosted.org> Change by Mohamad Kanj : ---------- assignee: docs at python components: C API, Demos and Tools, Distutils, Documentation files: tracing-locking-mechanisms.py nosy: docs at python, dstufft, eric.araujo, mhmdkanj priority: normal severity: normal status: open title: Intercepting thread lock objects not working under context managers type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49695/tracing-locking-mechanisms.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 10:25:39 2020 From: report at bugs.python.org (Scott Norton) Date: Mon, 21 Dec 2020 15:25:39 +0000 Subject: [New-bugs-announce] [issue42706] random.uniform 2x slower than inline implementation Message-ID: <1608564339.76.0.67705848453.issue42706@roundup.psfhosted.org> New submission from Scott Norton : The library function random.uniform takes about twice as long as a manual inline implementation (Python 3.9.1). >>> import timeit >>> timeit.timeit('3 + (8 - 3) * random.random()', 'import random') 0.1540887290000228 >>> timeit.timeit('a + (b - a) * random.random()', 'import random\na = 3\nb = 8') 0.17950458899986188 >>> timeit.timeit('random.uniform(3, 8)', 'import random') # does the call/return really add that much overhead? 
0.31145418699999894 ---------- components: C API messages: 383532 nosy: Scott Norton priority: normal severity: normal status: open title: random.uniform 2x slower than inline implementation type: performance versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 13:59:31 2020 From: report at bugs.python.org (Alexandre) Date: Mon, 21 Dec 2020 18:59:31 +0000 Subject: [New-bugs-announce] [issue42707] Python uses ANSI CP for stdio on Windows console instead of using console or OEM CP Message-ID: <1608577171.81.0.55180678487.issue42707@roundup.psfhosted.org> New submission from Alexandre : Hello, first of all, I hope this was not already discussed (I searched the bugs but it might have been discussed elsewhere) and it's really a bug. I've been struggling to understand today why a simple file redirection couldn't work properly today (encoding issues) and I think I finally understand the whole thing. There's an IO codepage set on Windows consoles (`chcp` for cmd, `[Console]::InputEncoding; [Console]::OutputEncoding` for PowerShell ; chcp will not work on Powershell while it displays it set the CP), 850 for my locale. When there's no redirection / piping, PyWindowsConsoleIO take cares of the encoding (utf-8 is seems), but when there's redirection or piping, encoding falls back to ANSI CP (from config_get_locale_encoding). This behavior seems to be incorrect / breaking things, an example: * testcp.py (file encoded as utf-8) ``` #!/usr/bin/env python3 # -*- coding: utf-8 print('?') ``` * using cmd: ``` # Test condition L:\Cop>chcp Page de codes active?: 850 # We're fine here L:\Cop>py testcp.py ? L:\Cop>py -c "import sys; print(sys.stdout.encoding)" utf-8 # Now with piping L:\Cop>py -c "import sys; print(sys.stdout.encoding)" | more cp1252 L:\Cop>py testcp.py | more ? L:\Cop>py testcp.py > lol && type lol ? # If we adjust cmd CP, it's fine too: L:\Cop>chcp 1252 Page de codes active?: 1252 L:\Cop>py testcp.py | more ? ``` * with pwsh: ``` PS L:\Cop> ([Console]::InputEncoding, [Console]::OutputEncoding) | select CodePage CodePage -------- 850 850 # Fine without redirection PS L:\Cop> py .\testcp.py ? # Here, write-host expect cp850 PS L:\Cop> py .\testcp.py | write-output ? # Same with Out-file (used by ">") PS L:\Cop> py .\testcp.py > lol; Get-Content lol ? # PS L:\Cop> py .\testcp.py | more ?? ``` By reading some sources today to solve my issue, I found many solutions: * in PS `[Console]::OutputEncoding = [Text.Utf8Encoding]::new($false); $env:PYTHONIOENCODING="utf8"` or `[Console]::OutputEncoding = [Text.Encoding]::GetEncoding(1252)` * in CMD `chcp 65001 && set PYTHONIOENCODING=utf8` (but this seems to break more) or `chcp 1252` But reading (and trusting) https://serverfault.com/questions/80635/how-can-i-manually-determine-the-codepage-and-locale-of-the-current-os (https://docs.microsoft.com/en-us/windows/win32/intl/locale-idefault-constants), I understand Python should be using reading the current CP (from GetConsoleOutputCP, like https://github.com/python/cpython/blob/3.9/Python/fileutils.c:) or using the default OEM CP, and not assuming ANSI CP for stdio : > * the OEM code page for use by legacy console applications, > * the ANSI code page for use by legacy GUI applications. 
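To see the mismatch directly, the three values involved can be compared from Python itself; a Windows-only sketch along the same lines as the ctypes probes further down in this message:

```
import sys, locale, ctypes

kernel32 = ctypes.windll.kernel32                 # Windows only
print("sys.stdout.encoding      :", sys.stdout.encoding)
print("ANSI CP (locale default) :", locale.getpreferredencoding(False))
print("console output CP (chcp) :", kernel32.GetConsoleOutputCP())
```

With stdout redirected or piped, the first value currently follows the second (the ANSI code page) rather than the third, which is the behavior described above.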
The init path I could trace : > https://github.com/python/cpython/blob/3.9/Python/pylifecycle.c > init_sys_streams >> create_stdio (https://github.com/python/cpython/blob/3.9/Python/pylifecycle.c#L1774) >>> open.raw : https://github.com/python/cpython/blob/3.9/Modules/_io/_iomodule.c#L374 >>>> https://github.com/python/cpython/blob/3.9/Modules/_io/winconsoleio.c >> fallback to ini_sys_stream encoding > https://github.com/python/cpython/blob/3.9/Python/initconfig.c > config_init_stdio_encoding > config_get_locale_encoding > GetACP() Some test with GetConsoleCP: ``` L:\Cop>py -c "import os; print(os.device_encoding(0), os.device_encoding(1))" | more cp850 None L:\Cop>type nul | py -c "import os; print(os.device_encoding(0), os.device_encoding(1))" None cp850 L:\Cop>type nul | py -c "import ctypes; print(ctypes.windll.kernel32.GetConsoleCP(), ctypes.windll.kernel32.GetConsoleOutputCP())" 850 850 L:\Cop>py -c "import ctypes; print(ctypes.windll.kernel32.GetConsoleCP(), ctypes.windll.kernel32.GetConsoleOutputCP())" | more 850 850 ``` Some links / documentations, if useful: * https://serverfault.com/questions/80635/how-can-i-manually-determine-the-codepage-and-locale-of-the-current-os * https://docs.microsoft.com/en-us/windows/win32/intl/locale-idefault-constants * https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getoemcp * https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getacp * https://docs.microsoft.com/en-us/windows/console/getconsoleoutputcp * https://stackoverflow.com/questions/56944301/why-does-powershell-redirection-change-the-formatting-of-the-text-content * https://stackoverflow.com/questions/19122755/output-echo-a-variable-to-a-text-file * https://stackoverflow.com/questions/40098771/changing-powershells-default-output-encoding-to-utf-8 * Maybe related: https://github.com/PowerShell/PowerShell/issues/10907 * https://stackoverflow.com/questions/57131654/using-utf-8-encoding-chcp-65001-in-command-prompt-windows-powershell-window (will probably break things :) ) * https://stackoverflow.com/questions/49476326/displaying-unicode-in-powershell/49481797#49481797 * https://stackoverflow.com/questions/25642746/how-do-i-pipe-unicode-into-a-native-application-in-powershell Please note I took time to write this issue as best as I could, I hope it won't be closed without explaining why the current behavior is normal (not that I suppose this will happen, I just don't know how people react here :) ). 
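For completeness, a possible user-side workaround until the default is reconsidered (a minimal sketch, assuming UTF-8 output is acceptable; it does not fix the underlying default):

```
import sys

# Explicitly pick the encoding for the standard streams, whether they point
# to the console or to a pipe/file; similar in effect to PYTHONIOENCODING=utf-8.
sys.stdout.reconfigure(encoding="utf-8")
sys.stderr.reconfigure(encoding="utf-8")
```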
Thanks a lot for Python, I really enjoy using it, Best, Alexandre ---------- components: Windows messages: 383550 nosy: paul.moore, steve.dower, tim.golden, u36959, zach.ware priority: normal severity: normal status: open title: Python uses ANSI CP for stdio on Windows console instead of using console or OEM CP type: behavior _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 21:00:48 2020 From: report at bugs.python.org (Reeyarn Li) Date: Tue, 22 Dec 2020 02:00:48 +0000 Subject: [New-bugs-announce] [issue42708] AttributeError when running multiprocessing on MacOS 11 with Apple Silicon (M1) Message-ID: <1608602448.28.0.20225447007.issue42708@roundup.psfhosted.org> New submission from Reeyarn Li : I just run the sample code from multiprocessing's documentation page: #https://docs.python.org/3/library/multiprocessing.html from multiprocessing import Pool def f(x): return x*x with Pool(5) as p: print(p.map(f, [1, 2, 3])) ## end of code And it cannot run, with the following error messages: Process SpawnPoolWorker-2: Traceback (most recent call last): File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/pool.py", line 114, in worker task = get() File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 358, in get return _ForkingPickler.loads(res) AttributeError: Can't get attribute 'f' on ---------- components: macOS messages: 383565 nosy: ned.deily, reeyarn, ronaldoussoren priority: normal severity: normal status: open title: AttributeError when running multiprocessing on MacOS 11 with Apple Silicon (M1) versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 22:59:18 2020 From: report at bugs.python.org (James B Wilkinson) Date: Tue, 22 Dec 2020 03:59:18 +0000 Subject: [New-bugs-announce] [issue42709] I Message-ID: <1608609558.65.0.94290684892.issue42709@roundup.psfhosted.org> Change by James B Wilkinson : ---------- nosy: the.doc priority: normal severity: normal status: open title: I _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 21 23:26:31 2020 From: report at bugs.python.org (Faris Chugthai) Date: Tue, 22 Dec 2020 04:26:31 +0000 Subject: [New-bugs-announce] [issue42710] Viewing pydoc API documentation Message-ID: <1608611191.07.0.553217513358.issue42710@roundup.psfhosted.org> New submission from Faris Chugthai : I'm sure that this has been observed before, but I was unable to find the original issue where this was discussed so I'm going to apologize in advance for what is probably a duplicate issue. But why is there no documentation available on the classes and functions available in the pydoc module? There are a large number of well documented classes, methods and functions and I could easily imagine legitimate motivations to want to import some of the functionality provided and use it in a project. 
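For example, this kind of reuse already works today, even though none of it is documented (a small sketch; the names come from reading the pydoc source, not from any documented API):

```
import json
import pydoc

# Roughly the text that help(json) would show, as a plain string:
text = pydoc.render_doc(json, renderer=pydoc.plaintext)

# The HTML body that pydoc's -w option builds the json.html page from:
html = pydoc.HTMLDoc().docmodule(json)
```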
No information on the 2000 lines of code in pydoc.py are provided at https://docs.python.org/3/library/pydoc.html Running: `python -m pydoc -w pydoc` Generates a similarly sparse file. In addition, running in the interactive interpreter: >>> help(pydoc) shows almost no real information. I'm genuinely unsure of why this happens. Even if this is an intentional design decision for how the Helper class is supposed to work, why isn't there corresponding information for the different classes at `Doc/library/pydoc.rst`? Searching for something like `pydoc.TextRepr` in the search bar of docs.python.org doesn't return anything at all. Sorry if there's something horrifically obvious I'm missing here guys but I'd appreciate any clarification on this matter. ---------- assignee: docs at python components: Documentation messages: 383570 nosy: Faris Chugthai, docs at python priority: normal severity: normal status: open title: Viewing pydoc API documentation _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 01:40:07 2020 From: report at bugs.python.org (Kaleb Barrett) Date: Tue, 22 Dec 2020 06:40:07 +0000 Subject: [New-bugs-announce] [issue42711] lru_cache and NotImplemented Message-ID: <1608619207.84.0.36165935347.issue42711@roundup.psfhosted.org> New submission from Kaleb Barrett : Having to return `NotImplemented` (which counts as a successful function call) in `functools.lru_cache` and `functools.cache` decorated binary dunder methods has the potential to fill the LRU cache with calls that will in ultimately result in errors (read "garbage"). There are a few ways to avoid this IMO: 1. Change `functools.lru_cache` not cache calls that result in `NotImplemented`. 2. Allow a user to easily extend `functools.lru_cache` (the implementation is not extensible right now) to do the previously mentioned operation. 3. Add the ability to *throw* `NotImplemented` in binary dunder methods so the function call does not complete. This would work exactly like returning `NotImplemented` and be handled by the runtime. And my current solution... 4. Copy-paste `functools.lru_cache` and add in changes for solution 1 :) ---------- components: Library (Lib) messages: 383573 nosy: ktbarrett priority: normal severity: normal status: open title: lru_cache and NotImplemented type: performance versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 05:17:25 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 22 Dec 2020 10:17:25 +0000 Subject: [New-bugs-announce] [issue42712] Segmentation fault in running ast.literal_eval() with large expression size. Message-ID: <1608632245.06.0.0184110305677.issue42712@roundup.psfhosted.org> New submission from Xinmeng Xia : Calling function ast.literal_eval() with large size can cause a segmentation fault in Python 3.5 -3.10. Please check the following two examples. The example 1 works as expected, while the second one triggers segmentation fault on Python 3.5,3.6,3.7,3.8,3.9,3.10. The primary difference between these two examples lay on the value of "n". 
Example 1: ========================================= import ast mylist = [] n = 100000 print(ast.literal_eval("mylist"+"+mylist"*n)) ========================================== The actual output: value Error on Python 3.5,3.7,3.8,3.9,3.10, Recursive Error on Python 3.6 (as expected) Example 2: =================================== import ast mylist = [] n = 1000000 print(ast.literal_eval("mylist"+"+mylist"*n)) =================================== The actual output: segmentation fault on Python 3.5 - 3.10 (not as expected) My system information: >> python3.10 -V Python 3.10.0a2 >> python3.9 -V Python 3.9.0rc1 >> python3.8 -V Python 3.8.0 >> python3.7 -V Python 3.7.3 >> python3.6 -V Python 3.6.12 >> uname -v #73~16.04.1-Ubuntu ---------- components: Interpreter Core messages: 383577 nosy: xxm priority: normal severity: normal status: open title: Segmentation fault in running ast.literal_eval() with large expression size. type: crash versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 05:18:43 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 22 Dec 2020 10:18:43 +0000 Subject: [New-bugs-announce] [issue42713] Segmentation fault in running eval() with large expression size. Message-ID: <1608632323.64.0.924637507568.issue42713@roundup.psfhosted.org> New submission from Xinmeng Xia : Calling function eval() with large size can cause a segmentation fault in Python 3.7 -3.10. Please check the following two examples. The example 1 works as expected, while the second one triggers segmentation fault on Python 3.7,3.8,3.9,3.10. The primary difference between these two examples lay on the value of "n". Example 1: ========================================= mylist = [] n = 100000 print(eval("mylist"+"+mylist"*n)) ========================================== The actual output: Recursion Error on Python 3.5-3.10 (as expected) Example 2: =================================== mylist = [] n = 1000000 print(eval("mylist"+"+mylist"*n)) =================================== The actual output: Recursive Error on Python 3.5, 3.6 (as expected), segmentation fault on Python 3.7, 3.8, 3.9, 3.10 (not as expected) My system information: >> python3.10 -V Python 3.10.0a2 >> python3.9 -V Python 3.9.0rc1 >> python3.8 -V Python 3.8.0 >> python3.7 -V Python 3.7.3 >> python3.6 -V Python 3.6.12 >> uname -v #73~16.04.1-Ubuntu ---------- components: Interpreter Core messages: 383578 nosy: xxm priority: normal severity: normal status: open title: Segmentation fault in running eval() with large expression size. type: crash versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 05:19:47 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 22 Dec 2020 10:19:47 +0000 Subject: [New-bugs-announce] [issue42714] Segmentation fault in running compile() with large expression size. Message-ID: <1608632387.72.0.614247098104.issue42714@roundup.psfhosted.org> New submission from Xinmeng Xia : Calling function compile() with large size can cause a segmentation fault in Python 3.7 -3.10. Please check the following two examples. The example 1 works as expected, while the second one triggers segmentation fault on Python 3.7,3.8,3.9,3.10. The primary difference between these two examples lay on the value of "n". 
Example 1: ========================================= mylist = [] n = 100000 print(compile("mylist"+"+mylist"*n,'','single')) ========================================== The actual output: Recursion Error on Python 3.5-3.10 (as expected) Example 2: ========================================= mylist = [] n = 1000000 print(compile("mylist"+"+mylist"*n,'','single')) =================================== The actual output: Recursive Error on Python 3.5, 3.6 (as expected), segmentation fault on Python 3.7, 3.8, 3.9, 3.10 (not as expected) My system information: >> python3.10 -V Python 3.10.0a2 >> python3.9 -V Python 3.9.0rc1 >> python3.8 -V Python 3.8.0 >> python3.7 -V Python 3.7.3 >> python3.6 -V Python 3.6.12 >> uname -v #73~16.04.1-Ubuntu ---------- components: Interpreter Core messages: 383579 nosy: xxm priority: normal severity: normal status: open title: Segmentation fault in running compile() with large expression size. type: crash versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 05:20:37 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 22 Dec 2020 10:20:37 +0000 Subject: [New-bugs-announce] [issue42715] Segmentation fault in running exec() with large expression size. Message-ID: <1608632437.39.0.152674435504.issue42715@roundup.psfhosted.org> New submission from Xinmeng Xia : Calling function exec() with large size can cause a segmentation fault in Python 3.7 -3.10. Please check the following two examples. The example 1 works as expected, while the second one triggers segmentation fault on Python 3.7,3.8,3.9,3.10. The primary difference between these two examples lay on the value of "n". Example 1: ========================================= mylist = [] n = 100000 print(exec("mylist"+"+mylist"*n)) ========================================== The actual output: Recursion Error on Python 3.5-3.10 (as expected) Example 2: =================================== mylist = [] n = 1000000 print(exec("mylist"+"+mylist"*n)) =================================== The actual output: Recursive Error on Python 3.5, 3.6 (as expected), segmentation fault on Python 3.7, 3.8, 3.9, 3.10 (not as expected) My system information: >> python3.10 -V Python 3.10.0a2 >> python3.9 -V Python 3.9.0rc1 >> python3.8 -V Python 3.8.0 >> python3.7 -V Python 3.7.3 >> python3.6 -V Python 3.6.12 >> uname -v #73~16.04.1-Ubuntu ---------- components: Interpreter Core messages: 383580 nosy: xxm priority: normal severity: normal status: open title: Segmentation fault in running exec() with large expression size. type: crash versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 05:21:33 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 22 Dec 2020 10:21:33 +0000 Subject: [New-bugs-announce] [issue42716] Segmentation fault in running ast.parse() with large expression size. Message-ID: <1608632493.48.0.125975965886.issue42716@roundup.psfhosted.org> New submission from Xinmeng Xia : Calling function ast.parse() with large size can cause a segmentation fault in Python 3.5 -3.10. Please check the following two examples. The example 1 works as expected, while the second one triggers segmentation fault on Python 3.5,3.6,3.7,3.8,3.9,3.10. The primary difference between these two examples lay on the value of "n". 
Example 1: ========================================= import ast mylist = [] n = 100000 print(ast.parse("mylist"+"+mylist"*n)) ========================================== The actual output: AST nodes on Python 3.5-3.10 (as expected) # <_ast.Module object at 0x7f78d7b672e8> Example 2: =================================== import ast mylist = [] n = 1000000 print(ast.parse("mylist"+"+mylist"*n)) # <_ast.Module object at 0x7f78d7b672e8> =================================== The actual output: segmentation fault on Python 3.5 - 3.10 (not as expected) My system information: >> python3.10 -V Python 3.10.0a2 >> python3.9 -V Python 3.9.0rc1 >> python3.8 -V Python 3.8.0 >> python3.7 -V Python 3.7.3 >> python3.6 -V Python 3.6.12 >> uname -v #73~16.04.1-Ubuntu ---------- components: Interpreter Core messages: 383581 nosy: xxm priority: normal severity: normal status: open title: Segmentation fault in running ast.parse() with large expression size. type: crash versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 05:25:03 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Tue, 22 Dec 2020 10:25:03 +0000 Subject: [New-bugs-announce] [issue42717] The python interpreter crashed with "_enter_buffered_busy" Message-ID: <1608632703.27.0.708410911987.issue42717@roundup.psfhosted.org> New submission from Xinmeng Xia : The following program can work well in Python 2. However it crashes in Python 3( 3.6-3.10 ) with the following error messages. Program: ============================================ import sys,time, threading class test: def test(self): pass class test1: def run(self): for i in range(0,10000000): connection = test() sys.stderr.write(' =_= ') def testrun(): client = test1() thread = threading.Thread(target=client.run, args=()) thread.setDaemon(True) thread.start() time.sleep(0.1) testrun() ============================================ Error message: ------------------------------------------------------------------------------ =_= =_= =_= =_= =_= ...... =_= =_= =_= =_= Fatal Python error: _enter_buffered_busy: could not acquire lock for <_io.BufferedWriter name=''> at interpreter shutdown, possibly due to daemon threads Python runtime state: finalizing (tstate=0xd0c180) Current thread 0x00007f08a638f700 (most recent call first): Aborted (core dumped) ------------------------------------------------------------------------------ When I remove "time.sleep(0.1)" or "thread.setDaemon(True)" or "sys.stderr.write(' =_= ')" or "for i in range(0,10000000)":, the python interpreter seems to work well. ---------- components: Interpreter Core messages: 383582 nosy: xxm priority: normal severity: normal status: open title: The python interpreter crashed with "_enter_buffered_busy" type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 07:02:51 2020 From: report at bugs.python.org (Mark Shannon) Date: Tue, 22 Dec 2020 12:02:51 +0000 Subject: [New-bugs-announce] [issue42718] Allow zero-width entries in code.co_lines() Message-ID: <1608638571.52.0.00224943237142.issue42718@roundup.psfhosted.org> New submission from Mark Shannon : While the impact of making `if 0` and `while True` appear when tracing can be mitigated, the impact of `continue` is more of a concern. 
The following loop:

    while True:
        if test:
            continue
        rest

PEP 626 requires that the `continue` is traced, and continue can occur once per iteration. So inserting a NOP for a continue will have a measurable impact on performance. In some cases the NOP can be folded into the preceding or following bytecode, but often it cannot because the code is both branchy and spread across several lines. If PEP 626 allowed zero-width entries in the line number table, then any remaining NOPs could be eliminated in the assembler, at the cost of a little additional complexity in `maybe_call_line_trace()`. ---------- assignee: Mark.Shannon components: Interpreter Core messages: 383585 nosy: Mark.Shannon, pablogsal, rhettinger, serhiy.storchaka priority: normal severity: normal stage: needs patch status: open title: Allow zero-width entries in code.co_lines() type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 09:08:13 2020 From: report at bugs.python.org (Mark Shannon) Date: Tue, 22 Dec 2020 14:08:13 +0000 Subject: [New-bugs-announce] [issue42719] Eliminate NOPs in the assembler, by emitting zero-width entries in the line number table Message-ID: <1608646093.23.0.458945572076.issue42719@roundup.psfhosted.org> New submission from Mark Shannon : This will require a change to the internal line number table format. PEP 626 requires all lines are traced, which makes handling of 'continue' and other jump-to-jumps inefficient if spread across multiple lines. In 3.9 many jump-to-jumps were eliminated, but PEP 626 requires that many are converted to a NOP. Zero-width line number table entries will allow us to eliminate most, if not all, remaining NOPs in the assembler. ---------- assignee: Mark.Shannon components: Interpreter Core messages: 383590 nosy: Mark.Shannon priority: normal severity: normal stage: needs patch status: open title: Eliminate NOPs in the assembler, by emitting zero-width entries in the line number table type: performance versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 14:00:22 2020 From: report at bugs.python.org (Nandish) Date: Tue, 22 Dec 2020 19:00:22 +0000 Subject: [New-bugs-announce] [issue42720] << operator has a bug Message-ID: <1608663622.99.0.883125192536.issue42720@roundup.psfhosted.org> New submission from Nandish : I verified the following on both Python 2.7 and Python 3.8. I did print(100<<3). Converting 100 to binary gives 1100100. What I did was drop the first 3 bits and add 3 bits with the value '0' at the end. So it should result in 0100000, and converting this to decimal gives 32. To my surprise, when I executed print(100<<3) the answer was 800. I was puzzled. I converted 800 to binary to check what's going on, and this is what I got: 1100100000. Looking at how Python arrived at 800: it did not shift out or drop the first 3 bits, it only added 3 bits with the value '0' at the end. Whereas print(100>>3) worked perfectly. I did the manual calculation and checked the printed result from Python; it worked correctly: it dropped the last 3 bits and added 3 bits with the value '0' at the front. It looks like the left shift operator (100<<3) has a bug in Python.
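For reference, the two computations described above can be written out directly; the difference is whether the three leading bits are kept or dropped (this is only meant to make the comparison concrete):

```
x = 100
bits = bin(x)[2:]                  # '1100100'
print(x << 3, bin(x << 3))         # 800  '0b1100100000'  (Python keeps the leading bits)
print(int(bits[3:] + '000', 2))    # 32   (first 3 bits dropped, as in the manual calculation above)
```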
---------- messages: 383599 nosy: Nandish priority: normal severity: normal status: open title: << operator has a bug type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 15:35:40 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Tue, 22 Dec 2020 20:35:40 +0000 Subject: [New-bugs-announce] [issue42721] Using of simple dialogs without default root window Message-ID: <1608669340.97.0.219315442285.issue42721@roundup.psfhosted.org> New submission from Serhiy Storchaka : Currently, standard message boxes in tkinter.messagebox (like showinfo() or askyesnocancel()) can be called without master or default root window. If arguments master or parent are not specified and there is no default root window is still created, the root window is created and set as the default root window. It is kept visible after closing a message box. It was done for testing from REPL. It affects also tkinter.colorchooser.askcolor(). The drawback is that the root window is kept visible and that it is set as the default root window (so following explicit calls of Tk(), with possible different arguments does not set the default root window). Simple query dialogs in tkinter (like askinteger()) initially had the same behavior. But later it was broken, and currently they do not work if master and parent are not specified and the default root window is not set. Proposed PR improves behavior of message boxes and colorchooser and restore the lost feature for query dialogs. Now if master and parent are not specified and the default root window is not set, the new temporary hidden root window is created, but is not set as the default root window. It will be destroyed right after closing the dialog, so it will not affect other widgets. ---------- components: Tkinter messages: 383612 nosy: gpolo, serhiy.storchaka priority: normal severity: normal status: open title: Using of simple dialogs without default root window type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 22 18:46:51 2020 From: report at bugs.python.org (Dominik V.) Date: Tue, 22 Dec 2020 23:46:51 +0000 Subject: [New-bugs-announce] [issue42722] Add --debug command line option to unittest to enable post-mortem debugging Message-ID: <1608680811.32.0.033035519481.issue42722@roundup.psfhosted.org> New submission from Dominik V. : Currently there is no option to use post-mortem debugging via `pdb` on a `unittest` test case which fails due to an exception being leaked. Consider the following example: ``` import unittest def foo(): for x in [1, 2, 'oops', 4]: print(x + 100) class TestFoo(unittest.TestCase): def test_foo(self): self.assertIs(foo(), None) if __name__ == '__main__': unittest.main() ``` If we were calling `foo` directly we could enter post-mortem debugging via `python -m pdb test.py`. However since `foo` is wrapped in a test case, `unittest` eats the exception and thus prevents post-mortem debugging. So I propose adding a command-line option `--debug` to unittest for running test cases in debug mode so that post-mortem debugging can be used. I see that some third-party distributions enable this, but since both `unittest` and `pdb` are part of the standard library, it would be nice if they played well together. Plus the required methods are already in place (`TestCase.debug` and `TestSuite.debug`). 
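For illustration, roughly what such an option could do internally, using those existing methods (a sketch only, not a proposed implementation):

```
import pdb
import sys
import unittest

loader = unittest.defaultTestLoader
suite = loader.loadTestsFromTestCase(TestFoo)   # TestFoo from the example above
try:
    suite.debug()        # runs the tests without a TestResult, so exceptions propagate
except Exception:
    pdb.post_mortem(sys.exc_info()[2])
```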
There is also a popular StackOverflow question on this topic: https://stackoverflow.com/questions/4398967/python-unit-testing-automatically-running-the-debugger-when-a-test-fails ---------- messages: 383624 nosy: Dominik V. priority: normal severity: normal status: open title: Add --debug command line option to unittest to enable post-mortem debugging type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 23 06:21:19 2020 From: report at bugs.python.org (Luke Davis) Date: Wed, 23 Dec 2020 11:21:19 +0000 Subject: [New-bugs-announce] [issue42723] Unclear why dict unpacking cannot be used in dict comprehension Message-ID: <1608722479.78.0.847759478082.issue42723@roundup.psfhosted.org> New submission from Luke Davis : Why am I unable to do: dict = { **sub_dict for sub_dict in super_dict.values() } which throws: "dict unpacking cannot be used in dict comprehension", whereas the equivalent code below doesn't throw any errors?: dict = {} for sub_dict in super_dict.values(): dict = { **dict, **sub_dict } Am I wrong in thinking the first and second block of code should do the same thing behind the scenes? ---------- messages: 383641 nosy: PartlyFluked priority: normal severity: normal status: open title: Unclear why dict unpacking cannot be used in dict comprehension type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 23 07:56:35 2020 From: report at bugs.python.org (Corentin Bolou) Date: Wed, 23 Dec 2020 12:56:35 +0000 Subject: [New-bugs-announce] [issue42724] Change library name when building. Message-ID: <1608728195.78.0.130143828179.issue42724@roundup.psfhosted.org> New submission from Corentin Bolou : Hello Python community. I use Python in a piece of software I develop. I managed to do the standard build into lib/dll on Windows. However, I have difficulties when I want to change the name of the lib and use the renamed files in my project. I followed the README, which says that to change the name you have to edit python.props in PCbuild. I did that and got the name I wanted after building Python. But when I compile my project and reach the link step, my project still searches for the old library name, even though nothing in my code refers to that library by name. I added the python lib include to my project settings, but it still says that it cannot open file python38.lib. I think some compiled C code may reference the original name. If that is the case, I want to know whether it is possible to build Python with a name other than python38 and python38_d .lib/.dll on Windows. Sincerely, Corentin ---------- components: Build messages: 383646 nosy: corentin.bolou27 priority: normal severity: normal status: open title: Change library name when building. type: compile error versions: Python 3.8 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 23 08:28:51 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Wed, 23 Dec 2020 13:28:51 +0000 Subject: [New-bugs-announce] [issue42725] PEP 563: Should the behavior change for yield/yield from's Message-ID: <1608730131.08.0.595189962302.issue42725@roundup.psfhosted.org> New submission from Batuhan Taskaya : Since the annotations are processed just like all other expressions in the symbol table, they still affect the generated entries for functions etc.
This would result with def foo(): for number in range(5): foo: (yield number) return number foo() returning a generator / coroutine (depending on yield/yield from/await usage). Is this something we want to keep or maybe tweak the symbol table generator to not to handle annotations (since there are also more subtle issues regarding analysis of cell / free vars). ---------- messages: 383647 nosy: BTaskaya, gvanrossum priority: normal severity: normal status: open title: PEP 563: Should the behavior change for yield/yield from's _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 23 13:30:18 2020 From: report at bugs.python.org (Augusto Hack) Date: Wed, 23 Dec 2020 18:30:18 +0000 Subject: [New-bugs-announce] [issue42726] gdb/libpython.py InstanceProxy does not work with py3 Message-ID: <1608748218.63.0.956794347013.issue42726@roundup.psfhosted.org> New submission from Augusto Hack : Calling `proxyval` on an instance of a user defined class fails. minimally reproducible example: ``` from time import sleep class A: def __init__(self): self.a = 1 a = A() sleep(10) ``` Attach to process and run: ``` py-up python-interactive Frame.get_selected_python_frame().get_pyop().get_var_by_name('a')[0].proxyval(set()) ``` Will result in the following error: ``` Traceback (most recent call last): File "", line 1, in File "/usr/lib/debug/usr/lib64/libpython3.7m.so.1.0-3.7.9-2.fc33.x86_64.debug-gdb.py", line 471, in __repr__ for arg, val in self.attrdict.iteritems()]) AttributeError: 'dict' object has no attribute 'iteritems' ``` Tested on fedora 33 with python3.7 and debugsymbols ---------- components: Demos and Tools messages: 383654 nosy: hack.augusto priority: normal pull_requests: 22764 severity: normal status: open title: gdb/libpython.py InstanceProxy does not work with py3 versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Wed Dec 23 22:41:22 2020 From: report at bugs.python.org (Ethan Furman) Date: Thu, 24 Dec 2020 03:41:22 +0000 Subject: [New-bugs-announce] [issue42727] [Enum] EnumMeta.__prepare__ needs to accept **kwds Message-ID: <1608781282.89.0.997821175942.issue42727@roundup.psfhosted.org> New submission from Ethan Furman : **kwds are necessary to support __init_subclass__, but __prepare__ currently does not accept them. 
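A minimal sketch of the problem (`flavor` is an invented keyword, used only for illustration):

```
from enum import Enum

class Base(Enum):
    def __init_subclass__(cls, flavor=None, **kwds):
        super().__init_subclass__(**kwds)

# The class statement below currently fails with a TypeError, because the
# extra keyword is passed to EnumMeta.__prepare__(), which does not accept it:
class Spice(Base, flavor="hot"):
    PEPPER = 1
```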
---------- assignee: ethan.furman messages: 383670 nosy: ethan.furman priority: normal severity: normal status: open title: [Enum] EnumMeta.__prepare__ needs to accept **kwds type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 24 02:24:53 2020 From: report at bugs.python.org (Tao He) Date: Thu, 24 Dec 2020 07:24:53 +0000 Subject: [New-bugs-announce] [issue42728] Typo in documentation: importlib.metadata Message-ID: <1608794693.94.0.363783921808.issue42728@roundup.psfhosted.org> New submission from Tao He : There's a typo in importlib.metadata.rst: In section: https://docs.python.org/3/library/importlib.metadata.html#distributions ---------- messages: 383678 nosy: sighingnow priority: normal severity: normal status: open title: Typo in documentation: importlib.metadata type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 24 05:19:55 2020 From: report at bugs.python.org (Paul Sokolovsky) Date: Thu, 24 Dec 2020 10:19:55 +0000 Subject: [New-bugs-announce] [issue42729] tokenize, ast: No direct way to parse tokens into AST, a gap in the language processing pipiline Message-ID: <1608805195.98.0.835213865075.issue42729@roundup.psfhosted.org> New submission from Paul Sokolovsky : Currently, it's possible: * To get from stream-of-characters program representation to AST representation (AST.parse()). * To get from AST to code object (compile()). * To get from a code object to first-class function to the execute the program. Python also offers "tokenize" module, but it stands as a disconnected island: the only things it allows to do is to get from stream-of-characters program representation to stream-of-tokens, and back. At the same time, conceptually, tokenization is not a disconnected feature, it's the first stage of language processing pipeline. The fact that "tokenize" is disconnected from the rest of the pipeline, as listed above, is more an artifact of CPython implementation: both "ast" module and compile() module are backed by the underlying bytecode compiler implementation written in C, and that's what connects them. On the other hand, "tokenize" module is pure-Python, while the underlying compiler has its own tokenizer implementation (not exposed). That's the likely reason of such disconnection between "tokenize" and the rest of the infrastructure. I propose to close that gap, and establish an API which would allow to parse token stream (iterable) into an AST. An initial implementation for CPython can (and likely should) be naive, making a loop thru surface program representation. That's ok, again, the idea is to establish a standard API to be able to go tokens -> AST, then individual Python implementation can make/optimize it based on their needs. The proposed name is ast.parse_tokens(). It follows the signature of the existing ast.parse(), except that first parameter is "token_stream" instead of "source". Another alternative would be to overload existing ast.parse() to accept token iterable. I guess, at the current stage, where we try to tighten up type strictness of API, and have clear typing signatures for API functions, this is not favored solution. 
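To make the shape of the API concrete, the naive version could be as small as this (a sketch; the signature mirrors ast.parse()):

```
import ast
import tokenize

def parse_tokens(token_stream, filename="<unknown>", mode="exec"):
    # Naive reference implementation: loop back through the surface
    # representation, then reuse the existing parser.  Other Python
    # implementations are free to feed the tokens to their parser directly.
    source = tokenize.untokenize(token_stream)
    return ast.parse(source, filename=filename, mode=mode)
```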
---------- components: Library (Lib) messages: 383680 nosy: BTaskaya, pablogsal, pfalcon, serhiy.storchaka priority: normal severity: normal status: open title: tokenize, ast: No direct way to parse tokens into AST, a gap in the language processing pipiline versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 24 12:29:09 2020 From: report at bugs.python.org (Abdulrahman Alabdulkareem) Date: Thu, 24 Dec 2020 17:29:09 +0000 Subject: [New-bugs-announce] [issue42730] TypeError in Time.Sleep when invoked from shell script background Message-ID: <1608830949.51.0.513581784179.issue42730@roundup.psfhosted.org> New submission from Abdulrahman Alabdulkareem : I get a random TypeError exception thrown whenever I interrupt/kill a sleeping parent thread from a child thread but ONLY if the python script was invoked by a shell script with an & argument to make it run in the background. (this is in vanilla python) This doesn't happen if invoked (1) in command line (2) in command line with & (3) in shell script (without &) Only in shell script with &. Here is the python script named psc.py https://pastebin.com/raw/KZQptCMr And here is the shell script named ssc.sh https://pastebin.com/raw/QQzs4Tpz that when ran will run the python script and cause the strange behaviour Here is the output I'm seeing: ------------------- parent looping child interrupting parent why would I ever catch a TypeError? Traceback (most recent call last): File "m.py", line 17, in time.sleep(1) TypeError: 'int' object is not callable Here is the output I'm expecting: -------------------- parent looping child interrupting parent caught interruption raised from user or child thread :) Another unexpected behaviour might be that python suddenly hangs. Here is a stackoverflow question raised on this issue with discussion in the comments https://stackoverflow.com/questions/65440353/python-time-sleep1-raises-typeerror?noredirect=1#comment115697237_65440353 ---------- messages: 383695 nosy: master3243 priority: normal severity: normal status: open title: TypeError in Time.Sleep when invoked from shell script background versions: Python 3.7 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 24 14:13:42 2020 From: report at bugs.python.org (Karl Nelson) Date: Thu, 24 Dec 2020 19:13:42 +0000 Subject: [New-bugs-announce] [issue42731] Enhancement request for proxying PyString Message-ID: <1608837222.47.0.436477012182.issue42731@roundup.psfhosted.org> New submission from Karl Nelson : When developing with JPype, the largest hole currently is that Java returns a string type which cannot be represented as a str. Java strings are string like and immutable and can be pulled to Python when needed, but it is best if they remain in Java until Python requests it as pulling all string values through the API and pushing them back can result in serious overhead. Thus they need to be represented as a Proxy to a string, which can be accessed as a string at anytime. Throughout the Python API str is treated as a concrete type (though it is somewhat polymorphic due to different storage for code points sizes.) There is also handling for an "unready" string which needs additional treatment before it can be accessed. Unfortunately this does not appear to be suitable for creating a proxy object which can be pulled from another source to create a string on demand. 
Having a "__str__()" method is insufficient as that merely makes an object able to become a string rather than considered to be a string by the rest of the API. Would it be possible to generalize the concept of an unready string so that when Ready is called it fetches the actually string contents, creates a piece of memory to store the string contents (outside of the object itself), and sets the access flags for so that the code points can be interpreted? Is this already possible in the API? Are there any other plans to make the str type able to operate as a proxy? ---------- components: Extension Modules messages: 383701 nosy: Thrameos priority: normal severity: normal status: open title: Enhancement request for proxying PyString versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 24 14:36:20 2020 From: report at bugs.python.org (Ken Jin) Date: Thu, 24 Dec 2020 19:36:20 +0000 Subject: [New-bugs-announce] [issue42732] Buildbot s390x Fedora LTO + PGO 3.x fails intermittently Message-ID: <1608838580.26.0.691630726472.issue42732@roundup.psfhosted.org> New submission from Ken Jin : Dear core developers, I noticed that for many recent commits, the s390x Fedora LTO + PGO 3.x buildbot often fails. Here's an error log:: gcc -pthread -fno-semantic-interposition -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -Xlinker -export-dynamic -o python Programs/python.o libpython3.10.a -lcrypt -lpthread -ldl -lutil -lm -lm gcc -pthread -fno-semantic-interposition -flto -fuse-linker-plugin -ffat-lto-objects -flto-partition=none -g -Xlinker -export-dynamic -o Programs/_testembed Programs/_testembed.o libpython3.10.a -lcrypt -lpthread -ldl -lutil -lm -lm /usr/bin/ld: python.lto.o: in function `run_mod': /home/dje/cpython-buildarea/3.x.edelsohn-fedora-z.lto-pgo/build/Python/pythonrun.c:1230: undefined reference to `PyAST_CompileObject' /usr/bin/ld: python.lto.o: in function `builtin_compile': /home/dje/cpython-buildarea/3.x.edelsohn-fedora-z.lto-pgo/build/Python/bltinmodule.c:808: undefined reference to `PyAST_CompileObject' /usr/bin/ld: python.lto.o: in function `Py_CompileStringObject': /home/dje/cpython-buildarea/3.x.edelsohn-fedora-z.lto-pgo/build/Python/pythonrun.c:1307: undefined reference to `PyAST_CompileObject' /usr/bin/ld: python.lto.o: in function `symtable_lookup': /home/dje/cpython-buildarea/3.x.edelsohn-fedora-z.lto-pgo/build/Python/symtable.c:1004: undefined reference to `_Py_Mangle' /usr/bin/ld: python.lto.o: in function `symtable_record_directive': /home/dje/cpython-buildarea/3.x.edelsohn-fedora-z.lto-pgo/build/Python/symtable.c:1161: undefined reference to `_Py_Mangle' /usr/bin/ld: python.lto.o: in function `type_new': /home/dje/cpython-buildarea/3.x.edelsohn-fedora-z.lto-pgo/build/Objects/typeobject.c:2544: undefined reference to `_Py_Mangle' /usr/bin/ld: python.lto.o: in function `symtable_add_def_helper': /home/dje/cpython-buildarea/3.x.edelsohn-fedora-z.lto-pgo/build/Python/symtable.c:1018: undefined reference to `_Py_Mangle' collect2: error: ld returned 1 exit status make[1]: *** [Makefile:592: python] Error 1 make[1]: *** Waiting for unfinished jobs.... [.............. I truncated the rest] I have a hunch the error is caused by a lack of the '-fprofile-generate' flag for the first two gcc commands. The only issue is that it's sometimes there, and sometimes not, and I'm not familiar enough with the buildbot code to find out why. Sorry. Eg. 
this commit https://github.com/python/cpython/commit/cc3467a57b61b0e7ef254b36790a1c44b13f2228 has s390x Fedora LTO + PGO 3.x succeeding, and the 2 gcc lines have the '-fprofile-generate' flag. This commit https://github.com/python/cpython/commit/c6c43b28746b0642cc3c49dd8138b896bed3028f has s390x Fedora LTO + PGO 3.x failing, and the 2 gcc lines have a blank space in place of the '-fprofile-generate' flag. I could also be completely wrong and off track, so please feel free to correct me :), thanks. ---------- components: Demos and Tools messages: 383702 nosy: kj, pablogsal, vstinner, zach.ware priority: normal severity: normal status: open title: Buildbot s390x Fedora LTO + PGO 3.x fails intermittently versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Thu Dec 24 22:23:11 2020 From: report at bugs.python.org (=?utf-8?b?5pa45paH5bOw?=) Date: Fri, 25 Dec 2020 03:23:11 +0000 Subject: [New-bugs-announce] [issue42733] [issue] io's r+ mode truncate(0) Message-ID: <1608866591.42.0.709266826628.issue42733@roundup.psfhosted.org> New submission from ??? : This happens in io's reading-and-updating ("r+") mode. After a read, the file object's position stays at the end. If you then remove the whole content with truncate(0), the position does not go back to 0; it still stays at the end, so the next write starts from that last position (not from 0). That is why the problem happens. A test case can be found here: https://github.com/841020/open_source ---------- components: IO files: Screenshot from 2020-12-25 10-46-42.png hgrepos: 396 messages: 383710 nosy: ke265379ke priority: normal severity: normal status: open title: [issue] io's r+ mode truncate(0) type: enhancement versions: Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49697/Screenshot from 2020-12-25 10-46-42.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 25 00:45:29 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Fri, 25 Dec 2020 05:45:29 +0000 Subject: [New-bugs-announce] [issue42734] "bogus_code_obj.py" should be removed from "cPython/Lib/test/crashers" Message-ID: <1608875129.15.0.565092953865.issue42734@roundup.psfhosted.org> New submission from Xinmeng Xia : In "Python/Lib/test/crashers/README", it says "This directory only contains tests for outstanding bugs that cause the interpreter to segfault..... Once the crash is fixed, the test case should be moved into an appropriate test." The bug exercised by "bogus_code_obj.py" has been fixed in Python 3.8, 3.9 and 3.10; it no longer causes a segmentation fault. I think this file should be removed from "Python/Lib/test/crashers".
---------- components: Library (Lib) messages: 383721 nosy: xxm priority: normal severity: normal status: open title: "bogus_code_obj.py" should be removed from "cPython/Lib/test/crashers" type: enhancement versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 25 00:48:49 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Fri, 25 Dec 2020 05:48:49 +0000 Subject: [New-bugs-announce] [issue42735] "trace_at_recursion_limit.py" should be removed from "Python/Lib/test/crashers" Message-ID: <1608875329.89.0.262110269237.issue42735@roundup.psfhosted.org> New submission from Xinmeng Xia : In "Python/Lib/test/crashers/", only tests for outstanding bugs that cause the interpreter to segfault should be included. The file "trace_at_recursion_limit.py" has been fixed on Python 3.7, 3.8, 3.9, 3.10. No segmentation fault will be caused any more. I think this file should be removed from "Python/Lib/test/crashers". ---------- components: Library (Lib) messages: 383722 nosy: xxm priority: normal severity: normal status: open title: "trace_at_recursion_limit.py" should be removed from "Python/Lib/test/crashers" type: enhancement versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 25 00:55:16 2020 From: report at bugs.python.org (Gregory P. Smith) Date: Fri, 25 Dec 2020 05:55:16 +0000 Subject: [New-bugs-announce] [issue42736] Add support for making Linux prctl(...) calls to subprocess Message-ID: <1608875716.63.0.116569716148.issue42736@roundup.psfhosted.org> New submission from Gregory P. Smith : Another use of `subprocess preexec_fn=` that I've come across has been to call Linux's `prctl` in the child process before the `exec`. `_libc.prctl(_PR_SET_PDEATHSIG, value)` for example. Adding a linux_prctl_calls= parameter listing information about which prctl call(s) to make in the child before exec would satisfy this needed. No need to go overboard here, this is a _very_ low level syscall. users need to know what they're doing and will likely abstract use away from actual users via a wrapper library. i.e: Lets not attempt to reinvent the https://pythonhosted.org/python-prctl/ interface. Proposal: linux_prctl_calls: Sequence[tuple] Where every tuple indicates one prctl call. the tuple [0] contains a bool indicating if an error returned by that prctl call should be ignored or not. the tuple[1:6] contain the five int arguments to be passed to prctl. If the tuple is shorter than 2 elements, the remaining values are assumed 0. At most, a namedtuple type could be created for this purpose to allow the user to use the https://man7.org/linux/man-pages/man2/prctl.2.html argument names rather than just blindly listing a tuple. ``` namedtuple('LinuxPrctlDescription', field_names='check_error, option, arg2, arg3, arg4, arg5', defaults=(0,0,0,0)) ``` This existing helps https://bugs.python.org/issue38435 deprecate preexec_fn. ---------- components: Extension Modules, Library (Lib) messages: 383723 nosy: gregory.p.smith priority: normal severity: normal stage: needs patch status: open title: Add support for making Linux prctl(...) 
calls to subprocess type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 25 04:32:50 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Fri, 25 Dec 2020 09:32:50 +0000 Subject: [New-bugs-announce] [issue42737] PEP 563: drop annotations for complex assign targets Message-ID: <1608888770.02.0.483696766952.issue42737@roundup.psfhosted.org> New submission from Batuhan Taskaya : PEP 526 classifies everything but simple, unparenthesized names (a.b, (a), a[b]) as complex targets. The way the we handle annotations for them right now is, doing literally nothing but evaluating every part of it (just pushing the name to the stack, and popping, without even doing the attribute access); $ cat t.py foo[bar]: int $ python -m dis t.py 1 0 SETUP_ANNOTATIONS 2 LOAD_NAME 0 (foo) 4 POP_TOP 6 LOAD_NAME 1 (bar) 8 POP_TOP 10 LOAD_NAME 2 (int) 12 POP_TOP 14 LOAD_CONST 0 (None) 16 RETURN_VALUE $ cat t.py a.b: int $ python -m dis t.py 1 0 SETUP_ANNOTATIONS 2 LOAD_NAME 0 (a) 4 POP_TOP 6 LOAD_NAME 1 (int) 8 POP_TOP 10 LOAD_CONST 0 (None) 12 RETURN_VALUE I noticed this while creating a patch for issue 42725, since I had to create an extra check for non-simple annassign targets (because compiler tries to find their scope, `int` in this case is not compiled to string). Since they have no real side effect but just accessing a name, I'd propose we drop this from 3.10 so that both I can simply the patch for issue 42725, and also we have consistency with what we do when the target is simple (instead of storing this time, we'll just drop the bytecode). $ cat t.py a.b: int = 5 $ python -m dis t.py 1 0 SETUP_ANNOTATIONS 2 LOAD_CONST 0 (5) 4 LOAD_NAME 0 (a) 6 STORE_ATTR 1 (b) 8 LOAD_NAME 2 (int) 10 POP_TOP 12 LOAD_CONST 1 (None) 14 RETURN_VALUE 8/10 will be gone in this case. If agreed upon, I can propose a patch. ---------- components: Interpreter Core messages: 383729 nosy: BTaskaya, gvanrossum, lys.nikolaou, pablogsal, serhiy.storchaka priority: normal severity: normal status: open title: PEP 563: drop annotations for complex assign targets versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 25 06:53:03 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 25 Dec 2020 11:53:03 +0000 Subject: [New-bugs-announce] [issue42738] subprocess: don't close all file descriptors by default (close_fds=False) Message-ID: <1608897183.26.0.576206535349.issue42738@roundup.psfhosted.org> New submission from STINNER Victor : To make subprocess faster, I propose to no longer close all file descriptors by default in subprocess: change Popen close_fds parameter default to False (close_fds=False). Using close_fds=False, subprocess can use posix_spawn() which is safer and faster than fork+exec. For example, on Linux, the glibc implements it as a function using vfork which is faster than fork if the parent allocated a lot of memory. On macOS, posix_spawn() is even a syscall. The main drawback is that close_fds=False reopens a security vulnerability if a file descriptor is not marked as non-inheritable. The PEP 446 "Make newly created file descriptors non-inheritable" was implemented in Python 3.4: Python should only create non-inheritable FDs, if it's not the case, Python should be fixed. Sadly, 3rd party Python modules may not implement the PEP 446. 
In this case, close_fds=True should be used explicitly, or these modules should be fixed. os.set_inheritable() can be used to make FDs as non-inheritable. close_fds=True has a cost on subprocess performance. When the maximum number of file descriptors is larger than 10,000 and Python has no way to list open file descriptors, calling close(fd) once per file descriptor can take several milliseconds. When I wrote the PEP 446 (in 2013), on a FreeBSD buildbot with MAXFD=655,000, closing all FDs took 300 ms (see bpo-11284 "slow close file descriptors"). FreeBSD has been fixed recently by using closefrom() function which makes _posixsubprocess and os.closerange() faster. In 2020, my Red Hat colleagues still report the issue in Linux containers using... Python 2.7, since Python 2.7 subprocess also close all file descriptors in a loop (there was no code to list open file descriptors). The problem still exists in Python 3 if subprocess cannot open /proc/self/fd/ directory, when /proc pseudo filesystem is not mounted (or if the access is blocked, ex: by a sandbox). The problem is that some containers are created a very high limit for the maximum number of FDs: os.sysconf("SC_OPEN_MAX") returns 1,048,576. Calling close() more than 1 million of times is slow... See also related issue bpo-38435 "Start the deprecation cycle for subprocess preexec_fn". -- Notes about close_fds=True. Python 3.9 can now use closefrom() function on FreeBSD: bpo-38061. Linux 5.10 has a new closerange() syscall: https://lwn.net/Articles/789023/ Linux 5.11 (not released yet) will add a new CLOSE_RANGE_CLOEXEC flag to close_range(): https://lwn.net/Articles/837816/ -- History of the close_fds parameter default value. In Python 2.7, subprocess didn't close all file descriptors by default: close_fds=False by default. Dec 4, 2010: In Python 3.2 (bpo-7213, bpo-2320), subprocess.Popen started to emit a deprecating warning when close_fds was not specified explicitly (commit d23047b62c6f885def9020bd9b304110f9b9c52d): + if close_fds is None: + # Notification for http://bugs.python.org/issue7213 & issue2320 + warnings.warn( + 'The close_fds parameter was not specified. Its default' + ' will change from False to True in a future Python' + ' version. Most users should set it to True. Please' + ' update your code explicitly set close_fds.', + DeprecationWarning) Dec 13 2010, bpo-7213: close_fds default value was changed to True on non-Windows platforms, and False on Windows (commit f5604853889bfbbf84b48311c63c0e775dff38cc). The implementation was adjusted in bpo-6559 (commit 8edd99d0852c45f70b6abc851e6b326d4250cd33) to use a new _PLATFORM_DEFAULT_CLOSE_FDS singleton object. See issues: * bpo-2320: Race condition in subprocess using stdin * bpo-6559: add pass_fds paramter to subprocess.Popen() * bpo-7213: subprocess leaks open file descriptors between Popen instances causing hangs -- On Windows, there is also the question of handles (HANDLE type). Python 3.7 added the support for the PROC_THREAD_ATTRIBUTE_HANDLE_LIST in subprocess.STARTUPINFO: lpAttributeList['handle_list'] parameter. Hopefully, Windows has a way better default than Unix: all handles are created as non-inheritable by default, so these is no need to explicitly close them. 
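For anyone who wants to measure the cost locally, a rough benchmark sketch (the numbers depend heavily on the file descriptor limit and on whether /proc/self/fd/ is available):

```
import os
import subprocess
import time

print("SC_OPEN_MAX:", os.sysconf("SC_OPEN_MAX"))

for close_fds in (True, False):
    start = time.perf_counter()
    for _ in range(100):
        subprocess.run(["/bin/true"], close_fds=close_fds)
    elapsed = time.perf_counter() - start
    print(f"close_fds={close_fds}: {elapsed:.3f} s for 100 runs")
```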
---------- components: Library (Lib) messages: 383739 nosy: gregory.p.smith, vstinner priority: normal severity: normal status: open title: subprocess: don't close all file descriptors by default (close_fds=False) versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 25 06:55:58 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Fri, 25 Dec 2020 11:55:58 +0000 Subject: [New-bugs-announce] [issue42739] Crash when try to disassemble bogus code object Message-ID: <1608897358.97.0.834158885511.issue42739@roundup.psfhosted.org> New submission from Serhiy Storchaka : >>> def f(): pass ... >>> co = f.__code__.replace(co_linetable=b'') >>> import dis >>> dis.dis(co) python: Objects/codeobject.c:1185: PyLineTable_NextAddressRange: Assertion `!at_end(range)' failed. Aborted (core dumped) It is expected that executing bogus code object can crash (or cause any other effect). But it is surprising that just inspecting it causes a crash. ---------- components: Interpreter Core messages: 383741 nosy: Mark.Shannon, serhiy.storchaka priority: normal severity: normal status: open title: Crash when try to disassemble bogus code object type: crash versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 25 10:19:22 2020 From: report at bugs.python.org (Ken Jin) Date: Fri, 25 Dec 2020 15:19:22 +0000 Subject: [New-bugs-announce] [issue42740] typing.py get_args and get_origin should support PEP 604 and 612 Message-ID: <1608909562.21.0.0257515668126.issue42740@roundup.psfhosted.org> New submission from Ken Jin : Currently get_args doesn't work for PEP 604 Union: >>> get_args(int | str) or new Callables with PEP 612: >>> P = ParamSpec('P) >>> get_args(Callable[P, int]) ([~P], ) get_origin doesn't work with PEP 604 Unions: >>> get_origin(int | str) PS: the fix has to be backported partly to 3.9. Because get_args doesn't handle collections.abc.Callable either. ---------- components: Library (Lib) messages: 383747 nosy: gvanrossum, kj, levkivskyi priority: normal severity: normal status: open title: typing.py get_args and get_origin should support PEP 604 and 612 type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Fri Dec 25 11:04:14 2020 From: report at bugs.python.org (Ken Jin) Date: Fri, 25 Dec 2020 16:04:14 +0000 Subject: [New-bugs-announce] [issue42741] Sync 3.9's whatsnew document in 3.10 with 3.9 branch Message-ID: <1608912254.65.0.400957747647.issue42741@roundup.psfhosted.org> New submission from Ken Jin : On the 3.10 branch, the what's new document for 3.9 isn't synced with the one on the 3.9 branch. 
Currently it's missing two entries:

macOS 11.0 (Big Sur) and Apple Silicon Mac support issue41100
(next one's my fault, sorry)
collections.abc.Callable changes issue42195

---------- assignee: docs at python components: Documentation messages: 383750 nosy: docs at python, kj, lukasz.langa, pablogsal priority: normal severity: normal status: open title: Sync 3.9's whatsnew document in 3.10 with 3.9 branch versions: Python 3.10 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Fri Dec 25 11:32:43 2020 From: report at bugs.python.org (Anton Abrosimov) Date: Fri, 25 Dec 2020 16:32:43 +0000 Subject: [New-bugs-announce] [issue42742] Add abc.Mapping to dataclass Message-ID: <1608913963.27.0.877253757924.issue42742@roundup.psfhosted.org> New submission from Anton Abrosimov :

I want to add an `abc.Mapping` extension to `dataclasses.dataclass`. Motivation:

1. `asdict` makes a deep copy of the `dataclass` object. If I only want to iterate over the `field` attributes, I don't want to do a deep copy.
2. `dict(my_dataclass)` can be used as a `dict` representation of the `my_dataclass` class without deep copying.
3. `myfunc(**my_dataclass)` looks better and is faster than `myfunc(**asdict(my_dataclass))`.
4. `len(my_dataclass) == len(asdict(my_dataclass))` is expected behavior.
5. `my_dataclass.my_field is my_dataclass['my_field']` is expected behavior.

A prototype looks like this:

from collections.abc import Mapping
from dataclasses import dataclass, fields, _FIELDS, _FIELD


@dataclass  # `(mapping=True)` creates such a class:
class MyDataclass(Mapping):
    a: int = 1
    b: int = 2

    # In `dataclasses._process_class`:
    # if `mapping` is `True`.
    # Make sure 'get', 'items', 'keys', 'values' is not in `MyDataclass` fields.
    def __iter__(self):
        return (f.name for f in fields(self))

    def __getitem__(self, key):
        fields = getattr(self, _FIELDS)
        f = fields[key]
        if f._field_type is not _FIELD:
            raise KeyError(f"'{key}' is not a field of the dataclass.")
        return getattr(self, f.name)

    def __len__(self):
        return len(fields(self))


my_dataclass = MyDataclass(b=3)
print(my_dataclass['a'])
print(my_dataclass['b'])
print(dict(my_dataclass))
print(dict(**my_dataclass))

Stdout:

1
3
{'a': 1, 'b': 3}
{'a': 1, 'b': 3}

Implementation: update `dataclasses.py` (`dataclass`, `_process_class`, `_DataclassParams`) and add a `mapping` argument defaulting to `False`. Can this enhancement be accepted?

---------- components: Library (Lib) messages: 383752 nosy: abrosimov.a.a priority: normal severity: normal status: open title: Add abc.Mapping to dataclass type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Fri Dec 25 12:02:16 2020 From: report at bugs.python.org (Daniel Schreck) Date: Fri, 25 Dec 2020 17:02:16 +0000 Subject: [New-bugs-announce] [issue42743] pdb vanishing breakpoints Message-ID: <1608915736.39.0.746707238377.issue42743@roundup.psfhosted.org> New submission from Daniel Schreck :

Using pdb, breakpoints disappear when stepping into a function in another module. They're not hit from then on. HOWEVER, if any new breakpoint is entered, all the breakpoints reappear. They vanish every time the debugger steps across the module, and only reappear with a new breakpoint entry. The behavior is reproducible.
---------- messages: 383753 nosy: ds2606 priority: normal severity: normal status: open title: pdb vanishing breakpoints type: behavior versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Fri Dec 25 14:35:20 2020 From: report at bugs.python.org (RhinosF1) Date: Fri, 25 Dec 2020 19:35:20 +0000 Subject: [New-bugs-announce] [issue42744] pkg_resources seems to treat python 3.10 as python 3.1 Message-ID: <1608924920.91.0.314661686668.issue42744@roundup.psfhosted.org> New submission from RhinosF1 :

As seen in https://github.com/MirahezeBots/MirahezeBots/pull/380/checks?check_run_id=1609121656, pkg_resources is throwing errors about version conflicts because it seems to treat 3.10 as 3.1 or similar. This was fixed for PyPA/Pip in https://github.com/pypa/pip/issues/6730, so installing from pip works fine.

---------- components: Installation messages: 383760 nosy: RhinosF1 priority: normal severity: normal status: open title: pkg_resources seems to treat python 3.10 as python 3.1 type: compile error versions: Python 3.10 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Fri Dec 25 17:50:34 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 25 Dec 2020 22:50:34 +0000 Subject: [New-bugs-announce] [issue42745] [subinterpreters] Make the type lookup cache per-interpreter Message-ID: <1608936634.73.0.0790209715879.issue42745@roundup.psfhosted.org> New submission from STINNER Victor :

Currently, the type lookup cache is shared by all interpreters, which causes multiple issues:

* The version tag is currently protected by the GIL, but it would require a new lock if the GIL is made per interpreter (bpo-40512)
* Clearing the cache in an interpreter clears the cache in all interpreters
* The cache has a fixed size of 4096 entries. The cache misses increase with the number of interpreters, since each interpreter has its own types.

I propose to make the type lookup cache per interpreter.

---------- components: Interpreter Core, Subinterpreters messages: 383777 nosy: vstinner priority: normal severity: normal status: open title: [subinterpreters] Make the type lookup cache per-interpreter versions: Python 3.10 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Fri Dec 25 18:36:19 2020 From: report at bugs.python.org (yaksha nyx) Date: Fri, 25 Dec 2020 23:36:19 +0000 Subject: [New-bugs-announce] [issue42746] python3.7.3 - ssl.SSLContext() - "Killed" Message-ID: <1608939379.31.0.291479274657.issue42746@roundup.psfhosted.org> New submission from yaksha nyx :

I have a very strange issue with my Python 3.7.3. I use the ssl module with urllib.request; when I visit some https websites my script always dies, so I got this:

*******************************************************************
Python 3.7.3 (default, Dec 26 2020, 06:35:45)
[GCC 6.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
>>> ssl.OPENSSL_VERSION
'LibreSSL 3.1.1'
>>> ssl.SSLContext()
Killed
*******************************************************************

Just "Killed", no more information. How can I investigate this problem?
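Not from the original report: "Killed" usually means the process received SIGKILL (often from the kernel OOM killer), which no in-process handler can intercept, but for other fatal signals the faulthandler module can at least show where the interpreter died. A minimal sketch of that technique:

```python
import faulthandler
faulthandler.enable()   # dump a Python traceback on SIGSEGV, SIGABRT, SIGBUS, ...

import ssl
ssl.SSLContext()        # reproduce the crash with the fault handler armed
```

The same effect can be had without editing the script by running it as `python3 -X faulthandler script.py`.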
---------- assignee: christian.heimes components: SSL messages: 383779 nosy: christian.heimes, hgmmym priority: normal severity: normal status: open title: python3.7.3 - ssl.SSLContext() - "Killed" type: crash versions: Python 3.7 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Fri Dec 25 18:48:07 2020 From: report at bugs.python.org (STINNER Victor) Date: Fri, 25 Dec 2020 23:48:07 +0000 Subject: [New-bugs-announce] [issue42747] Remove Py_TPFLAGS_HAVE_VERSION_TAG flag? Message-ID: <1608940087.88.0.458230907712.issue42747@roundup.psfhosted.org> New submission from STINNER Victor :

Since the PyTypeObject structure is excluded from the limited C API and the stable ABI on purpose (PEP 384), I don't see the purpose of the Py_TPFLAGS_HAVE_VERSION_TAG flag.

Moreover, a new flag was added recently:

#if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x030A0000
/* Type has am_send entry in tp_as_async slot */
#define Py_TPFLAGS_HAVE_AM_SEND (1UL << 21)
#endif

Should it also be removed?

For example, Py_TPFLAGS_HAVE_FINALIZE was deprecated in bpo-32388 by:

commit ada319bb6d0ebcc68d3e0ef2b4279ea061877ac8
Author: Antoine Pitrou
Date: Wed May 29 22:12:38 2019 +0200

    bpo-32388: Remove cross-version binary compatibility requirement in tp_flags (GH-4944)

    It is now allowed to add new fields at the end of the PyTypeObject struct without having to allocate a dedicated compatibility flag in tp_flags. This will reduce the risk of running out of bits in the 32-bit tp_flags value.

By the way, is it worth it to remove Py_TPFLAGS_HAVE_FINALIZE? Or is it going to break too many extension modules?

---------- components: C API messages: 383782 nosy: petr.viktorin, pitrou, vstinner priority: normal severity: normal status: open title: Remove Py_TPFLAGS_HAVE_VERSION_TAG flag?
versions: Python 3.10 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sat Dec 26 02:25:53 2020 From: report at bugs.python.org (Batuhan Taskaya) Date: Sat, 26 Dec 2020 07:25:53 +0000 Subject: [New-bugs-announce] [issue42748] test_asdl_parser: load_module() method is deprecated Message-ID: <1608967553.31.0.855033389199.issue42748@roundup.psfhosted.org> New submission from Batuhan Taskaya :

Running test_asdl_parser raises a deprecation warning:

0:00:26 load avg: 1.05 [ 23/426] test_asdl_parser
:283: DeprecationWarning: the load_module() method is deprecated and slated for removal in Python 3.12; use exec_module() instead

probably related to this line: https://github.com/python/cpython/blob/ea251806b8dffff11b30d2182af1e589caf88acf/Lib/test/test_asdl_parser.py#L29

---------- messages: 383795 nosy: BTaskaya priority: normal severity: normal status: open title: test_asdl_parser: load_module() method is deprecated _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sat Dec 26 06:11:58 2020 From: report at bugs.python.org (STINNER Victor) Date: Sat, 26 Dec 2020 11:11:58 +0000 Subject: [New-bugs-announce] [issue42749] test_tcl failed on 32-bit POWER6 AIX 3.9: big int issue with 9223372036854775808 Message-ID: <1608981118.95.0.264551211038.issue42749@roundup.psfhosted.org> New submission from STINNER Victor :

POWER6 AIX 3.9: https://buildbot.python.org/all/#/builders/330/builds/221

test.pythoninfo:
platform.architecture: 32bit
platform.platform: AIX-2-00F9C1964C00-powerpc-32bit
tkinter.TCL_VERSION: 8.4
tkinter.TK_VERSION: 8.4
tkinter.info_patchlevel: 8.5.9

======================================================================
FAIL: test_expr_bignum (test.test_tcl.TclTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/aixtools/buildarea/3.9.aixtools-aix-power6/build/Lib/test/test_tcl.py", line 441, in test_expr_bignum
    self.assertEqual(result, i)
AssertionError: != 9223372036854775808

======================================================================
FAIL: test_getint (test.test_tcl.TclTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/aixtools/buildarea/3.9.aixtools-aix-power6/build/Lib/test/test_tcl.py", line 148, in test_getint
    self.assertEqual(tcl.getint(' %d ' % i), i)
AssertionError: -9223372036854775808 != 9223372036854775808

======================================================================
FAIL: test_passing_values (test.test_tcl.TclTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/aixtools/buildarea/3.9.aixtools-aix-power6/build/Lib/test/test_tcl.py", line 475, in test_passing_values
    self.assertEqual(passValue(i), i if self.wantobjects else str(i))
AssertionError: '9223372036854775808' != 9223372036854775808

----------------------------------------------------------------------

---------- components: Tests, Tkinter messages: 383802 nosy: gpolo, serhiy.storchaka, vstinner priority: normal severity: normal status: open title: test_tcl failed on 32-bit POWER6 AIX 3.9: big int issue with 9223372036854775808 versions: Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sat Dec 26 10:06:16 2020 From: report at bugs.python.org (Ivo Shipkaliev)
Date: Sat, 26 Dec 2020 15:06:16 +0000 Subject: [New-bugs-announce] [issue42750] tkinter.Variable equality consistency Message-ID: <1608995176.07.0.806534213214.issue42750@roundup.psfhosted.org> New submission from Ivo Shipkaliev :

Greetings! I just noticed:

>>> import tkinter as tk
>>> root = tk.Tk()
>>> str_0 = tk.StringVar()
>>> str_0.set("same value")
>>> str_1 = tk.StringVar()
>>> str_1.set("same value")

So:

>>> str_0.get() == str_1.get()
True

But:

>>> str_0 == str_1
False

So, maybe a Variable should be compared by value, and not by identity (._name) as currently? (please view attached) Does it make sense?

---------- components: Tkinter files: equality.diff keywords: patch messages: 383807 nosy: epaine, serhiy.storchaka, shippo_ priority: normal severity: normal status: open title: tkinter.Variable equality consistency type: behavior versions: Python 3.10 Added file: https://bugs.python.org/file49698/equality.diff _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sat Dec 26 15:56:28 2020 From: report at bugs.python.org (=?utf-8?q?Rodolfo_Garc=C3=ADa_Pe=C3=B1as?=) Date: Sat, 26 Dec 2020 20:56:28 +0000 Subject: [New-bugs-announce] [issue42751] Imaplib Message-ID: <1609016188.21.0.0203916799222.issue42751@roundup.psfhosted.org> New submission from Rodolfo García Peñas :

Hi, is it possible to move the imaplib2 implementation (https://github.com/imaplib2/imaplib2/) into the stdlib as imaplib? What steps should I follow? Regards, kix

---------- components: Library (Lib) messages: 383828 nosy: rodolfogarciap priority: normal severity: normal status: open title: Imaplib type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sat Dec 26 18:03:41 2020 From: report at bugs.python.org (Alex Orange) Date: Sat, 26 Dec 2020 23:03:41 +0000 Subject: [New-bugs-announce] [issue42752] multiprocessing Queue leaks a file descriptor associated with the pipe writer (#33081 still a problem) Message-ID: <1609023821.09.0.189504509221.issue42752@roundup.psfhosted.org> New submission from Alex Orange :

Didn't feel like necroing #33081, but this is basically that problem. The trouble is that the cleanup that appeared to fix #33081 only kicks in once something has been put in the queue. So if, for instance, a Process function puts something in the queue and the parent gets it and then calls q.close(), the writer on the parent side doesn't get culled until the object does. This is particularly a problem for PyPy and isn't exactly great for any weird corner cases if anyone holds onto Queue objects after they're closed for any reason (hoarders!).

The attached file test_queue.py is an example of how to trigger this problem. Run it without a command line argument ("python test_queue.py") and it won't crash (though it will take a very long time to complete). Run it with an argument ("python test_queue.py fail") and it will fail once you run out of file descriptors (one leaked per queue).

My suggestion on how to handle this is to set self._close to something that will close self._writer. Then, when _start_thread is called, instead of directly passing the self._writer.close object, pass a small function that will switch out self._close to the Finalize method used later on and return self._writer. Finally, inside _feed, use this method to get the _writer object and wrap the outer while 1 with contextlib.closing() on this object.
This is a fair bit of stitching things together here and there, so let me know if anyone has any suggestions on this before I get started.

---------- components: Library (Lib) files: test_queue.py messages: 383832 nosy: Alex.Orange priority: normal severity: normal status: open title: multiprocessing Queue leaks a file descriptor associated with the pipe writer (#33081 still a problem) type: resource usage versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49701/test_queue.py _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sat Dec 26 23:19:27 2020 From: report at bugs.python.org (Gerrik Labra) Date: Sun, 27 Dec 2020 04:19:27 +0000 Subject: [New-bugs-announce] [issue42753] "./configure" linux chrome beta Message-ID: <1609042767.15.0.899316514178.issue42753@roundup.psfhosted.org> New submission from Gerrik Labra :

Ok. So I tried to run the setup in README.txt for the latest Linux beta emulator on the Google Pixel Chromebook. Here is what I got back; the commands "make", "make test" and "sudo make test" didn't work.

ennuilysis at penguin:~/Python/Python-3.8.7$ ./configure
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking for python3.8... no
checking for python3... python3
checking for --enable-universalsdk... no
checking for --with-universal-archs... no
checking MACHDEP... "linux"
checking for gcc... no
checking for cc... no
checking for cl.exe... no
configure: error: in `/home/ennuilysis/Python/Python-3.8.7':
configure: error: no acceptable C compiler found in $PATH
See `config.log' for more details

---------- components: Installation messages: 383836 nosy: gerriklabra priority: normal severity: normal status: open title: "./configure" linux chrome beta versions: Python 3.8 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 02:56:34 2020 From: report at bugs.python.org (Patrick Reader) Date: Sun, 27 Dec 2020 07:56:34 +0000 Subject: [New-bugs-announce] [issue42754] Unpacking of literals inside other literals should be optimised away by the compiler Message-ID: <1609055794.85.0.312864529219.issue42754@roundup.psfhosted.org> New submission from Patrick Reader :

When unpacking a collection or string literal inside another literal, the compiler should optimise the unpacking away and store the resultant collection simply as another constant tuple, so that `[*'123', '4', '5']` is the exact same as `['1', '2', '3', '4', '5']`. Compare:

```
>>> dis.dis("[*'123', '4', '5']")
  1           0 BUILD_LIST               0
              2 BUILD_LIST               0
              4 LOAD_CONST               0 ('123')
              6 LIST_EXTEND              1
              8 LIST_EXTEND              1
             10 LOAD_CONST               1 ('4')
             12 LIST_APPEND              1
             14 LOAD_CONST               2 ('5')
             16 LIST_APPEND              1
```

vs.

```
>>> dis.dis("['1', '2', '3', '4', '5']")
  1           0 BUILD_LIST               0
              2 LOAD_CONST               0 (('1', '2', '3', '4', '5'))
              4 LIST_EXTEND              1
```

and `timeit` shows the latter to be over 3 times as fast. For example, when generating a list of characters, it is easier and more readable to do `alphabet = [*"abcde"]` instead of `alphabet = ["a", "b", "c", "d", "e"]`. The programmer can do what is most obvious without worrying about performance, because the compiler can do it itself.
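As a rough way to reproduce the timing claim above (not from the original report; the absolute numbers will vary by machine and Python version):

```python
import timeit

unpacked = timeit.timeit("[*'123', '4', '5']", number=1_000_000)
literal = timeit.timeit("['1', '2', '3', '4', '5']", number=1_000_000)

print(f"unpacked spelling: {unpacked:.3f}s")
print(f"plain literal:     {literal:.3f}s")
print(f"ratio:             {unpacked / literal:.2f}x")
```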
---------- components: Interpreter Core messages: 383837 nosy: pxeger priority: normal severity: normal status: open title: Unpacking of literals inside other literals should be optimised away by the compiler _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 03:11:58 2020 From: report at bugs.python.org (Dong-hee Na) Date: Sun, 27 Dec 2020 08:11:58 +0000 Subject: [New-bugs-announce] [issue42755] sqlite3.Connection.backup default value is diffrent between implmentation and docs Message-ID: <1609056718.52.0.719197971794.issue42755@roundup.psfhosted.org> New submission from Dong-hee Na :

The docs say that the default value of pages is 0, but the implementation uses -1.

docs: https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.backup
impl: https://github.com/python/cpython/blob/f4507231e3f0cf8827cec5592571ce371c6813e8/Modules/_sqlite/connection.c#L1565

But the behavior is the same: if pages is set to zero, we update the value to -1.
https://github.com/python/cpython/blob/f4507231e3f0cf8827cec5592571ce371c6813e8/Modules/_sqlite/connection.c#L1625

So IMHO, I'd like to suggest updating the docs rather than updating the implementation.

---------- components: Extension Modules messages: 383838 nosy: berker.peksag, corona10, erlendaasland, serhiy.storchaka priority: normal severity: normal status: open title: sqlite3.Connection.backup default value is diffrent between implmentation and docs _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 08:11:53 2020 From: report at bugs.python.org (=?utf-8?q?W=C3=BCstengecko?=) Date: Sun, 27 Dec 2020 13:11:53 +0000 Subject: [New-bugs-announce] [issue42756] smtplib.LMTP.connect() raises TypeError if `timeout` is not specified Message-ID: <1609074713.39.0.771360640486.issue42756@roundup.psfhosted.org> New submission from Wüstengecko :

Since Python 3.9, calling `smtplib.LMTP.connect()` without explicitly specifying any `timeout=` raises a `TypeError`. Specifying `None` or any integer value works correctly.

```
>>> import smtplib
>>> smtplib.LMTP("/tmp/lmtp.sock")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.9/smtplib.py", line 1071, in __init__
    super().__init__(host, port, local_hostname=local_hostname,
  File "/usr/lib/python3.9/smtplib.py", line 253, in __init__
    (code, msg) = self.connect(host, port)
  File "/usr/lib/python3.9/smtplib.py", line 1085, in connect
    self.sock.settimeout(self.timeout)
TypeError: an integer is required (got type object)
>>> l = smtplib.LMTP()
>>> l.connect("/tmp/lmtp.sock")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.9/smtplib.py", line 1085, in connect
    self.sock.settimeout(self.timeout)
TypeError: an integer is required (got type object)
```

Upon investigation with `pdb`, the default object for the `timeout` parameter (`socket._GLOBAL_DEFAULT_TIMEOUT`) is passed through to the `self.sock.settimeout` call, instead of being handled as "no timeout specified". The relevant changes were introduced as a fix for bpo-39329.
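Until the library is fixed, a caller can sidestep the problem by passing the timeout explicitly, as the report itself notes; a minimal sketch (the socket path is only an example and needs a listening LMTP server):

```python
import smtplib

# Passing timeout=None (or a number) avoids the sentinel default object
# being forwarded to sock.settimeout() by the affected code path.
conn = smtplib.LMTP("/tmp/lmtp.sock", timeout=None)
conn.quit()
```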
---------- components: Library (Lib) messages: 383850 nosy: wuestengecko priority: normal severity: normal status: open title: smtplib.LMTP.connect() raises TypeError if `timeout` is not specified type: crash versions: Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 13:01:57 2020 From: report at bugs.python.org (Johann Bernhardt) Date: Sun, 27 Dec 2020 18:01:57 +0000 Subject: [New-bugs-announce] [issue42757] Class has two prototypes Message-ID: <1609092117.63.0.33154013313.issue42757@roundup.psfhosted.org> New submission from Johann Bernhardt :

Greetings,

tl;dr: Depending on how a class is imported, it will have a different class name and therefore prototype.

I ran into this issue when I used a class variable to implement an event bus. To reproduce, you need a module and a submodule. The example creates two child classes and collects their references for further use in a class/static variable in the parent. Depending on how the import is written, this might be broken. Here is the code:

module
|--main.py
|--child.py
|--submodule
   |-parent.py
   |-__init__.py

parent.py:

class Parent:
    listeners = []

    def __init__(self):
        Parent.listeners.append(self)
        print("Parent: " + str(len(Parent.listeners)))

child.py:

from submodule.parent import Parent

class Child(Parent):
    def __init__(self):
        super(Child, self).__init__()
        print("Child: " + str(Parent.listeners))

main.py:

from module.child import Child
from module.submodule.parent import Parent

class ChildLocal(Parent):
    def __init__(self):
        super(ChildLocal, self).__init__()
        print("ChildLocal: " + str(Parent.listeners))

if __name__ == '__main__':
    child = Child()
    local = ChildLocal()

Result:

Parent: 1
Child: []
Parent: 1
ChildLocal: [<__main__.ChildLocal object at 0x0000020F51EE2E48>]

Here, the parent class is imported in two different ways and two separate Parent classes are created. Hence, each Parent class collects only one child. If Parent is imported in child.py as 'from module.submodule.parent import Parent' (the difference is the 'module.' prefix), the result looks as follows:

Parent: 1
Child: []
Parent: 2
ChildLocal: [, <__main__.ChildLocal object at 0x00000182B0982E48>]

Here, both children are registered in the same parent class, as intended. In conclusion, there is a nasty (to debug) difference between a relative import path and a project-root import path. I am quite certain this behaviour is as intended, but on the small chance it isn't, I wanted to bring it to attention, since it is a nightmare to find. Is it intended?
Cheers, Sin

---------- messages: 383856 nosy: SinTh0r4s priority: normal severity: normal status: open title: Class has two prototypes type: enhancement versions: Python 3.7 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 13:54:36 2020 From: report at bugs.python.org (Anton Hvornum) Date: Sun, 27 Dec 2020 18:54:36 +0000 Subject: [New-bugs-announce] [issue42758] pathlib.Path to support the "in" operator (x in y) Message-ID: <1609095276.8.0.751847777656.issue42758@roundup.psfhosted.org> New submission from Anton Hvornum :

I would like to propose that `pathlib.Path()` gets an `in` operator. Much like ipaddress supports "IP in subnet" checks, it would be nice if we were able to do:

```
import pathlib

pathlib.Path('/home/Torxed/machine.qcow2')
pathlib.Path('/home/Torxed/machine.qcow2') in pathlib.Path('/home/Torxed')
```

Currently that would generate:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: argument of type 'PosixPath' is not iterable
```

This would avoid "complicated" implementations such as:

* https://stackoverflow.com/questions/21411904/python-how-to-check-if-path-is-a-subpath
* https://stackoverflow.com/questions/3812849/how-to-check-whether-a-directory-is-a-sub-directory-of-another-directory

which tend to be half-complete truths and could result in potential security issues. pathlib.Path() could help prevent some of those.

---------- components: Library (Lib) messages: 383857 nosy: Torxed priority: normal severity: normal status: open title: pathlib.Path to support the "in" operator (x in y) type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 15:16:59 2020 From: report at bugs.python.org (Serhiy Storchaka) Date: Sun, 27 Dec 2020 20:16:59 +0000 Subject: [New-bugs-announce] [issue42759] Take into acount a Tcl interpreter when compare variables and fonts Message-ID: <1609100219.29.0.633808020235.issue42759@roundup.psfhosted.org> New submission from Serhiy Storchaka :

Currently, instances of tkinter.Variable and tkinter.font.Font are considered equal when they have the same name, even if they belong to different Tcl interpreters. But Tcl interpreters are isolated, and variables and fonts in different interpreters refer to different things. There is a note in the docstring of tkinter.Variable.__eq__ about taking the master into account. The following PR fixes this omission.

---------- components: Library (Lib), Tkinter messages: 383860 nosy: serhiy.storchaka priority: normal severity: normal status: open title: Take into acount a Tcl interpreter when compare variables and fonts type: behavior versions: Python 3.10, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 15:41:27 2020 From: report at bugs.python.org (Paolo Lammens) Date: Sun, 27 Dec 2020 20:41:27 +0000 Subject: [New-bugs-announce] [issue42760] inspect.iscoroutine returns False for asynchronous generator functions Message-ID: <1609101687.37.0.151159530312.issue42760@roundup.psfhosted.org> New submission from Paolo Lammens :

The `inspect.iscoroutinefunction` and `inspect.iscoroutine` functions return `False` for the `asend`, `athrow` and `aclose` methods of asynchronous generators (PEP 525). These are coroutine functions (i.e. one does e.g.
`await gen.asend(value)`), so I would have expected these to return `True`. Example:

```python
async def generator():
    return
    yield
```

```python
>>> import inspect
>>> g = generator()
>>> inspect.iscoroutinefunction(g.asend)
False
>>> inspect.iscoroutine(g.asend(None))
False
```

---------- components: Library (Lib), asyncio messages: 383862 nosy: asvetlov, plammens, yselivanov priority: normal severity: normal status: open title: inspect.iscoroutine returns False for asynchronous generator functions type: behavior _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 18:28:36 2020 From: report at bugs.python.org (AustEcon) Date: Sun, 27 Dec 2020 23:28:36 +0000 Subject: [New-bugs-announce] [issue42761] Why does python's Popen fail to pass environment variables on Mac OS X? Message-ID: <1609111716.94.0.537110439722.issue42761@roundup.psfhosted.org> New submission from AustEcon :

I have described the issue in a Stack Overflow question here: https://stackoverflow.com/questions/65466303/why-does-pythons-popen-fail-to-pass-environment-variables-on-mac-os-x

In summary, I want to pass environment variables to a child process on Mac OS (Big Sur) with Popen, but it is not working (it works on Win32 and Linux platforms). Any assistance much appreciated, AustEcon

---------- components: Library (Lib) messages: 383877 nosy: AustEcon priority: normal severity: normal status: open title: Why does python's Popen fail to pass environment variables on Mac OS X? type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 23:38:15 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Mon, 28 Dec 2020 04:38:15 +0000 Subject: [New-bugs-announce] [issue42762] infinite loop resulted by "yield" Message-ID: <1609130295.32.0.912620547313.issue42762@roundup.psfhosted.org> New submission from Xinmeng Xia :

Let's see the following program:

============================
def foo():
    try:
        yield
    except:
        yield from foo()

for m in foo():
    print(i)
===========================

Expected output: on the line "print(i)", NameError: name 'i' is not defined.

However, the program falls into an infinite loop when running it on Python 3.7-3.10, with error messages like the following (there is no infinite loop on Python 3.5 and Python 3.6):

----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/xxm/Desktop/nameChanging/report/test1.py", line 160, in <module>
    print(i)
RuntimeError: generator ignored GeneratorExit
Exception ignored in:
Traceback (most recent call last):
  File "/home/xxm/Desktop/nameChanging/report/test1.py", line 160, in <module>
    print(i)
RuntimeError: generator ignored GeneratorExit
Exception ignored in:
Traceback (most recent call last):
  File "/home/xxm/Desktop/nameChanging/report/test1.py", line 160, in <module>
    print(i)
......
----------------------------------------------------------------------

---------- components: Interpreter Core messages: 383882 nosy: xxm priority: normal severity: normal status: open title: infinite loop resulted by "yield" type: crash versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Sun Dec 27 23:39:59 2020 From: report at bugs.python.org (Xinmeng Xia) Date: Mon, 28 Dec 2020 04:39:59 +0000 Subject: [New-bugs-announce] [issue42763] Exposing a race in the "_warnings" resulting Python parser crash Message-ID: <1609130399.05.0.938510382701.issue42763@roundup.psfhosted.org> New submission from Xinmeng Xia :

This program initially comes from "cpython/Lib/test/crashers/warnings_del_crasher.py" in Python 2.7. The original case is fixed for all versions of Python and was removed from the "crashers" directory. However, if we replace the statement "for i in range(10):" of the original program with "for do_work in range(10):", the race happens again, and it crashes Python 3.7 - 3.10.

==================================================
import threading
import warnings

class WarnOnDel(object):
    def __del__(self):
        warnings.warn("oh no something went wrong", UserWarning)

def do_work():
    while True:
        w = WarnOnDel()

-for i in range(10):
+for do_work in range(10):
    t = threading.Thread(target=do_work)
    t.setDaemon(1)
    t.start()
=================================================

Error messages on Python 3.7-3.10:

-------------------------------------------------------------------------------
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/local/python310/lib/python3.10/threading.py", line 960, in _bootstrap_inner
Exception in thread Thread-3:
Traceback (most recent call last):
Exception in thread Thread-4:
  File "/usr/local/python310/lib/python3.10/threading.py", line 960, in _bootstrap_inner
Exception in thread Thread-5:
Traceback (most recent call last):
    self.run()
Traceback (most recent call last):
    self.run()
Exception in thread Thread-6:
Exception in thread Thread-8:
Exception in thread Thread-9:
Exception in thread Thread-10:
Fatal Python error: _enter_buffered_busy: could not acquire lock for <_io.BufferedWriter name=''> at interpreter shutdown, possibly due to daemon threads
Python runtime state: finalizing (tstate=0x2679180)
Current thread 0x00007f3481d3a700 (most recent call first):
Aborted (core dumped)

---------- components: Interpreter Core messages: 383883 nosy: xxm priority: normal severity: normal status: open title: Exposing a race in the "_warnings" resulting Python parser crash type: crash versions: Python 3.10, Python 3.7, Python 3.8, Python 3.9 _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Mon Dec 28 09:49:01 2020 From: report at bugs.python.org (Anthony Hodson) Date: Mon, 28 Dec 2020 14:49:01 +0000 Subject: [New-bugs-announce] [issue42764] HTMLParser close() issue Message-ID: <1609166941.45.0.327676479076.issue42764@roundup.psfhosted.org> New submission from Anthony Hodson :

HTMLParser close() does not seem to dispose of previous HTML analyses. I am sending a simple test to demonstrate this, complete with functions that are part of a set of testing facilities. The problem is present for more complex HTML.
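For context (an illustrative sketch, not the reporter's attached test): HTMLParser.reset() reinitializes the parser's own input state but does not touch attributes a subclass defines for itself, so data collected by handler methods naturally carries over between documents unless the subclass clears it explicitly. Something like the following shows that kind of carry-over:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []              # state owned by the subclass, not the parser

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

p = TagCollector()
p.feed("<p>first document</p>")
p.close()

p.reset()                           # resets the base parser, not self.tags
p.feed("<div>second document</div>")
p.close()

print(p.tags)                       # ['p', 'div'] - the first document is still there
```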
---------- components: Library (Lib) files: htmlparse_bug.py messages: 383896 nosy: aeh priority: normal severity: normal status: open title: HTMLParser close() issue versions: Python 3.9 Added file: https://bugs.python.org/file49702/htmlparse_bug.py _______________________________________ Python tracker _______________________________________
From report at bugs.python.org Mon Dec 28 10:26:40 2020 From: report at bugs.python.org (Richard Neumann) Date: Mon, 28 Dec 2020 15:26:40 +0000 Subject: [New-bugs-announce] [issue42765] Introduce new data model method __iter_items__ Message-ID: <1609169200.88.0.326770807054.issue42765@roundup.psfhosted.org> New submission from Richard Neumann :

I have use cases in which I use named tuples to represent data sets, e.g.:

class BasicStats(NamedTuple):
    """Basic statistics response packet."""

    type: Type
    session_id: BigEndianSignedInt32
    motd: str
    game_type: str
    map: str
    num_players: int
    max_players: int
    host_port: int
    host_ip: IPAddressOrHostname

I want them to behave as intended, i.e. unpacking them should behave as expected from a tuple:

type, session_id, motd, ... = BasicStats(...)

I also want to be able to serialize them to a JSON-ish dict. The NamedTuple has an _asdict method that I could use:

json = BasicStats(...)._asdict()

But for the dict to be passed to JSON, I need customization of the dict representation, e.g. setting host_ip to str(self.host_ip), since it might be a non-serializable ipaddress.IPv{4,6}Address. Doing this in an object hook of json.dumps() is a non-starter, since I cannot force the user to remember which types need to be converted on the several data structures. Also, using _asdict() seems strange as an exposed API, since it's an underscore method and users hence might not be inclined to use it. So what I did is to add a method to_json() to convert the named tuple into a JSON-ish dict:
---------- components: C API messages: 383897 nosy: conqp priority: normal severity: normal status: open title: Introduce new data model method __iter_items__ type: enhancement versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 10:42:33 2020 From: report at bugs.python.org (=?utf-8?q?Don=C3=A1t_Nagy?=) Date: Mon, 28 Dec 2020 15:42:33 +0000 Subject: [New-bugs-announce] [issue42766] urllib.request.HTTPPasswordMgr uses commonprefix instead of commonpath Message-ID: <1609170153.96.0.504551551924.issue42766@roundup.psfhosted.org> New submission from Don?t Nagy : The is_suburi(self, base, test) method of HTTPPasswordMgr in the urllib.request module tries to "Check if test is below base in a URI tree", but it uses the posixpath.commonprefix() function. This is problematic because commonprefix ignores the path structure (for example commonprefix(['/usr/lib', '/usr/local/lib'])=='/usr/l') and therefore the current implementation of is_suburi is essentially equivalent to calling str.startswith after some normalization steps. If we want to say that example.com/resource101 is *NOT* below example.com/resource1 in a URI tree, then the call to commonprefix should be replaced by a call to posixpath.commonpath(), which does the right thing. ---------- components: Library (Lib) messages: 383898 nosy: nagdon priority: normal severity: normal status: open title: urllib.request.HTTPPasswordMgr uses commonprefix instead of commonpath type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 11:06:14 2020 From: report at bugs.python.org (STINNER Victor) Date: Mon, 28 Dec 2020 16:06:14 +0000 Subject: [New-bugs-announce] [issue42767] Review usage of atomic variables in signamodule.c Message-ID: <1609171574.21.0.863053109272.issue42767@roundup.psfhosted.org> New submission from STINNER Victor : In bpo-41713, I ported the _signal module to the multi-phase initialization API. I tried to move the signal state into a structure to cleanup the code, but I was scared by the following atomic variables: --------------- static volatile struct { _Py_atomic_int tripped; PyObject *func; } Handlers[NSIG]; #ifdef MS_WINDOWS #define INVALID_FD ((SOCKET_T)-1) static volatile struct { SOCKET_T fd; int warn_on_full_buffer; int use_send; } wakeup = {.fd = INVALID_FD, .warn_on_full_buffer = 1, .use_send = 0}; #else #define INVALID_FD (-1) static volatile struct { #ifdef __VXWORKS__ int fd; #else sig_atomic_t fd; #endif int warn_on_full_buffer; } wakeup = {.fd = INVALID_FD, .warn_on_full_buffer = 1}; #endif /* Speed up sigcheck() when none tripped */ static _Py_atomic_int is_tripped; --------------- For me, the most surprising part is Handlers[signum].tripped which is declared as volatile *and* an atomic variable. I'm not sure if Handlers[signum].func must be volatile neither. Also, wakeup.fd is declared as sig_atomic_t on Unix. Could we use an atomic variable instead, or is it important to use "sig_atomic_t"? -- I recently added pycore_atomic_funcs.h which provides functions to access variables atomically. It uses atomic functions if available, or falls back on "volatile" otherwise. Maybe this approach would be interesting here, maybe for Handlers[signum].func? 
---------- components: Interpreter Core messages: 383901 nosy: vstinner priority: normal severity: normal status: open title: Review usage of atomic variables in signamodule.c versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 11:23:51 2020 From: report at bugs.python.org (Richard Neumann) Date: Mon, 28 Dec 2020 16:23:51 +0000 Subject: [New-bugs-announce] [issue42768] super().__new__() of list expands arguments Message-ID: <1609172631.57.0.956527842974.issue42768@roundup.psfhosted.org> New submission from Richard Neumann : When sublassing the built-in list, the invocation of super().__new__ will unexpectedly expand the passed arguments: class MyTuple(tuple): def __new__(cls, *items): print(cls, items) return super().__new__(cls, items) class MyList(list): def __new__(cls, *items): print(cls, items) return super().__new__(cls, items) def main(): my_tuple = MyTuple(1, 2, 3, 'foo', 'bar') print('My tuple:', my_tuple) my_list = MyList(1, 2, 3, 'foo', 'bar') print('My list:', my_list) if __name__ == '__main__': main() Actual result: (1, 2, 3, 'foo', 'bar') My tuple: (1, 2, 3, 'foo', 'bar') (1, 2, 3, 'foo', 'bar') Traceback (most recent call last): File "/home/neumann/listbug.py", line 24, in main() File "/home/neumann/listbug.py", line 19, in main my_list = MyList(1, 2, 3, 'foo', 'bar') TypeError: list expected at most 1 argument, got 5 Expected: (1, 2, 3, 'foo', 'bar') My tuple: (1, 2, 3, 'foo', 'bar') (1, 2, 3, 'foo', 'bar') My list: [1, 2, 3, 'foo', 'bar'] ---------- components: ctypes messages: 383902 nosy: conqp priority: normal severity: normal status: open title: super().__new__() of list expands arguments type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 11:37:04 2020 From: report at bugs.python.org (Damien Levac) Date: Mon, 28 Dec 2020 16:37:04 +0000 Subject: [New-bugs-announce] [issue42769] concurrent.futures.ProcessPoolExecutor is unable to forward exceptions with state. Message-ID: <1609173424.79.0.660348890279.issue42769@roundup.psfhosted.org> New submission from Damien Levac : When running tasks on a `ProcessPoolExecutor`, exceptions raised by the dispatched function should be pickled and accessible to the parent process through the `Future.exception` method. On Python 3.9.1 (Linux ryzen3950x 5.4.0-58-generic #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux) the behavior works with exceptions which are stateless but not if they hold state. I suspect it is related to the multiprocessing/pickle bug mentioned in the release notes to 3.9.1 but I didn't dig much deeper. Let me know if I can assist in any way or if any pertinent information is missing: it is my first time reporting a bug here :) Thank you for your hard work! ---------- components: Library (Lib) files: repro.py messages: 383903 nosy: damien.levac priority: normal severity: normal status: open title: concurrent.futures.ProcessPoolExecutor is unable to forward exceptions with state. 
type: behavior versions: Python 3.9 Added file: https://bugs.python.org/file49703/repro.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 13:13:16 2020 From: report at bugs.python.org (bazwal) Date: Mon, 28 Dec 2020 18:13:16 +0000 Subject: [New-bugs-announce] [issue42770] Typo in email.headerregistry docs Message-ID: <1609179196.61.0.989200266115.issue42770@roundup.psfhosted.org> New submission from bazwal : The section for class email.headerregistry.ContentDispositionHeader has a typo in an attribute name: "content-disposition" should be corrected to "content_disposition". ---------- assignee: docs at python components: Documentation messages: 383910 nosy: bazwal, docs at python priority: normal severity: normal status: open title: Typo in email.headerregistry docs type: enhancement versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 14:12:23 2020 From: report at bugs.python.org (Mike Miller) Date: Mon, 28 Dec 2020 19:12:23 +0000 Subject: [New-bugs-announce] [issue42771] Implement interactive hotkey, Ctrl+L to clear the console in Windows. Message-ID: <1609182743.02.0.584212046771.issue42771@roundup.psfhosted.org> New submission from Mike Miller : The Ctrl+L as clear-screen hotkey is supported just about everywhere, Unix and Windows, with the exceptions of cmd.exe and python.exe interactive mode. As the legacy cmd.exe can be easily replaced, that leaves python.exe. Likely needs to be configured via readline or its analog used under Windows. Documenting it would be good too. Am happy to help, write, or test. ---------- components: Windows messages: 383917 nosy: mixmastamyk, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Implement interactive hotkey, Ctrl+L to clear the console in Windows. type: enhancement _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 14:23:37 2020 From: report at bugs.python.org (Raymond Hettinger) Date: Mon, 28 Dec 2020 19:23:37 +0000 Subject: [New-bugs-announce] [issue42772] randrange() mishandles step when stop is None Message-ID: <1609183417.92.0.845527263557.issue42772@roundup.psfhosted.org> New submission from Raymond Hettinger : When stop is None, the step argument is ignored: >>> randrange(1000, None, 100) 651 >>> randrange(1000, step=100) 673 ---------- components: Library (Lib) messages: 383919 nosy: rhettinger, serhiy.storchaka priority: normal severity: normal status: open title: randrange() mishandles step when stop is None type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 17:31:27 2020 From: report at bugs.python.org (Ammar Askar) Date: Mon, 28 Dec 2020 22:31:27 +0000 Subject: [New-bugs-announce] [issue42773] build.yml workflow not testing on pushes Message-ID: <1609194687.05.0.86275810312.issue42773@roundup.psfhosted.org> New submission from Ammar Askar : It looks like on pushes to Github, we currently aren't running tests. (Though this isn't too big of a concern since they get run on pull requests). Here's an example of a recent run https://github.com/python/cpython/runs/1609911031 ``` fatal: ambiguous argument 'origin/..': unknown revision or path not in the working tree. 
Use '--' to separate paths from revisions, like this: 'git [...] -- [...]' ``` ---------- components: Build messages: 383935 nosy: FFY00, Mariatta, ammar2, vstinner priority: normal severity: normal status: open title: build.yml workflow not testing on pushes type: behavior versions: Python 3.10, Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 19:42:59 2020 From: report at bugs.python.org (Trevor Marvin) Date: Tue, 29 Dec 2020 00:42:59 +0000 Subject: [New-bugs-announce] [issue42774] 'ipaddress' module, bad result for 'is_private' on "192.0.0.0" Message-ID: <1609202579.47.0.156999705644.issue42774@roundup.psfhosted.org> New submission from Trevor Marvin : Tested on Python 3.6.9 with "ipaddress" module, module version 1.0. ipaddress.ip_address('192.0.0.0').is_private Incorrectly returns as 'True'. Per RFC 1918 / BCP 5, section 3, the private IPv4 space sarting with '192' is only '192.168.0.0/16'. ---------- components: Library (Lib) messages: 383942 nosy: trevormarvin priority: normal severity: normal status: open title: 'ipaddress' module, bad result for 'is_private' on "192.0.0.0" type: behavior versions: Python 3.6 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 22:06:49 2020 From: report at bugs.python.org (Ethan Furman) Date: Tue, 29 Dec 2020 03:06:49 +0000 Subject: [New-bugs-announce] [issue42775] __init_subclass__ should be called in __init__ Message-ID: <1609211209.61.0.873859112725.issue42775@roundup.psfhosted.org> New submission from Ethan Furman : PEP 487 introduced __init_subclass__ and __set_name__, and both of those were wins for the common cases of metaclass usage. Unfortunately, the implementation of PEP 487 with regards to __init_subclass__ has made the writing of correct metaclasses significantly harder, if not impossible. The cause is that when a metaclass calls type.__new__ to actually create the class, type.__new__ calls the __init_subclass__ methods of the new class' parents, passing it the newly created, but incomplete, class. In code: ``` class Meta(type): # def __new__(mcls, name, bases, namespace, **kwds): # create new class, which will call __init_subclass__ and __set_name__ new_class = type.__new__(mcls, name, bases, namespace, **kwds) # finish setting up class new_class.some_attr = 9 ``` As you can deduce, when the parent __init_subclass__ is called with the new class, `some_attr` has not been added yet -- the new class is incomplete. For Enum, this means that __init_subclass__ doesn't have access to the new Enum's members (they haven't beet added yet). For ABC, this means that __init_subclass__ doesn't have access to __abstract_methods__ (it hasn't been created yet). Because Enum is pure Python code I was able to work around it: - remove new __init_subclass__ (if it exists) - insert dummy class with a no-op __init_subclass__ - call type.__new__ - save any actual __init_subclass__ - add back any new __init_subclass__ - rewrite the new class' __bases__, removing the no-op dummy class - finish creating the class - call the parent __init_subclass__ with the now complete Enum class I have not been able to work around the problem for ABC. The solution would seem to be to move the calls to __init_subclass__ and __set_names__ to type.__init__. 
---------- assignee: ethan.furman components: Interpreter Core messages: 383946 nosy: ethan.furman, serhiy.storchaka priority: high severity: normal stage: needs patch status: open title: __init_subclass__ should be called in __init__ type: behavior versions: Python 3.10 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 22:19:59 2020 From: report at bugs.python.org (andy ye) Date: Tue, 29 Dec 2020 03:19:59 +0000 Subject: [New-bugs-announce] [issue42776] The string find method shows the problem Message-ID: <1609211999.49.0.259573414464.issue42776@roundup.psfhosted.org> New submission from andy ye : data = 'abcddd' # True data.find('a', 0) # False data.find('a', start=0) # document class str(obj): def find(self, sub, start=None, end=None): # real signature unknown; restored from __doc__ """ S.find(sub[, start[, end]]) -> int Return the lowest index in S where substring sub is found, such that sub is contained within S[start:end]. Optional arguments start and end are interpreted as in slice notation. Return -1 on failure. """ return 0 ---------- assignee: docs at python components: Documentation files: ??2020-12-29 ??10.57.53.png messages: 383949 nosy: andyye, docs at python priority: normal severity: normal status: open title: The string find method shows the problem versions: Python 3.6 Added file: https://bugs.python.org/file49704/??2020-12-29 ??10.57.53.png _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Mon Dec 28 23:17:59 2020 From: report at bugs.python.org (David) Date: Tue, 29 Dec 2020 04:17:59 +0000 Subject: [New-bugs-announce] [issue42777] WindowsPath does not implement is_mount but ntpath implements and offers a ismount method Message-ID: <1609215479.73.0.163162738943.issue42777@roundup.psfhosted.org> New submission from David : pathlib.WindowsPath[0] does not implement is_mount but ntpath implements and offers a ismount[1] method. Perhaps WindowsPath is_mount can make use of ntpath.ismount ? [0] https://github.com/python/cpython/blob/master/Lib/pathlib.py#L1578 [1] https://github.com/python/cpython/blob/master/Lib/ntpath.py#L248 ---------- messages: 383955 nosy: db priority: normal severity: normal status: open title: WindowsPath does not implement is_mount but ntpath implements and offers a ismount method _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 29 02:51:01 2020 From: report at bugs.python.org (Tom Hale) Date: Tue, 29 Dec 2020 07:51:01 +0000 Subject: [New-bugs-announce] [issue42778] Add follow_symlinks=True to {os.path, Path}.samefile Message-ID: <1609228261.66.0.888283704166.issue42778@roundup.psfhosted.org> New submission from Tom Hale : The os.path and Path implementations of samefile() do not allow comparisons of symbolic links: % mkdir empty && chdir empty % ln -s non-existant broken % ln broken lnbroken % ls -i # Show inode numbers 19325632 broken@ 19325632 lnbroken@ % Yup, they are the same file... but... 
% python -c 'import os; print(os.path.samefile("lnbroken", "broken", follow_symlinks=False))' Traceback (most recent call last): File "", line 1, in TypeError: samefile() got an unexpected keyword argument 'follow_symlinks' % python -c 'import os; print(os.path.samefile("lnbroken", "broken"))' Traceback (most recent call last): File "", line 1, in File "/usr/lib/python3.8/genericpath.py", line 100, in samefile s1 = os.stat(f1) FileNotFoundError: [Errno 2] No such file or directory: 'lnbroken' % Both samefile()s use os.stat under the hood, but neither allow setting os.stat()'s `follow_symlinks` parameter. https://docs.python.org/3/library/os.html#os.stat https://docs.python.org/3/library/os.path.html#os.path.samefile https://docs.python.org/3/library/pathlib.html#pathlib.Path.samefile ---------- components: Library (Lib) messages: 383965 nosy: Tom Hale priority: normal severity: normal status: open title: Add follow_symlinks=True to {os.path,Path}.samefile type: behavior versions: Python 3.9 _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 29 04:49:50 2020 From: report at bugs.python.org (minipython) Date: Tue, 29 Dec 2020 09:49:50 +0000 Subject: [New-bugs-announce] [issue42779] Pow compute can only available in python3.7 Message-ID: <1609235390.01.0.221368482362.issue42779@roundup.psfhosted.org> New submission from minipython <599192367 at qq.com>: The code can only computed based on python 3.7.Python 3.9 and python 3.6 cannot compute the code.It is very strange problem. ---------- components: C API files: chinarest.py messages: 383969 nosy: minipython priority: normal severity: normal status: open title: Pow compute can only available in python3.7 type: resource usage versions: Python 3.7 Added file: https://bugs.python.org/file49705/chinarest.py _______________________________________ Python tracker _______________________________________ From report at bugs.python.org Tue Dec 29 12:18:38 2020 From: report at bugs.python.org (cptpcrd) Date: Tue, 29 Dec 2020 17:18:38 +0000 Subject: [New-bugs-announce] [issue42780] os.set_inheritable() fails for O_PATH file descriptors on Linux Message-ID: <1609262318.62.0.976413615492.issue42780@roundup.psfhosted.org> New submission from cptpcrd : Note: I filed this bug report after seeing https://github.com/rust-lang/rust/pull/62425 and verifying that it was also reproducible on Python. Credit for discovering the underlying issue should go to Aleksa Sarai, and further discussion can be found there. # Background Linux has O_PATH file descriptors. These are file descriptors that refer to a specific path, without allowing any other kind of access to the file. They can't be used to read or write data; instead, they're intended to be used for use cases like the *at() functions. In that respect, they have similar semantics to O_SEARCH on other platforms (except that they also work on other file types, not just directories). More information on O_PATH file descriptors can be found in open(2) (https://www.man7.org/linux/man-pages/man2/open.2.html), or in the Rust PR linked above. # The problem As documented in the Rust PR linked above, *no* ioctl() calls will succeed on O_PATH file descriptors (they always fail with EBADF). Since os.set_inheritable() uses ioctl(FIOCLEX)/ioctl(FIONCLEX), it will fail on O_PATH file descriptors. 
This is easy to reproduce:

>>> import os
>>> a = os.open("/", os.O_RDONLY)
>>> b = os.open("/", os.O_PATH)
>>> os.set_inheritable(a, True)
>>> os.set_inheritable(b, True)  # Should succeed!
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 9] Bad file descriptor
>>>

I believe this affects all versions of Python going back to version 3.4 (where os.set_inheritable()/os.get_inheritable() were introduced).

# Possible fixes

I see two potential paths for fixing this:

1. Don't use ioctl(FIOCLEX) at all on Linux. This is what Rust did. However, based on bpo-22258 I'm guessing there would be opposition to implementing this strategy in Python, on the grounds that the fcntl() route takes an extra syscall (which is fair).

2. On Linux, fall back on fcntl() if ioctl(FIOCLEX) fails with EBADF. This could be a very simple patch to Python/fileutils.c. I've attached a basic version of said patch (not sure if it matches standard coding conventions). Downsides: This would add 2 extra syscalls for O_PATH file descriptors, and 1 extra syscall for actual cases of invalid file descriptors (i.e. EBADF). However, I believe these are edge cases that shouldn't come up frequently.

---------- files: set-inheritable-o-path.patch keywords: patch messages: 384016 nosy: cptpcrd priority: normal severity: normal status: open title: os.set_inheritable() fails for O_PATH file descriptors on Linux type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9 Added file: https://bugs.python.org/file49706/set-inheritable-o-path.patch

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 29 13:12:58 2020
From: report at bugs.python.org (Luciano Ramalho)
Date: Tue, 29 Dec 2020 18:12:58 +0000
Subject: [New-bugs-announce] [issue42781] functools.cached_property docs should explain that it is non-overriding
Message-ID: <1609265578.67.0.475985633003.issue42781 at roundup.psfhosted.org>

New submission from Luciano Ramalho :

functools.cached_property is a great addition to the standard library, thanks! However, the docs do not say that @cached_property produces a non-overriding descriptor, in contrast with @property.

If a user replaces a @property with a @cached_property, her code may or may not break depending on the existence of an instance attribute with the same name as the decorated method. This is surprising and may affect correctness, so it deserves even more attention than the possible performance loss already mentioned in the docs, related to the shared-dict optimization.

In the future, perhaps we can add an argument to @cached_property to optionally make it produce overriding descriptors.
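To make the difference concrete, here is a minimal sketch (assuming Python 3.8+; the class and attribute names below are invented for the example):

import functools

class WithProperty:
    @property
    def value(self):
        return 42

class WithCachedProperty:
    @functools.cached_property
    def value(self):
        return 42

c = WithCachedProperty()
c.value = 7        # allowed: cached_property defines no __set__, so the
print(c.value)     # instance attribute shadows the descriptor -> prints 7

p = WithProperty()
p.value = 7        # AttributeError: can't set attribute (property is overriding)

Replacing @property with @functools.cached_property turns the failing assignment into a silent shadowing write, which is the surprise described above.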
---------- assignee: docs at python components: Documentation messages: 384019 nosy: docs at python, ramalho priority: normal severity: normal status: open title: functools.cached_property docs should explain that it is non-overriding versions: Python 3.10, Python 3.8, Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 29 13:38:15 2020
From: report at bugs.python.org (Winson Luk)
Date: Tue, 29 Dec 2020 18:38:15 +0000
Subject: [New-bugs-announce] [issue42782] shutil.move creates a new directory even on failure
Message-ID: <1609267095.96.0.244357223295.issue42782 at roundup.psfhosted.org>

Change by Winson Luk :

---------- components: Library (Lib) nosy: winsonluk priority: normal severity: normal status: open title: shutil.move creates a new directory even on failure type: behavior versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Tue Dec 29 14:40:26 2020
From: report at bugs.python.org (Simon Willison)
Date: Tue, 29 Dec 2020 19:40:26 +0000
Subject: [New-bugs-announce] [issue42783] asyncio.sleep(0) idiom is not documented
Message-ID: <1609270826.19.0.331548885122.issue42783 at roundup.psfhosted.org>

New submission from Simon Willison :

asyncio.sleep(0) is the recommended idiom for co-operatively yielding control of the event loop to another task: https://github.com/python/asyncio/issues/284 and https://til.simonwillison.net/python/yielding-in-asyncio

This isn't currently explained in the documentation.

---------- assignee: docs at python components: Documentation, asyncio messages: 384025 nosy: asvetlov, docs at python, simonw, yselivanov priority: normal severity: normal status: open title: asyncio.sleep(0) idiom is not documented type: enhancement

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 02:44:25 2020
From: report at bugs.python.org (Big Boss)
Date: Wed, 30 Dec 2020 07:44:25 +0000
Subject: [New-bugs-announce] [issue42784] issues with object.h includes
Message-ID: <1609314265.69.0.183951284891.issue42784 at roundup.psfhosted.org>

New submission from Big Boss :

Using #include <object.h> in header files is known to cause conflicts with other projects that use a similarly named header file. The best workaround would be renaming the header, and probably doing the same thing to other header files as well, or wrapping them in a folder.

---------- components: C API messages: 384050 nosy: bigbossbro08 priority: normal severity: normal status: open title: issues with object.h includes type: compile error versions: Python 3.7

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 07:38:08 2020
From: report at bugs.python.org (Paolo Lammens)
Date: Wed, 30 Dec 2020 12:38:08 +0000
Subject: [New-bugs-announce] [issue42785] Support operator module callables in inspect.signature
Message-ID: <1609331888.16.0.35363195517.issue42785 at roundup.psfhosted.org>

New submission from Paolo Lammens :

Currently, `inspect.signature` doesn't support all callables from the `operator` module, e.g.
`operator.attrgetter`:

```python
>>> import inspect
>>> import operator
>>> inspect.signature(operator.attrgetter("spam"))
ValueError: callable operator.attrgetter('is_host') is not supported by signature
```

Support for this could be added either directly to `inspect.signature` or by adding `__signature__` attributes to `operator`'s classes.

---------- components: Library (Lib) messages: 384061 nosy: plammens priority: normal severity: normal status: open title: Support operator module callables in inspect.signature type: enhancement versions: Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 11:00:56 2020
From: report at bugs.python.org (Svyatoslav)
Date: Wed, 30 Dec 2020 16:00:56 +0000
Subject: [New-bugs-announce] [issue42786] Different repr for collections.abc.Callable and typing.Callable
Message-ID: <1609344056.94.0.962444681663.issue42786 at roundup.psfhosted.org>

New submission from Svyatoslav :

I was making some typing conversions to string and noticed a different behavior of collections.abc.Callable and typing.Callable in 3.9.1. Issues issue42195 and issue40494 can be related.

>>> import collections, typing
>>> repr(collections.abc.Callable[[int], str])
"collections.abc.Callable[[<class 'int'>], str]"
>>> repr(typing.Callable[[int], str])
'typing.Callable[[int], str]'

The variant from collections is wrong; it is not consistent with other GenericAlias reprs like

>>> repr(tuple[list[float], int, str])
'tuple[list[float], int, str]'

The problem is actually in the list used to denote the callable arguments:

>>> repr(tuple[[float], int, str])
"tuple[[<class 'float'>], int, str]"

Here is the same error. This error makes it impossible to use such reprs in eval/exec statements:

>>> c = collections.abc.Callable[[int], str]
>>> d = eval(repr(c))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 1
    collections.abc.Callable[[<class 'int'>], str]
                              ^
SyntaxError: invalid syntax

Of course there is no such problem with typing.Callable:

>>> c = typing.Callable[[int], str]
>>> d = eval(repr(c))
>>>

Interestingly, if a list is passed into typing.Tuple, an exception is raised:

>>> repr(typing.Tuple[[float], int, str])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "f:\software\python3.9\lib\typing.py", line 262, in inner
    return func(*args, **kwds)
  File "f:\software\python3.9\lib\typing.py", line 896, in __getitem__
    params = tuple(_type_check(p, msg) for p in params)
  File "f:\software\python3.9\lib\typing.py", line 896, in <genexpr>
    params = tuple(_type_check(p, msg) for p in params)
  File "f:\software\python3.9\lib\typing.py", line 151, in _type_check
    raise TypeError(f"{msg} Got {arg!r:.100}.")
TypeError: Tuple[t0, t1, ...]: each t must be a type. Got [<class 'float'>].

I think it is not correct that tuple accepts lists as generic arguments, i.e. for tuple[[float], int, str] an exception as above should be raised. But this is the topic for another issue (maybe even already reported).
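As a stop-gap for the eval/exec use case, one can rebuild the string from __origin__ and __args__ instead of relying on repr(). This is only a sketch, assuming Python 3.9 where __args__ of collections.abc.Callable[[int], str] still contains the argument list; _name and alias_repr are invented helper names:

import collections.abc

def _name(obj):
    # a list argument is the Callable argument list; recurse into it
    if isinstance(obj, list):
        return "[" + ", ".join(_name(item) for item in obj) + "]"
    # plain classes: module-qualified name, builtins left bare
    if isinstance(obj, type):
        if obj.__module__ == "builtins":
            return obj.__qualname__
        return f"{obj.__module__}.{obj.__qualname__}"
    return repr(obj)

def alias_repr(alias):
    args = ", ".join(_name(arg) for arg in alias.__args__)
    return f"{_name(alias.__origin__)}[{args}]"

c = collections.abc.Callable[[int], str]
print(alias_repr(c))      # collections.abc.Callable[[int], str]
d = eval(alias_repr(c))   # no SyntaxError, unlike eval(repr(c)) above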
---------- messages: 384068 nosy: Prometheus3375 priority: normal severity: normal status: open title: Different repr for collections.abc.Callable and typing.Callable type: behavior versions: Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 11:01:25 2020
From: report at bugs.python.org (Konstantin Ryabitsev)
Date: Wed, 30 Dec 2020 16:01:25 +0000
Subject: [New-bugs-announce] [issue42787] email.utils.getaddresses improper parsing of unicode realnames
Message-ID: <1609344085.02.0.266691103598.issue42787 at roundup.psfhosted.org>

New submission from Konstantin Ryabitsev :

What it currently does:

>>> import email.utils
>>> email.utils.getaddresses(['Shuming [???] <shumingf at realtek.com>'])
[('', 'Shuming'), ('', ''), ('', '???'), ('', ''), ('', 'shumingf at realtek.com')]

What it should do:

>>> import email.utils
>>> email.utils.getaddresses(['Shuming [???] <shumingf at realtek.com>'])
[('Shuming [???]', 'shumingf at realtek.com')]

---------- components: email messages: 384069 nosy: barry, konstantin2, r.david.murray priority: normal severity: normal status: open title: email.utils.getaddresses improper parsing of unicode realnames type: enhancement versions: Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 12:31:26 2020
From: report at bugs.python.org (yao_way)
Date: Wed, 30 Dec 2020 17:31:26 +0000
Subject: [New-bugs-announce] =?utf-8?q?=5Bissue42788=5D_Issue_with_Python?= =?utf-8?q?=E2=80=99s_Floor_Division?=
Message-ID: <1609349486.6.0.659948013801.issue42788 at roundup.psfhosted.org>

New submission from yao_way :

There might be an issue with Python’s floor division:

Experienced Behavior:
>>> (1 / 1) // 1  --> 1.0
>>> (0 / 1) // 1  --> 0.0

Expected Behavior:
>>> (1 / 1) // 1  --> 1
>>> (0 / 1) // 1  --> 0

---------- components: Interpreter Core messages: 384073 nosy: yao_way priority: normal severity: normal status: open title: Issue with Python’s Floor Division type: behavior versions: Python 3.8

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 14:06:35 2020
From: report at bugs.python.org (Serhiy Storchaka)
Date: Wed, 30 Dec 2020 19:06:35 +0000
Subject: [New-bugs-announce] [issue42789] Do not skip test_curses on non-tty
Message-ID: <1609355195.18.0.84938555409.issue42789 at roundup.psfhosted.org>

New submission from Serhiy Storchaka :

Currently many tests in test_curses are skipped if sys.__stdout__ is not attached to a terminal. All tests are run only when running them manually, and without using a pager. This lets bugs slip through unnoticed, like in issue42694.

The proposed PR makes the tests always run. If __stdout__ is not attached to a terminal, it tries to attach it to a terminal, using __stderr__ if it is attached to a terminal, or opening /dev/tty. If neither __stdout__ nor __stderr__ are attached to a terminal, it tries to use a temporary file, but some functions do not work in this case and will be untested.

It could be better to use os.openpty(), but too much curses output can overflow the buffer (2 KiB) and cause the test and all subsequent tests to fail. Currently all tests pass if os.openpty() is used, but I am afraid that adding more tests will overflow the buffer and it will confuse future developers. My attempts to solve this issue were unsuccessful for now, so I went with the more complicated and less flexible, but more reliable solution.
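For reference, a minimal sketch of the os.openpty() variant mentioned above (assumptions: a POSIX system, TERM set to a terminal type curses knows, and output kept well under the pty buffer size; this is not the code from the proposed PR):

import curses
import os
import sys

master_fd, slave_fd = os.openpty()
stdout_fd = sys.__stdout__.fileno()
saved_fd = os.dup(stdout_fd)
os.dup2(slave_fd, stdout_fd)           # curses.initscr() writes to __stdout__'s fd
try:
    stdscr = curses.initscr()
    stdscr.addstr(0, 0, "hello")
    stdscr.refresh()
    curses.endwin()
finally:
    os.dup2(saved_fd, stdout_fd)       # restore the real stdout
    os.close(saved_fd)
    os.close(slave_fd)
    print(os.read(master_fd, 2048))    # drain what curses wrote into the pty
    os.close(master_fd)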
---------- components: Tests messages: 384083 nosy: serhiy.storchaka, twouters priority: normal severity: normal status: open title: Do not skip test_curses on non-tty type: enhancement versions: Python 3.10, Python 3.8, Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 14:26:59 2020
From: report at bugs.python.org (Serhiy Storchaka)
Date: Wed, 30 Dec 2020 19:26:59 +0000
Subject: [New-bugs-announce] [issue42790] test.regrtest outputs to stdout instead of stderr
Message-ID: <1609356419.4.0.571157981474.issue42790 at roundup.psfhosted.org>

New submission from Serhiy Storchaka :

unittest outputs progress and summary to stderr, but test.regrtest outputs it to stdout, except that it outputs progress to stderr if option -W is used.

It caused some problems with test_curses, because curses uses stdout, and when we re-attach it to another terminal or file we can lose the regrtest output. It makes the code more complicated.

---------- components: Tests messages: 384084 nosy: serhiy.storchaka, vstinner priority: normal severity: normal status: open title: test.regrtest outputs to stdout instead of stderr type: behavior

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 15:22:51 2020
From: report at bugs.python.org (Artyom Kaltovich)
Date: Wed, 30 Dec 2020 20:22:51 +0000
Subject: [New-bugs-announce] [issue42791] There is no way to json encode object to str.
Message-ID: <1609359771.92.0.340715050179.issue42791 at roundup.psfhosted.org>

New submission from Artyom Kaltovich :

Hello. At first I want to say thank you for all your efforts in Python development. I really appreciate it. :)

I am trying to convert a custom object to JSON, but I've found a problem. JSONEncoder has a ``default`` method for converting custom objects to some primitives and ``encode`` for converting structures. But what if I want to return a completed JSON string? I can't do it in ``default``, because JSONEncoder will think it is a string and encode it accordingly later in the ``iterencode`` method. Then I tried redefining ``encode``, but it is called with dict(array_name=default(o)), so I would have to convert the dict as a whole and basically reimplement all conversions (for int, float, and lists, of course).

Did I miss something, or is there really no way to do it?

I suggest introducing another method, e.g. encode_obj, and calling it there: https://github.com/python/cpython/blob/master/Lib/json/encoder.py#L438

```
o = _default(o)
yield from _iterencode(o, _current_indent_level)
```
->
```
o = _encode_obj(_default(o))
yield from _iterencode(o, _current_indent_level)
```

If you agree, I would be happy to implement it.

Best Regards, Artsiom.

---------- components: Library (Lib) messages: 384085 nosy: kaltovichartyom priority: normal severity: normal status: open title: There is no way to json encode object to str.

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Wed Dec 30 17:08:17 2020
From: report at bugs.python.org (Jah-On)
Date: Wed, 30 Dec 2020 22:08:17 +0000
Subject: [New-bugs-announce] [issue42792] [MacOS] Can't open file in a separate (threading.Thread) thread
Message-ID: <1609366097.56.0.326063377127.issue42792 at roundup.psfhosted.org>

New submission from Jah-On :

Tested on MacOS Big Sur... Most recent version as of posting.
---------- components: macOS files: test.py messages: 384088 nosy: Jah-On, ned.deily, ronaldoussoren priority: normal severity: normal status: open title: [MacOS] Can't open file in a separate (threading.Thread) thread versions: Python 3.8 Added file: https://bugs.python.org/file49709/test.py

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 31 03:10:49 2020
From: report at bugs.python.org (Narek)
Date: Thu, 31 Dec 2020 08:10:49 +0000
Subject: [New-bugs-announce] [issue42793] Bug of round function
Message-ID: <1609402249.13.0.578194517068.issue42793 at roundup.psfhosted.org>

New submission from Narek :

My code is the following:

for i in range(1, 5):
    a = i + 0.5
    b = round(a)
    print(a, "rounded as =>", b)

The output is:

1.5 => 2
2.5 => 2
3.5 => 4
4.5 => 4

The rounding is not correct in the output.

---------- files: main.py messages: 384101 nosy: Narek2018 priority: normal severity: normal status: open title: Bug of round function type: performance versions: Python 3.9 Added file: https://bugs.python.org/file49711/main.py

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 31 04:36:38 2020
From: report at bugs.python.org (Serhiy Storchaka)
Date: Thu, 31 Dec 2020 09:36:38 +0000
Subject: [New-bugs-announce] [issue42794] test_nntplib fails on CI
Message-ID: <1609407398.52.0.195663540983.issue42794 at roundup.psfhosted.org>

New submission from Serhiy Storchaka :

Since yesterday ALL PRs are blocked by failing test_nntplib. For example https://github.com/python/cpython/runs/1629664606?check_suite_focus=true:

======================================================================
ERROR: test_descriptions (test.test_nntplib.NetworkedNNTP_SSLTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/cpython/cpython/Lib/test/test_nntplib.py", line 250, in wrapped
    meth(self)
  File "/home/runner/work/cpython/cpython/Lib/test/test_nntplib.py", line 99, in test_descriptions
    desc = descs[self.GROUP_NAME]
KeyError: 'comp.lang.python'

======================================================================
FAIL: test_description (test.test_nntplib.NetworkedNNTP_SSLTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/runner/work/cpython/cpython/Lib/test/test_nntplib.py", line 250, in wrapped
    meth(self)
  File "/home/runner/work/cpython/cpython/Lib/test/test_nntplib.py", line 85, in test_description
    self.assertIn("Python", desc)
AssertionError: 'Python' not found in ''

----------------------------------------------------------------------

---------- components: Tests messages: 384106 nosy: serhiy.storchaka priority: critical severity: normal status: open title: test_nntplib fails on CI versions: Python 3.10, Python 3.8, Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 31 06:37:17 2020
From: report at bugs.python.org (Paolo Lammens)
Date: Thu, 31 Dec 2020 11:37:17 +0000
Subject: [New-bugs-announce] [issue42795] Asyncio loop.create_server doesn't bind to any interface if host is a sequence with just the empty string
Message-ID: <1609414637.07.0.323924138606.issue42795 at roundup.psfhosted.org>

New submission from Paolo Lammens :

When a sequence containing just the empty string (e.g.
`['']`) is passed as the `host` parameter of `loop.create_server`, the server seems not to bind to any network interface. Since, per the [documentation](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.create_server) for `create_server`, the empty string means "bind to all interfaces",

> If host is an empty string or None all interfaces are assumed and a list of multiple
> sockets will be returned (most likely one for IPv4 and another one for IPv6).

and also

> The host parameter can also be a sequence (e.g. list) of hosts to bind to.

I would have expected a list containing the empty string to also work as expected, i.e. "binding to all hosts in the sequence", so binding to "" and thus to every interface.

Example:

Server script:

```python
import asyncio


async def server():
    async def connection_callback(reader, writer: asyncio.StreamWriter):
        print(f"got connection from {writer.get_extra_info('peername')}")
        writer.close()
        await writer.wait_closed()

    s = await asyncio.start_server(connection_callback, host=[''], port=4567)
    async with s:
        print("starting server")
        await s.serve_forever()


asyncio.run(server())
```

Client script:

```python
import asyncio


async def client():
    reader, writer = await asyncio.open_connection("127.0.0.1", 4567)
    print(f"connected to {writer.get_extra_info('peername')}")
    writer.close()
    await writer.wait_closed()


asyncio.run(client())
```

Expected:

- Server:
```
starting server
got connection from ('127.0.0.1', xxxxx)
```
- Client:
```
connected to ('127.0.0.1', xxxxx)
```

Actual:

- Server:
```
starting server
```
- Client: a ConnectionError is raised (the host machine refused the connection)

---------- components: asyncio messages: 384109 nosy: asvetlov, plammens, yselivanov priority: normal severity: normal status: open title: Asyncio loop.create_server doesn't bind to any interface if host is a sequence with just the empty string type: behavior versions: Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 31 10:44:22 2020
From: report at bugs.python.org (Gabriele Tornetta)
Date: Thu, 31 Dec 2020 15:44:22 +0000
Subject: [New-bugs-announce] [issue42796] tempfile doesn't seem to play nicely with os.chdir on Windows
Message-ID: <1609429462.99.0.549073524259.issue42796 at roundup.psfhosted.org>

New submission from Gabriele Tornetta :

The following script causes havoc on Windows, while it works as expected on Linux:

~~~ python
import os
import tempfile


def test_chdir():
    with tempfile.TemporaryDirectory() as tempdir:
        os.chdir(tempdir)
~~~

Running the above on Windows results in RecursionError: maximum recursion depth exceeded while calling a Python object (see attachment for the full stacktrace).
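Until the underlying behaviour is addressed, a workaround sketch (assuming the goal is only to run code inside the temporary directory; test_chdir_safely is an invented name):

import os
import tempfile

def test_chdir_safely():
    original_cwd = os.getcwd()
    with tempfile.TemporaryDirectory() as tempdir:
        try:
            os.chdir(tempdir)
            ...  # work inside the temporary directory
        finally:
            os.chdir(original_cwd)  # leave before cleanup tries to remove the tree

Restoring the original working directory before the context manager exits lets the cleanup delete the tree without the process still sitting inside it.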
---------- components: Library (Lib) files: tempfile_st.txt messages: 384125 nosy: Gabriele Tornetta priority: normal severity: normal status: open title: tempfile doesn't seem to play nicely with os.chdir on Windows type: crash versions: Python 3.9 Added file: https://bugs.python.org/file49712/tempfile_st.txt

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 31 14:18:13 2020
From: report at bugs.python.org (Costas Basdekis)
Date: Thu, 31 Dec 2020 19:18:13 +0000
Subject: [New-bugs-announce] [issue42797] Allow doctest to select tests via -m/--match option
Message-ID: <1609442293.63.0.396324558319.issue42797 at roundup.psfhosted.org>

New submission from Costas Basdekis :

Most testing frameworks allow for only a subset of tests to be run, and the reason is usually either to focus on a specific test among many failing ones, or for speed purposes if running the whole suite is too slow. With doctests, it's usually the case that they are light and fast, but if you make a breaking change you can still have many tests failing, and you want to focus only on one.

This proposal adds an `-m`/`--match` option to the doctest runner, to allow selecting one or more specific tests to run. The proposed parameter format is a test-name pattern followed by a colon and a list of example indexes:

* the test-name pattern is a glob-like string where '*' characters can match any substring of a test's name, and a '*' is implicitly added to the start of the pattern: e.g. 'do_*_method' matches '__main__.do_a_method' and '__main__.MyClass.do_b_method'

* the example indexes are a list of numbers or ranges matching the 0-based indexes of examples within the test: e.g. '1' matches the second example, '-3' matches the first 4 examples, '4-' matches all but the first 4 examples, and '-3,5,7-10,20-' matches examples 0, 1, 2, 3, 5, 7, 8, 9, 10, 20, and the rest

---------- components: Tests messages: 384132 nosy: costas-basdekis priority: normal severity: normal status: open title: Allow doctest to select tests via -m/--match option type: enhancement versions: Python 3.10

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 31 14:20:11 2020
From: report at bugs.python.org (Paul Watson)
Date: Thu, 31 Dec 2020 19:20:11 +0000
Subject: [New-bugs-announce] [issue42798] pip search fails
Message-ID: <1609442411.49.0.32642630503.issue42798 at roundup.psfhosted.org>

New submission from Paul Watson :

Fresh install of 3.9.1
Created venv
Activated venv

(py3.9) 13:12:59.52 C:\venv\py3.9\Scripts
C:>pip search astronomy
ERROR: Exception:
Traceback (most recent call last):
  File "c:\venv\py3.9\lib\site-packages\pip\_internal\cli\base_command.py", line 228, in _main
    status = self.run(options, args)
  File "c:\venv\py3.9\lib\site-packages\pip\_internal\commands\search.py", line 60, in run
    pypi_hits = self.search(query, options)
  File "c:\venv\py3.9\lib\site-packages\pip\_internal\commands\search.py", line 80, in search
    hits = pypi.search({'name': query, 'summary': query}, 'or')
  File "C:\Users\mike\AppData\Local\Programs\Python\Python39\lib\xmlrpc\client.py", line 1116, in __call__
    return self.__send(self.__name, args)
  File "C:\Users\mike\AppData\Local\Programs\Python\Python39\lib\xmlrpc\client.py", line 1456, in __request
    response = self.__transport.request(
  File "c:\venv\py3.9\lib\site-packages\pip\_internal\network\xmlrpc.py", line 45, in request
    return self.parse_response(response.raw)
  File "C:\Users\mike\AppData\Local\Programs\Python\Python39\lib\xmlrpc\client.py", line 1348, in parse_response
    return u.close()
  File "C:\Users\mike\AppData\Local\Programs\Python\Python39\lib\xmlrpc\client.py", line 662, in close
    raise Fault(**self._stack[0])
xmlrpc.client.Fault:

(py3.9) 13:13:08.09 C:\venv\py3.9\Scripts
C:>python --version
Python 3.9.1

---------- components: Demos and Tools messages: 384133 nosy: Paul Watson priority: normal severity: normal status: open title: pip search fails type: crash versions: Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 31 17:14:20 2020
From: report at bugs.python.org (Josh Triplett)
Date: Thu, 31 Dec 2020 22:14:20 +0000
Subject: [New-bugs-announce] [issue42799] Please document fnmatch LRU cache size (256) and suggest alternatives
Message-ID: <1609452860.68.0.697253388073.issue42799 at roundup.psfhosted.org>

New submission from Josh Triplett :

fnmatch translates shell patterns to regexes, using an LRU cache of 256 elements. The documentation doesn't mention the cache size, just "They cache the compiled regular expressions for speed.". Without this knowledge, it's possible to get pathologically bad performance by exceeding the cache size.

Please consider adding documentation of the cache size to the module documentation for fnmatch, along with a suggestion to use fnmatch.translate directly if you have more patterns than that.

---------- components: Library (Lib) messages: 384141 nosy: joshtriplett priority: normal severity: normal status: open title: Please document fnmatch LRU cache size (256) and suggest alternatives versions: Python 3.9

_______________________________________ Python tracker _______________________________________

From report at bugs.python.org Thu Dec 31 19:16:58 2020
From: report at bugs.python.org (Ammar Askar)
Date: Fri, 01 Jan 2021 00:16:58 +0000
Subject: [New-bugs-announce] [issue42800] Traceback objects allow accessing frame objects without triggering audit hooks
Message-ID: <1609460218.67.0.01717828776.issue42800 at roundup.psfhosted.org>

New submission from Ammar Askar :

It is possible to access all the frame objects in the interpreter without triggering any audit hooks through the use of exceptions, namely, through the traceback's tb_frame property. Ordinarily one would trigger the "sys._current_frames" or "sys._getframe" event, but this code path bypasses those.

There is already precedent for raising events for certain sensitive properties, such as `__code__` in funcobject.c (through an "object.__getattr__" event), so perhaps this property should be protected in a similar way.

This issue was recently demonstrated in a security competition:

* https://github.com/hstocks/ctf_writeups/blob/master/2020/hxp/audited/README.md
* https://github.com/fab1ano/hxp-ctf-20/blob/master/audited/README.md

---------- assignee: steve.dower components: Library (Lib) keywords: security_issue messages: 384143 nosy: ammar2, christian.heimes, steve.dower priority: normal severity: normal status: open title: Traceback objects allow accessing frame objects without triggering audit hooks type: security versions: Python 3.10, Python 3.8, Python 3.9

_______________________________________ Python tracker _______________________________________
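To illustrate the bypass described in issue42800 concretely, a minimal sketch (the audit hook below is invented for the demonstration and only watches the two frame-related events):

import sys

def frame_watcher(event, args):
    if event in ("sys._getframe", "sys._current_frames"):
        print("audit event:", event)

sys.addaudithook(frame_watcher)

sys._getframe()                          # prints "audit event: sys._getframe"

try:
    raise RuntimeError("probe")
except RuntimeError as exc:
    frame = exc.__traceback__.tb_frame   # no audit event is raised here
    print(frame.f_globals["__name__"])   # yet the frame (and its f_back chain) is reachable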